00:00:00:00 - 00:00:21:19
Speaker 1
I'd like to welcome everybody to this expert panel discussion, "Truth, Trust and the Algorithm: Navigating Data, AI and Ethics in Storytelling", hosted at the RMIT Online Skills Fest. My name is Eloise Boyd. I'm the director of market intelligence and proposition at RMIT. My team focuses on understanding demand, and we use analytics and storytelling to do that.
00:00:21:21 - 00:00:41:20
Speaker 1
So my group is, you know, a bunch of number crunchers and, at times, crystal-ball gazers, which is why this topic is right up my alley today. I'm looking forward to discussing practical strategies for how we apply AI and data responsibly in this new age. Before I introduce you to the expert panel that we've lined up for you today, I wanted to do a quick acknowledgement of country.
00:00:41:22 - 00:01:12:02
Speaker 1
RMIT University acknowledges the people of the Woi Wurrung and Boon Wurrung language groups of the East Kulin nations, on whose unceded lands we conduct the business of the university. We respectfully acknowledge their ancestors, past and present. For me, in a conversation about data, technology and trust, I recognise that First Nations people have been custodians of storytelling and truth telling for tens of thousands of years, with traditions of passing on wisdom via these methods and interpreting signals from the land and sky.
00:01:12:04 - 00:01:35:08
Speaker 1
Maintaining a sense of collective responsibility is a reminder for me that with every new tool, whether it's gen AI or algorithms, there's a responsibility of truth and connection that we can definitely learn from First Nations people. Now, let me introduce our great panel. Alex Papli is a principal AI consultant for Hypergen, where he helps businesses design and implement AI solutions.
00:01:35:10 - 00:02:02:13
Speaker 1
He supports teams to identify opportunities and technologies. Claire Mason is a principal research scientist at CSIRO's Data61, where she leads research investigating the workforce impacts associated with new technologies, so extremely relevant. And lastly, David McAmis, an Amazon Web Services principal solutions architect, helping businesses realize the benefits of analytics.
00:02:02:15 - 00:02:23:13
Speaker 1
His focus has been on helping customers tell their stories using insights and new tools and technologies. So thank you, Alex, Claire and David. You can already hear the breadth of experience that we've got here for you today, from designing AI systems, to studying how those technologies are going to shape the workforce, to helping organizations tell their stories with data.
00:02:23:15 - 00:02:45:04
Speaker 1
And that breadth is important given the scope of what we're talking about. I know a lot of you will be very familiar with all of the hype surrounding AI. And yes, as you can probably hear, I definitely talk a little bit quickly; slowing down was one of my KPIs for today. So that breadth matters, because AI carries both enormous promise and a huge amount of hype as well.
00:02:45:09 - 00:02:56:12
Speaker 1
So I suppose I wanted to kick things off and hand it over to the panel to ask: what's the biggest misconception about AI in the current age? What do you all think?
00:02:58:05 - 00:03:20:24
Speaker 2
Maybe I'll make a start. Maybe "misconception" is the wrong word, and I might be being a bit unfair, but one of the things that we see is that people aren't sure what's magic and what's not. So one of the key things we like to do is help them demystify it and understand what it can do and what it can't do.
00:03:21:01 - 00:03:35:07
Speaker 2
Because sometimes, in the absence of understanding, we think it's magic and it can do anything. So trying to bring it back down really pragmatically and saying, this is what it can do, and this is what it maybe shouldn't do or can't do, is a really, really good start.
00:03:35:09 - 00:03:41:00
Speaker 2
So that's my initial point of view, and I welcome any other comments on that.
00:03:41:17 - 00:04:01:06
Speaker 3
I'll take the next turn. I guess I'd say the idea that AI, and in particular AI literacy, can be understood as a unitary construct or a single entity, because there are so many different types of AI, and in many cases we're not even aware we're using it. So when our junk email gets sorted out from our inbox automatically, that's machine learning.
00:04:01:06 - 00:04:24:23
Speaker 3
That's one type of AI. And when you've got predictive text on your phone, that's another, using natural language processing. Until 2023, most forms of AI were very narrow: they could take a very prescribed set of inputs and deliver a very prescribed set of outputs. Now we've got generative AI tools, and they are far more flexible.
00:04:25:00 - 00:04:52:02
Speaker 3
They can support a wide range of tasks, across multiple stages of a task. So we actually need very tailored skills to understand how to use that generative AI capability effectively, so that we maximize its benefits while minimizing the risks. And so I think we need more specific versions of AI literacy: we need generative AI literacy if you work with generative AI, and maybe cobotic literacy if you work with cobots.
00:04:52:11 - 00:05:17:19
Speaker 4
For me, probably the biggest misconception I see, especially when we're looking at businesses or organizations that use things like chatbots or other interactive ways of working with AI and data sets, is that AI is essentially conscious. It is not. A lot of times you'll have a chatbot that will mimic a brand and a voice.
00:05:17:21 - 00:05:41:08
Speaker 4
And it may seem to you that it has some intelligence behind there, and it does, but what's happening behind the scenes is that the responses are based on the statistical relationships in that training data. So with all of those chatbots and all the gen AI, it doesn't have any of your subjective experiences, emotion or self-awareness.
00:05:41:10 - 00:05:59:10
Speaker 4
So the responses, while they can be quite sophisticated, emerge from computational processes rather than conscious thought. We just need to keep that in mind. We're not there yet. If we do this webinar again in five years, maybe, but for now, AI is neither sentient nor conscious.
00:05:59:12 - 00:06:23:06
Speaker 1
I look forward to booking that webinar in. Actually, I remember reading about the cursor that they give you in a gen AI chatbot: it's almost like it's breathing or pulsing, even though the answers are instantaneous. It's there to look human; it's been designed for that purpose. Thinking about how we get ourselves in the right frame to prompt, and give ourselves the best opportunity to get the best outcomes:
00:06:23:07 - 00:06:32:06
Speaker 1
Alex, can you give us an idea on how you help businesses pick the right use cases when they're trying to narrow that down? How do you know if a problem is a good candidate for AI?
00:06:32:08 - 00:06:49:14
Speaker 2
Yeah, it's a really good question. There are about seven criteria that we like to work through with organizations. Typically we'll engage and run a workshop to help them understand a bit more, demystify AI, and show them a whole lot of use cases so they understand where it works.
00:06:49:16 - 00:07:06:14
Speaker 2
From there, they usually come back a couple of weeks later with a list, and we prioritize using, effectively, just a one-to-five scoring system to help them rank the top ones. The themes are really things around frequency. And we do a lot of custom work, right, so we might tweak a workflow or things like that.
00:07:06:14 - 00:07:23:14
Speaker 2
So frequency is important, because if it's once a year, it's probably not worth investing in; doing it manually will work perfectly fine. But if it's every hour or every minute, then it makes a lot more sense. Safety is important, and I know we'll talk about that a bit more: is it something that's high risk, or something that we feel is pretty benign?
00:07:23:16 - 00:07:40:22
Speaker 2
The business impact is obviously big too, you know, the ROI. But the other thing I'll say is that we have a little "three Bs" approach. We say before you even start tackling some of these big things, just do some sort of beachhead: get something going, something small, so that as an organization you can say, hey, we're doing AI.
00:07:40:24 - 00:07:59:13
Speaker 2
It hasn't, you know, eaten us. We're still here. It's being helpful. It's maybe not shooting for the stars in terms of the first use case being massive, but it just gets you started. So we tend to have a formal matrix to help prioritize the best ideas, but then we say, let's not necessarily pick the number one.
00:07:59:13 - 00:08:06:09
Speaker 2
Let's just find something that's going to be really cost-effective and cheap, that gives you a quick win and gets everyone on the journey.
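Purely as an illustration of the one-to-five scoring matrix Alex describes, here is a minimal sketch in Python. Only frequency, safety and business impact (ROI) were named on the panel; the remaining criteria, all weights and the example ratings are assumptions for illustration:

```python
# Hypothetical sketch of a 1-to-5 use-case scoring matrix.
# Only frequency, safety and business impact were named on the panel;
# the other criteria and all weights are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "frequency": 2.0,        # hourly beats once-a-year
    "safety": 2.0,           # low-risk use cases score higher
    "business_impact": 2.0,  # expected ROI
    "data_readiness": 1.0,
    "feasibility": 1.0,
}

def score_use_case(ratings):
    """Weighted average of 1-5 ratings across the criteria."""
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    return total / sum(CRITERIA_WEIGHTS.values())

def rank_use_cases(use_cases):
    """Return (name, score) pairs ordered from highest to lowest score."""
    return sorted(
        ((name, score_use_case(r)) for name, r in use_cases.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

if __name__ == "__main__":
    candidates = {
        "call summarisation": {"frequency": 5, "safety": 4,
                               "business_impact": 4, "data_readiness": 4,
                               "feasibility": 5},
        "annual report draft": {"frequency": 1, "safety": 3,
                                "business_impact": 3, "data_readiness": 2,
                                "feasibility": 3},
    }
    for name, score in rank_use_cases(candidates):
        print(f"{name}: {score:.2f}")
```

In practice the criteria and weights would come out of the workshop, and, per the "beachhead" advice, the item you build first might deliberately be a cheaper, lower-ranked quick win rather than the top-scoring one.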
00:08:06:11 - 00:08:19:16
Speaker 1
Yeah, I like the idea that it's not a silver bullet and that the best way to start is by starting. That definitely resonates, and it actually reminds me of the early days of analytics and dashboards: thinking about, well, why do you need a dashboard if it's not something you need to check all the time?
00:08:19:16 - 00:08:36:07
Speaker 1
So it's very similar in the AI context. Claire, I'm interested in your point of view: AI is changing not only what we create, but also how we collaborate. How are you seeing that shift the skills and mindsets of people in AI-powered workplaces?
00:08:36:09 - 00:08:58:24
Speaker 3
I guess the analogy I would make is that just as the internet opened up access to information that was previously held only in an expert's mind or in a library, generative AI is definitely opening up access to expertise and intelligence, to the point where I, with very poor dexterity, fairly limited creativity and narrow expertise, can now produce a professional-looking image.
00:08:58:24 - 00:09:19:00
Speaker 3
I can draft a contract, I can produce a podcast in a matter of minutes. And of course, that's creating a bit of fear that we've created an environment where expertise and intelligence are no longer in short supply. Those fears are founded in some contexts, where stakes are not high and maybe we just need a very standard type of output.
00:09:19:02 - 00:09:45:22
Speaker 3
But the thing is that the real world is messy and problems aren't well defined. People seek novelty, and the experience that we've built up as humans in the world is not replicable by artificial neural networks, the building blocks of AI. So I would say that in environments where accuracy, quality, human experience and trustworthiness are important, we're still going to need humans to deal with that novelty and those poorly defined problems.
00:09:45:24 - 00:10:25:20
Speaker 3
And importantly, most of all perhaps, we need our metacognitive skills, a very high-level kind of thinking. It's the ability to think about your own thinking, to engage in things like weighing possible strategies, planning, evaluation and monitoring. AI, as the other panelists have said, is not self-aware, so it can't provide metacognition. So our role will be to understand how the AI works, what it does well and what it doesn't do well, to be self-aware enough to understand our own limitations, what we're good at and what we're not so good at, and to understand how best to combine those two things for the task in front of us.
00:10:25:23 - 00:10:30:14
Speaker 3
And I think that's going to be the way we work in the future.
00:10:30:16 - 00:10:44:05
Speaker 1
Absolutely, and we can already see part of that happening. I've been having to step back and really ask myself, what am I actually trying to achieve here, and why isn't it giving me the desired outcomes? And I know that's a very transactional approach to what we're talking about here, but that's...
00:10:44:05 - 00:10:45:00
Speaker 3
That's metacognition.
00:10:45:00 - 00:11:04:00
Speaker 1
That's right, that's right. Yeah, exactly. But I can tell as well, you know, even when I put things a certain way, there is an inherent bias in some of the outputs. So, David, I'm very interested in your perspective: how do we stop the reinforcement of bias through our prompting, through our work with AI?
00:11:04:02 - 00:11:36:12
Speaker 4
Well, I think a lot of people don't understand that AI is built on a foundation of data, and you have to have good data. So one of the things I look for when we're training a machine learning model is a diverse set of data that's representative. For instance, I live in Melbourne, and if I were to take a data set for, say, household income or household attributes from Toorak, which is quite a lovely, affluent suburb, that particular data set doesn't necessarily represent everyone who lives in Melbourne.
00:11:36:14 - 00:12:02:05
Speaker 4
So one of the things we do is bias auditing. Again, when we talk about that human cognition, it's actually a human who looks at the data set and asks, is this representative of the cohort of the population that I'm trying to use to train this model? That bias auditing allows us not only to identify where there is bias in a current data set; there's also often historical bias.
00:12:02:07 - 00:12:22:13
Speaker 4
There's historical bias in the way that data was collected and the way that people were described, so we have to be sure that we get rid of that as well, as much as we can. And then we may also need to do data augmentation, which is where we fill the gaps: if we have underrepresented groups, we go and find that data.
00:12:22:13 - 00:12:47:24
Speaker 4
So I tell people, what I do is get data together for smart people to do amazing things. It's a great job, but I'm really a plumber: I'm trying to give them all of the data infrastructure they need. And that data requires careful curation. For bias to be removed or replaced, we need to find those biased examples, be able to recognize them, and then take some action on that.
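As a rough sketch of the bias-auditing idea David describes, the check below compares a training sample's demographic make-up against known population proportions and flags groups that are badly under-represented, which would be candidates for data augmentation. All group names, shares and the tolerance threshold are invented for illustration:

```python
# Hypothetical bias audit: flag groups whose share of the training
# sample falls well below their share of the real population.
# Groups, proportions and the tolerance are illustrative assumptions.

from collections import Counter

def audit_representation(sample_groups, population_shares, tolerance=0.5):
    """Return {group: (sample_share, population_share)} for every group
    whose sample share is below `tolerance` times its population share."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    flagged = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / n
        if sample_share < tolerance * pop_share:
            flagged[group] = (sample_share, pop_share)
    return flagged

if __name__ == "__main__":
    # A sample drawn mostly from one affluent suburb...
    sample = ["high_income"] * 90 + ["middle_income"] * 8 + ["low_income"] * 2
    # ...versus made-up city-wide proportions.
    population = {"high_income": 0.2, "middle_income": 0.5, "low_income": 0.3}
    for group, (got, want) in audit_representation(sample, population).items():
        print(f"{group}: {got:.0%} of sample vs {want:.0%} of population")
```

A real audit would be done per protected attribute and combined with the human review David emphasises; the automated check only surfaces candidates for a person to examine.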
00:12:48:01 - 00:12:49:01
Speaker 2
Yeah.
00:12:49:03 - 00:13:06:23
Speaker 1
I remember when we started doing machine learning in my group: we didn't have any data engineers, and I was so focused on data scientists. And I'm thinking, the plumbing's not in; you know, the tap is on, but the water just doesn't flow. So that's really important in thinking about the maturity that we're all going through in the sector as well.
00:13:07:03 - 00:13:17:04
Speaker 1
David, staying with you: what are some practical steps organizations can take to stay transparent about their use of AI? How have you approached that?
00:13:17:06 - 00:13:38:01
Speaker 4
Well, I think it's got to be full disclosure. For commercial reasons, a lot of people don't want to explain exactly what they're doing, but I think disclosure is needed beyond the terms and conditions whenever AI is in use, whether through a chatbot interface or through a letter like one I recently got from a financial institution.
00:13:38:01 - 00:14:01:24
Speaker 4
I received a document recently and it said, part of this document has been prepared by generative AI, and so on. In that case it was signed by a human, and the human had reviewed it. So I think full disclosure is required, because again, I want to know where information comes from.
00:14:02:01 - 00:14:23:09
Speaker 4
And I don't want that disclosure just in the terms and conditions or hidden somewhere on the website; I prefer the open approach where you tell me. Likewise, if I see a report or see something on the news, attribution is also very important, as in the creative world: saying where that particular piece of content came from, and how it was generated.
00:14:23:11 - 00:14:26:04
Speaker 4
So I think those are the key principles for me.
00:14:26:06 - 00:14:46:24
Speaker 1
Yeah, I agree. And we can translate that: we want to know where our clothes are made and where our food was grown, so why wouldn't we want to know where our data came from as well? I do think, though, given the massification of these tools and the access people have to them, and going back to Claire's point around metacognition,
00:14:47:01 - 00:15:13:12
Speaker 1
in my view it kind of homogenizes the language. Talking about transparency, you can kind of tell what's been generated with AI, because it all kind of looks and feels the same, right? Em dashes are an obvious giveaway, and there are some other tells. And given how uniform the outputs you see from gen AI are these days, Claire, how can businesses avoid losing diversity of thought and expression when they're using these tools at scale?
00:15:13:14 - 00:15:40:02
Speaker 3
I think one of the answers is that businesses need to allow workers the flexibility to determine how best to use it. The evidence shows really clearly that workers who use these tools are, on average, achieving both productivity and quality gains from them. But at the enterprise level, those benefits are much harder to see. So part of the answer, I think, is to allow workers to decide where it adds value on a particular task, rather than prescribing that.
00:15:40:04 - 00:15:58:03
Speaker 3
But I think the other part of the answer is what I'm calling good generative AI hygiene. And what I mean is that we have to choose the right ways of working with generative AI. For example, we know that as humans, our responses tend to be anchored by the first piece of information that we receive or the first solution we're given.
00:15:58:05 - 00:16:19:22
Speaker 3
So don't go to the generative AI first, because then you're going to be anchored on its thinking rather than adding value as a human worker. Start with your own response, and then go to the generative AI for improvement on that. Another hygienic way of using generative AI is to use it as a source of ideas and information, rather than a source of answers.
00:16:20:03 - 00:16:29:01
Speaker 3
So it's not just about maximizing performance on a task, but maximizing things like creativity and diversity in the way you use generative AI.
00:16:29:01 - 00:16:53:12
Speaker 1
Okay. Do the other panelists have a point of view on the homogenization of AI outputs? Or we can move on to the next question, perhaps. I guess for Alex: you've seen businesses working to gain trust around their use of AI. What does transparency look like in your world?
00:16:53:14 - 00:17:12:17
Speaker 2
Yeah, it's a really good question. I think there are a few parts to that. First of all, when we build things, from a transparency perspective we intuitively want to know how the system has arrived at an answer. And so we tend to encourage bulk testing wherever possible, as a way to validate and understand output.
00:17:12:18 - 00:17:32:13
Speaker 2
A really simple example: if we're going to build a chatbot or a little private information assistant, we would effectively run hundreds and hundreds of tests over it and simulate conversation. Then we'd assess those responses against the ground truth to understand how it's actually performing.
00:17:32:15 - 00:17:54:24
Speaker 2
Once you've got that, it starts to give you an idea of how accurate it is. The next thing is sharing that. We usually start with internal employees, by the way; even if you're planning to do something external, we'd say let's test it internally first. A simple example would be an information assistant to speed up, say, people in a call center finding information rather than digging through it manually.
00:17:54:24 - 00:18:23:15
Speaker 2
Right, so they might just do Q&A with it. Then you might say, right, it's working really well, we'll put it out as a customer chatbot, or maybe into a voice agent in the future, but for now it's internal. And the first thing you do is say, right, here are the test results across a whole lot of different types of questions people have asked through the testing. What we can then do is use that to help with the training, so we can say, hey, as you're using this, if you ask this type of question, maybe a request for a list of information, we found
00:18:23:15 - 00:18:46:05
Speaker 2
that in the tests it was maybe 80% accurate. So we know it's not necessarily always going to be accurate; use some judgment before you just respond. But in other cases, maybe a very specific question, like the boiling point of some chemical, we might find that those answers are very close to 100%.
00:18:46:10 - 00:19:06:24
Speaker 2
And so part of that is being able to go back to the training to help people understand how it is actually performing and where we expect it to work well. Here are the test results, so we can validate that it's performing well. And then as we go, we're testing it internally first to gain feedback and make further improvements, and then finally taking it out further.
00:19:06:24 - 00:19:25:16
Speaker 2
I think doing that enables a better path to trust, because we're actually testing it; we're not just taking it at face value. And I think one of the biggest risks is that it sometimes seems so easy to build something. You know, I can go and use a tool and suddenly, bang, I've got this amazing thing.
00:19:25:18 - 00:19:51:10
Speaker 2
But I haven't actually tested it at volume, particularly for enterprise and for business, to see whether it consistently performs. So we find that taking a step back and having a bit of a methodology, which we automate so it's not onerous on our clients, gives you that extra check and balance. It's really valuable when it comes to ensuring you can be pretty confident about how it's going to perform in the wild.
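The bulk-testing approach Alex describes could be sketched along these lines: run a batch of questions with known ground-truth answers through the assistant and report accuracy per question type. The `fake_assistant`, the categories and the exact-match scoring are all simplifying assumptions here; a real harness would call the actual model and use a more forgiving comparison:

```python
# Hypothetical bulk-testing harness: score an assistant's answers
# against ground truth, broken down by question type, so users can be
# told e.g. "list questions are about 80% reliable, use judgment".
# The assistant, questions and exact-match check are illustrative.

def evaluate(assistant, test_cases):
    """test_cases: list of (category, question, expected_answer).
    Returns {category: accuracy between 0.0 and 1.0}."""
    totals, correct = {}, {}
    for category, question, expected in test_cases:
        totals[category] = totals.get(category, 0) + 1
        if assistant(question).strip().lower() == expected.strip().lower():
            correct[category] = correct.get(category, 0) + 1
    return {c: correct.get(c, 0) / totals[c] for c in totals}

def fake_assistant(question):
    # Stand-in for a real model call; only answers one question correctly.
    return "100 C" if "boiling" in question else "not sure"

if __name__ == "__main__":
    cases = [
        ("factual", "What is the boiling point of water?", "100 C"),
        ("factual", "What is the freezing point of water?", "0 C"),
        ("list", "List our office locations", "Melbourne, Sydney"),
    ]
    for category, accuracy in evaluate(fake_assistant, cases).items():
        print(f"{category}: {accuracy:.0%}")
```

Per-category results like these are what would feed the internal training Alex mentions, telling staff which kinds of questions to double-check before passing an answer on.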
00:19:51:12 - 00:20:13:21
Speaker 1
Fantastic. Shifting gears into skills and productivity and how we might see AI influencing them: there's a lot of concern and fear around job losses, and then another camp that's talking about augmentation rather than replacement. What do you tell organizations who are anxious about what that means for them?
00:20:13:21 - 00:20:19:01
Speaker 1
And their work?
00:20:19:03 - 00:20:19:17
Speaker 2
Is that.
00:20:19:19 - 00:20:22:21
Speaker 1
Yeah. Yeah. I mean, open it up to whoever. Sorry. Yeah.
00:20:22:23 - 00:20:26:01
Speaker 2
You go, David, and then I'll fill in the gaps later if needed.
00:20:26:03 - 00:20:47:05
Speaker 4
There you go. So I think it is a question of augmentation and not replacement. When we look at history, if you think about the Industrial Revolution, it actually created new types of work. AI will eliminate some roles, but it's not as widespread as I think the media would like you to think.
00:20:47:07 - 00:21:10:24
Speaker 4
So my advice for folks who are worried about this is to upskill themselves strategically. In other words, you want to work alongside AI. And you want to remember that it's your creativity, emotional intelligence, complex problem-solving skills and ethical judgment that AI does not have.
00:21:11:01 - 00:21:35:11
Speaker 4
So in your role, regardless of your specialty or your domain or what you'd like to do, you've got to stay current with what's happening in your field in terms of tools and technologies where AI is being used. And you also want to look for an organization that will actually provide training and support for you to use these new tools, rather than treating them as a replacement.
00:21:35:11 - 00:21:57:22
Speaker 4
I tend to view organizations that invest in people and skills as a little more desirable to work for than one that just thinks, oh, I can get rid of five people in that department with AI. So you really want to find that home where you're going to be able to grow and learn. And as in the Industrial Revolution,
00:21:57:24 - 00:22:04:16
Speaker 4
you'll have a job. It may not be the same job you had before, but you'll be able to continue on.
00:22:04:18 - 00:22:21:16
Speaker 1
We've seen that, right? Businesses that have fired a group of people have ended up rehiring them, because the AI wasn't able to do the work and there were issues with the outputs. So I think that's always going to be a little bit uneven in its distribution, but you can tell that things are gradually changing.
00:22:21:16 - 00:22:34:00
Speaker 1
And skills needs are changing with them. Alex, there are lots of productivity gains out there. What are some real-world examples of AI being used effectively that others can learn from in this space?
00:22:34:02 - 00:22:56:16
Speaker 2
Yeah, there's a lot out there. First of all, I'd say that, agreed, I'm not seeing a lot of job losses, to be perfectly honest. And I think part of it is that all of our jobs are very diverse. I think there was some study that looked at the 200 or so tasks that any individual person might do, and about 30% were, you know, automatable, for want of a better word.
00:22:56:16 - 00:23:28:03
Speaker 2
So there's a lot left. In the business cases I'm seeing, although labor savings is floating around in there, it's actually about doing more with the same. Call centres are a really good example. If you think about a call, it might be eight minutes talking to a customer and three minutes in what we call wrap, which is the time you take to write a summary of the call. You can do some amazing things even with off-the-shelf products that will summarize the call and save it straight back into the CRM, potentially saving two
00:23:28:03 - 00:23:45:02
Speaker 2
minutes of that wrap time. And that's a massive productivity gain. We did some work recently with an organization where we did that, and all of their dashboards in the call centre went from red to green. They didn't sack anyone; instead, they said, right, first of all, our customers are happy because they're actually getting a human rather than hold music.
00:23:45:04 - 00:24:08:17
Speaker 2
And secondly, we're going to take some of that inbound time and point it at outbound calls, so we're actually going to make more money, because we're going to do more selling and more proactive work. So I think it's about looking at your organization and asking, what can we do to increase our competitive advantage using AI, or, defensively, what can we do so that we're not caught out by change?
00:24:08:19 - 00:24:29:04
Speaker 2
There's also a whole lot of information caught up in documents that people just don't have the opportunity to go through manually. In financial services, we know they're crunching through every single company report and doing things they just couldn't humanly do previously. And I imagine they listen to every podcast ever published as well.
00:24:29:04 - 00:24:49:06
Speaker 2
They want to pick up the little messages that a senior manager might say inadvertently when they're being interviewed, to try and figure out extra signals. So it's all about thinking: how do I use AI as a tool to help me be more competitive, more productive, or to provide a better experience for my customers? That's what we're seeing overriding everything else.
00:24:49:08 - 00:24:59:08
Speaker 2
It's about doing more with the same, being more adaptable and trying to lead the market. So that's what I'm seeing first-hand.
00:24:59:10 - 00:25:25:24
Speaker 3
So, I mean, I guess I'd say from a research perspective, there are a couple of different things going on here. One is that the AI has changed. Before, it could just automate a very small, routine task. Now it can understand the bigger-picture goal that you're working towards; it can keep that in memory and respond to feedback and direction, so that it's working with you over time on a shared objective.
00:25:26:01 - 00:26:03:15
Speaker 3
And that means we can now work with it in a collaborative way, which really shifts the focus from AI as an automation tool to AI as an augmentation tool. To understand which way AI is operating in the labor market, researchers have adopted two different approaches. One is to look at which occupations perform the tasks that AI can now perform. They tend to be fairly high-skill, high-qualification roles, things like accountants and actuaries, who are now doing tasks that AI can do. But we're not seeing any decline in demand for workers in those roles.
00:26:03:17 - 00:26:26:11
Speaker 3
The other thing researchers have been doing is using job ads, taking the hiring of a worker with AI skills as a signal that a firm is adopting AI, and then comparing it with similar firms that haven't hired any AI-skilled workers to see what differentiates them. And we're seeing absolutely no evidence that the AI-adopting firms are hiring fewer workers.
00:26:26:13 - 00:26:47:05
Speaker 3
And that's perhaps because of the problem of bundling: much of the work that we do requires a human to do at least some of it. You know, if you're a nurse, a human has to greet the patient, has to help them get out of the bed, make them feel understood, or provide empathy and support.
00:26:47:07 - 00:27:06:24
Speaker 3
And as long as the AI can't do all of it, you can't take the human out of the equation. So that's why, even though AI can do part of these jobs, you still need the human, and the human is just able to add value in new ways. The final point is that technology is a tool, and humans determine how we use that tool.
00:27:06:24 - 00:27:20:24
Speaker 3
And so it's the choices we make, with our regulation and with our consumer choices, where we prefer the human-mediated experience rather than the AI-only experience, that will determine how this plays out in the long run.
00:27:21:19 - 00:27:22:12
Speaker 1
David, I think.
00:27:22:12 - 00:27:24:16
Speaker 3
I may have come up with more than two things there, but never mind.
00:27:24:16 - 00:27:39:18
Speaker 1
No, so valuable, Claire. I think that perspective from research is really important. David, from your perspective, what kind of frameworks would you recommend for people as they're trying to get started on this journey?
00:27:39:20 - 00:27:59:22
Speaker 4
Well, just like Alex, I think one of the most important things for me is a clear business case, other than "everybody else is doing it", because that never works out well. So what we're looking for to start, and the first part of a framework for working with AI, is a specific problem that AI can solve.
00:27:59:24 - 00:28:22:17
Speaker 4
And so once we do that, we start looking at stakeholder mapping. So who's going to benefit from this? Who's going to be responsible for building, maintaining and securing this solution that we've created? And there are some existing frameworks; NIST, N-I-S-T, is one of them. And what we want to look at first is a governance framework and some policies.
00:28:22:17 - 00:28:50:12
Speaker 4
And I know that sounds very strange, because as a technology person, I want to start building. That can go on in parallel while you do some proofs of concept or try to understand the solution. But you really want that governance framework, and you want a cross-functional team creating it for your organization, and also developing those policies on what we will use gen AI, or AI, I use the general term, for and what we won't, which is quite key.
00:28:50:14 - 00:29:08:04
Speaker 4
And then also, a lot of people forget about monitoring and reviewing in those frameworks. So whatever solution we put into place, let's have a process with some automated monitoring, of the answers it's giving if it's a chatbot, for example, but then let's also keep the human in the loop.
00:29:08:06 - 00:29:21:13
Speaker 4
So we're doing reviews of that information that we're providing to people, to make sure that's accurate, unbiased and fair. So NIST is definitely a good place to start if you want to go have a look.
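David's monitoring point can be sketched in a few lines: automated checks run on every chatbot answer, and a random sample is also routed to a human reviewer so a person stays in the loop. This is only an illustrative sketch; the flag terms, the sample rate, and the `monitor_response` helper are all invented for the example, and a real deployment would use proper classifiers and the governance policies a framework like NIST's recommends.

```python
import random

# Hypothetical flagging rules -- a real deployment would use richer checks
# (toxicity classifiers, grounding scores, policy keyword lists, etc.).
FLAG_TERMS = {"guaranteed", "medical advice", "legal advice"}

def monitor_response(answer: str, sample_rate: float = 0.1, rng=random.random):
    """Return (flagged_for_review, reasons) for one chatbot answer.

    Automated checks run on every answer; a random sample is also
    routed to a human reviewer so people stay in the loop.
    """
    reasons = []
    lowered = answer.lower()
    for term in FLAG_TERMS:
        if term in lowered:
            reasons.append(f"contains risky phrase: {term!r}")
    if not answer.strip():
        reasons.append("empty answer")
    if rng() < sample_rate:
        reasons.append("routine human-review sample")
    return (len(reasons) > 0, reasons)
```

Anything flagged goes into a human review queue, which is where the accuracy, bias and fairness checks David mentions would actually happen.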
00:29:21:17 - 00:29:43:05
Speaker 1
Yeah, fantastic. I guess before we hand over to a couple of audience questions, I wanted to ask the panelists for one principle that they would apply to AI in their work tomorrow. If you could each give everyone one principle to take away as they go to apply this to their AI work, what would it be?
00:29:43:07 - 00:30:09:08
Speaker 3
My answer, and it's because I struggle so much with this myself, is find time to keep learning. I know the pressure is always to be more productive, but AI is changing so fast that the leading large language model on the benchmarks changes on a daily basis. So even if you aren't one of the early technology adopters, and I'm not, it's a really good idea to be connected with those who are.
00:30:09:09 - 00:30:31:23
Speaker 3
And that might be a colleague, or it might be following a really good blog. One of the blogs I love is called One Useful Thing, by Ethan Mollick, and he explains things in a really user friendly way. So I would just say make sure you're keeping up to date with the ever changing answer about what best-practice use of AI looks like.
00:30:31:23 - 00:30:33:17
Speaker 1
I think, what about Alex or David?
00:30:33:19 - 00:30:51:18
Speaker 2
Yeah, I love that. My takeout would be: testing is everything, right? So if you test it, you know what you're dealing with. There's so much talk about hallucinations, and fundamentally I boil it down to this: a hallucination is either a lack of information going into the context or the wrong prompt.
00:30:51:20 - 00:31:10:06
Speaker 2
Because the AI effectively is just a summarization tool. So if you don't give it enough information, or you give it confusing information, it's going to come up with hallucinations. And so the best thing you can do is test, and then you actually get to a point where you can be very, very confident. So, yeah, the word hallucination to me actually just tells me that
00:31:10:08 - 00:31:20:24
Speaker 2
something else has probably not been done effectively enough, particularly in an enterprise sense. So, yeah, testing, for me, that's the one that I look at.
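Alex's "test everything" principle often takes the shape of a small regression suite for prompts: fixed context/question pairs, each with a fact the answer must contain. The sketch below is hypothetical; `ask_model` is a stub standing in for a real LLM call, and the test cases are invented. The shape of the harness is the point: if answers stop containing the facts the context supplies, you have a context or prompt problem rather than a mysterious hallucination.

```python
# Minimal sketch of prompt regression testing. `ask_model` is a
# stand-in stub; a real harness would call your actual LLM endpoint.

def ask_model(context: str, question: str) -> str:
    # Stub: a real call would send the context and question to an LLM.
    return f"Based on the policy: {context}"

# Each case: (context given to the model, question, fact the answer must contain)
TEST_CASES = [
    ("Annual leave is 20 days per year.",
     "How much annual leave do I get?", "20 days"),
    ("Remote work requires manager approval.",
     "Can I work remotely?", "manager approval"),
]

def run_prompt_tests(cases=TEST_CASES):
    """Return the list of failing cases (empty list means all passed)."""
    failures = []
    for context, question, must_contain in cases:
        answer = ask_model(context, question)
        if must_contain.lower() not in answer.lower():
            failures.append((question, answer))
    return failures
```

Running the suite after every prompt or model change is what turns "I'm pretty confident" into the measured confidence Alex describes.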
00:31:21:01 - 00:31:40:24
Speaker 4
And lastly, for me it's about AI responsibility. You've got to keep the human in the loop, so don't let AI make a final decision alone; you want to review those outputs before you act on them. Likewise, you want to maintain your expertise, because your particular skills and expertise will often guide you on whether that's the right answer.
00:31:41:03 - 00:32:03:20
Speaker 4
So you've got to think about, what decision am I asking AI to help me with? What are the consequences if we get it wrong? And then you want to align that to your professional knowledge. I'm an IT guy from way back, and a lot of times when I ask gen AI models for recommendations to solve a particular problem, I look at that and go, but is that it?
00:32:03:22 - 00:32:24:21
Speaker 4
Or is my experience telling me it could be something else? You've also got to be careful with any AI output: am I comfortable putting my name on what's generated, and would I be able to explain this to someone where it has a negative effect? And that's the other key.
00:32:24:21 - 00:32:34:17
Speaker 4
So there's a responsibility for all of us to use AI responsibly. Hey, how do you like that, I used the word in its own definition. So we definitely want to keep that human in the loop.
00:32:34:19 - 00:32:55:08
Speaker 1
Great. And nobody asked me, but mine would be to stay curious. Claire, your point around making time to keep learning is really important. My group have a share session every week where we come back with all the tips and tricks that we've learned, and, you know, one person's heartache over a bad prompting experience has saved us countless hours.
00:32:55:08 - 00:33:21:14
Speaker 1
I can't recommend it enough. There's no shame in it; everyone should be using it. It's absolutely the way for us to continue to improve our skills. But we need to be honest about what's working and what's not working, and how sometimes the outputs weren't exactly what we were hoping for. So yeah, it's staying curious and carving out time, even if it seems like I'm just spending more time in the prompt; it's been really important for us over here too.
00:33:21:16 - 00:33:30:16
Speaker 1
So I'm gonna hand over to a couple of the audience questions that have come through. One of them is...
00:33:30:16 - 00:33:39:04
Speaker 1
All right, so if the panelists can tell me: if they could only recommend one non-generative AI tool for businesses, what would it be?
00:33:39:04 - 00:33:44:15
Speaker 1
A non-generative AI tool, just to max out my enunciation here.
00:33:45:23 - 00:33:51:10
Speaker 1
No takers? No one has a good recommendation on a non-generative AI tool, or generative AI, see what I...
00:33:51:10 - 00:34:12:14
Speaker 3
I think the issue, Eloise, is that the non-generative AI tools are very task specific. And so asking which one serves all businesses doesn't make so much sense, because you said we couldn't say generative AI, and that's the one that works across businesses, I think. Alex and David, would you agree with that?
00:34:12:16 - 00:34:30:19
Speaker 2
Yeah. I mean, I would say the one thing I would actually do, if you haven't already: I tend to get really lazy at typing now, and in general I would just say, whatever tool you like to use, if you have the ability to hit the microphone button and talk to it, that to me is my one thing I love.
00:34:30:21 - 00:34:34:20
Speaker 2
It's not quite answering the question, but I'll just share it because it just makes things so much easier.
00:34:34:21 - 00:34:57:12
Speaker 3
That's a really good point, because I type really well, and so for me, that is not the application. And that's one of the principles: use it for the stuff you aren't great at. But the best performing humans in a field still outperform the AI these days. So where you're strong, don't go to the AI, unless you just need to do something really fast,
00:34:57:12 - 00:35:00:23
Speaker 3
or you're staring at a blank page and you just need help filling it.
00:35:01:04 - 00:35:26:13
Speaker 1
Very good advice. Let me have a look at what else we've got. So there's a few questions about transparency, still continuing on that, and responsibility in practice. So young people feel like, you know, that's the technology they've now grown up with, and they're using it extensively, but they don't have that discerning knowledge that maybe we have, because we've become SMEs and domain experts in our own right.
00:35:26:13 - 00:35:50:22
Speaker 1
So we can tell when those outputs don't look quite right. You know, when you see a line chart and you go, oh, there are data issues happening here, you can tell that's not quite right. How can we help younger people get that expertise when they're just starting out? I know we've said take a curious mindset and double check your work, but what are the other ways we can help people be more transparent and responsible with their usage?
00:35:50:24 - 00:36:07:09
Speaker 4
Well, I think for me, one of my major influences when I was a kid was the school librarian, Miss Lemon, shout out to you, who was lovely, and she had the rule of three. So when I would come looking for information, she would say, now let's go, and at that time it was books.
00:36:07:09 - 00:36:42:06
Speaker 4
That's how old I am. She would say, let's find at least three sources to confirm the data that you're seeking, and she was very militant about that. I have young children, and one of the things when we talk about gen AI, which they're actually using in tools at school, like in Canva to create presentations, is that they need to be taught early that, yes, that's one source of information that has been generated for you, but you need to use your cognitive and thinking skills to say, well, can I confirm that from one or two other data sources?
00:36:42:08 - 00:36:58:16
Speaker 4
Likewise, I'm still a fan of full disclosure, of disclosing when they've done some part of the work using any of these tools, because I think that's only fair, to actually show their work.
00:36:58:20 - 00:37:05:06
Speaker 3
I don't know about that one, David. I started to go, well, I don't disclose that I used a calculator. I don't disclose that I use the internet.
00:37:05:08 - 00:37:05:12
Speaker 1
That.
00:37:05:13 - 00:37:41:16
Speaker 3
There's different ways of using it. So I wonder if that's going to become too cumbersome in the future, when, of course, we're all using generative AI to some extent. And it might be just, well, for a start, if it's fully AI, we need to make that clear. And I think that issue around understanding what data it's drawing upon, whether what you've produced is fully synthetic, if you like, versus being grounded in some source of truth, is also going to be important.
00:37:41:16 - 00:37:49:05
Speaker 3
But I feel like it might end up being a little unnecessary to say every single time you use generative AI.
00:37:49:07 - 00:38:07:12
Speaker 4
So I think we could do a whole webinar just on that. Because one of the things teachers do in schools, and at university level as well, often for creative writing, is give prompts. So when students don't have an idea for a story, they'll say, here are three prompts that we can write about.
00:38:07:14 - 00:38:32:03
Speaker 4
So the question of the day is, if a student uses gen AI for those prompts, which one of my children has, do we necessarily need to disclose, I've used gen AI for that initial idea? And I think, I know it's an ethical line, maybe not. But if you get gen AI to write the entire story, then I think that's a different situation.
00:38:32:03 - 00:38:34:19
Speaker 4
So I think there's a line we just have to.
00:38:34:19 - 00:39:00:09
Speaker 3
There's a line. But every time we use it? I'm not so sure. And I think a really interesting model that we're seeing in classrooms now is to get generative AI to explain something. You know, what was the plot of Emma and how has it influenced our thinking about love? And then the model was to get the students to come up with questions to interrogate the AI about that book.
00:39:00:11 - 00:39:30:05
Speaker 3
Now, that's what the students would be marked on, is how good the questions were for the AI, because you couldn't ask good questions without having some understanding of the book and how it differed, or what might be argued against or in favor of the AI explanation. So teachers are being quite creative in how they ensure that students are still learning to evaluate the AI, even though they're pretty much all using it.
00:39:30:07 - 00:39:31:16
Speaker 1
And that comes back to the question, we need...
00:39:31:16 - 00:39:33:03
Speaker 3
To take time off.
00:39:33:05 - 00:39:57:02
Speaker 1
Yeah, sorry. I was going to say that comes back to the human connection for me. Like we see in universities, we're increasingly using conversation and oral examination techniques, because you can see what people know about a subject in a conversation with someone. So it is more of a human element than submitting an essay, the written form of those things.
00:39:57:04 - 00:40:14:01
Speaker 1
But then on the flip side, you know, we'd say reference your website sources. So we are kind of walking that line, aren't we? But I agree with you. I never disclose that something was generated with the help of a calculator, or that all my presentations are supported with the use of PowerPoint.
00:40:14:03 - 00:40:16:23
Speaker 1
But, you know, the internet
00:40:17:00 - 00:40:40:12
Speaker 2
Yeah, it's funny, my point of view on this, just listening, it's such an interesting area. But when I think about business particularly and how it's used, right, I actually don't say we're AI specialists. I mean, we specialize in AI, but we're actually specialists in search. And the reason I say that is that, again, when you're looking at AI in a business context, it's very similar to a lot of research projects with students, right?
00:40:40:14 - 00:40:59:18
Speaker 2
You're actually having to think about, well, what sources am I going to find? And if I'm doing an information search, say across a company's internal set of policy documents, what I'm actually doing behind the scenes is running a search, taking snippets out of those policy documents, putting them in the prompt, and summarizing that response back out to the user.
00:40:59:20 - 00:41:16:17
Speaker 2
And so the key thing here is saying, well, how confident can I be of the accuracy? And really there are several sources: there are the snippets from the different policy documents, but then the AI model may have brought in its own knowledge, because the model itself was trained on something.
00:41:16:19 - 00:41:45:11
Speaker 2
And so it comes down to the confidence level, in terms of saying, am I really confident that the response actually came from the cited sources, or has it filled in the gaps based on its own knowledge? And that's what we try to test for, because it's not necessarily wrong if the AI model is pulling content from within its own knowledge, but there's much more likelihood of issues happening, because it's not going back to the source and relying on it.
00:41:45:11 - 00:42:08:09
Speaker 2
And so when we think about, you know, students or any work at all, it's really a search problem and a citing problem. Like your comment, David: if I've got three books cited for these things, and that's sort of my research, and there's a fourth thing that's sort of fuzzy that I can't really attribute to any of those three sources, to me, that's where you cite AI, going,
00:42:08:09 - 00:42:31:21
Speaker 2
well, I think it's right, or I'm pretty confident it's right, but I haven't been able to find a source to identify that. Now, that's easier said than done. But I think the key thing here is that the models themselves have some level of knowledge, but then there's also, most importantly, the search that actually adds information behind the scenes to the prompt, which then summarizes that in a different way that we can go along and take.
00:42:31:21 - 00:42:40:18
Speaker 2
And so it's about understanding, how much are we relying on the model versus how much are we relying on the external information to actually come up with the result?
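The search-then-summarize pattern Alex describes (often called retrieval-augmented generation) can be sketched very roughly. Everything here is illustrative: the documents, the naive word-overlap scoring, and the helper names are invented for the example, and production systems use real keyword or vector search. The last function captures his "fuzzy fourth thing": answer sentences that can't be attributed to any retrieved snippet are the ones to flag as coming from the model itself.

```python
# Toy documents standing in for a company's internal policy files.
POLICY_DOCS = {
    "leave.txt": "Employees receive 20 days of annual leave.",
    "remote.txt": "Remote work requires written manager approval.",
}

def retrieve(query: str, docs=POLICY_DOCS, top_k: int = 1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(query: str) -> str:
    """Assemble the prompt that would be sent to the model."""
    snippets = "\n".join(retrieve(query))
    return f"Answer using only these sources:\n{snippets}\n\nQuestion: {query}"

def unsupported_claims(answer_sentences, snippets):
    """Flag answer sentences with no word overlap with any snippet --
    the parts that likely came from the model's own knowledge."""
    snippet_words = set(" ".join(snippets).lower().split())
    return [s for s in answer_sentences
            if not set(s.lower().split()) & snippet_words]
```

The point of the sketch is the division of labor: the search decides what evidence the model sees, the model only summarizes it, and the attribution check tells you how much of the answer you can trace back to the sources.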
00:42:40:20 - 00:43:06:05
Speaker 3
Absolutely. You cannot be totally dependent on the model. You have to have the expertise to be able to critique what it has done, or to know how to verify what it has done. So we need skills ourselves, that's for sure. But I would say that sometimes those hallucinations that the AI provides are kind of useful. For example, I did some work looking at the future of truck driving, and what skills truck drivers would need with increasing automation.
00:43:06:07 - 00:43:29:01
Speaker 3
There is no answer to that question, but the AI can learn from the patterns it has seen in similar occupations that have automated to infer what might happen. And that's actually really useful for me. That's a starting point for us to think about it, that I can then evaluate based on other information. So there's going to be so many different ways of using this.
00:43:29:03 - 00:43:34:16
Speaker 3
And the key thing is that we know how to add value over and above what the AI is giving us.
00:43:34:18 - 00:43:51:24
Speaker 1
Love that, Claire. And I'm going to have to call it because we're out of time, but I think that's actually just crossed the final T. We're going to try to get back to the questions we weren't able to answer on the blog. I want to thank all of our panelists today, really riveting conversation, I could have, yeah,
00:43:51:24 - 00:44:11:13
Speaker 1
there's so much we could cover, it felt like we just scratched the surface, but I hope we can come back and do another one. If you had a takeaway from today's session, I encourage you to go post that on your LinkedIn stories; RMIT Online is giving away a couple of free Future Skills courses, so I highly recommend tagging RMIT Online in your takeaway from today's session.
00:44:11:15 - 00:44:24:19
Speaker 1
And if this has prompted you to think about changing your career, there's a session tomorrow on career pivots, the last session of the Skills Fest, so please go and register for that. But thank you again to all the panelists, and bye for now.