Five Good Ideas on how to think responsibly about generative AI

Five Good Ideas Podcast

Episode 6 | May 14, 2026 | 00:51:01

Show Notes

Jake Hirsch-Allen, head of Partnerships at The Dais, a public policy and leadership think tank at Toronto Metropolitan University, joined Elizabeth McIsaac, president of Maytree, for the latest Five Good Ideas session to share his five ideas on how to adopt generative AI responsibly and effectively. Drawing from the Responsible AI Adoption for Social Impact (RAISE) program – a national initiative through which Jake has trained over 500 non-profit professionals across Canada – Jake explored when and how to use AI, the risks you need to understand, and why your organization’s data matters more than you think.

Jake’s five good ideas:

1. Experiment using AI

2. Beware of the risks

3. Identify AI use cases

4. Create an AI use policy

5. Value your data

For the resources and a video of the session, visit: https://maytree.com/five-good-ideas/five-good-ideas-on-how-to-think-responsibly-about-generative-ai/


Episode Transcript

Elizabeth McIsaac (President of Maytree): Generative AI, or Gen AI, is everywhere. And many of us feel increasing pressure to adopt it. But how can we use it ethically and practically? And what principles and practical steps can help us navigate Gen AI thoughtfully? Today we're joined by Jake Hirsch-Allen, Head of Partnerships at The Dais, a public policy and leadership think tank at Toronto Metropolitan University. Jake has a strong background in tech and has been involved in many initiatives that guide Canadian organizations in how they can and should use technology responsibly and ethically, not just Gen AI. Most recently, he's been leading the Responsible AI Adoption for Social Impact program, or RAISE, a national initiative through which Jake has trained over 500 non-profit professionals across Canada. So Jake is well placed to share five practical and good ideas on how to adopt artificial intelligence. In his presentation, he will explore when and how to use AI, the risks you need to understand, the importance of creating a living AI governance policy, and how to use AI tools for fundraising, programming, and operations. So welcome, Jake.

Jake Hirsch-Allen: Thanks so much, Elizabeth. It's great to be here.

Elizabeth McIsaac: Over to you. I think all of us have a million questions in our heads about AI. I'm terrified of it. I have used it reluctantly once or twice. And I think I'm probably the classic "Tread carefully" and "Are you sure we have to do this?" kind of person. So that's who's going to be at this end of the conversation for you.

Jake Hirsch-Allen: Totally fair. I'm super excited to be here. Maytree has been an inspiration throughout my career, and only recently have I been able to connect with you via The Dais, where I've been for about nine months now, though I've been collaborating with them for many years. This topic is dear to my heart because I have been feeling the pressure you just described myself. I started feeling it most intensely when a fellow Torontonian and now Nobel Prize winner, Geoffrey Hinton, began writing about his fears around artificial general intelligence. To be clear, that is not what we're going to be talking about today, but it is very much a topic du jour, and it has often been confused with generative AI, which is what we are talking about today, in part because Hinton is one of the inventors of Gen AI, and, in that role, he's quite afraid that we're going to go from Gen AI to AGI. I am frankly less afraid than him.

Elizabeth McIsaac: Can you give us a definition of each? Let's distinguish between the two right at the outset.

Jake Hirsch-Allen: Gen AI is a relatively recent invention, but it builds on many existing forms of artificial intelligence that we have been using for many years now and which are baked into a huge amount of the software we use on a day-to-day basis. For example, semantic analysis and natural language processing (two technical terms that simply mean a computer's ability to understand human language and meaning) have evolved dramatically through machine learning, which is when a computer learns from our inputs and, in this newer form, generates content because of that learning. Gen AI combines those existing technologies with a newer one, and this is where Geoffrey Hinton, Yoshua Bengio, and others enter the picture. For many years, progress in machine learning felt frustratingly slow.
What those researchers contributed was the creation of neural networks: systems that genuinely resemble how our brains work, with interconnected neurons that fire in response to stimuli. Part of the reason we still don't fully understand how Gen AI works is precisely because it's based on these neural networks, which, like our own brains, remain somewhat opaque to us. Put it all together, and Gen AI is the latest form of artificial intelligence: a branch of the machine learning universe that is specifically able to, as the name suggests, generate content in response to a human prompt or input. AGI, or artificial general intelligence, sometimes called superintelligence, has not yet been achieved, and the timeline for its development is extremely unclear. It is loosely defined as AI that is smarter than a human in every way, capable of doing everything we can, but better. That's a very rough definition of an extremely complex topic, and we won't be going deep on it today.

That is partly the view we take at The Dais as a think tank, and it's shared by many others, like another Nobel Prize winner from the same year as Geoffrey Hinton, a guy named Daron Acemoglu, whom we hosted at our most recent annual conference, DemocracyXChange, here in Toronto, and who was on The Tonight Show last week as well. He won the Nobel Prize for his work on how institutions and technology shape our economy and society. And he would say that we should be less scared of AGI and more focused on how Gen AI is impacting productivity, our economy, and our labour market, and then preparing for the same for AGI and superintelligence. The reason, in part, he says, is because the timeline is so unclear. Right now, we have Gen AI. It is changing how people work on a day-to-day basis. Artificial general intelligence will likely come at some point in the future, and advances in Gen AI are making it more likely every day. But every decade for the past seven decades, literally since at least the 1950s, the leading artificial intelligence scientists have said, "In this decade, we will get to AGI," to artificial general intelligence or superintelligence, and it has not happened. So one thing we must bear in mind is that this timeline really isn't clear.

And the last thing I'll say on this topic is that the folks making the strongest arguments for the power of Gen AI, and for the likelihood that we'll get to AGI as a result of its influence or powers, are also the ones who have the most interest in describing it as super powerful. It is big tech and the frontier AI companies; it is their bankers, their investors, and their consultants who are all saying this generative AI technology should be used by everybody and could get us into one of the most dangerous situations ever, because it's so powerful that it's going to result in us inventing the next stage, which is artificial general intelligence. That is partly why our think tank and many others are suggesting we need to pour a little bit of cold water on this hype cycle. We need to breathe and understand that we are still in control of this technology. And since we are in control of it, since we retain the agency, we can choose how we're going to use it. The most powerful takeaway from Daron's visit, and from his more recent public remarks, was this: if we do still have agency, and we do, then we should not be passively surrendering to a narrative that frames AI as something happening to us. We should leverage that agency.
For perhaps the first time in history, we are living through a major technological revolution and we know it. We weren't aware we were in the Industrial Revolution while it was happening. This time, we are. And conversations like this one are part of that process. Let's take advantage of that awareness to direct this technology toward socially beneficial ends, to use it in a way that doesn't simply concentrate wealth and power in fewer hands, which, unfortunately, much technology and software have done over recent decades. Let's use it instead to build a better world, because, like many powerful general-purpose technologies, it can go either way.

Elizabeth McIsaac: Right. So, basically, I need to get my head out of the sand, cooler heads will prevail, and we will get started.

Jake Hirsch-Allen: I hope so. I'd put it more gently; I wouldn't say your head is in the sand at all. To the contrary, I think cooler heads will prevail, and conversations like this one are exactly how that happens. When Daron visited, his single ask was this: Please, have more conversations about how we want to be using AI in our society, and what kind of society we want to be creating. So much of the current dialogue implies we have no influence, no power. Giving people back a sense of agency, a sense that they belong in the process of shaping the world they live in, is genuinely powerful.

One more thought worth mentioning here: Canada is somewhat unusual in this landscape. We are among the lowest adopters of Gen AI among OECD and G7 nations, despite being relatively high adopters of technology overall and having a highly educated population. I initially interpreted that as troubling, as falling behind economically. I've begun to change my mind. I think it might actually be a natural human defence mechanism. Canada has a population that loosely resembles the world in its demographic diversity, and one that is relatively educated about both the productivity benefits and the potential risks and harms associated with technologies like AI. People are almost subconsciously resisting adoption. They're pushing back until they're more confident that this is something that will actually improve their lives.

Elizabeth McIsaac: That resonates.

Jake Hirsch-Allen: I'm glad. And I think that's partly why we're having this conversation, so that everybody here can begin having these conversations in their own organizations and homes. There's an interesting analogy. The Dais works on three subjects: tech, education, and democracy. On the education front these days, we spend a lot of our time on K-12 education, and specifically how kids are interacting with tech and with screens. For instance, we have a large initiative called Heads Up. And in that respect, one of the most striking findings from studies that we've been working on is that parents aren't talking to their kids about technology. Research suggests that up until around the age of ten, the average parent spends fewer than 30 minutes specifically discussing technology or screen time with their kids. In my view, that's deeply problematic, because it is through these conversations that we develop our own values, determine the path we want to take, and reclaim our agency.

Elizabeth McIsaac: Great. So you've got five good ideas for us.

Jake Hirsch-Allen: I do indeed. And I think these ideas are meant to be thought starters more than comprehensive explanations of how to use AI.
They're really meant to balance those two sides I just suggested: the productivity benefits and the risks.

The first one is about, to use your metaphor, pulling one's head out of the sand. And the best way to do that is by experimenting: not with sensitive data, not at an enterprise level, not across your entire non-profit, but rather as individuals in safe sandboxes, ideally using the most ethical large language models or chatbots that you possibly can. Begin experimenting so that you can teach yourself how these things work. I can begin by listing some examples of safer, more ethical models and of the kinds of prompts worth trying. Over the past nine months, we've taught almost 600 individuals, across almost that number of non-profits and foundations, how they can either use AI for non-profits or govern it. A big part of that work has been increasing comfort with a technology that does require some practice and experimentation to understand when it's going to get something wrong and lie, or hallucinate, as it's called, and when it's going to save you a ton of time because it can do something very quickly that would otherwise take many people many hours. So I'm happy to get into some of those examples, but to begin, in terms of experimentation, I think it's worthwhile looking at which models are more ethical and safer, and which organizations are peddling these models for their own financial benefit versus doing so in a manner that is really meant to help the population at large.

At the international level, Mozilla has done important work in this space. In Switzerland, there's a model called Apertus, made available through something called the Public AI Network; you can find it at publicai.co, and we'll drop that link in the chat. For organizations more concerned about the environmental impact of AI, there are also local models, ones that run on your own computer rather than pulling from massive remote data centres. Mozilla and the Allen Institute both offer models of this kind. I'll let people explore those websites directly, as they currently tend to require a higher level of technical knowledge than many non-profits without an in-house IT person or developer may have. But to the extent you can use a local model, it will use dramatically less electricity and fresh water than the large-scale platforms whose data centre footprints are, justifiably, increasingly in the news.

Elizabeth McIsaac: Just to clarify, when you say "local model": you've also used the phrases "large language model" and "small language model." Is the distinction a function of size? Because you're saying it's more ethical and smaller if it's local. Is that about proximity, or about the size of the model?

Jake Hirsch-Allen: It's actually both. Proximity matters in the sense that data doesn't need to travel across the internet and back. But the vast majority of the energy usage comes down to the size of the language model: how many resources were used to train it, and to run what's called inference, meaning actually posing questions to it. In both cases, a large language model is significantly more resource-intensive than a small one. There's currently a very active debate in the field over whether the future of Gen AI will be dominated by large or small language models. I think that's one of the healthiest conversations happening right now, because small language models could offer a path to both more sustainable use and far greater local control.
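To make "local" concrete, here is a minimal sketch of what prompting a locally hosted model can look like in Python. It assumes a local runtime such as Mozilla's llamafile is already running and serving its usual OpenAI-compatible endpoint on your own machine; the port, model name, and prompts below are illustrative assumptions, not details from the session.

```python
# A minimal sketch of prompting a locally hosted small language model.
# Assumes a local runtime (for example, Mozilla's llamafile) is already
# running and serving an OpenAI-compatible endpoint on this machine.
# The URL, port, and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local server: prompts never leave your machine
    api_key="not-needed-for-local-use",   # local runtimes typically ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # whichever model file the local runtime has loaded
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a small non-profit."},
        {"role": "user", "content": "Draft a two-sentence thank-you note to a volunteer."},
    ],
)

print(response.choices[0].message.content)
```

Because the model runs on your own hardware, neither the prompt nor the response crosses the internet, which is the heart of the sustainability and control argument for local models.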
In Canada, for instance, there is a long overdue and increasingly urgent debate around digital sovereignty: the degree to which we are able to control our own data and our own technological resources. Access to a large language model is part of that sovereignty conversation, particularly given that the vast majority of the models most people currently use (ChatGPT, Anthropic's Claude, Google's Gemini) are run by American companies, subject to American law, and in some cases led by individuals who, in my view, are exerting an unfortunately outsized influence on both US and global politics in ways that are harming Canada, including through the current trade dispute. If we want to reclaim some of that control as Canadians, we need to invest in more local models or build Canadian alternatives. At The Dais, we've been advocating for exactly that, alongside partners such as the SHIELD Institute and others.

Elizabeth McIsaac: So it's ethics, it's the environment, it's data control, it's sovereignty. All of these are criteria for how we choose which model to use?

Jake Hirsch-Allen: Precisely. And, frankly, that leaves folks feeling confused. To begin with, you can probably start experimenting with some of these larger language models using small experiments that don't use up a ton of resources and that don't put your organization at risk. Once you've become familiar and decided where, when, and how to use generative AI, at that point you can also decide which model to use. And the reason I say this is the most important part is that we spend a ton of time in our course trying to persuade folks to first do the organizational analysis, the strategic planning, to understand which challenges are appropriate for Gen AI or another technology. That's the step most organizations skip or don't spend long enough on. Most organizations will have a strategy already, but if you don't, it's worth having your board or leadership team spend some time on your non-profit's strategy. Once that's done, figure out which of your strategic goals, and even which more specific tactics and KPIs, would benefit from the efficiencies that Gen AI can bring, and we can list some of those shortly. And then finally, once you've gone through that process, once you've decided how to use it, that's when you begin experimenting at an organizational level instead of at an individual level, evaluating the costs and benefits of this technology in a manner that also, in the long term, allows you to do a serious evaluation of whether those costs are worthwhile.

Elizabeth McIsaac: Right.

Jake Hirsch-Allen: At this point, it's probably worth talking about some of the risks as well. My second big idea is to be aware of the risks. In this respect, I'm going to list a few of them here, and then we can dig deeper into those that are particularly unique to Gen AI. For instance, privacy and data protection is a risk in the use of all software. And as we'll talk about later, data, and how we structure, protect, and safeguard it, is one of the most underrated aspects of technology for all organizations, and particularly for non-profits. We'll get a lot deeper into data shortly. But it's important to be aware that the law at this point doesn't protect us in the way that we at The Dais believe it should, because Canada has yet to pass either an AI-specific law or an online harms law.
In fact, it's been many years since Canada even updated its national privacy laws. As a result, we have a bit of a vacuum where we're being left on our own to wrestle with these risks. Our recommendation is, therefore, to make yourself as aware as possible of the risks when doing that cost-benefit analysis.

A couple of risks are specific to Gen AI. One is your reputation. If you are using Gen AI to create content at scale, for instance from a marketing perspective, it can customize each answer per targeted recipient across thousands or hundreds of thousands of people. For that, you want to have a human in the loop, a person reviewing all that correspondence, because it will hallucinate, it will occasionally make mistakes, or it will be accurate in a manner that doesn't align with your organization's values or the image and reputation you want for your organization. With that in mind, you want to balance the additional marketing efficiency that Gen AI can create against some of the risks to your reputation. And the same is true across a variety of other potential risks that apply to all technologies but might be even scarier or riskier with Gen AI.

As an example, from a human rights perspective, Gen AI has made many selection processes, including hiring, faster and more efficient. But New York City moved very quickly, almost immediately after the release of Gen AI tools, to introduce regulations specifically addressing bias in AI-assisted hiring, and for good reason: the same technology that speeds up the process can also amplify existing biases. Efficiency and fairness have to be held in tension.

Back when I was in law school (I've been out of practice for about 17 years now), we spent a great deal of time on the fact that copyright law was falling into disrepair, no longer adequately protecting the artists and creators it was designed to serve. At the time, some of us were advocating for alternatives like Creative Commons or open data frameworks. Toronto, in fact, was a world leader in open data. The problem with both of those arguments now is that Gen AI has been scraping the entire internet in a manner that has further harmed copyright holders. The rights this legal system was originally created to protect, those of artists and content creators, have been dramatically changed by the creation of Gen AI, and I would argue limited and to some extent harmed. We need to update our thinking about that, just as we need to update our thinking about open data: it used to be a resource provided to everyone, but the value behind it has now been concentrated in the companies that have the capacity to scrape the entire internet and build these incredibly powerful, expensive models.

Elizabeth McIsaac: Whenever we talk about risk, my mind always goes to how we mitigate it. In the first example, reputational issues in outgoing messaging, I immediately think, "Well, you just put a human being at some place in the timeline, in the sequence." And similarly, if you're not putting human eyes on the process itself, even the shortlisting, how does that happen?
And I can recall when I worked at a university and HR sent me 300 candidates, and the ones that were right for the role were buried, because HR had its own sense of what the criteria priority was. So is it a matter of building human interaction into this along the way? Is that the mitigating strategy?

Jake Hirsch-Allen: I think that is part of it. The original creation of this course that we did for non-profits was the result of work by something called the Human Feedback Foundation, and they're all about what you're describing: ensuring that there is a human in the loop throughout as much of the process as possible. I think that will also safeguard us from some of the worries around the labour market and job loss, because it ensures, as we are predicting, that Gen AI might replace skills but should not replace entire jobs. That's in part because you're now going to have a human with different skills; their skill might be reviewing AI-generated content instead of creating that content de novo.

I'll jump to my fourth idea and then we can come back to the third. My fourth good idea is about creating an AI use policy. You can call it a use policy, an AI governance policy, or just an AI policy. The point is, having a policy that crystallizes the organization's thinking and values around AI use is extremely helpful, both in encouraging folks to use it in a productive manner, increasing their efficiency through more use, and in mitigating some of these risks. Partly, it mitigates these risks by covering your organization from a legal perspective. A, it informs everybody else that you're using AI and how you're using it. B, in some cases, it'll even indicate which providers you're leveraging in your use of AI. And here I want to be careful, because I was a lawyer but I'm no longer a lawyer, so this is not legal advice. But you can, in essence, through these policies, shift some liability to those who are creating these large language models by saying, "Look, buyer beware: if you, for instance, accept communications from us, be aware that some of the content in those communications is created by Gen AI from this company, and therefore be aware that there are risks that could be associated with it." That's one specific example of how the policy is covering you.

Right now, the challenge is that those policies would usually refer to laws that don't exist yet in Canada, which, again, we at The Dais and many others are advocating for. There's overwhelming public support right now for an online harms bill that would, at a minimum, create a regulator and allow us to be protected online from some of the harms that we are already protected from in person in our everyday lives, including harms that relate specifically to Gen AI. In my opinion, every organization that uses AI should have an AI use policy. As part of the course that we teach, we walk organizations through how to develop one. In fact, the assignment is the development of an AI use policy, which hundreds of non-profits are now leveraging. We're proud to have supported libraries across the country in developing their AI use policies so that they can protect their members, their employees, et cetera.

Elizabeth McIsaac: Is there any template out there that you work from, or do I simply type into ChatGPT, "Give me a non-profit AI use policy"?
Jake Hirsch-Allen: I would not do that. For one, Gen AI often produces the average of the internet, in this case the average of the internet's non-profit policies. Even just adding the word "non-profit" will make the output more specific and a bit better, but still not good enough for your organization. And two, whatever any Gen AI platform from a frontier AI lab (Anthropic, Google's Gemini, et cetera) produces is influenced by the values of that lab. Each of those organizations has created something called a constitution that influences how their AI interacts with you: what it is allowed to say, what it's not allowed to say, what images it's allowed to create, et cetera. And you want to make sure that the content that's being created, particularly for your AI use policy, follows your values, not Google's or Microsoft's or OpenAI's.

I spent ten years at LinkedIn, five of them at Microsoft after LinkedIn's acquisition. Part of the reason I left to come to this think tank was that I felt my values were no longer sufficiently aligned with theirs. A big part of what we're doing right now at The Dais is trying to push back on that dominance, even the influencing of our brains. There's a fantastic researcher here in Toronto named Jutta Treviranus, who describes a kind of cognitive surrender to some of these generative AI models. We don't want the policies that govern them to be influenced by them.

I'll give one specific example of how this can go wrong. A German member of the European Parliament, when asked about the Digital Services Act, put the question to several of the frontier models, Anthropic's Claude and Google's Gemini among them. This European legislation was created to protect Europeans from some of the harms associated with digital services. The responses she got said instead, in essence, "This is legislation that hurts productivity, that hurts competition, that will make your life worse." The answers basically came from big tech. They were being filtered through the constitutions of these LLMs in a manner that produced an answer that was not only untruthful but literally counter to the purposes of the person asking the question. So we have to be extremely careful with how we use Gen AI, particularly when we're relying on it for something related to values or policy or law. And, frankly, we should be careful about how we use it overall.

More broadly, we need to be careful because humans are biologically programmed to respond to language. More than any other animal, when we hear someone talking to us, we feel kinship, we feel attachment, we feel like it's one of us. The result is that we trust it. We are inherently more trusting of Gen AI than of almost any other technology ever created because it resembles us in the way it interacts with us. In fact, it's become sycophantic. It will almost always want to please us, which has, in part, resulted in disastrous consequences for some of those using it, for instance, for therapeutic purposes, where we see people taking their own lives because the AI is just trying to give them the answer they're looking for instead of questioning it the way a therapist or an actual human doctor would. We have to be extremely careful with this technology; it is beguiling.
I find Yuval Harari's book Nexus, though now somewhat dated, and his more recent essays particularly compelling, because he asks us to imagine a technology that had absorbed all the most persuasive rhetoric and all of the best lessons on how to persuade a human being from all of human history, and was then leveraging that to persuade us. How scary would that be? And, frankly, that's the reality we're currently facing.

I'll go back to the third point I was going to make, around use cases, because I think we've covered the risks to such an extent that I might be scaring people away from a technology that has real positive productivity benefits. In fact, I think this is, in some ways, one of the more powerful technologies for enhancing human productivity that we've ever created. I say this looking back over two decades in the world of software. I co-founded a software bootcamp called Lighthouse Labs that taught thousands of Canadians how to build software. And then, of course, as I said, I worked at LinkedIn and Microsoft. Looking back, I must say I'm a little apprehensive that, in a few cases, we were naive about the positive impact of those technologies. Study after study, including recent meta-analyses covered in publications like The Economist, has shown that most educational technologies have not had a meaningful positive impact on students in classrooms. Similarly, if you look at the entire history of software, there's a serious debate over whether it has increased human productivity at all, or whether lower rates of productivity over the past few decades could be the result of the interaction between our current system of market capitalism and our development and use of software. The point being, we should question all uses of technology. But compared to those, I think there is already much more evidence, just a few years after Gen AI was released to the whole world, that it can benefit us by removing some of the tasks that we don't want to be doing. Instead of removing the tasks that are creative, we should be using it to outsource the things that need to be done a million times in the same manner.

There are some examples of really good use cases. Transcription: Gen AI has dramatically improved the quality of transcription. The recording of this call will likely get almost every word that I say right, and that's helpful. If you want somebody taking notes to remember it, they can still do that, but they don't need to do it to create an artifact that can be shared with others, aggregated, et cetera. So if you've got board meetings, or a conference where hundreds of sessions are occurring simultaneously with hundreds of individuals in each, and you want to aggregate all that content quickly so that on the last day of a three-day conference you can present on it, then, as long as a human is reviewing the aggregation of the transcripts into something presentable, transcription is an extremely positive and efficient use of Gen AI.

Similarly, Gen AI can scan and synthesize large datasets much more efficiently than humans can. If you're fundraising and you want to look at large databases of potential funders, you can use Gen AI to get little bits of information about tons of them, along the lines of the sketch below.
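As one concrete illustration of that pattern (the machine does the bulk pass, a human verifies a sample), here is a minimal Python sketch. It assumes the records contain only non-sensitive, public information, in line with the cautions later in this session; the model name, field names, and sample size are illustrative assumptions, not details from the session.

```python
# A minimal sketch of the "bulk scan, human reviews a sample" pattern.
# Assumes the records hold only non-sensitive, public information.
import random
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

funders = [
    {"name": "Example Foundation", "notes": "Text of a public annual report..."},
    {"name": "Sample Trust", "notes": "Text of public grant listings..."},
    # ...hundreds more records in practice...
]

summaries = []
for record in funders:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "In one sentence, summarize what this funder supports. "
                       "Say 'unclear' if the text does not say.\n\n" + record["notes"],
        }],
    )
    summaries.append({"name": record["name"],
                      "summary": response.choices[0].message.content})

# Human in the loop: a person spot-checks a random sample against the
# source material before anyone acts on the summaries.
for item in random.sample(summaries, k=min(5, len(summaries))):
    print(item["name"], "->", item["summary"])
```

The instruction to answer "unclear" rather than guess, and the random verification sample, are both small guards against the hallucination risk discussed later in the session.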
And then once you get a narrower list of potential funders for your organization, you put a human into the loop and make sure that the information it provides about each of those funders is legitimate, not a hallucination. Then you can leverage it again to produce content, to generate content in the generative AI sense, with a human reviewing it against the original creative from a human being.

Elizabeth McIsaac: So those are the parameters: where there are no values imputed, where there's no discernment, where it's simply capturing. Almost value neutral.

Jake Hirsch-Allen: Precisely.

Elizabeth McIsaac: As much as anything can be.

Jake Hirsch-Allen: Exactly. If you're getting feedback surveys and you're getting tens of thousands of them back, it's probably helpful to have something that summarizes them; then you might want it to pick out the ones that are most interesting, and then you want a human to review another sample to make sure it's doing a good job of that.

One of the challenges that we would often bring up as an example in our class relates to volunteer management, another common AI use case. The challenge was that the AI would keep suggesting the organization rely increasingly on the volunteers who had shown the best results, or, on the funding side, that the grantee demonstrating the best results should get more funding. The problem is that this also increases conservatism. It makes it hard for a new volunteer, or a new, better idea, to break through that cycle. And Gen AI can aggravate that conservatism. It can make the organization narrow-sighted, focusing only on the things it has always focused on in the past, because Gen AI is learning from those decisions and making them more and more efficient or powerful or strong.

Elizabeth McIsaac: So you double down on what you already do, and your capacity to grow or adapt gets constrained.

Jake Hirsch-Allen: Precisely. Not dissimilar from what I was describing as Gen AI producing the average of the internet. Whatever the average of all that content is, Gen AI makes that content increasingly likely to appear, not just for you, but for everyone else. That's also why it's highly problematic to train AI on AI-generated content: you can see how it would aggravate that vicious cycle repeatedly and make the quality of the generated content worse.

Before we move to questions, I want to touch on the area I think organizations most consistently undervalue when it comes to technology, partly because it's unglamorous and partly because it feels abstract. That area is data. I'm sure everyone on this call has heard about data and has thought about it, but my guess is very few have spent time ensuring that, within their organization, it is structured, it is clean, and it is valued, and that the organization begins to think about how the value of its data, even its data creation, relates to the value of the non-profit and the impact it's creating in the world, and even, for instance, how it can fundraise against that data.

The example I often give is Kids Help Phone. Senator Hays created Kids Help Phone, has been instrumental in a variety of other work that we do at The Dais, and was a speaker at one of our recent conferences on Kids in Tech.
In part, her organization was so successful because it was able to help and work with many other organizations in the management of their data, in the interoperability and the connecting of their data, such that you had equivalent systems all across the country able to work together, so that when somebody was phoning in on a 311 line or an emergency line, they could learn from those calls and respond better. That was only possible because all those distinct datasets, coming from different cities and different systems, were aligned and structured in a manner that allowed humans and AI to learn from them. The classic line is "Garbage in equals garbage out," and the opposite, of course, is true: the more you can structure the data that comes from your members, your clients, your employees, all of your stakeholders, the more valuable your non-profit is going to be, and the better you're going to be able to leverage all technologies, all software, and particularly Gen AI, which relies heavily on that data. The conclusion here is to make sure that you're valuing your data; the sketch after the next paragraph shows what "structured" can mean in practice.

Think also about the different roles across your organization and who is responsible for what, vis-à-vis both tech writ large and data and Gen AI specifically. One of the things we do in our course is walk through the different roles that most organizations should have: oversight and goal setting at the board and leadership level, implementation at the technologist level, and then usage, meaning how the average employee or stakeholder should be leveraging this data or technology for their own benefit and for the organization's and the public's.
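As a small illustration of what "structured, clean data" can mean in practice, here is a sketch in which every record, whatever system it came from, is normalized into one explicit, shared shape before anything (human or AI) learns from it. The field names are illustrative assumptions, not a schema from the session.

```python
# A tiny sketch of structured, clean data: every raw record is normalized
# into one explicit, shared shape. Field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ContactRecord:
    name: str
    city: str
    joined: date
    consented_to_contact: bool  # an explicit field, not a note buried in free text

def clean(raw: dict) -> ContactRecord:
    """Normalize one raw record from any source system into the shared shape."""
    return ContactRecord(
        name=raw["name"].strip(),
        city=raw.get("city", "").strip().title(),
        joined=date.fromisoformat(raw["joined"]),
        consented_to_contact=bool(raw.get("consent", False)),
    )

# Records from two different systems become directly comparable.
records = [
    clean({"name": " Ada Example ", "city": "toronto", "joined": "2024-03-01", "consent": True}),
    clean({"name": "Sam Sample", "joined": "2023-11-15"}),
]
print(records)
```

It is this kind of shared shape, multiplied across organizations and systems, that made the interoperability Jake describes possible.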
Elizabeth McIsaac: The question that comes to my mind when we talk about data is security, privacy, and confidentiality, and for much of the work of this sector, vulnerable people are part of the dataset. Can you talk a little bit about what's in place, or what we need to be mindful of as we approach the use of AI in our organizations, to ensure that we have guardrails up around that?

Jake Hirsch-Allen: The irony of my responding to this question days after our own organization was subjected to an attack (thankfully not a data breach, but someone trying to harm our reputation through what they did to our website) is not lost on me. It is extremely hard for the average small non-profit to protect against all forms of data breach or attack. But things like insurance policies, which often come with both software and education to help your organization defend against data breaches or other forms of cybersecurity attack, are helpful. So is having conversations like this one, so that everybody in the organization is aware that a ton of the vulnerability, and a ton of the attacks, come down to our clicking on a link or making a mistake like uploading large datasets to generative AI. Unless you have a strict legal agreement with the Gen AI provider protecting your organization, I would not recommend uploading large datasets, and I would not recommend uploading any private data, to any of the large platforms. For example, one of the ways that we at The Dais protect ourselves in this respect is that we are part of Toronto Metropolitan University. TMU has a relationship with Google, where its licence for Google Gemini protects the data underneath it. Google has a legal responsibility, and in some ways a technical responsibility, to protect our data.

And while we still don't upload any private or secure data to that platform, we are at least more confident, when we're uploading other non-secure data, that it won't be breached or revealed to the public in ways we don't like, because of that agreement TMU has with Google. One thing that might be interesting in the non-profit community, and I think there are great organizations like CCNDR and Imagine Canada doing this, is pulling together large numbers of non-profits so that they can collectively purchase software, and purchase Gen AI, at a level where they have an agreement with the provider that protects data better. And/or, and this is where I think we should be going, eventually building Canadian large language models, or buying large or smaller language models that reside in Canada, such that the data would perhaps never leave the organization's premises, or never leave our country.

Elizabeth McIsaac: But at this moment, if I use ChatGPT, it's out there.

Jake Hirsch-Allen: It's out there.

Elizabeth McIsaac: And if I use a transcript service from a meeting, that's out there. I don't have control over it. What about when I'm on a Zoom call and I ask for the transcript at the end of the call?

Jake Hirsch-Allen: A breach could happen at Zoom, or in somebody's own system because they downloaded the Zoom transcript and then uploaded it to their own Gen AI chatbot, whether it's Microsoft Copilot or OpenAI's ChatGPT. That data can be much more easily breached than ever in the past, and the risks are quite significant as a result. We do have to be increasingly careful. Again, the classic expression: with great power comes great responsibility. This is, and we can't overstate this, quite powerful technology. It's not artificial general intelligence, but it's still powerful enough that you want to be careful in how you use it.

Elizabeth McIsaac: You've gone back and forth between the large language model and the small language model. Are there things we're not able to do with the smaller models?

Jake Hirsch-Allen: Yes, they're less powerful. In fact, even the link we put in the chat to publicai.co, which is based on the Swiss Apertus model (a model grounded in Swiss values, so be mindful of that with each of these different options), is less powerful than the frontier models. If you look right now, I believe Claude is the most powerful frontier model, ChatGPT is likely second, and Gemini third. Depending on the use case, they alternate, and there are a bunch of open-source competitors and Chinese competitors that are quite close. Those models are much more powerful than a small language model, and even more powerful than something like Apertus. But for most of our use cases, we don't need the most powerful model. This is something we've been working through with our federal government and other governments and organizations: how can Canada, our government, and other organizations procure models that are six months behind the leading frontier model but can still significantly improve our productivity and efficiency, in a manner that better balances safety and responsibility, from environmental sustainability to Indigenous rights, et cetera, rather than engaging in a race to use the most powerful technology? Given the human change necessary to adopt this technology responsibly, six months is not a long time.
Elizabeth McIsaac: A few participants have asked about hallucinations, one of several reliability concerns around Gen AI. What triggers them? Do we see them in transcripts? How do we spot them, and how do we mitigate the risk?

Jake Hirsch-Allen: Great question. Hallucinations are basically lies: mistakes by the AI that the AI doesn't know it's making. One of the frustrating things folks will often see when engaging with Gen AI is that it'll make up information; you'll correct it and say, "No, that's actually wrong. Check out this source. It says the opposite," and then it'll make it up again. In part, that's because of the way neural networks work. Their creativity allows them to make things up. And that making-things-up part of the generative AI process is an essential element in creation. It's what humans do too. We often make mistakes, or lie, or hallucinate when we are trying to come up with new answers, because we are creative. Interestingly, the more creative the model is, the better it is at generating new ideas, new concepts, new images, and so on, and the worse it is in terms of hallucination. It will lie or make things up more if it is more creative. What almost all the major large language models are trying to do right now is balance those two things. So, for instance, Microsoft Copilot, being designed for large enterprises and the relatively, quote unquote, "conservative" use cases of companies and large organizations, is the most conservative of the popular platforms, of the Geminis and the ChatGPTs and the Claudes. On the other hand, part of why Claude is so powerful right now is that it's among the most creative of them, and the rest fall somewhere in between. But when I say creative, I also mean they are more likely to make things up, to lie, to hallucinate.

I'll give an example from my own family. We tried to plan a trip using Gemini a couple of years ago, and it made up all the locations. We said we wanted places that are dog-friendly, where we can get food for our daughter, and where we can safely stop on the way between somewhere in upstate New York and Toronto. And literally every single place we stopped at was not as described by Gemini. It had combined different real places in the world to make up artificial places that didn't have all the things we were looking for. But it did so persuasively. It had incredibly detailed and persuasive explanations, with maps of where we were going to go, which entrance to use, where we were going to get the food, and where we would walk our dog. But they were all made up. It had hallucinated them. That continues to be a problem, though admittedly a decreasing one: the large language models are being trained and tweaked to dial down creativity where a model should not be creative and to dial it up where it should be, so they hallucinate less, or the hallucinations are less harmful. They're getting better.

And the last thing I'll say here is that there's something called retrieval-augmented generation, or RAG. It's a very complicated phrase for a simple concept: if you focus the Gen AI, restricting it to specific documents or specific data, it's often much less likely to hallucinate. Its creativity is constrained to that data, and as a result it pulls just from that, instead of from the internet writ large or from a really, really big neural network that we don't totally understand, and it's able to be more accurate than it would be without that retrieval-focused content.
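To make RAG concrete, here is a deliberately tiny, self-contained sketch of the idea: retrieve the most relevant passage from your own documents, then instruct the model to answer only from that passage. The word-overlap retrieval and the prompt wording are illustrative simplifications; production systems use embeddings and a vector store, but the shape is the same.

```python
# A deliberately tiny retrieval-augmented generation (RAG) sketch.
# Retrieval here is a toy word-overlap score; real systems use embeddings
# and a vector database, but the overall shape is the same.
documents = {
    "policy.txt": "Volunteers must complete privacy training before their first shift.",
    "hours.txt": "The food bank is open Tuesday to Saturday, 9am to 4pm.",
}

def retrieve(question: str) -> str:
    """Return the document whose words overlap most with the question."""
    q_words = set(question.lower().split())
    return max(documents.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved text, which reduces hallucination."""
    context = retrieve(question)
    return ("Answer using ONLY the context below. If the context does not "
            "contain the answer, say you don't know.\n\n"
            f"Context: {context}\n\nQuestion: {question}")

# The resulting prompt is what you would send to whichever model you chose.
print(build_prompt("What days is the food bank open?"))
```

The "say you don't know" instruction is the prompt-level version of what Jake describes: constraining the model's creativity to the data you supplied.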
Elizabeth McIsaac: Is there a final word of wisdom? Do you have any final advice for us?

Jake Hirsch-Allen: I'm going to go back up to a very high level, and back to this original idea of regaining agency through conversations with our peers about how we want to be using these very powerful technologies in our organizations and in our society. I think there is something to the solidarity of being together in a community that both realizes this is happening to us and is able to take control and decide how and when we use which technologies, and for what purposes. Once we can do that together, and once we can learn from each other, much in the way this conversation has helped me, that will allow us to help our society navigate a time when we are being told by others that we have no power, in a manner that is not just more responsible but healthier for us.

Elizabeth McIsaac: Thank you so much. I've learned so much, and my mind is spinning. So you've done the job. I've got more than five good ideas, so thank you. It's really been a pleasure.
