Five Good Ideas for building a human-centred workplace in the AI era

Five Good Ideas Podcast | Episode 7 | June 03, 2025 | 00:45:52

Show Notes

In this session, Zabeen Hirji, former Chief HR Officer at RBC and now in her Purposeful Third Act, explored how AI is reshaping the nature of work. As AI changes the relationship between employers and workers, why should people choose to work for your organization? How can organizations help their people thrive in this new era?

Her five good ideas:

1. Help every employee understand their purpose

2. Start small, learn together, and stay curious

3. Build AI fluency, not technical mastery

4. Focus on change management

5. Invest in human skills

For Zabeen's full bio and resources, visit the session page.


Episode Transcript

Elizabeth: Artificial Intelligence, commonly known as AI, is reshaping the nature of work and redefining the relationship between employers and workers. As AI becomes more embedded in our workplaces, critical questions are emerging. Why should people choose to work for your organization? Can a machine be a trusted collaborator? Is the machine a coworker? With six in 10 workers already considering AI a coworker, these questions are no longer hypothetical, and they demand thoughtful answers. Organizations have to consider how to help their people thrive in a changing landscape, clarify why employees join and stay, and create the conditions for them to do their best work. To help us think through how to lead in this moment of transformation, I’m delighted to welcome Zabeen Hirji, a strategic advisor on leadership, talent and culture. She’s the former Chief HR Officer at RBC. I first met Zabeen – I’m afraid to say it’s more than 20 years ago, Zabeen – when you were chairing the Toronto Region Immigrant Employment Council and doing all the really great work you did in that role. But since you’ve left RBC, you’ve gone on to become a strategic advisor to so many different groups and organizations. And, importantly, you are a thought leader in this space. You’re also a LinkedIn Top Voice, and I mention that so that people are able to continue the conversation with you after this. Zabeen is going to share her five good ideas for building a human-centred workplace in the age of AI, and then we’ll open it up to a broader conversation, building on questions from you, the audience. And so without further ado, Zabeen, over to you.

Zabeen: Good afternoon, good morning, and thank you so much for inviting me today. I’m super excited to spend time with this large group of nonprofit leaders and professionals. And I know we also have a smattering of public sector and business professionals.
And as you say, Elizabeth, we met a long time ago, but it’s also been a while since we’ve done something together, so I’m really excited about that as well. And just before I get started, I would like to note that I’ve used many sources here, including published research, my experience mostly working with larger organizations – though I’ve certainly done some work to scale this down – an interview with a not-for-profit CEO and head of HR, and Gen AI, with sources verified where I felt it was necessary. So I’d like to start with setting a bit of context for today’s session. As Elizabeth said, one thing is for sure: AI, and in particular Gen AI, is reshaping the work that we do. It’s reshaping workplaces and redefining the relationship between employers and workers. I’ll focus primarily on Gen AI, and why nonprofits and smaller businesses should embrace it and do so thoughtfully and responsibly by understanding and mitigating the risks. This really is a story of “and’s.” It’s not a “should we or shouldn’t we,” but it’s “how do we.” It takes me back to when the internet first arrived – and I’m already aging myself, because some of you were probably in kindergarten then – and I remember business leaders saying, “Oh my God, we can’t give access to the internet to our employees. They’re going to waste their time and they’re going to play on the internet.” And I stepped back and I said, “Hmm, maybe we should take their phones away as well.” And those were desktop phones at the time, because they could be wasting their time there, too. So this really furthers that need for a very trust-based relationship with employees and providing tools that help them be their best selves. So I wanna get a little bit of audience engagement here. It’s always hard, for me anyway, when I can’t see people. I wanna make some sort of connection. The first question I have for you, in just one- or two-word answers: why are organizations embracing AI? What do you see as the benefits? Just drop it into the chat.
Peer pressure, productivity, efficiency, consistency, capacity, supports the top line, quick searches. Thank you for sharing that. Clearly there are themes there around increased productivity. A McKinsey report shows that productivity can be improved by up to 40% in writing-intensive tasks and knowledge work. Another study in Canada showed that 63% of early adopters say it helps people save a few hours per week on repetitive tasks, but it’s about more than just saving time. It’s also about creativity and innovation. How can we use AI to augment us with idea generation, content design – and I’m gonna go through that in a little bit more detail – drafting reports, writing emails? And one of the things that I found particularly germane to this group is that for small- and medium-sized businesses and nonprofits, it also offers an enterprise-grade toolset without the enterprise cost. And that’s really the first time that we’ve seen that. Thank you for your answer here on improved employee satisfaction. And also, there are clearly risks and issues. So Basim – or Baso, I think it might be – thanks for your response that AI doesn’t need an RRSP. This really gets us to addressing the issue of what is AI going to do to jobs? And I will double click on that shortly. A couple of things I would add here. There’s also the opportunity to scale content. For many of you, your service offering includes training, for example. You of course do grant proposals, and the opportunity there is really to improve quality, not just productivity. I said earlier that I’ve used Gen AI as a companion to me in preparing for this, because I typically work with larger organizations. Although I do things with not-for-profits, I don’t actually work in this particular space. And to be honest, I spent more time on preparing for this, but what I hope I’m getting is more depth and more comprehensive content, as opposed to just thinking about it as saving time.
We have to make choices on which of the tasks we’re doing warrant that and which ones don’t. It’s efficiency, it’s empowering staff, it’s building agility, and then it also needs to be purpose-connected and purpose-aligned. So it’s crystal clear that AI is much more than a technical upgrade. Frankly, that is the easier part. It’s much more of a change in skills, in culture, in leadership. And change management becomes a huge, huge part of it. With that as the context, having established that there’s a profound impact of AI on work and jobs, let’s get to the five good ideas for building thriving workplaces in the world of AI. At one level, the framework that I like to use is that there’s top-down leadership. This is one of those areas that absolutely requires your CEO, your top of the house, to be championing and driving this change and engaging and listening to employees. You need organization-wide engagement. And as individuals, we all need to take ownership of this. It sort of takes me to other areas where we have taken ownership in our lives. Think about health and fitness. Yes, our organization supports us with good policies, with good benefits to support our needs, but ultimately it’s our responsibility and it’s something we take seriously. It’s not something that stops after university. The older you get, the more you have to focus on your health and wellness. So this notion of continuous learning, and learning in an experiential way, certainly applies to Gen AI for all generations. So the first idea I put out there was purpose and making purpose tangible. Now, one of the things with not-for-profits is that you are all mission- and purpose-driven. So as a starting point, in a human-centred workplace with AI, I think it’s about doubling down on that with your employees and helping them see how their job directly helps to deliver purpose. Sometimes it’s easy: if you’re closer to the recipients of your services, or if you see the outcomes, you can see that link.
But other times you might be working in accounting, for example, and it’s hard to see what the connection to purpose is. And one of the things that we know, and that’s getting more and more of a priority in the surveys that are done – there was one recently done by Deloitte where 90% of Gen Zs and Millennials consider a sense of purpose crucial to their job satisfaction and wellbeing. And keeping in mind that Millennials are becoming an increasingly large part of the population, there’s influence on career choices. People tell us that 75% of them consider societal impact a key consideration when they join an organization. They have expectations that their employers are going to be involved in community engagement. So as not-for-profits, a human-centred workplace is really making that purpose tangible. Show employees how their work connects, engage them, go through your impact reports with them and draw those lines to how it’s connecting to their work. Have them involved. Just like corporations have impact days, there are opportunities for not-for-profits, for people who don’t typically engage with your clients and the people that you’re serving. Help them see what that is. Again, I’m gonna ask you as we’re chatting to continue: in the context of purpose, tell us what you are doing in your organization to really leverage the purpose and draw that connection. Idea number two is learn together. Start small. I say, think big, start small and scale fast. Stay curious. There’s a lot of research out there, again, that tells us that upskilling and reskilling are a top priority. Elizabeth started in her comments by sharing that six in 10 workers will require reskilling or upskilling. This comes from the World Economic Forum Jobs Report, which is a global report. 63% of employers identify skills gaps as the biggest barrier to transformation. 85% say upskilling is a top priority.
Yet what we do hear from employees is that they don’t feel they get sufficient support in that space. This is the reality. The way that I think about it is that AI is here to stay. I mean, you can remember when we first got our phones and how there were concerns about what’s this gonna do to people’s jobs, and of course, it impacted certain jobs negatively. Think about administrative assistants who did all the calendaring and all the booking. Now most of us take care of a lot of that ourselves. But what that’s meant is that people in roles like that have had to learn new skills and move into adjacent or completely different roles. My experience is in banking, and I remember when bank machines were first introduced and the concern about what’s this gonna do to the jobs of tellers. Well, the tellers became customer service reps and they got involved in more complex transactions, and the organizations helped them to reskill, to learn the skills required for new tasks, and actually created better, more fulfilling and satisfying jobs. And banking today has a lot more jobs than it did before the automation. But it really requires individuals to take it upon themselves to build those new skills. So what are some of the tactical things that you can do? Pilot tools in low-risk scenarios. For example, use them for internal presentations. Use AI nudges: I’ve seen a not-for-profit that uses it to complete their quarterly reports by auto-filling the metrics from their impact dashboards. Writing emails to a donor through your CRM or your email platform. Asking, what did the donor do? What activities did they do last year? Oh, they were involved in Giving Tuesday. So how can I now personalize that message in connecting with the donor? It’s absolutely essential to involve your teams – feedback, iterations, asking them what their concerns are, and also creating small wins.
And let me just double click a little bit more on some of the learning activities that I’ve seen, and most of them don’t require large investments of money. They certainly require investments of time. For me, the starting point is really around purpose and transparency. Explain to employees why AI is being introduced. Are you focused on service delivery improvements? Are you focused on freeing up staff time? Link it to your mission and how it’s gonna better serve your clients, and really have those conversations, including the difficult conversations. And thank you to those of you who put it in the chat: to address that, yes, the reality is some jobs will have massive amounts of automation. Most jobs will have tasks that are automated, that are made easier, and how are you going to use that time to help people actually have better outcomes? And in some cases, my hope is to help people actually have better wellbeing, because we know that, across all sectors, but certainly in the not-for-profit sector, there is a workload overload situation that impacts wellbeing. Can some of that time actually be liberated so that people are working fewer hours but having better outcomes? That’s the promise we are all really looking for, and within that, how are we managing the risks? I would say the other piece I’ve seen that works really well is creating feedback loops. Bringing people together, having short, quick huddles on what have you learned? Introducing simple forms, shared documents, and also sharing the challenges, not just the positives, the time saved, et cetera. But what issues did you run into? How did you handle verifying data, going back to source, and being comfortable that the data is actually accurate? And iterate, and tell employees: you said X and so we tried Y. Be really clear about how their input is being considered. Then there’s this notion of peer learning. One organization calls them senseis.
If you’re into martial arts, you’d be familiar with that term. And they’re people you can go to for help, as you’re learning how to use AI. And what I would say there is make sure you recognize and reward people who are doing that. And then the conversation and the mindset around the goal is really AI fluency, not technical expertise. And, of course, with Gen AI one of the things is that it’s conversational and everyone has access to it, whether you give it to them or not. Anyone can sign up for free for many of the tools that are out there. The goal really is how to use it well, how to use it responsibly, how to understand the implications and what actions can be taken. So it’s really about this culture of continuous learning, of normalizing the learning and the mistakes and the things that through trial and error that you’ve learned and this encouragement of experimentation, including leaders talking about how they’re using it. I’m going to request another kind of chat conversation here around what are you doing in your organization to build AI fluency? So within that conversation I’d like to now double click on this whole idea or concept of trust, transparency, privacy ethics. That is clearly something that’s really important to understand, to take action, and to always be mindful about that. One thing I would start with is that privacy and data security is an issue already. Gen AI and open systems might amplify that, but really if you think about the information that you would not share in a conversation or that you would not put in a report, or that you would speak quietly with a client so nobody else can hear it. That’s really the same type of information that we don’t wanna put into systems. It’s a different medium and so it makes it harder sometimes to follow the same rules because now it’s digital and you don’t actually have someone watching you or you’re not physically giving someone a document. And this could be a session all on of its own. 
And so I don’t wanna spend too much time on it. There’s a lot out there. Larger organizations, of course, bring the systems into their own organization. So they’ll have Chat GPT, but within their firewalls, which obviously smaller organizations are not doing yet. But you need policies, you need people to understand why they shouldn’t put information out there, how it’s used. You also have to trust people because if they’re not getting access at work, many of them are doing it from home. There was a survey of large companies that was done and they found something like 40% of employees were already using it at home, and that’s actually worse. How do you bring that inside and how do you manage that? The other thing around trust and transparency is that Gen AI content can also sound artificial and lack warmth. So it requires editing. However, I will say that as a user of it, every day and by training it through what I say, I have said I want you to write in a professional, warm and inclusive tone. And very quickly it has really captured my, my voice and my tone. That’s where the prompting comes in. That’s where the trial and error comes in. And then there’s what I mentioned about inaccurate or biased content. It’s hard to to check for bias, but I do want to share a couple of ideas with you there. They’re things like use, using yourself, neutral and respectful language, avoiding stereotypes, assumptions. So for example, saying parents or caregivers instead of mothers and fathers. Right away you’ve got an inclusive response that’s gonna come as a result of that. Include dimensions like race, gender, disability as you’re doing your prompts. You might say, write a case study on equitable access to healthcare that reflects immigrant, indigenous, and rural community experiences. This is where the prompting comes in and tell ’em what kind of tone you want. 
Involve diverse staff in your prompt creation and provide explicit DEI guidance in the prompts, as well as to your employees. So there are some things that can be done to manage for and mitigate that bias. And then it’s about using your judgment at the end of the day, like many things that we do. The other issue, of course, is hallucinations. I have seen a huge improvement in hallucinations. The first time, I would ask questions even about myself. I spent 40 years at RBC, and the first time I asked, there was no mention of RBC. Today my bio is a hundred percent accurate. But I did ask ChatGPT today, tell me something about Zabeen Hirji that no one knows. And the answer was, “Though known for leadership, insiders say Zabeen’s true claim to fame is her secret karaoke rendition of Ain’t No Bias High Enough. Performed only at offsite retreats and leadership summits, it reportedly brings down the house and the hierarchy.” So it’s still happening. So we need to pay attention to that. And then, finally, transparency and disclosure. As I disclosed to you upfront, you need policies in your organizations and guidance on how you want to do the disclosure. So let me pause here and do a quick case study. I spoke with Scott Stirrett, who’s the CEO of Venture for Canada, a not-for-profit with 30 employees. How are they using it? Number one, leadership. Scott and his 2IC lead it. As I said earlier, it’s gotta come from there. After we had a conversation and I shared the materials that I’m sharing with you today, he actually decided to make all managers accountable and to provide more formal learning. They use the paid version of ChatGPT, so while there is a small cost associated with it, they find that they get much more value out of it, and certainly I do as well. Change management is key. Human psychology kicks in. Will it take my job? In most cases, yes, it will change your job. This has happened before. I’ve shared some examples.
I do a lot of advising and consulting today. It’s massively gonna change the kind of work that’s done and the tasks that are completed. So how do you get ahead of that and develop new skills to move and to do different kinds of work, and possibly go into different professions? Because there are professions as well that are growing. And finally, he said senior people tend to be more skeptical. I loved some of the prompt suggestions he had for critical thinking and bias checks. For example, “How would you critique this?” is a question that they ask, or “What are the pros and cons of the proposal?” They use it for many of the things I’ve talked about. They do a lot of educational sessions. They’re also using it to reduce fraud risk. There’s a subsidy program that they run for the government with about 3,000 employers, and they do fraud checks to identify fraud or potential fraud through the subsidy program. They have a policy which says everybody should be using it. It’s mostly well received, but it’s not part of performance management. They do learning all-staff calls. And one learning I do wanna leave with you, because I think it applies to you as well as to you as a service provider: the highest performers tend to have the highest agency. And they use Gen AI more, and they use it more effectively, versus lower performers, because they step into it. And this does risk a widening of inequities. And so it’s important to keep that in mind and to engage your high performers in supporting the rollouts, but to also think about how you’re supporting your various clients. I’m gonna quickly go through the next two and then we’ll move into Q and A. Change management. As I’ve said, 70% of the success is change management. Clarifying roles and accountability. Communicating, involving staff, addressing the difficult issues and the tensions. What’s this gonna do to my job? Am I just accelerating my job redundancy? What are you going to do to help me build new skills?
Over-reliance on AI can reduce engagement and fulfillment. Somebody had a question earlier saying, I feel lazy when I use AI. I shared my example of preparing for this session, where I didn’t really feel lazy, because I was putting more into it – in making sense of it and bringing it together, in exploring new things that I could be talking about. I think it also comes down to how you use it, not just using it. And how you’re also building in collaboration between hybrid workplaces and more use of AI. It’s very easy to take a further step back from engaging with people, and that’s something that many people really love about their jobs. The final one is around human skills, which are the enduring edge in the age of AI. Study after study reminds us that it is the human skills, like empathy, adaptability, creativity, judgment, critical thinking, collaboration, communication. One of the things that someone said: with Gen AI I don’t have to be a good communicator. I would argue that you have to be a really good communicator, because you need to ask great questions. You need to think critically. You need to then package it in a way that is gonna align with your audience. The issue really is around how do you develop human skills, and as always, I am running out of time, so I’m going to give you the headlines and I’m gonna post the rest of it in my next LinkedIn post. But it’s experiential and immersive learning, peer coaching circles. Bring examples – tell stories about your ethical dilemmas where you had to use judgment in making a decision – or journaling prompts. Ask people to be self-reflective. When did I really listen this week? What did I do to challenge my assumptions? And the whole sort of reflective thinking, dialogue, leaders role modelling. You set the tone for a human-centred environment. Are you demonstrating empathy? Are you communicating in an effective way?
Are you really promoting critical thinking by asking for pros and cons and different perspectives? So in conclusion, Gen AI is not just a tool, it’s a cultural shift, and we need to invest. We need to be aware of the risks. We need to address them, we need to talk about them. And I’m so thrilled, Elizabeth, that you brought this topic to this group, to the not-for-profit sector. And I hope that it catalyzes more collaboration and learning together.

Elizabeth: Thank you, Zabeen. That was just terrific, and thank you to everybody who’s been engaging in the chat. It’s great to see that interaction and the questions that have come into the Q and A box, which is great as well. I love where you finished, that it’s actually a cultural shift, because culture is always being made and remade and we are agents in doing that. How we use AI, how it gets integrated into our workplaces – we’re not subject to it, but we’re actually agents in it. We’re going to be part of defining that. And I think a lot of what you talked about today is about that. How do you engage in shaping that culture and shaping how you’re going to do that? You’ve given us so much to think about, and obviously this is a conversation we’re all gonna be having for months and years to come. It’s only going to evolve, and it’s nascent right now. You mentioned a question that somebody asked about, does it mean that I’m lazy because I use AI? And it’s funny, because that’s the question that has been going through my mind, and it’s not about laziness so much as an overreliance risk – atrophying our muscles around critical thinking. And I’m hearing from you, not necessarily. Again, it’s the how you engage. But when you are over-reliant on a tool, is there a risk in that?

Zabeen: Yeah. The one thing I’ve learned in life, it’s about balance. And that applies to pretty much everything.
It’s a lesson that I learned about 20 years ago, when I was the head of HR in particular, because when you think about people and their talents, you need to bring balance. And so, I would say that Google searches or whatever search engine you use, there’s also a risk of overusing that. Because there is a risk. Absolutely. And I worry about it with younger people coming into the workforce because for many of us, we grew up building that knowledge. And so we actually have some of that knowledge that we can draw on and there’s no answer yet to how are we going to develop? Because you do need some domain expertise. If I went in and tried to do something on auto mechanics, I’m not sure that that would work because I wouldn’t be able to manage your information well. So, use your colleagues, talk to people, set limits for yourself, verify sources and, think of it as a companion, just as we have been using our phones for digitally searching information. Elizabeth: So that leads actually very neatly into the question of organizational policy, ’cause I think a lot of organizations are saying, Okay, we’re obviously going to use this, people are already using it. Do we buy the purchased Chat GPT? Do we upgrade? And how do we direct people? Do we ask them to use it? Is that of their own intention? So, how do we shape an organizational policy so that employees, and you’ve touched on this a little bit, so that employees know how they’re expected to interact with AI? Are there ethics components that they are going to be expected to be accountable for? So building out really the human engagement in setting the culture of it in your workplace. Zabeen: I’d like to think of it as building blocks. We did that with the internet in many cases. We did that with social media and there policies around how they can be used, who are you representing, et cetera. So how do you take that to the next level? 
So there’s the ethical, responsible use, and those are things like verifying sources, just as you should be verifying sources on Google. One of my favourite prompts is, I like credible sources because sometimes I get sources I’ve never heard of, and so I ask another question and, if I see the credibility in the organization, it already helps me quite a bit. The bias issue that we talked about. We need to talk about those. You learn when, as something happens, somebody does these learning circles, Hey, you know, I did this and this is what I ran into and this was an ethical issue that I caught. The other policy around, do you have to use it? Is it a tool? That varies. Organizations are across the spectrum. I mentioned with Venture for Canada, they ask that people use it. They don’t monitor it, they don’t include it in performance reviews, but they say that we think everybody can get some good use out of it. And there are others who are further along the maturity curve who are building it into people wouldn’t be able to get their jobs done without it. Elizabeth: In your comments earlier, you raised feedback loops as an important part of building that into the workflow. And so some of the early things that we’re hearing recently was the Chicago Sun generated a list of readings and most of them weren’t actually real. They were sort of generated by AI. So there’s fact checking that has to happen and those feedback loops. So part of this I’m hearing from you is how do thinking about hard wiring that loop into the workflow itself and making people accountable for it? Zabeen: Yeah, and I’m hoping that over time the systems will also, as they get trained more and more, will also improve. Healthy skepticism is a good thing. But with many new things, that’s often the case. Elizabeth: Just going back to the question of are we building the right skills and the next generation coming up, so the newer employees, the younger employees. 
You and I can talk about a history of becoming deep in our fields and understanding the work itself. But somebody has commented, kids are arguing about the necessity of going to school to learn math, physics, writing, since AI is gonna take care of all of that. Does the working world have a role to play in influencing schools, but also setting up expectations for new labour market entrants and the younger generation coming in?

Zabeen: Yeah, I mean, that’s a big question. Thank you for that. And it’s gonna have multi-pronged responses. One thing I would say is that with our big issues and opportunities today, the collaboration that’s required between employers, governments, educators, not-for-profits, civil society – the need for that kind of collaboration is something we’ve never seen before. And I don’t see those sectors individually having stepped up to say, Hey, you know, we really need to do this – not just wearing our own organization or sector hat, but thinking about it more broadly. And so I think that is important. From that big picture, I’ll go to the comment around math. Calculators could do a lot of the math before too, and I saw that in the bank. Somebody comes to get change for $5 and the teller doesn’t know how to do it, because they’re used to the machine telling them how much the change is. Whereas for us, it’s like, you can count it out, or you know, oh, it’s $5, you get $3 change if something’s $2. But what’s needed here is a math mindset that allows us to get through life. You’re out shopping, you’re on a budget, you are making those decisions in the moment. You are able to think – analytical thinking, mathematical thinking – and that comes through actually studying math. I’m not doing a great job of articulating it, because it’s hard to articulate, but how do we actually get those things across?
Having said that, the world is moving, so we’re gonna have to find solutions as opposed to putting up barriers, ’cause I think that train has left the station.

Elizabeth: There are a number of questions around the environmental impacts of AI, climate change, and how that relates to the ethics of engaging in all of this. So I wonder if you have some thoughts that you’d like to share on that.

Zabeen: I mean, my thoughts are that I’m concerned about that too. It uses a lot of energy, and as an individual user, I try to think about that. These will require, again, big cross-sectoral efforts, and it probably accelerates the need for clean energy even more.

Elizabeth: But, for me, it also brings it back to, as I think about who’s in the room right now: we’re leading organizations where there are people who feel perhaps threatened by AI and the security and relevance of their role. People who are outta the gate wanting to use it. Fears around the ethics. Some of what you talked about around teaching your ChatGPT your bias so that those guardrails are in place. Is there a way to build this organizationally? I think you alluded to this in terms of the openness and explicit conversations, but how intentional and how deliberate do we need to get in thinking through everything from the climate impact of what we’re doing to the bias that our organization should have or not have? How do we bring all that together?

Zabeen: Yeah, it’s a great question, and we have to be intentional. This is the most significant revolution – and it’s not even an evolution, it’s going so fast – in our lives, not just our work lives. And it’s urgent because it’s moving. I think this is a great opportunity for not-for-profits to come together in some collaborative ways, because I know that you’re resource strapped in most situations, to learn together, to build together.
And I think that other sectors, the business sector, for example, can help to support, to build that capacity. It’s not only about helping to build capacity through donations or the money side. It’s also about the talent and the time. I’d be looking to some of the partnerships that you have with organizations that are quite advanced in this area to see if perhaps they might be able to provide you with some advisory support in building out what you’re doing. But I think having your framework, even starting small, is really important. People are looking for that. People want direction, and it’s an enabler for them to adopt.

Elizabeth: And maybe it’s premature, but are there examples of good organizational policies that are providing frameworks for this? I know at Maytree we’re having a conversation about what our organizational policy should look like as it relates to this. It’s going to be a conversation among staff, but it would be interesting to know if there are examples that we can point to and say, that works for them; if that works for that group over there, this is what would be right for us. Have you come across good examples?

Zabeen: The ones I’ve seen are in very large enterprises, and they really have to be scaled down. But I think it’s worth looking for them and sharing them. And it’s certainly something that between our networks and others, maybe there’s somebody on the call who’s got one? ’Cause that’s the kind of thing I would love to share.

Elizabeth: Agree. So if anyone has one, please drop it in the chat room. And speaking of the chat room, we’ve got a few minutes left. I have a question before we leave you. You spoke quite a bit about risk and the fact that there’s risk mitigation on privacy and data and so forth. There are often good practices around that already in organizations, so it’s really a matter of just layering AI use into that.
Can you talk a little bit about the risk of having blinders on around AI versus doing all the things that you talked about? Can you comment on that a little bit?

Zabeen: They’re definitely there, and to some extent we have been taking those risks personally. I go back to the phone. Don’t tell me that the phone isn’t listening to the conversation. You talk about, Hey, let’s get some new skis, and the next thing you know, you’re getting ads for skis, and yes, you can maybe shut some of those things off. But now it’s your organization’s data that’s at risk. Although I would argue that you’re having conversations about your organization while your phone is on too, and some of that data is actually going out there. So building the AI fluency, which includes all of this, must precede doing the other things. Having said that, there are some things that are really very low risk. So, for example, take a job posting. You’re gonna post it out there. It’s going to be out there, right? Maybe there’s a little part you don’t wanna put in there because you’re not sure, but by and large it’s pretty safe. Creating an HR form with these things is pretty generic. If you’re doing a presentation externally like I am, there’s no real risk there. And so I would say pick the low-risk things first. But really, really get clear on the privacy and protecting the information of employees, of donors, of partners, health information if you’re in that space.

Elizabeth: Terrific. Thank you. And there is actually a lot of work being done on AI practice and features and how to navigate it. I think what you’ve really highlighted here today, Zabeen, is the human dimension: as we think about our teams, our people in our organizations, how they are able to feel empowered by AI, not dominated by it, able to engage, able to learn safely and with support from the workplace. And I think those are really key takeaways for all of us to think about.
And I think most important is the culture piece, which you highlighted. So that leads us to the last couple of minutes, and I wanna give the final word to you. As you think about this room, leaders in the nonprofit sector, you’ve seen some of the chat that’s going on, you’ve seen some of the questions that have come in. What would be your final piece of advice to us as we think about how to navigate this new terrain?

Zabeen: So for leaders, you need to be driving it. Distribute the responsibility and accountability across your management team. For individuals, take charge of it. This is your career, this is your life. Your career has the greatest impact on the quality of your life, your financial success. Yet sometimes the view is that it’s the organization’s role to do it. And the final thing I would say is use it yourself. Develop your capabilities in a safe way, not for your work but for the things that you do outside of work, so you are building the skill, building the muscle, seeing the issues that you can then bring into the workplace. And for the workplaces, pick one thing that you think you can do collectively that’s low risk, that’s moving you in the right direction. And so as you engage and start to build out your policy and your plans, you’re speaking about it from a reality perspective. And please find a way to build a collaborative across not-for-profits.

Elizabeth: Okay. Great advice. I’m gonna say I came into this, Zabeen, not a user of AI. I came in a little bit of a skeptic. I’m a slow adopter, so I am now encouraged, I am empowered, and I am curious. So thank you for giving us all of that to prompt us and to push us forward. Thank you so much. There’s so much going on in this space, and you’ve done an incredible job of distilling it and bringing it down to five, I would say five plus plus, good ideas that we can take away.