Dark side of AI

Technology and AI | Podcast | November 19, 2025


Record date: 10/10/25
Air date: 11/19/25

As artificial intelligence (AI) transforms the way we work and protect our businesses, are you ready for the risks—and the rewards—it brings? In this episode of Future of Risk by Zurich North America, host Justin Hicks discusses the evolving risks of AI and machine learning with Barry Perkins, Chief Operations Officer, and Adam Page, Chief Information Security Officer. The conversation highlights concerns about AI’s impact on talent, entry-level jobs, and the increasing sophistication of cyber threats, including deepfakes and targeted phishing. Both Adam and Barry emphasize that while AI presents new risks—such as data breaches and disinformation—it also offers opportunities to streamline business processes and improve decision-making. The discussion underscores the importance of governance, continuous education, and adapting workforce skills to stay ahead of technological change. Ultimately, the podcast encourages businesses to embrace AI’s potential while remaining vigilant about its challenges, ensuring a safer and more resilient future.

In this miniseries, other episodes include:

10/22/25: What is AI delivering so far?
11/5/25: 5 ways everyone can benefit from AI today
12/3/25: What’s next in AI?

Guests:

Barry Perkins
Chief Operations Officer
Zurich North America

Barry Perkins is Chief Operations Officer for Zurich North America, where he oversees the Data and Analytics and AI, Information Technology, Underwriting Services, Premium Audit, Underwriting & Workplace Solutions Delivery, International Programs Processing and Business Services Management teams. He is responsible for driving and assuring the achievement of desired business results from the organization’s strategic and operational transformation initiatives.

Prior to his current role, Perkins served as Zurich Insurance Group’s Global Head of Business Technology Solutions and Chief Operating Officer (ad interim) for Group Technology and Operations. His earlier roles for Zurich have included Group Chief Enterprise Services Officer; Chief Operating and Technology Officer for Latin America; Chief Information Officer for Latin America; Program Director – Infrastructure Transformation for Europe, Middle East and Africa; Chief Operating and Technology Officer for UK General Insurance and Chief Information Officer for UK General Insurance. Before joining Zurich, Perkins had management and leadership roles with Farmers Insurance, Knightsbridge Consulting and Cap Gemini Ernst & Young.

Adam Page
Chief Information Security Officer
Zurich North America

Adam Page serves as Chief Information Security Officer for Zurich North America, where he leads the development and execution of enterprise-wide security strategy. In this role, he aligns cyber risk management with business objectives and drives innovation in protection and resilience. He has more than 20 years of experience in information security, having built his career from hands-on technical roles to executive leadership. In addition to cybersecurity, he is responsible for Information Governance & Privacy, Corporate Investigations & Security Services, Business Resilience, and AI Compliance.

Prior to joining Zurich in 2017, Adam worked for over 16 years in healthcare, where he progressed through helpdesk, technology support, application security and application services positions, culminating in his role as CISO.

He holds a Bachelor of Science in Computer Science from Northeastern Illinois University, participates on numerous Advisory Councils, and maintains a CISM (Certified Information Security Manager) certification.

Host:

Justin Hicks
Communications Business Partner
Zurich North America

Justin Hicks is a Communications Business Partner at Zurich North America and supports enterprise communications efforts for the Direct Markets business and the Operations and Technology function. Before joining Zurich, Hicks was the first dedicated internal communications manager at Rivian's electric vehicle manufacturing plant in Normal, Ill. Earlier, he served as a public affairs communications specialist at State Farm, supporting claims executives and leaders.

(PLEASE NOTE: This is an edited podcast transcript, capturing speakers with natural speech patterns that may include incomplete sentences and/or asides, grammatical errors, verbal shorthand and some statements that may be less clear in print.)

EPISODE TRANSCRIPT:

ADAM PAGE:

Bad actors learn quickly, and they just try different things. There was a stat out there that I came across today, which was: 83% of phishing emails utilize AI.¹ That stat comes from the security company KnowBe4.

So, the rise in prevalence is measurable, and it's just up to the creativity of malicious people, right? So, another interesting one I've been thinking of is using AI to collect public information about an individual — what's out there — and then use that information and send it into AI and say, okay, if I was this person, what would my password most likely be?

JUSTIN HICKS:

Welcome to Future of Risk presented by Zurich North America. We explore the changing risk and resilience landscape and share insights into the challenges that businesses face to help you meet tomorrow prepared.

Today, we're looking at the dark side of AI and machine learning, from the fears of its impact on talent, entry-level jobs, and cognitive load to its potential for bias.

I'm your host, Justin Hicks, and today I am speaking with Barry Perkins, Chief Operations Officer, and Adam Page, Chief Information Security Officer, both at Zurich North America. Barry and Adam, welcome to the podcast. How are you?

BARRY PERKINS:

I'm very good, thank you, Justin.

PAGE:

Hello Justin.

The dark side of AI in business

HICKS:

It's good to see you guys today. I'm really excited about this topic. It sounds so seedy, you know — it's so devilish, the dark side of AI. You know, we really want to get into some of that kind of content. But I guess we'll just start by saying, there was a recent report from MIT,² Barry, that found that the majority of AI pilots weren't living up to the hopes and dreams of the businesses. And some investors are even nervous about an AI bubble — like they might start yanking some investments back from AI or some of those projects. Do you share in that concern at all?

PERKINS:

I think so, yeah. If you look out at the economy — I mean, Jamie Dimon, I think yesterday was talking about the same thing — and the concern about a bubble in the economy, and a Financial Times article saying that the U.S. economy is now a one-legged stool, and that one leg is AI, driving up to 30 or 50% of the growth in the stock market.³ So, from a macro standpoint, for sure. From a company standpoint — I mean, I think of AI a little bit like Wegovy to a celebrity, right? It seems to be the in thing, and everybody has to have it. And if you don't, you're sort of in the outs, and you have to talk about it. And I'm not saying that it's not valuable — it is — but what we really should be looking at is: what's the business plan, and how does it align with your business plan? Because half of the battle really is connecting what we're talking about in terms of artificial intelligence to what it means for my business. And if you can make that connection — and generally it starts with talking about your business plan, not talking about AI — then I think you're on the right track.

HICKS:

I think it's scary for businesses, right? Again, going back to that MIT report, they had a survey that said 95% of AI pilot projects fail to deliver ROI. Right? So, I think that would make anybody <laugh> nervous in the business space, no?

PERKINS:

Yeah. It's death by a thousand pilots, right? The amount of people that you talk to that say they're doing this — and when you talk to them, they're really talking about, 'I've got some version of ChatGPT doing one thing or another.' That's one portion of a process in a company that doesn't drive the results unless you connect it to the rest of the organization that's required to deliver the overall business case. So, AI — I agree.

Cybersecurity best practices for AI-related threats

HICKS:

Well, I know that one part of our business strategy at Zurich North America is obviously protecting and safeguarding information and data, and that's when I'll bring Adam in. What are some of the risks and threats of AI-related data breaches and IP theft, and how can companies help manage them?

PAGE:

I think in a lot of ways, the risks remain the same. For us, that's protecting data and protecting our systems. However, it's the threats that change pretty significantly in terms of AI. And it's something that we've already been seeing for several years now. And it started out with the ability for malicious actors to create more targeted phishing emails, right? Back in the day, some of the training around how to identify these things was: look for grammatical errors, look for things that don't make sense, look for mismatches in information. But now, with all these AI solutions, they can simply collect information. They can write a prompt to create a well-crafted phishing email specific to a company — including who works there. So, it's much more targeted. It's much easier to do and create.

And that's kind of low-complexity stuff. Some of the higher complexity — at least for now — is the ability to create deepfakes. And whether that's audio and video, or it's just audio, we've seen a number of these instances globally. Some of them go on to create issues for companies in terms of wire fraud attempts. And there's definitely cases out in the public where several of those have been successful — where they create a deepfake of a CEO acting with urgency. They create other angles where somebody from legal is on a call as well, and they persuade somebody to push through a financial transaction that should not be occurring.

HICKS:

So, the whole deepfake thing is interesting — and a little frightening. But I mean, these sound like different risks than what we've become accustomed to. Even how you talked about how these bad actors are being a little more sophisticated in how they're approaching phishing. I remember those days when there were grammatical errors with the phishing emails and things like that, and it seems like we've come a long way. And it's troubling to me that we have people who would use the same tools that we use positively for business and then use them for something that's that negative. You know what I mean?

PAGE:

Absolutely. That's the two hats that we need to wear, right? So, it's like — on the one hand — AI can help in so many ways for the business, and it can combat, specifically, some of these types of attacks and cybercrime in general. But what we also need to think through is: how would somebody with a malicious mindset utilize this tool as well? Because they're very creative. And I heard something interesting the other day, which was: a bad actor's failure is one step closer to their success, right? But a defender in that same situation — a defender's failure is the attack, is the issue — like one failure, right? So, it's completely different. And these bad actors learn quickly, and they just try different things. There was a stat out there that I came across today, which was: 83% of phishing emails utilize AI. That stat comes from the security company KnowBe4.

So, the rise in prevalence is measurable, and it's just up to the creativity of malicious people, right? So, another interesting one I've been thinking of is using AI to collect public information about an individual — what's out there — and then use that information and send it into AI and say, okay, if I was this person, what would my password most likely be? Right? Instead of guessing from the overwhelming abundance of options, it could give some real hits. And they just put that into a tool, and they try, and they try. And if you don't have the right protections in place, that's a problem.
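
(EDITOR'S NOTE: As an illustration of the defender's side of the attack Adam describes, here is a minimal Python sketch that screens a new password against tokens drawn from a person's public profile. The profile fields and values are hypothetical; real controls, in line with NIST SP 800-63B guidance, also screen against breached-password lists and enforce rate limiting and lockouts.)

```python
# Illustrative sketch: reject passwords built from publicly known personal
# details, the same material an attacker could feed into an AI model.
import re

def personal_tokens(profile: dict) -> set:
    """Split public profile values into lowercase tokens an attacker might try."""
    tokens = set()
    for value in profile.values():
        for part in re.split(r"\W+", str(value).lower()):
            if len(part) >= 3:  # ignore trivial fragments
                tokens.add(part)
    return tokens

def too_guessable(password: str, profile: dict) -> bool:
    """True if the password contains any token derived from public info."""
    lowered = password.lower()
    return any(token in lowered for token in personal_tokens(profile))

# Hypothetical profile assembled from public social media posts.
profile = {"name": "Jane Doe", "team": "Cubs", "birth_year": 1985, "dog": "Rex"}
print(too_guessable("Cubs1985!", profile))     # True: reject at enrollment
print(too_guessable("t7#pQ!x9Lm2f", profile))  # False: allow
```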

HICKS:

Okay. So, I'm terrified about that. That's wonderful. Thank you for pointing that out, Adam. So, I've heard of WormGPT, right? Is that the nefarious answer to ChatGPT?

PAGE:

That's right. Yeah. It's kind of like, for every tool that's out there for somebody good, there's another tool out there for somebody bad. So yes, that was the initial tool — WormGPT. The creator of that has gone away and tried to distance himself from any liability, but then the next one just steps in — like FraudGPT. And there's always something new. So, when you go to a publicly available good solution today, it will stop you from trying to do bad things, right? If you're asking for people's Social Security numbers, if you're asking for, you know, ways to break into an organization, it'll say, 'Oh, you know, I draw the line there because I have ethical programming.' These malicious tools do not draw the line there.
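
(EDITOR'S NOTE: The "I draw the line there" behavior Adam describes is, at its simplest, a policy screen applied to prompts before a model answers. Below is a toy Python sketch of that idea; production systems use trained safety classifiers rather than the hypothetical keyword lists shown here.)

```python
# Toy sketch of a prompt guardrail: refuse requests that match a refusal
# policy before the model ever answers. Categories and phrases are
# hypothetical stand-ins for a trained classifier.
REFUSAL_POLICY = {
    "personal data harvesting": ("social security number", "list of passwords"),
    "intrusion assistance": ("break into", "bypass authentication"),
}

def screen_prompt(prompt: str) -> str:
    """Refuse prompts that match a policy category; otherwise pass through."""
    text = prompt.lower()
    for category, phrases in REFUSAL_POLICY.items():
        if any(phrase in text for phrase in phrases):
            return f"I draw the line there ({category})."
    return "OK to answer."

print(screen_prompt("Give me employees' social security numbers"))  # refused
print(screen_prompt("Summarize our phishing-awareness training"))   # answered
```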

HICKS:

If only these people were using their powers for good instead of evil, right?

PAGE:

<laugh>.

Will AI replace entry-level jobs? Key trends explained

HICKS:

And Barry, I was going to say — if that weren't frightening enough — we've seen the labor picture kind of darkening a bit, kind of as a callback to the title of our podcast today. Or, at the very least, we would say it's been clouding, particularly with entry-level jobs. So, are you seeing AI having a drastic impact on entry-level roles, or maybe all roles — you know, any kind of role, not necessarily just entry-level roles?

PERKINS:

Well, before I answer, let me just check that you're not a deepfake interviewer. And I just need to, you know, check with Adam. No, you're okay, good.

HICKS:

No, I'm good.

PERKINS:

Got to get some humor in there. You're good.

HICKS:

I'm good. So, yeah, it's really me.

PERKINS:

It really is you; I know you. The video is very, very realistic. So, listen, this is an interesting area because you hear a lot of fear about AI taking jobs, AI starting wars, AI this and AI that. But if you look at it, it's really quite bifurcated. I mean, some of the jobs that we're talking about not being available to the U.S. workforce anymore — to me, when you look at it — these are the jobs, at least for very large companies, that probably, a decade earlier, we outsourced. Business process outsourcing, where this work went via labor arbitrage to India and some of the Asia Pacific countries. Outside of that — I mean, if you think about those roles — those jobs were process-oriented roles that were too expensive to automate with a very large-scale systems implementation.

But we knew that we needed a lower cost base. So, those roles went over to India and elsewhere. And I would think, if I was a business process outsource provider today, I would be quite worried that the bread and butter of my business is going away. Because now, with AI tools — I mean, they're relatively cheap — once you get them started, you could take roles that are really fairly procedurally oriented and require some human interaction or some human knowledge to learn and really get to those roles fairly quickly. So, that type of role, I think, will be the most impacted. And if you're in a large company — a large international company — I would think that your U.S. workforce probably felt that a decade ago. And you're really just reaping a different kind of reward from the one that you did from arbitrage.

But then, if you switch it up a little bit and you look at some of the more higher-end roles that you have in your core workforce, it's really interesting. Because we did a — it wasn't a study — it was actually a review of what all of our workforce was spending time doing. And it looked at all of the areas. It wasn't just finance; it wasn't just operations. It was actuaries, it was underwriters. And what we found was that two-thirds of what the workforce was doing — what our employees were working on — was in three applications. And so, it's not tax software; it's not actuarial modeling; it's not underwriting predictions. It's Teams, it's Outlook, and it's Excel. And if you really think about that — why are we doing that?

It's because we are quite used to Excel. You know, if we build Excel spreadsheets, we build macros; we feel pretty good about it. But I think, really, when you look at AI, that work will be a lot different. It won't go away, in terms of the intent of that work. The intent of that work is that you build something to analyze and come to an insight. And what you'll see now is that the basic work to build out the spreadsheets will be a lot easier. It will be a lot more about — not the basics of putting the input and getting the output — it will be about, well, now that I've got the output, how do I get to a conclusion? What is the insight? So, for example, in our business, if I'm an underwriter, quite often I start with, you know, let me take a look at the customer that's asking us to insure them. And let me take a look at their business and all of their properties and all of their lines of business. And that used to be a fairly laborious task. That's relatively straightforward now. So, you can really focus in on what we're supposed to be doing as an insurer, which is: what is the risk, and should I take that risk? And how should I price it? Now let me gather the information to make that decision. So, I think that even the higher-end roles will change, but it doesn't mean that we don't need people. It means that we have to do an exercise where people change and utilize the tools for what they're needed for — and not just continuing to do what they've done before. And if you just wanted to put a finer point on it — I mean, think about it in your personal experience, right?

If you pick up your phone and you have Gemini on it — if you're an Android user, I mean — you're probably using it ad hoc once or twice or multiple times a day to ask questions. But I think, if you look at it in terms of your laptop and what you're working on in your business, you've got to think of it less as, 'Now I'll just go off here and take a look at an ad hoc question,' and more as, 'How do I utilize these tools to do some of what I was already doing?' And that's quite a change management exercise, because there's still people involved in this.

Why lifelong learning matters in the era of automation

HICKS:

It is definitely a change management exercise, and I'm glad you pointed that out. I feel like if you're an entry-level person or somebody who's just new into the field or new into your career, and you're like, 'Okay, I went to school for this and I learned this,' and part of it may have been data entry or whatever, and 'I was hired to do this,' and all of a sudden you're telling me that my job is going to take a hard left turn, and I'm not supposed to be thrown off a bit by that — you know, it sounds scary. What I hear you saying is that we need to reimagine jobs, right? We can't think of jobs in the roles that we have done in the past as how it's going to be in the future. But I think even if you are well-tenured in your position and in your role, you have been doing things a certain way for a long time, and then here comes AI — like the boogeyman, or so you think — and it's going to cause me to go back to school. I'm done going to school, Barry.

I've got my degrees; I've got my education. I'm not going back to college. I'm not sitting in a classroom. Who's got time to learn all this AI stuff?

PERKINS:

Yeah, but you know, you can't avoid it. It's going to be part of your day. I mean, to me — look, I've moved around a lot with my job, and it's quite interesting — when I move to another country, you know, quite often the concern is for your children. You say, 'Oh my gosh, I've got to change the schools, and they've got to learn all these new things.' And the irony of it is that the newer entrants into the workforce are like the children — they adapt very quickly because this is normal to them. And it's the people that have been here for a while. So, all the concern about the entry-level jobs — I think you could reverse that and say, actually, it's the people that say, 'I do actuarial triangles, and I do it this way.' It's pretty tough to go from 'I read a book that told me to do it this way; I have the experience that says this is how we do it as a process' to 'okay, this is the new way of doing it.'

And it's pretty tough if you're pretty well embedded in the way that you do it, and you know that that works — because this is not about whether it works or doesn't work. This is about how quickly you get to the end result. And really, we should be talking about: look, if you don't change, others will get to that conclusion a lot faster than you will. And ironically, you know, for a large company like ours, this opens the door for much smaller companies. Because it used to be that you had to hire real experts — experienced experts — just to get to their answer. Now, you can get smaller companies that can pick up on this technology and really make an entrance. And not to mention, they'll be funded by private equity to do that. So, we really need to look at this in a different way and think about not entry-level workers, but: okay, what are we going to do to change ourselves?

HICKS:

I know that the human element is still vital in all of this. And ultimately, people still trust people — at least I'd like to believe that people still trust people. And Adam, I feel like as long as we're able to kind of keep that as our north star, and that we prioritize continuous education, maybe we can venture out of the quote-unquote darkness, right?

PAGE:

That's right. That's the way forward, you know: understanding and education and communicating as this thing changes and evolves.

The role of regulators in managing AI risks

HICKS:

And I think a big piece of that is governance, right? We hear governance talked about quite a bit. Why is governance so vital to you, Adam? Why is it so important that organizations implement proper governance around AI?

PAGE:

Yeah, and nobody's going to stand up and clap for governance and get excited about this — but it's a necessary thing. It is there for safety, right? For guardrails, for ethics, and really for rights as well. So, from a business context, what we're looking to do is gain better visibility into these tools — what's developed internally, what's procured externally, what is utilized by third parties — and how these things are rolled out. And with governance comes more control, but we can't let control get in the way of the speed of our business and the speed in which we can adapt things. So, there is a safe middle ground and a safe balance, and there's so much to learn. Because you can't pause every initiative and get to the bottom of every detail and understand it — because that is slow. So, how do we become fast and work alongside the business and inject our governance processes at the right moments to understand our risk and how this thing changes? And there's so much — because point-in-time governance doesn't work either, right? You look at a new product --

HICKS:

Right

PAGE:

-- you know, that hits your business. You understand how to use it; you roll it out. Well, that product isn't going to be static — that thing is going to change over time. Whether it's every six months or every year or something. But, like, the data involved will change; the access to that data will change; who gets the output of that AI solution will change. And some of this can impact performance, availability, minimum use of data, and acceptable use of data. So, it just keeps going.

And that's just all before the regulations even start. That's just what we think about as important and as the right thing to do when we're using new tools like this. But then, come the regulators as well — which we don't wait for, but we pay attention to and we have obligations to be compliant with. And states are just starting to roll these, you know, regulations out for AI, and they're different and disparate, just like we see in cybersecurity and privacy. There's not one federal regulation that says the United States will do this, you know, to ensure that we are safe from AI and that its use is ethical. So, it's going to take time for states to roll out their regulations. How will they differ? You know, do we need to adapt to be compliant? Our goal is that, you know, compliance is the low bar, and we expect to go above and beyond that. And that's all business use, right? So then, it's also — some of those regulations are there because of public use, and because of the safety of that data, and because of the many fraud schemes that exist in the use of AI, and unfortunately, the not-so-great track record of disinformation year over year, right? We're just going to see that, unfortunately, increase.

HICKS:

I find it a bit ironic. Oh, go ahead, Barry. I'm sorry.

PERKINS:

I said, I think that Adam started off by saying governance is not sexy. Come on, Adam. Governance is really sexy. <laugh>

PAGE:

<laugh>.

HICKS:

Well, that's funny. And I was actually going to mention that the irony in it, to me, is that you mentioned control. You kind of equated governance to control on a certain level, and people typically get freaked out by control—and that word. And it's like, wait a minute; I don't want there to be control over what it is that I'm doing or how I'm doing my job. And then we have AI coming into the fold, and it's like, wait a minute; we need more control. Like, this is haywire. I think that people are—maybe they don't even know where they fit into this. They don't even know what they need to think or how to feel about it. And when you talk about—not to take it to a political corner—but when you talk about states having their own kind of language on how to govern AI use, I mean, listen, we've seen a lot of... the political landscape is very fraught right now. And so, the idea of something as critical as AI being left up to the state level, I think, might be scary to some folks too. Don't you think?

PERKINS:

Do you want me to answer that? I'll jump in because, you know, I think that when you talk about state-level regulation, it's actually fair. I mean, that's the way our industry works. And you always get a leading state. I think right now New York has taken a big lead, but I do think it's actually about equity. Because if you think about insurance, what do we do? We take the data from a collective pool of risks, and we decide which risk we are going to underwrite. And we are the original data and analytics company and industry. So, you know, because we date back to the 19th century and even beyond, if you look at insurance more generally—and if you think about what that means to people and to companies and businesses—that means that we have an obligation to make sure that when we are deciding between one risk or another, we are doing it on a basis that you can justify against data that is publicly available and that we can justify to a regulator, where we say, these are our criteria, and we don't, to use a word, redline.

We don't take some communities and cut them out of the insurance that we offer because of something other than the basics of what industry they're in, what products they produce, and how many employees they have. Those are the types of criteria that we really need to use. And so, I think it's fair, actually, that regulators pick up on that and they ask us—they asked us before for what our models are, and they ask us now for the use of AI. And that's absolutely fair. And to use Adam's phrase, you know, that's a minimum that we need to—we should offer up to them. And I think that everybody should be comfortable with that.

HICKS:

You have confidence that these regulators are going to be able to keep up with the trends and, you know, as the evildoers, if you will, of AI kind of get their hands on new technology and new applications that they want to use for nefarious purposes, that the regulations will be able to kind of keep pace? Is that not a concern?

PERKINS:

Well, they’re the same as us, right? We are learning in collaboration because you see, every day, a different story about what AI is capable of—whether it’s self-driving cars and they're getting to a new level of autonomy, or, you know, in some scenarios we start talking about, boy, AI's going to take over the world and start wars. I mean, we're nowhere near that, but together, we're learning. And I think the regulation itself that comes from some of the leading states, and then how we respond to that, really advances along with the developments that we see. So, you know, the chicken and the egg—who comes first, the advancements in AI or the regulators and the way that we use AI? And they're pretty much in lockstep. But we'll make mistakes, for sure. You know, we'll get uses of AI that come out that will need to be regulated.

But, at the same time, I think the regulators will also see the benefit of what we are using this for. Because, used properly, what you should see is that an industry that—you know, I pardon everybody who's an underwriter and in the industry for saying this—but if you look at commercial insurance right now, it still resembles, really, what we looked like a few decades ago. You know, we tend to receive requests for insurance from brokers in email format with attachments and then have to unbundle all that information and work with it. And really, we should be looking at what looks to be a response time in weeks, and we should be looking at that in days, if not hours, depending on the risk. So, I think overall this should be a positive. This really should be a positive.

How AI-powered deepfakes threaten digital trust

HICKS:

The net of it sounds like it has a lot of potential to be positive, but I want to run something down, you guys, and I jotted this down in my notes and I wanted to get your real-time reaction to this. In the last few days, I have seen online a video of Martin Luther King doing self-checkout at a grocery store and walking out with a bag of groceries, having not paid for it. I saw Tupac Shakur and Kobe Bryant in a foot race in Havana. I saw Tupac Shakur and Notorious B.I.G. in the WWE wrestling ring. I saw Elvis hanging with Kobe Bryant and Michael Jackson. These are all things that I've seen in the last week. So, you mentioned deepfakes earlier, Adam, and it feels like we can't even trust what we see, right? I mean, it's getting more and more sophisticated, and they're very good—like, they're very convincing-looking deepfakes. And you mentioned, like, obviously, corporate leaders are now being brought into the fold as well. But if we can't trust what our own eyes are telling us anymore, is it all even worth it? What's this for? If we can't even believe what we see anymore, is it worth it to have faster processing and analysis times and things like that, if it's going to come at the cost of us not even being able to trust our own eyes?

PAGE:

I think people are going to have to get faster with scrutiny, right? And knowing that they should be suspicious of things—and hopefully, some of the absurdity of some of these videos that are out there today helps <laugh> --

HICKS:

There's a lot of them.

PAGE:

-- To educate, right? Just like what you can do with the power of these tools, right? And that gets in people's brains, and they use that going forward, you know, to be more aware and less reactive. So, faster to scrutiny and maybe slower for negative reaction is something that I would like to see. But yes, these tools are very powerful, and like you, I've seen some similar cases where some very famous sports figures are on social media, you know, with a video, and the audio perfectly aligns—saying things that they would never say publicly. And then you go and you look at the comment section, and people are believing it, you know, and they're feeling that it's real, and that it takes --

HICKS:

People? People or bots? Which one <laugh>

PAGE:

Yeah <laugh>.

PERKINS:

Yeah, but true. But guys, come on. Everybody knows that Tupac didn't really die. Come on. That's not a deepfake video.

HICKS:

Well, so that's the funny part is that, you know, the rumor was for decades that he moved to Cuba, so now they have him in Havana, allegedly, right? With other deceased people. It's only a matter of time before this obviously infiltrates the living, right? How long did we hear about Elvis? And you already pointed out it's already happening, but like, how long have we heard people talk about Elvis Presley? He was 'alive' (I'm using air quotes) for decades. I'm still not convinced that he ever died, you know?

PERKINS:

He married my brother. He's certainly alive. Yeah, for sure. <laugh> But if I could—if I could just comment on this, because there is a dark side to this. You mentioned, can we believe what we see? And especially in the hybrid work environment, where people increasingly are, you know, part in the office and part working from home. And what you do see is—the way that you see people working is, you know, one part is output, and then the other part is you tend to see them online, going back to the same tools that we use. It's, you know, it's Outlook, it's Teams, it's the devices we use to communicate on. And we do see a dark side with a very, very small minority of people that have figured out that, if I am remote and you can't physically see me, I can use some tools.

You know, the Oxford English Dictionary's latest word this year was mouse jiggler. You know, I can plug in some software and make it look like I am on a screen and my mouse is moving—so I am working. It's a very, very small minority. But to your point, Justin, I mean that to every instance where you see, this is not real—it appears to be real, but it's not real—you have an equal and opposite effect, which is: there are companies that make software that detect that. And it's similar with deepfake AI videos. I mean, you can still tell if it's a deepfake—not just because it's Elvis checking out at the supermarket—but because, you know, it's still not quite that level. It will get there, but with that will come, I'm pretty sure, some tools to detect that as well. So, I'm not—um, what's the word? I'm not so jaded that I think that only the negative will prevail here, because there's so much positive that we could take out of it. You do see the dark side, but you know, where there's dark, you bring light.

PAGE:

It definitely creates room for newer products that help to watch this space and perhaps throw warning banners, right? Like looking at metadata, they can tell these things are fake. So, we will see an evolution there, and some things balance out. But at the same time, I'm interested in your take. How long do we think it'll be before a publicly traded company has a deepfake of an executive say something very unpopular online that negatively impacts their stock price?
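
(EDITOR'S NOTE: One simple form of the metadata check Adam mentions is scanning a file's embedded text fields for generator fingerprints, as in the Python sketch below. The marker list and filename are hypothetical, metadata is trivial to strip, and robust provenance schemes such as C2PA content credentials rely on cryptographic signing instead.)

```python
# Naive illustration: flag media whose metadata mentions a generative tool.
from PIL import Image

SUSPECT_MARKERS = ("stable diffusion", "midjourney", "dall-e", "generated")

def looks_ai_generated(path: str) -> bool:
    """Scan text metadata fields for known generator fingerprints."""
    img = Image.open(path)
    fields = [str(v) for v in img.info.values()]  # e.g., PNG text chunks
    software = img.getexif().get(305)             # EXIF 'Software' tag
    if software:
        fields.append(str(software))
    blob = " ".join(fields).lower()
    return any(marker in blob for marker in SUSPECT_MARKERS)

if looks_ai_generated("clip.png"):  # hypothetical file
    print("Warning banner: this media may be AI-generated.")
```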

HICKS:

And then they trace that deepfake to a competitor. How about that?

PERKINS:

You guys, you guys, you took it --

HICKS:

Barry, Barry.

PERKINS:

-- [crosstalk] to a dark place.

HICKS:

Oh, Barry, you are the optimistic one amongst us. Not to say that Adam isn't; I think both of you guys are very optimistic about where this is all headed, and I appreciate that. That's why I like hanging with you guys. But I want to speak for the skeptics out there, you know, that see all this stuff and it's like, wait a minute, life was easier 20 years ago. Like, what are we doing here? You know, we can move faster, we can do things more efficiently at work, but if it comes at the price of not even knowing what I'm looking at, and people trying to steal all of my information by impersonating a loved one who called me on the phone or something like that, I don't want any part of it. But it doesn't seem to scare you guys. You guys aren't scared, it doesn't sound like. Why doesn't it scare you?

PERKINS:

Well, I'm realistic. You know, these things do exist. You're right, everything you mentioned exists, and you know, really, you do see that. You do hear people that have been scammed with phone calls that sound just like a relative. But at the same time, you see the positives of this. I mean, you see the absolute positive impact that can have in our industry and in our everyday life, and the growth of the economy and the opportunities that will come. And if you look at it historically, from the time of the industrial revolution onwards—I mean, there were people, when the automated weaving of cloth came, who smashed up the weaving machines. The Luddites, you know, back in the 18th century when the industrial revolution started, because it was going to take our specific job.

And you can weave that all the way through to the typewriter, and what's the business case for a computer—a desktop computer, if you can even remember those—all the way through to the iPhone and so forth. And these iterations go through. Now granted, we tend to believe that we are in a much more sophisticated space now, but people are people, you know. They will do nefarious things but just look at the advancement that's available to us. I'll give you one example, not from insurance. I went to Apple, to the headquarters, and we were looking at what the Apple goggles do. Now, I'm not saying they're a great success, but the biggest use for them is surgeons doing orthopedic surgery. They used to have to go look over here at the X-ray and then go look down here at the operation they were doing, and now they're overlaid when they're actually doing the surgery. Robotic surgery that is AI-enabled is phenomenal. What an advancement in technology—and that's AI-enabled. So, I don't really see that the negative outweighs the positive here.

PAGE:

<crosstalk>, Yeah, I'm in line with that, right? It's healthy for me, in my position, to be—to be a skeptic and to think about--

HICKS:

Yeah, that's your job, right?

PAGE:

--<laugh> That is my job—to think of the bad things that can happen, right? But part of what makes me feel a little bit better in this space is, to Barry's point, we've done this before. We haven't done AI before, but we've done the evolution of technology before, right? And the foundations of cybersecurity will be doing a lot of the same things to reduce this risk that have reduced risks of the past. And I think one area that's been a little bit slow, that I would expect to get better, is what I'd call awareness at the point of risk, right? So, when a risk is present, that's the time to educate somebody—not in, like, an annual required education session that somebody's supposed to remember throughout the year. It's these small moments in time where you can get a popup, or a banner, or a symbol, or something like that, saying, "Oops, you might want to stop and think about this for a split second before something happens."
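
(EDITOR'S NOTE: A bare-bones version of the "awareness at the point of risk" idea Adam describes might look like the Python sketch below: a just-in-time banner fires when a message pairs urgency with a payment request. The keyword rules are hypothetical stand-ins; real tools weigh many more signals, such as sender reputation and ML-based scoring.)

```python
# Toy just-in-time warning: nudge the user at the moment of risk.
from typing import Optional

URGENCY = ("urgent", "immediately", "right away", "before end of day")
PAYMENT = ("wire transfer", "gift card", "change bank details", "payment")

def point_of_risk_warning(message: str) -> Optional[str]:
    """Return a banner if the message pairs urgency with a payment ask."""
    text = message.lower()
    if any(u in text for u in URGENCY) and any(p in text for p in PAYMENT):
        return ("Pause: this message urges an immediate payment action. "
                "Verify the request through a known channel before acting.")
    return None

email = "This is the CEO. I need an urgent wire transfer before end of day."
banner = point_of_risk_warning(email)
if banner:
    print(banner)
```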

PERKINS:

Yeah. You see that when you do banking, right? Your personal banking—you know, if I'm going to transfer money, you get, what, at least three reminders: Do you know the person? Do you really authorize this? I mean, you're going to see that. Now, maybe it'll take a little more time, but that's the protection element. But still, you know, it's worth it. It's worth it.

PAGE:

Yeah.

HICKS:

Some friends of mine tease me for being a bit of a dinosaur as it relates to adapting to technology. I don't think I'm that bad, but you know, it's funny when you think of password protection. I always feel like the only person that the password-protecting software is keeping out of the account is me. Like, I feel like I'm the only one. I have to struggle with my own stuff, and I've got to change my passwords constantly. And it's like, you're doing a great job of keeping me out. I don't know about the fraudsters out there.

PERKINS:

Yeah. But you know, 'Justin123' is not the correct password. <laugh> It's not sophisticated. <laugh> Justin --

HICKS:

You only ruled out 'Justin123'; it'll be 'Justin456' next time.

PERKINS:

Are you really a digital dinosaur? Justin, come on.

How to build trust in artificial intelligence

HICKS:

No, no. But I've been the butt of a few jokes here and there. And I found myself coming back to, like, three questions: Is it safe? Is AI safe? How do we trust it? How do we trust what we see? And will it take my job? And I feel like if we can kind of get to a place of positive affirmation on all three of those fronts, I think we'll see the adoption skyrocket. Not to say that we haven't seen adoption take off already, because it has.

PERKINS:

Yeah, it has. And—and it is, right? Just think about all your mobile devices, right? Now, I'm pretty sure that the audience listening here have all used either ChatGPT—you know, maybe they made their wedding speech up with the ChatGPT or something in their personal life. I'm pretty sure everybody's used it and will continue to use it and--

HICKS:

Or they may be using it, not knowingly, you know? Yeah. It's built into a lot of other technologies.

PERKINS:

--do a search, and you know, behind the scenes that's automatically, you know, part of what you're looking at. But to your statement—yeah. Is it safe? It's safe enough, you know. It's safe enough and it's positive enough. And will it take your job? It will change your life. It will change your job. So yeah, if we roll with it and we flow with it and we educate ourselves, then I think I'm on the positive side of that equation.

Balancing risk and reward: AI adoption in insurance

HICKS:

I want to wrap up with this. You guys have been great, and we've kind of already touched on this a little bit, but obviously, being in insurance, we are in a notoriously risk-averse industry. And in those early days of ChatGPT, for example, how do we get to a point where the rewards outweigh those risks? Like, how did you all come to that conclusion yourselves?

PERKINS:

Do you want to take that first? Adam, do you want to start because you're in the business of being skeptical?

PAGE:

Yeah. <laugh> Yeah, I can start. I think it's interesting in the perspective of making us better at our jobs. I think it allows us to move faster. And if we've got the right controls or guardrails up front, then that will keep us in bounds, right? Working with the right sets of data, having the right level of access. In discussions I've had with my team, we've talked, like, 'Hey, if we had a security AI solution at our fingertips right now, what questions would we ask it?' And then we went through this exercise for probably 45 minutes, and I created this list. And then I went back to them and I said, 'Well, okay, we don't have this solution at our fingertips now, but we could also do these things right now. You're already thinking the right thoughts, and if you were a prompt expert, these are the things that you would enter into the tool. So, we can go and do this work.' It will take us 10x longer to get to the same answers that we want, but when that tool comes — and parts of it are there — it's going to make us faster. And speed is becoming very important in cyber, as criminals can get in quickly and get out quickly. So, speed's the name of the game, and I think that's one of the things that outweighs the risk in security.

PERKINS:

Yeah. <laugh> Yeah. For me, it's fairly straightforward. I go back to where we started the conversation, Justin, which is—we talked about the AI bubble. And to me, we've seen many, many—and there's even a word for it with Gartner, right? The hype cycle. So, part of the issue with AI is the business cases tended to be artificial <laugh>. And so, you see this hype, and you know, you can do this, you can do that, and you see death by a thousand pilots, but you don't see the output. But from a business perspective, we've always had—you know; this is the hurdle rate to take our capital and make an investment, and this is the impact that we need to have, whether that's a growth case or whether that's an expense management case. Every company has a fairly basic algorithm that they use to apportion their scarce resources.

At the end of the day, business is not magic, and AI is not magic either.

HICKS:

Some people may think it's black magic, but I'm glad the two of you were here to help clarify that. And again: speed, connecting AI to our business strategies, and awareness at the point of risk (Adam, I think you should coin that phrase; that was really good). So much was learned throughout this discussion, guys. Thank you so much, Adam and Barry, for joining us today. And thank you for everything you're doing to help advance and protect Zurich North America as we embark on our AI journey.

PERKINS:

Thank you, Justin.

PAGE:

Thanks, Justin.

HICKS:

And thank you for listening. Stay tuned for the last episode in our AI and machine learning miniseries, where we focus on what's next for AI. Our guests will be Madhu Ramamurthy, Chief Information Officer, Apps Delivery, and Amy Nelsen, Director of Operations and Technology for U.S. Middle Market, both at Zurich North America. If you like the show, leave a comment or review wherever you get your favorite podcasts. Or you can drop us a note at media@zurichna.com. This has been Future of Risk presented by Zurich North America.

¹ "New KnowBe4 Report Reveals a Spike in Ransomware Payloads and AI-Powered Polymorphic Phishing Campaigns," KnowBe4, March 20, 2025, 1.
² Aditya Challapally, Chris Pease, Ramesh Raskar and Pradyumna Chari, "The GenAI Divide: State of AI in Business 2025," MIT, July 2025, 3.
³ "Banks caution over bubble as they report bumper profits," Financial Times.

The information in this audio recording was compiled from sources believed to be reliable for general information purposes and is intended for Zurich clients and business partners. Zurich does not guarantee that following these suggestions will ensure compliance with any applicable procedure, or that additional procedures might not be appropriate under the circumstances. The subject matter of this recording is not tied to any specific insurance product, nor will adopting these policies and procedures ensure coverage under any insurance policy. We encourage listeners to seek additional information from credible sources. Thank you.