Marketing Podcast with Kenneth Wenger
In this episode of the Duct Tape Marketing Podcast, I interview Kenneth Wenger. He is an author, a research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology.
His newest book, Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI, explains the complexity of AI, demonstrating its potential and exposing its shortfalls. Kenneth empowers readers to answer the question: What exactly is AI?
Key Takeaway:
While significant progress has been made in AI, we are still in the early stages of its development. Current AI models are primarily performing simple statistical tasks rather than exhibiting deep intelligence. The future of AI lies in developing models that can understand context and differentiate between right and wrong answers.
Kenneth also emphasizes the pitfalls of relying on AI, particularly the lack of understanding behind a model’s decision-making process and the potential for biased outcomes. Making these machines trustworthy and accountable is crucial, especially in safety-critical domains where human lives could be at stake, such as medicine or law. Overall, while AI has made substantial strides, there is still a long way to go in unlocking its true potential and addressing the associated challenges.
Questions I ask Kenneth Wenger:
- [02:32] The title of your book, Is the Algorithm Plotting Against Us?, is a provocative question. Why ask it?
- [03:45] Where do you think we really are in the continuum of the evolution of AI?
- [07:58] Do you see a day when AI machines will start asking questions back to people?
- [07:20] Can you name a particular instance in your career where you felt like “This is going to work, this is like what I should be doing”?
- [09:25] You have both layperson and math in the title of the book, could you give us sort of the layperson’s version of how it does that?
- [15:30] What are the real and obvious pitfalls of relying on AI?
- [19:49] As people start relying on these machines to make decisions that are supposed to be informed, those predictions could often be wrong, right?
More About Kenneth Wenger:
- Get your copy of Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI.
- Connect with Kenneth.
More About The Agency Certification Intensive Training:
Take The Marketing Assessment:
Like this show? Click on over and give us a review on iTunes, please!
John Jantsch (00:00): Hey, did you know that HubSpot's annual inbound conference is coming up? That's right. It'll be in Boston from September 5th through the eighth. Every year inbound brings together leaders across business, sales, marketing, customer success, operations, and more. You'll be able to discover all the latest must know trends and tactics that you can actually put into place to scale your business in a sustainable way. You can learn from industry experts and be inspired by incredible spotlight talent. This year, the likes of Reese Witherspoon, Derek Jeter, Guy Raz, are all going to make appearances. Visit inbound.com and get your ticket today. You won't be sorry. This programming is guaranteed to inspire and recharge. That's right. Go to inbound.com to get your ticket today.
(01:03): Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Kenneth Wenger. He's an author, research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology. We're gonna talk about his book today, Is the Algorithm Plotting Against Us?: A Layperson's Guide to the Concepts, Math, and Pitfalls of AI. So, Ken, welcome to the show.
Kenneth Wenger (01:40): Hi, John. Thank you very much. Thank you for having me.
John Jantsch (01:42): So, so we are gonna talk about the book, but I, I'm just curious, what, what does Squint AI do?
Kenneth Wenger (01:47): That's a great question. So, Squint AI, um, is a company that we created to, um, do some research and develop a platform that enables us to, um,
(02:00): Do AI in a more responsible, uh, way. Okay. Okay. So, uh, I'm sure we're gonna get into this, but I touch upon it, uh, in the book in many cases as well, where we talk about, uh, AI, ethical use of AI, some of the downfalls of AI. And so what we're doing with Squint is we're trying to figure out, you know, how do we try to create an environment that enables us to use AI in a way that lets us understand when these algorithms are not performing at their best, when they're making mistakes and so on. Yeah,
John Jantsch (02:30): Yeah. So, so the title of your book, Is the Algorithm Plotting Against Us?, is a bit of a provocative question. I mean, obviously I'm sure there are people out there that are saying no
Kenneth Wenger (02:49): Well, because I, I actually feel like that's a question that's being asked by many different people with actually with different meaning. Right? So it, it's almost the same as the question of is AI posing an existential threat? I, I, it's a question that means different things to different people. Right. So I wanted to get into that in the book and try to do two things. First, offer people the tools to be able to understand that question for themselves, right. And first figure out how, where they stand in that debate, and then second, um, you know, also provide my opinion along the way.
John Jantsch (03:21): Yeah, yeah. And I probably didn't ask that question as elegantly as I'd like to. I actually think it's great that you ask the question, because ultimately what we're trying to do is let people come to their own decisions rather than saying, this is true of AI, or this is not true of AI
Kenneth Wenger (03:36): That's right. That's right. And, and, and again, especially because it's a nuanced problem. Yeah. And it means different things to different people.
John Jantsch (03:44): So this is a really hard question, but I'm gonna ask you, you know, where are we really in the continuum of, of AI? I mean, people who have been on this topic for many years realize it's been built into many things that we use every day and take for granted. Obviously, ChatGPT brought on a whole nother spectrum of people that now, you know, at least have a talking vocabulary of what it is. But I remember, you know, I've been, I've had my own business 30 years. I mean, we didn't have the web
Kenneth Wenger (04:32): You know, that's a great question, because I think we are actually very early on. Yeah. I think that, you know, we've made remarkable progress in a very short period of time, but I think we're still at the very early stages. You know, if you think of AI, where we are right now versus where we were a decade ago, we've made some progress. But I think, fundamentally, at a scientific level, we've only started to scratch the surface. I'll give you some examples. So initially, you know, the first models, they were great at really giving us some proof that this new way of posing questions, you know, the, uh, neural networks essentially. Yeah, yeah. Right. They're very complex equations. Uh, if you use GPUs to run these complex equations, then we can actually solve pretty complex problems. That's something we realized around 2012. And then after around 2017, so between 2012 and 2017, progress was very linear.
(05:28): You know, new models were created, new ideas were proposed, but things scaled and progressed very linearly. But after 2017, with the introduction of the model that's called the Transformer, which is the base architecture behind ChatGPT and all these large language models, we had another kind of realization. That's when we realized that if you take those models and you scale them up, in terms of the size of the model and the size of the dataset that we use to train them, they get exponentially better. Okay. And that's when we got to the point where we are today, where we realized that it's just by scaling them. Again, we haven't done anything fundamentally different since 2017. All we've done is increase the size of the model, increase the size of the dataset, and they're getting exponentially better.
John Jantsch (06:14): So, so multiplication rather than addition?
Kenneth Wenger (06:18): Well, yes, exactly. Yeah. So the progress has been exponential, not just a linear trajectory. Yeah. But I think, again, given that we haven't changed much fundamentally in these models, that's going to taper off very soon. That's my expectation. And now, where are we on the timeline? Which was your original question. I think if you think about what the models are doing today, they're doing something very elementary. They're doing very simple statistics, essentially. Mm-hmm.
John Jantsch (07:39): Absolutely. I mean, I totally agree with you on artificial intelligence. I've actually been calling it IA. I think it's more like informed automation.
Kenneth Wenger (08:06): Yeah. So the, the, the simple answer is yes. I, I definitely do. And I think that's part of what, what achieving a higher level intelligence would be like. It's when they're not just doing your bidding, it's not just a tool. Yeah, yeah. Uh, but they, they kind of have their own purpose that they're trying to achieve. And so that's when you would see things like questions essentially, uh, arise from the system, right? Is when they, they have a, a, a goal they wanna get at, which is, you know, and, and then they figure out a plan to get to that goal. That's when you can see emergence of things like questions to you. I don't think we're there yet, but yeah, I think it's certainly possible.
John Jantsch (08:40): But that's the sci-fi version too, right? I mean, where people start saying, you know, the movies, it's like, no, no, Ken, you don't get to know that information yet. I'll decide when you can know that
Kenneth Wenger (08:52): Well, you're right. I mean, the question, the way you asked the question was more like, is it possible in principle? I think absolutely, yes. Yeah. Do we want that? I mean, I don't know. I guess that's part of, yeah, it depends on what use case we're thinking about. Uh, but from a first-principles perspective, yeah, it is certainly possible. Yeah. To get a model to
John Jantsch (09:13): Do that. So I, I do think there are scores and scores of people whose only understanding of AI is: I go to this place where it has a box, I type in a question, and it spits out an answer. Since you have both layperson and math in the title, could you give us sort of the layperson's version of how it does that?
Kenneth Wenger (09:33): Yeah, absolutely. So, well, at least I'll try, lemme put it that way,
(10:31): So basically, for any word in a prompt or in a corpus of text, they calculate the probability that word belongs in that sequence, right? And then they choose the next word with the highest probability of being correct there. Okay? Now, that is a very simple model in the following sense. If you think about how we communicate, right? You know, we're having a conversation right now. I think when you ask me a question, I pause and I think about what I'm about to say, right? So I have a model of the world, and I have a purpose in that conversation. I come up with the idea of what I want to respond, and then I use my ability to produce words and to sound them out to communicate that with you. Right? It might be possible that I have a system in my brain that works very similar to a large language model, in the sense that as soon as I start saying words, the next word that I'm about to say is one that is most likely to be correct, given the words that I just said.
(11:32): It's very possible. That's true. However, what's different is that at least I already have a plan of what I'm about to say in some latent space. I have already encoded in some form. What I want to get across, how I say it, that the ability to pro to produce those words might be very similar to a language model. But the difference is that a large language model is trying to figure out what it's going to say as well as coming up with those words at the same time. Mm-hmm.
John Jantsch (12:20): I, I have certainly seen some output that is pretty interesting along those lines. But, you know, as I heard you talk about that, I mean, in a lot of ways, that's what we're doing: we're querying a database of what we've been taught, the words that we know, in addition to the concepts that we've studied and are able to articulate. I mean, in some ways, we're querying that. To me, prompting, or me asking you a question, works similar. Would you say
Kenneth Wenger (12:47): The aspect of prompting a question and then answering it, it's similar, but what is different is the concept that you're trying to describe. So, again, when you ask me a question, I think about it, and I come up with, so again, I have a world model that works so far for me to get me through life, right? And that world model lets me understand different concepts in different ways. And when I'm about to answer your question, I think about it, I formulate a response, and then I figure out a way to communicate that with you. Okay? That step is missing from what these language models are doing, right? They're getting a prompt, but there is no step in which they are formulating a response with some goal, right? Some purpose. They are essentially getting a text, and they're trying to generate a sequence of words that are being figured out as they're being produced, right? There's no ultimate plan. So that's a very fundamental difference.
John Jantsch (13:54): And now, let's hear a word from our sponsor. Marketing Made Simple is a podcast hosted by Dr. JJ Peterson and is brought to you by the HubSpot Podcast Network, the audio destination for business professionals. Marketing Made Simple brings you practical tips to make your marketing easy and, more importantly, make it work. And in a recent episode, JJ and April chat with StoryBrand certified guides and agency owners about how to use ChatGPT for marketing purposes. We all know how important that is today. Listen to Marketing Made Simple wherever you get your podcasts.
(14:30): Hey, marketing agency owners, you know, I can teach you the keys to doubling your business in just 90 days, or your money back. Sound interesting? All you have to do is license our three-step process that's gonna allow you to make your competitors irrelevant, charge a premium for your services, and scale, perhaps without adding overhead. And here's the best part: you can license this entire system for your agency by simply participating in an upcoming agency certification intensive. Look, why reinvent the wheel? Use a set of tools that took us over 20 years to create. And you can have 'em today. Check it out at dtm.world/certification. That's DTM world slash certification.
(15:18): I do wanna come to, like, what the future holds, but I want to dwell on a couple things that you dive into in the book. What are the real and obvious pitfalls of relying on AI, you know, other than sort of the fear that the media spreads
Kenneth Wenger (15:38): I think the biggest issue, and one of the, I mean, the real motivator for me when I started writing the book, is that it is a powerful tool for two reasons. It's very easy to use, seemingly, right? Yeah. You can spend a weekend learning Python, you can write a few lines, and you can transform, you can analyze, you can parse data that you couldn't before, just by using a library. So you don't really have to understand what you're doing, and you can get some result that looks useful, okay? Mm-hmm.
(16:42): In a way that can affect other people. For example, you know, let's say you work in a financial institution and you come up with a model to figure out who should get approved for a credit line and who shouldn't. Now, right now, banks have their own models, but sure, if you take the AI out of it, traditionally those models are thought through by statisticians, and they may get things wrong once in a while, but at least they have a big picture of what it means to, you know, analyze data, bias in the data, right? What are the repercussions of bias in the data? How do you get rid of it? All these things are things that a good statistician should be trained to do. But now, if you remove the statisticians, because anybody can use a model to analyze data and get some prediction, then what happens is you end up denying and approving credit lines for people with repercussions that could be, you know, driven by very negative bias in the data, right?
(17:44): Like, it could affect a certain section of the population, uh, negatively. Maybe there are some who can't get a credit line anymore just because they live in a particular neighborhood. Mm-hmm.
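As a hedged illustration of the failure mode Kenneth describes, here is a short Python sketch of a credit model fitted "in a few lines" that quietly reproduces neighborhood bias baked into its training labels. All of the data, feature names, and thresholds are synthetic, invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: an income figure plus a neighborhood flag (0 or 1).
# The historic approvals below are biased against neighborhood 1, so the
# labels already encode the bias Kenneth warns about.
income = rng.normal(50, 15, n)
neighborhood = rng.integers(0, 2, n)
approved = ((income > 45) & ~((neighborhood == 1) & (rng.random(n) < 0.4))).astype(int)

X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, approved)  # "a few lines", no statistician

# The model reproduces the historic bias: predicted approval rates differ
# by neighborhood even though income is drawn identically for both groups.
for group in (0, 1):
    mask = neighborhood == group
    rate = model.predict(X[mask]).mean()
    print(f"neighborhood {group}: predicted approval rate = {rate:.2f}")
```

Nothing in the two lines that fit the model forces anyone to notice the biased labels; spotting and correcting that is precisely the statistician's job that Kenneth says gets skipped.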
John Jantsch (17:57): But wasn't that a factor previously? I mean, certainly neighborhoods are considered
Kenneth Wenger (18:06): Yeah, absolutely. So like I said, we always had a problem with bias in the data, right? But traditionally, you would hope two things would happen. First, you would hope that whoever comes up with a model, just because it's a complex problem, has to have some statistical training. Yeah. Right? And an ethical statistician would have to consider how to deal with the bias in the data, right? So that's number one. Number two, the problem that we have right now is that, first of all, you don't need to have that statistician. You can just use the model without understanding what's happening, right? Right. And then what's worse is that with these models, we can't actually understand, or it's very difficult, traditionally, to understand how the model arrived at a prediction. So if you get denied either a credit line or, as I talk about in the book, bail, for example, in a court case, uh, it's very difficult to argue, well, why me? Why was I denied this thing? And then if you go through the process of auditing it, again, with the traditional approach where you have a statistician, you can always ask, so how did you model this? Why was this person denied in this particular case, in an audit? Mm-hmm.
John Jantsch (19:21): So I, I mean, so what you're saying, one of the initial problems is that people are relying on the output, the data. I mean, even, you know, I use it in a very simple way. I run a marketing company, and we use it a lot of times to give us copy ideas, give us headline ideas, you know, for things. So I don't really feel like there's any real danger in there, other than maybe sounding like everybody else
Kenneth Wenger (19:57): Yes. And, and so the answer is yes. Now, there's two reasons for that. And by the way, let me just go back to say that there are use cases where, of course, you have to think about this as a spectrum, right? Like, yeah, yeah. There are cases where the repercussions of getting something wrong are worse than in other cases, right? So as you say, if you're trying to generate some copy and, you know, if it's nonsensical, then you just go ahead and change it. And at the end of the day, you're probably gonna review it anyway. So that is probably a lower cost. The cost of a mistake there will be lower than in the case of, you know, using a model in a judicial process, for example. Right? Right. Now, with respect to the fact that these models sometimes make mistakes, the reason for that is the way these models actually work. And the part that can be deceiving is that they tend to work really well for areas in the data that they understand really well.
(20:56): So if you think of a dataset, right? They're trained using a dataset, and for most of the data in that dataset, they're gonna be able to model it really well. And so that's why you get models that perform, let's say, 90% accurate on a particular dataset. The problem is that for the 10% where they're not able to model really well, the mistakes there are remarkable, and in a way that a human would not be able to make those mistakes. Yeah. So what happens in those cases is that, first of all, when we're training these models, we say, well, you know, we get a 10% error rate on this particular dataset. The one issue is that when you take that into production, you don't know that the incidence rate of those errors is gonna be the same in the real world, right?
(21:40): You may end up, uh, being in a situation where you get those data points that lead to errors at a much higher rate than you did in your dataset. That's one problem. The second problem is that if your use case, your production application, is such that a mistake could be costly, like, let's say, in a medical use case or in self-driving, then you have to go back and explain why the model got something wrong, and it is just so bizarrely different from what a human would get wrong. That's one of the fundamental reasons why we don't have these systems being deployed across safety-critical domains today. And by the way, that's one of the fundamental reasons why we created Squint: to tackle specifically those problems, to figure out how we can create a set of models or a system that's able to understand specifically when models are getting things right and when they're getting things wrong at runtime. Because I really think it's one of the fundamental reasons why we haven't advanced as much as we should have at this point. It's because when models work really well, uh, when they're able to model the data, well, then they work great. But for the cases where they can't model that section of the data, the mistakes are just unbelievable, right? It's things like humans would never make those kinds of
John Jantsch (23:00): Mistakes. Yeah, yeah, yeah. And, and obviously, you know, that certainly has to be solved before anybody's gonna trust sending, you know, a manned spacecraft, you know, guided by AI or something, right? I mean, when you know human life is at risk, you know, you've gotta have trust. And so if you can't trust that decision making, that's certainly gonna keep people from employing the, the technology, I suppose.
Kenneth Wenger (23:24): Right? Or using them, for example, to help, as I was saying, in medical domains, for example, cancer diagnosis, right? If you want a model to be able to detect certain types of cancer given, let's say, biopsy scans, you wanna be able to trust the model. Now, any model, essentially, you know, is going to make mistakes. Nothing is ever perfect. But you want two things to happen. First, you wanna be able to minimize the types of mistakes that the model can make, and you need to have some indication when the quality of the model's prediction isn't great. Yeah. And second, once a mistake happens, you have to be able to defend that the reason the mistake happened is because the quality of the data was such that, you know, even a human couldn't do better. Yeah. We can't have models make mistakes that a human doctor would look at and say, well, this is clearly, yeah, incorrect.
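The episode doesn't spell out how Squint actually detects those runtime failures, so the following shouldn't be read as their method. But a common baseline for "knowing when the model is likely wrong" is confidence thresholding: act automatically on high-confidence predictions and route low-confidence ones to a human. A minimal sketch, assuming softmax outputs from any classifier:

```python
import numpy as np

def flag_low_confidence(probs: np.ndarray, threshold: float = 0.8):
    """Split predictions into 'trusted' and 'needs human review'.

    probs: (n_samples, n_classes) softmax outputs from any classifier.
    A low max-probability is one cheap runtime signal that an input may
    fall in the poorly modeled region Kenneth describes. This is a common
    baseline technique, not a description of how Squint works.
    """
    confidence = probs.max(axis=1)          # model's own certainty per sample
    predictions = probs.argmax(axis=1)      # the usual argmax decision
    trusted = confidence >= threshold       # act on these automatically
    review = ~trusted                       # route these to a human
    return predictions, trusted, review

# Example: three fairly confident predictions and one borderline one.
probs = np.array([[0.95, 0.05],
                  [0.90, 0.10],
                  [0.55, 0.45],   # low confidence -> flagged for review
                  [0.15, 0.85]])
preds, trusted, review = flag_low_confidence(probs)
print(preds, trusted, review)
```

In a safety-critical setting like the cancer-screening example, the flagged cases would go to a human reviewer rather than being acted on automatically, which addresses the first of Kenneth's two requirements: an indication that a prediction's quality may not be great.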
John Jantsch (24:15): Yeah. Yeah. Absolutely. Well, Ken, I wanna thank you for taking a moment to stop by the Duct Tape Marketing Podcast. You wanna tell people where they can find you and connect with you if you'd like, and then obviously where they can pick up a copy of Is the Algorithm Plotting Against Us?
Kenneth Wenger (24:29): Absolutely. Thank you very much, first of all, for having me. It was a great conversation. So yeah, you can reach me on LinkedIn, and for a copy of the book, you can get it both from, uh, Amazon as well as from our publisher's website, it's called workingfires.org.
John Jantsch (24:42): Awesome. Well, again, thanks for stopping by. Great conversation. Hopefully, maybe we'll run into you one of these days out there on the road.
Kenneth Wenger (24:49): Thank you.
John Jantsch (24:49): Hey, and one final thing before you go. You know how I talk about marketing strategy, strategy before tactics? Well, sometimes it can be hard to understand where you stand in that, what needs to be done with regard to creating a marketing strategy. So we created a free tool for you. It's called the Marketing Strategy Assessment. You can find it at marketingassessment.co, not .com, .co. Check out our free marketing assessment and learn where you are with your strategy today. That's just marketingassessment.co. I'd love to chat with you about the results that you get.
This episode of the Duct Tape Marketing Podcast is brought to you by the HubSpot Podcast Network.
HubSpot Podcast Network is the audio destination for business professionals who seek the best education and inspiration on how to grow a business.