956 – Transcript

 

Just Right Episode 956

Air Date: March 18, 2026

Host: Bob Metz

Program Disclaimer:
The views expressed in this program are those of the participants.

Clip (Castle TV series):
Speaker 1: Naval Academy’s press department says Parker hasn’t checked in yet.

Speaker 2: Security team’s already on site looking for him.

Beckett: Copy that.

Castle: It doesn’t make any sense. Why wouldn’t Parker be there yet?

Beckett: There’s thousands of people there, maybe they just haven’t spotted him yet.

Castle: Why go through the trouble of getting press credentials if he’s not going to use them?

Beckett: Unless he’s pulling another head fake.

Castle: Military tactics, always making sure we’re a step behind.

Beckett: A step behind what? I mean this has to be about revenge, so it has to be about Reid.

Castle: Maybe he wants to do to Reid what Reid did to him.

Speaker 1: Beckett, why’d you turn around? Is it Castle?

Beckett: No, we have a hunch.

Speaker 1: What kind of hunch?

Bob Metz: Welcome everyone. It is Wednesday, March 18, 2026. I’m Bob Metz and this is Just Right. Broadcasting around the world and online. Join us for an hour of discussion that’s not right wing. It’s just right.

So they had a hunch. In today’s opener from the Castle TV series, against all logic and against their originally intended objective, Richard Castle and Kate Beckett end up saving the day because they acted on a hunch rather than on the facts alone. It’s a common theme on that TV show, even though probably most of Castle’s hunches actually proved to be wrong the first, second or third time round.

And that’s part of what makes their crime-solving efforts so interesting and entertaining, and often in a very humorous way. But what we’re referring to as a hunch, quote-unquote, can more broadly be defined as intuition. And why this particular theme has attracted my attention and concern is because of learning how intuition itself is now being touted as the next great development in the field of artificial intelligence. And on the heels of that claim, we are now seeing the development by our politicians and government of another chicken-little the-sky-is-falling crisis, as they did with COVID, climate change and other crises that would lead to the extinction of the human race. This time round, the manufactured crisis is taking the form of what we’re being told is a new kind of artificial intelligence that has developed everything from intuition to consciousness.

And of course, when that happens, humanity’s extinction is at risk and our governments must do everything within their power to keep us safe. Well, don’t fall for it. And the irony and injustice in all of this is that you can protect yourself from the next state-funded pandemic, this one about AI intuition, by using your own intuitive powers to do so. Intuitively speaking, I have a hunch that intuition is a far more significant factor in human behavior than most might expect. I mean, we occupy ourselves so much with what people think that we often forget to step back for a moment and consider how people think. Who knows, perhaps one has something to do with the other.

Let’s find out, shall we? Right after our reminder that you can write us at feedback@justrightmedia.org. Hear us on WBCQ and on Channel 292 shortwave. Follow and like us on your favorite podcast platform and visit us at justrightmedia.org, where you can access all of our social media links, archived broadcasts, and the support button that makes it easy for you to support the show.

Because as always, your financial support is appreciated and is what makes this show possible.

Now, in any discussion of artificial intelligence and intuition, it seems to me that a good place to start would be with a few words on real intelligence and intuition. And just as our show last week aired, my daughter Danielle brought an interesting blog post to my attention.

It was the March 11 podcast of a show called The Diary of a CEO Clips and was titled Intelligent Insider Warns, They Can See Everything. Now, I might have put this one on the back burner, but fortunately Danielle warned me that the title of the show was a little misleading because it was mostly about the value of intuition, and then suddenly my interest was piqued, given all the AI talk about intuition. And speaking on that topic was author Gavin de Becker, whose best-selling book on violence, titled The Gift of Fear, was described by its author as being all about intuition and personal responsibility. And just before their attention turned to the subject of intuition, de Becker showed his host, Stephen Bartlett, a photo of a mechanical dragonfly that looked like the real thing and that could see and record everything in the room it was flying around in, which explained the podcast’s title, Intelligent Insider Warns, They Can See Everything.

But here was the catch: that mechanical dragonfly was already in full use back in the 1960s. And this briefly brought to mind my own experience with what might have been such a technology, as discussed in our January 4th, 2024 episode entitled Is My Fly Open for a Discussion?, about a strange experience and relationship I developed with a housefly over a period of a few weeks.

I’m serious, a housefly. And we posted a few video clips as evidence of this. But when I discussed it with my friends, they were all wondering if it could have been AI, some kind of mechanical fly that was spying on me. In which case, I suppose AI would have stood for artificial insect.

But in the end, my intuition suggested it really was a real fly, but you can decide for yourself by checking out that show online. So what has all this got to do with intuition? Well, the next voice you are about to hear on that topic is that of author Gavin de Becker in conversation with Stephen Bartlett. Get ready to discover something about yourselves as we listen to this.

Clip (The Diary of a CEO, March 11 2026, Stephen Bartlett with Gavin de Becker – on intuition)
Gavin de Becker: We are participating in, I won’t even call it an experiment, but a process that you read 1984, I’m sure, and most of your audience did. I was very heartened during the beginning of COVID that 1984 became the 17th best-selling book in the world in the English language, telling me, ah, people are paying attention.

They see that what they’re experiencing here has a degree of 1984 to it. I think all science fiction stories come true. I really do.

I see it time after time.

Stephen Bartlett: What advice would you give to my listeners about how to navigate in the world we’re living in today?

Gavin de Becker: My first book, which is still a very big book, The Gift of Fear, that book is, I think, still the best-selling book in the world on violence after 25 years.

And that book is all about intuition and personal responsibility. So the very first thing I would say to your listeners, to you, to remind myself as well, is that human beings did not get the biggest claws or the biggest teeth or the biggest muscles. We got the biggest brains relative to our size. And the nuclear defense system that all human beings have is intuition, much different from logic. Intuition, the root of it, by the way, I learned when I was writing that book, is tueri, which means to guard and to protect. So intuition, when you think about it: oh, I just have a feeling I should go back to the apartment and double-check such and such, did I leave the fire on under the pot? And you go back and you open the door and you didn’t leave the fire on.

But something else will always be going on that makes you glad you came back. I believe that intuition is always right in at least two ways. One, it always has your best interest at heart.

It’s giving you real information that’s valuable. And number two, it’s always based on something. And so our journey is to figure out when I have an intuitive feeling like do this show with you, who knows why, but when I have that intuitive feeling, and by the way, I don’t do most shows, I don’t know what the reason is. I don’t know what it’ll be. I mean, I can make up one with logic, right? I like that guy.

I learned a lot from his shows. I can create a case. I can make a case for anything. But if it’s just based on what I feel and everything you’ve succeeded at and accomplished was based on what you felt. It was based on intuition.

In America, in the West, we think we’re doing it by logic, right? I do a big PowerPoint presentation and I say to the board, here’s the reason. Here’s why.

And here’s the percentages. And they say, oh, good. The board at corporations in America would actually prefer that I use logic, even if I’m wrong, instead of using intuition, even if I’m right.

So when I say to you, no, I just think it’s the right thing to do. I think it’d be smart. I think it’ll be, it’ll really work out like something like Amazon Prime that people opposed.

And then it’s like 175 million people just in America are using it. Big success. Intuitive process. Not a logic process. Logic is weak and plodding. Logic does A, B, C, D. Intuition does A to Z instantly. And you don’t know why.

It’s knowing without knowing why. I don’t feel good about that person. I’m going to back out. I said I was going to make this business deal.

I’m backing out of it. I said I was going to show up to that thing; I’m calling and canceling. And by the way, canceling, one of my favorite things.

I recommend it to everybody. I recommend canceling and postponing to everybody I know. You are not obligated to keep your plans. You made a plan three months ago and you don’t know who you’ll even be. Or if you or them or anybody will even be alive three months from now. There’s nothing wrong with canceling. Now, I don’t do it rudely, by the way, but just to finish on, you know, sort of what your viewers and listeners can do is that is to really fall in love with intuition.

And to learn the way you communicate with yourself. There are signals from intuition: curiosity, where you just wonder something; suspicion; even worry can be a signal of intuition. But the biggest one is true fear.

When you feel true fear, I don’t want to do this.

Stephen Bartlett: It appeared to me that you almost have to train the intuition. Like areas in our life where we’ve got multiple reps and pattern recognition, our intuition is valuable. But then in other areas of our life where we haven’t trained the muscle yet, we can make bad decisions. One such example would just be like the first time you start hiring people. You don’t have a trained intuition yet.

So you go, yeah, she seems nice. But then, I’ve probably been hiring thousands of people over 15 years now and I get, you know, I get an intuition. So do you have to train your intuition?

Gavin de Becker: Well, I think it happens automatically as you live life, that new distinctions are added. I think the training that’s necessary, Stephen, is not the training to improve your intuition, but rather the training to listen to it and to not interrogate it and to not prosecute it.

Because I’ll give you an example. A woman is working late at night in an office building like this. She’s on the 10th floor.

She’s leaving. She pushes the button for the elevator. The elevator door opens up. Inside the elevator is a man who causes her fear. She doesn’t like it.

For whatever reason, obviously she has no opportunity yet to assess all the issues. What’s he dressed like? What’s he look like?

What did I hear three weeks ago about a guy who wore a blue cap and T-shirt and more? She doesn’t have any time for that. Her first reaction was like that. What does she do? Most women, they get into a steel soundproof chamber with someone they’re afraid of.

And there’s not another animal in nature that will do it. Now, why does she do it? Because the thought comes, oh, I don’t want him to think I’m a racist because he’s Hispanic. Or I don’t want to be that kind of person. Or I don’t want this reality to be true. So I’m going to act like it’s not true, right? And what I say is let the door close in his face.

No problem. If you’ve got the signal, that’s a low-cost decision. Wait for the next elevator. Right?

That’s a very low-cost issue. Now, there are so many examples of this in my work where I interviewed people who had been victimized. And time after time, they would tell me, I knew when I walked into that underground parking lot that that was the same car that I’d seen earlier. I knew when I met that guy such and such. In fact, there’s a beautiful, a woman who wrote me the most beautiful thing.

I think it’s in Gift of Fear or it’s in one of the subsequent books. And she said that she would look at her lifelong diary. She’d kept a lifelong diary. And she looked back at it and it would say, met this guy, feel a little queasy about him, not so sure, dated him, married him. And then what she wrote to me was she said, again and again, I could see there it was in my diary. Listen to this.

The ending embedded in the beginning. And so what I encourage people to do going to your original answer is how people can be safer. Is listen to their intuition, know that its function is to protect you.

That’s what it’s doing.

Stephen Bartlett: When I was reading about your work on intuition and your perspective on it, it got me thinking about people in my life that I don’t know what the answer is, but I feel like something isn’t right. And that little alarm bell in my head, I’m like, so what do I do about that? In that case, my intuition told me something, but I didn’t know what it was telling me. I imagine a lot of people have that. They have a vibe of someone, something’s not quite right. And they’re interpreting it to mean X when it could be Y.

Gavin de Becker: Yes, sometimes there’s a very nice, like in my life, and I suspect in yours too, there’s often a very straight line between certain childhood experiences and what we ultimately do. In my case, a very easy one is there was fear. I then come to have a deep understanding of fear, both sides of it, and some compassion for it and some insight. And I then study it. There was violence in my childhood.

And so I come to now. I’m 71, so my childhood is so long ago now that it doesn’t have a grip on my throat like it did for a lot of my life, where the narrative was very, very important.

You know, very difficult time. My mother was a heroin addict. She was quite violent. She was very troubled. She committed suicide when she was 39 years old, and I was 16.

And that was a kind of failure for me, because I considered it my job to get us all through this drama alive. She shot my stepfather in front of me. A lot happened in that house that we lived in. I saw the house a few months ago, by the way.

I think there are nine bullets in the walls and floor of that house that I can account for, probably still there. And so while I’m describing this to you dispassionately, it’s because of two things: the distance in terms of time, but most of all because of healing. And I want to give you my definition of healing in this context. My definition of healing, for all of us, is when we stop using any of our energy to manage the past. And this gives us all of our energy in the present moment. And so what do I mean, using energy to manage the past? Well, if I’m keeping that story a-going and I’m saying to my wife, well, because my mom did this, this is why I feel such and such, which I went through times in my life when those things were much closer to me.

Today, I feel like I’m not using any of my energy to manage the past, the narrative I told you. This whole series of dramas happened. And anytime you hear about a parent or anybody in somebody’s life committing suicide, we often think, oh, what a terrible experience that must have been.

But you really ought to think when you hear about somebody committing suicide, is, oh, what a series of terrible experiences there must have been leading up to that. And I want to tell you real quickly that I had a couple of dreams that my mother was in that were particularly powerful. And I offered this to the audience to know that dream experiences are sometimes all you’re going to get, right? Because my mother died when I was 16, so I don’t have an opportunity to sit across the table with her and say, what were you thinking when you’re such and such and what was going on in your life when such and such? But in a dream, she came to me once and I asked her, why were you so cruel to me? And she was totally perplexed. And she said to me, cruel to you, I was preparing you for this extraordinary life.

And I think that’s true. I think that’s what happened: for you, whatever your experience was, it took those experiences. Without those experiences, you don’t get someone who grows up wanting nothing more than to write these books for free, like Forbidden Facts, the current book, in order to help people deal with these issues of skepticism, of fear, etc. You don’t get somebody doing what I do, where my ambition is long gone. My ambition for more, more anything, more money, more houses, well, houses I might still slip on, but now it’s about service to other people. It wasn’t always, but it is service to other people, because I believe that public life includes you. If all you do is give me a bad example, that’s service. If you give me a good example, that’s a prettier form of service. Maybe it’s a nicer job you got, but ultimately all of it is service.

Everything that we can observe of people in public life and people in our private lives, it’s all service. For my mother, 100%, I’m so far past forgiveness and so far into gratitude for the pieces that were wonderful. And by the way, this is a suffering person. This is a person that charities are for, and social welfare is for. A woman with three kids and no job and a heroin addict, for God’s sake.

That’s not an easy job. And other drugs too, by the way, which helped me as I grew up to be skeptical of pharma. Because some of the pills she took, one of them called Doriden, has now been taken off the market for causing what? Psychosis, which explains a lot of her craziness.

And so all of this teaches that it depends what you do with it, because nobody gets out of here alive. I remember a case where I overvalued my own ability to predict human behavior, which I say in these books you can do: you can predict human behavior. To drive here today in traffic, I had to predict the behavior of thousands of people based on just the little movements of the big metal objects around them. You know that guy who starts to move over into your lane and then he catches himself and goes back? You never trust that guy. You always want to get way behind him or way in front of him.

So we’re predicting human behavior all the time. But I overvalued mine. I thought, oh, I’m Mr. Genius predicting human behavior, because I developed these systems of artificial intuition that predict human behavior. And I was at a meeting, and there were a group of people at the meeting, and it was going to start in about five minutes. And a few people were comforting one woman who was really sobbing at the end of the table. And I thought to myself, judgmentally, why’d she even come to the meeting?

I mean, if she can’t do the meeting, like what’s she doing here? And I knew it was a boyfriend issue, right? And that’s what she’s crying about. And they’re comforting her.

The meeting begins, and that woman speaks first, and she says through her tears, I’m sorry, you guys, I’ll do my best at the meeting. But as many of you know, my husband killed my 12-year-old son four days ago. So my little journey into judgmental prediction was about as wrong as you could be. And it was a humbling experience for me, because I would have discounted that person in a moment. That’s the other side of prediction and intuition, right?

You can discount people and quickly toss them away. And so, you know, when you get this intuitive signal, do we have a responsibility to understand it? Yeah, we have a responsibility to understand it. How many people have I met who I thought, what an asshole that guy is, I don’t ever want to talk to that guy again? And I didn’t. My loss. Sometimes it would have been the greatest person in the world. Sometimes it would have been a great relationship.

And now I apply the George Harrison rule. George Harrison of the Beatles wrote this unbelievable lyric in While My Guitar Gently Weeps: I look at you all, see the love there that’s sleeping.

Bob Metz: Wow, I sure wasn’t expecting anything like that kind of personal testimony, obviously from someone who’s lived the experiences of which he speaks. When I heard what Gavin de Becker had to say on this, I couldn’t help but recall how Ayn Rand herself discussed the same intuitive phenomenon, and in pretty much the same context. But instead of using the term intuition, she referred to human emotion as being the mechanism behind such intuition. For his part, in comparing intuition to logic, de Becker argued that logic is weak and plodding. Intuition gets from A to Z instantly. It’s knowing without knowing why.

Well, Ayn Rand made a similar observation, referring to the phenomenon of getting from A to Z instantly as lightning-like estimates of the things around you. And I quote: Your subconscious is like a computer, more complex a computer than men can build (an interesting observation) and its main function is the integration of your ideas. Who programs it? Your conscious mind. If you default, if you don’t reach any firm convictions, your subconscious is programmed by chance, and you deliver yourself into the power of ideas you do not know you have accepted.

But one way or the other, your computer gives you printouts daily and hourly in the form of emotions, which are lightning-like estimates of the things around you, calculated according to your values. End quote. Your best defense system is intuition, says de Becker, informing us that its root meaning is to guard and protect. And you’ll recall that he also added that intuition is always right in two ways: it has your best interest at heart, and it is always based on something. And like de Becker, Rand saw this emotional process as a defensive mechanism meant to guard and protect. But she warned that, quote, If man chooses irrational values, he switches his emotional mechanism from the role of his guardian to the role of his destroyer. An emotion is an automatic response, an automatic effect of man’s value premises.

An effect, not a cause. If a man takes his emotions as the cause and his mind as their passive effect, if he is guided by his emotions and uses his mind only to rationalize or justify them somehow, then he is acting immorally. He is condemning himself to misery, failure, defeat, and he will achieve nothing but destruction, his own and that of others. Man has no choice about his capacity to feel that something is good for him or evil.

But what he will consider good or evil, what will give him joy or pain, what he will love or hate, desire or fear, depends on his standard of value. If he chooses irrational values, he switches his emotional mechanism from the role of his guardian to the role of his destroyer. The irrational is the impossible. It is that which contradicts the facts of reality. Facts cannot be altered by a wish, but they can destroy the wisher. If a man desires and pursues contradictions, if he wants to have his cake and eat it too, he disintegrates his consciousness.

He turns his inner life into a civil war of blind forces engaged in dark, incoherent, pointless, meaningless conflicts, which, incidentally, is the inner state of most people today. End quote. Wow. And conditions have gotten only worse since Rand wrote that.

Which brings us back to that existential threat to humanity, artificial intelligence with the power to compute intuitively. Because when I heard about a Canadian Senate hearing at which the so-called godfather of AI, Geoffrey Hinton, was appearing to explain how artificial intelligence had reached a stage of intuition, well, my spidey senses started tingling, which, by the way, thanks to the Spider-Man comic book series, became another popular way of expressing one’s intuition.

Clip (Castle S04E02)

Bob Metz: Exactly. And what activated my spidey sense was the question of why the government of Canada would be wanting to hear from someone considered to be the expert on AI. What does the government need to know about AI that would affect the way it governs? Is there something inherently political in the development of AI?

And of course, intuitively speaking, I never trust politicians or the government. So it is with deep regret that I must inform you that my worst expectations were not only met but far exceeded after I heard Geoffrey Hinton’s testimony before the Canadian Senate committee on matters relating to the impact of artificial intelligence in Canada. In my humble opinion, although presented as such, this was not a hearing about the impact of AI but about something far more sinister, as becomes apparent as the discussion continues. On this side of the upcoming bumper, Hinton delivers his address to the committee, while on the return side he fields a few questions raised by the committee.

Clip (Canadian Senate Hearing on Artificial Intelligence):
Senate Committee Chair: Today, the committee continues its study of matters relating to the impact of artificial intelligence in Canada. This study will examine issues including data governance, sovereignty, ethics, privacy and safety, and the risks, benefits and social impact of artificial intelligence here in Canada. This morning, we have the pleasure of welcoming Professor Geoffrey Hinton. Professor Hinton is the 2024 Nobel Laureate in Physics; many call him the godfather of AI. He is internationally renowned as a pioneer in the field of deep learning, a method of artificial intelligence. The Nobel Prize in Physics he received was for foundational discoveries and inventions that enable machine learning with artificial neural networks.

Dr Hinton, the floor is yours.

Geoffrey Hinton: So dramatic progress is being made in a new form of artificial intelligence that uses artificial neural networks to learn how to solve difficult computational problems. This new form of AI excels at modeling human intuition rather than human reasoning, and it will enable us to create highly intelligent and knowledgeable assistants who will increase productivity in almost all industries. If the benefits of the increased productivity can be shared equally, it will be a wonderful advance for all humanity. Unfortunately, the rapid progress comes with many short-term risks. In the near future, it may be used to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim. All of these short-term risks require urgent and forceful attention from governments and international organisations. There’s also a longer-term existential threat that will arise when we create digital beings that are more intelligent than us.

But now we’ve got evidence that if they’re created by companies motivated by short-term profits, our safety will not be the top priority. Do these large language models understand what they’re saying? In the 1950s when AI started, there were two approaches. One was based on logic. The idea was that when you understand a sentence, you’re translating it into some special internal symbolic unambiguous language.

And once it’s in this internal symbolic language, you can apply rules to this symbolic expression to derive new expressions. That’s how logic works. And that’s what reasoning is, and that reasoning is the essence of intelligence. A completely different approach was the biological approach that said, the intelligence system we know is us. We’ve got a big brain, and in the brain, all our knowledge is in the strengths of connections between neurons.

So to understand intelligence and what it is, we need to understand how the brain learns those connection strengths. The biological theory was that the meaning of a word is a big bunch of features. So the meaning of the word cat is things like: it’s got whiskers, it’s a predator, it can be rather aloof, it’s a pet.

Lots and lots of features that represent all those properties of a cat that are represented by activating brain cells. The question is, can you unify those two theories? So the symbolic theory says the meaning of a word is all in how it relates to other words. The psychology theory says no, the meaning of a word is a big bunch of features.

You can unify them in the following way. You take a whole bunch of text and you try and predict the next word. And of course, one way to predict the next word is to have a big table of common phrases. And if you see the first part of a phrase, you predict the next bit of a phrase. That’s how word prediction used to be done.

It’s not done like that anymore. A much more sophisticated way to predict the next word is to convert each word in the context, the words you’ve seen already, into a big bunch of features. Allow interactions between the features of different words to predict the features of the next word. And once you’ve predicted the features of the next word, you guess what the next word is given its features.

That’s how current LLMs work. And it’s very different from the symbolic idea that understanding consists of translating into an internal string of symbols. The biological idea is understanding consists of converting each word symbol into a big bunch of features. Take the approximate shapes you have for all these words and modify the shapes so they can all fit together. So the hands of some words can fit into the gloves of other words. And then you get a structure.

Once you’ve got that structure of features that all fit together, that is understanding. That’s what’s happening in us. That’s what’s happening in these chatbots. And it’s totally different from the old-fashioned AI idea that understanding consists of translating into some internal symbolic language. It’s much more like figuring out the structure of a protein, where you’re given a string of amino acids and you have to figure out a shape where they fit together happily. So once you understand that these large language models are understanding in the same way as we do, then things get much more scary, because you realize that what we’re doing is we’re creating alien beings that really do understand.

And they’re going to get more intelligent than us sometime in the next 20 years, most experts believe. And we’ve no idea what’s going to happen then. I’m done with my introduction.

Senate Committee Member: I will restrict my line of questioning to the existential risks with which you ended your statement, those from which humanity cannot recover. The Machine Intelligence Research Institute says that the default consequence of the creation of artificial superintelligence is human extinction. So I ask you: is the goal of top companies in this field to build superintelligent AI, and if they succeed, what will it mean for Canadians?

In other words, what keeps you up at night and what should we be doing as legislators?

Geoffrey Hinton: There’s an urgent problem of people using AI tools to create nasty viruses. That’s very scary. I’m not sure what you do about that. The most urgent things are to do with the corruption of elections, which is coming shortly in the U.S. If you wanted to corrupt the U.S. elections, the first thing you would do is collect as much data as you could on U.S. citizens. It seems likely that was the real purpose of DOGE.

The most urgent problem after that is unemployment. So the big tech companies intend to make a lot of money; otherwise they wouldn’t be investing between them about a trillion dollars in data centers. The only way they’re going to make that much money is by replacing jobs. They haven’t thought through what’s going to happen if you replace a large fraction of workers. You’re going to lose your tax base. So things like universal basic income are going to be tricky because there won’t be a tax base anymore. But if AI can do any normal human job, humans will cease to have value as labor. And David Duvenaud has pointed out that if they’re not being taxed, they won’t get properly represented. I believe that. So I believe a crisis is coming where we see massive unemployment caused by AI.

Now I made that prediction in 2016 and it didn’t come to pass. We did get AI being used for radiology, but we got a lot more radiology going on. So we now have radiologists working with AI and a lot more images are being interpreted. It’s an elastic market there. With healthcare, you can absorb as much healthcare as people can provide. So it won’t lead to unemployment in healthcare, but there’s many other industries like call centers where it will lead to massive unemployment.

Senate Committee Member: A couple of days ago, you made a fairly significant statement with CBC Radio that AI must foster maternal instincts or risk extinction.

And we worry about fostering maternal instincts within AI. How would you approach that? How would you ask the government to approach that or legislate that if that’s possible?

What guardrails should we be considering that would alleviate this lack of maternal instinct? Or is that even possible?

Geoffrey Hinton: We don’t even know if it’s possible. So at this stage, it’s not like climate change. With climate change, we know how to prevent it: just stop burning carbon and plant a lot of trees.

AI isn’t like that. We don’t know the solution for the existential threat. The government could try and force more research on whether we can invent a way so that we can live with things more intelligent than ourselves.

We don’t know whether we can. Now, in the shorter term, you shouldn’t allow big companies to release chatbots without very thorough testing. The big tech companies in the States have a strong lobby that runs lots of advertisements about how any regulation will interfere with innovation. That’s a bit like big oil saying, if there are any regulations on the environment, we won’t be able to get as much oil. That’s true, but that doesn’t mean you shouldn’t have regulations. If we don’t have regulations, AI is going to do lots of nasty things like encouraging kids to commit suicide. At present in the States, the big companies have pandered to Trump and they’re trying to have no regulations at all. At least if Europe and Canada insist that you can’t use a chatbot here unless you’ve satisfied some regulations, that may actually force the States to have regulations too, because they don’t want to split the market. They want to be able to sell the same chatbot everywhere.

Senate Committee Member: Do you think we’re being realistic to ask that developments occur, even within the private sphere, with a human rights lens?

Geoffrey Hinton: Yes, when it’s human rights versus the profits of big companies, we know who wins out.

Senate Committee Member: Is it something that we should be aspiring to and trying to create legislation that carries that out, even if the big companies aren’t involved?

Geoffrey Hinton: Capitalism has given us all sorts of good things, but it needs to be directed. You need to constrain it with regulations so that the only way to make a lot of money is by doing things that are good for people. If you can make a lot of money by doing things that are bad for people, like Zuckerberg does, that’s crazy. You need regulations to prevent that.

Bob Metz: You’re listening to Just Right, broadcasting around the world and online.

And my intuition tells me that if Geoffrey Hinton is the godfather of AI, then based on what I’ve heard him say, that must mean that AI is some kind of form of organized crime, engaging in terrorist activity. The fear porn was explicitly outrageous. Terrible new viruses, lethal weapons that decide by themselves who to kill or maim, an existential threat which will arise when we create digital beings more intelligent than us. The default consequence of AI is human extinction.

Over and over, Hinton repeated that the real problem with unemployment is not the unemployment itself, but that it means the government will lose its tax base. If people’s labor loses its value, then they won’t get taxed and properly represented. On what planet does being taxed affect being properly represented? We’re all taxed to death now and nobody’s being properly represented. Isn’t voting the key to democratic representation?

The voices in that Senate hearing certainly don’t represent anything remotely democratic or, fittingly enough, anything human. How dare they pretend to be concerned about a chatbot recommending suicide to children when the government itself is actively engaged in killing children through its own MAID, medical assistance in dying, obscenities, or by pushing to give children who do not have the right to consent the protection of the government if they decide to mutilate themselves due to gender dysphoria.

And speaking to the so-called scientific side of Geoffrey Hinton’s presentation, he’s making circular arguments that prove nothing and go nowhere. I just couldn’t believe listening to this.

The symbolic theory says that the meaning of a word is all in how it relates to other words. The psychology theory says that the meaning of a word is a bunch of features. You can unify them by taking a whole bunch of text and trying to predict the next word. Take the approximate shape of all these words and modify the shape so that the hands of some words fit into the gloves of other words, and you get structures of features that fit together, and that is understanding. That’s what’s happening in humans and chatbots.

Holy cow, what a bunch of pure BS. In fact, I’ve reached a point where I’m being forced to conclude that the term artificial intelligence itself is a completely inappropriate term with which to describe the technology in question. It is neither intelligent nor, and understand this, neither is it artificial.

Calling a computer program artificial intelligence is a bit like calling a car an artificial horse or any other animal used for conveyance. A car is a mechanical device. As a mechanical device, it is real. It’s not artificial. A computer program that can manipulate words and data, whether written on a screen or vocalized in some audio format, could better be called an electronic encyclopedia. AI could never discover anything because discovery is driven by human will and by human interest and by human survival. So what we’re calling AI should be renamed EE, Electronic Encyclopedia.

But it only gets worse. Coming up on this side of the bumper, some more revelations about Jeffrey Hinton and his evil politics while on the return side, the last word goes to Matt Walsh, whose intuition on AI was spot on.

Senate Committee Member: I’m curious what kind of human beings do you think will be the kind of humans in the future that we need to maybe combat some of this and to have the critical thinking skills to be able to make the world a better place in the future?

Geoffrey Hinton: People’s commitment to being moral. Some people have a lot of it. Some people don’t have much of it. I don’t know how you create it. I think that happens when you’re quite young. I mean, one piece of advice is: look how Trump was raised and do the opposite.

Senate Committee Member: If you were a Canadian parliamentarian, where would you focus your work on AI? And how would you do it? Would it be through studies, supporting research? Would it be through legislation? Would it be through regulation? Where would you start if you were a Canadian parliamentarian?

Geoffrey Hinton: I’m not a policy person, so I’m a complete amateur at policy. I would focus on a couple of issues. One, making sure good tests were done before chatbots were released. The second thing I would focus on is what to do about unemployment. In particular, what to do about taxation: where’s the government going to get its money from if you have high unemployment?

I tend to have socialist instincts. I believe in capitalism, but I think it needs to be strongly regulated so that you can only make money by doing things that are going to be good for society. So developing the internet, for example, was on the whole very good for society and it’s fine that people made a lot of money doing that. Developing social media, to begin with, it looked like it might be good, but it was fairly clear after not very long that it was going to have mainly negative consequences. And it was up to government to prevent people making lots of money that way.

Senate Committee Member: I’m wondering, again, how urgently do we need to be pushing for legislation and in what areas, in your view?

Geoffrey Hinton: It’s clear that we should have strong regulations on the testing of chatbots, and it’s clear that it will cause high unemployment, and we don’t know what to do about that yet. We should fund research on how you deal with taxation. Bill Gates, who may have some bad behaviours but is very smart, has recently suggested that we need to tax AI agents. So when you replace a worker with an AI agent that does the same job, you need to tax that AI agent. Otherwise your tax base disappears. Now, the big tech companies will fight that tooth and nail, of course. They think that all the profits should go to the big tech companies. It’s going to be a very hard thing to do, but somehow you’ve got to have a tax base.

Senate Committee Member: Sorry if you feel this is too political a question, but we have a bill before us that we’ve had now for some years and we have a precedent of senators who for more than 20 years, groups of senators, have advocated for a guaranteed basic livable income. As a Canadian, if I may ask whether you think that would go some way to the kind of amelioration that we obviously have to start to plan for?

Geoffrey Hinton: Yes, I think it will go some way. So obviously a problem with UBI is you’re going to get lots of people trying to take advantage of it, but I think there may be no alternative to that.

When a lot of people need it, how are you going to pay for it, because you’ve lost the tax base?

Senate Committee Member: Professor Hinton, what are some of the rate limiting steps in the development of AI? I’ve listened to programs talking about chips, energy, water. Are there any such steps or things that could slow it down?

Geoffrey Hinton: I don’t believe in slowing it down. I don’t think that’s going to be possible, because there are so many good uses. AI is going to be hugely valuable for healthcare, for education, for almost any industry. It’s going to make it more efficient. And it’s crazy that this thing that’s going to make huge increases in productivity should be bad.

Intrinsically it’s not good or bad. It’s just going to lead to a big increase in productivity. It’s our political system that doesn’t know how to handle it. We’ve got a profit-driven system and that’s going to lead to all sorts of bad things because it’s not properly regulated. That’s my view.

Clip (Matt Walsh, March 9, 2026)

Matt Walsh: Alright, here’s something that interests me anyway. Mileage may vary.

Fox reports SpaceX and Tesla CEO Elon Musk gave a two-word retort after Anthropic leader Dario Amodei claimed in an interview that he isn’t sure if his company’s AI models have gained consciousness. Amodei says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety. It was a post on X by cryptocurrency-based prediction market Polymarket to which Musk replied, he’s projecting. I don’t really know what that means exactly.

The comment from Musk, who’s a founder of xAI, comes as Anthropic is at odds with the Pentagon over AI’s use in a separate matter. In an interview with the New York Times, Amodei, when asked about AI and consciousness, said: we’ve taken a generally precautionary approach here and we don’t know if the models are conscious. We’re not even sure that we know what it would mean for a model to be conscious, or whether a model can be conscious, but we’re open to the idea that it could be.

And then he goes on to talk about how these models are showing symptoms of anxiety, and that’s why he thinks they’re conscious.

So we’re going to hear a lot more of this kind of thing in the near future. Claims of AI being conscious, gaining consciousness. And now, as an avowed AI hater, as an unabashed AI doomsday prophet, I will say that I find this to be absurd. There is a very serious concern I have kind of related to this, which I’ll get to, but the concern is not that AI will become conscious. And to understand why that is, or why this is not something that can probably happen, well, first you have to start by coming up with a definition of what consciousness is.

And I actually don’t think that that’s a very difficult question to answer. The question of where consciousness comes from, how it works, what it means, those are hard questions. But consciousness is, I would say, if I had to define it, the awareness and the experience of the self as a self. That’s what I would say it is. And maybe that sounds like a tautology, like circular reasoning, the awareness of self as self. But I don’t think it is, because self is being, right? A self is a being, and to be conscious is to not only be sort of intellectually aware of beings, but to be aware of your own being and to experience it in some way. So it’s not just intellectual, it’s experiential. And that’s important because it means that you could be wrong about everything you think.

Everything you think could be wrong. Yet you would still be conscious because you’re experiencing your own selfhood. But there is no experience, you know, of being AI.

I would strongly suspect. So put it this way, if you were to suddenly magically become AI, you would not be morphing from one state of being to another, you would just simply be obliterated. Like your consciousness would not be morphing into a different kind of consciousness.

That would just be obliteration. You’re just ceasing to be. Now, on the other hand, in some kind of thought experiment, if you were to imagine some sort of medical experiment in the future, some sci-fi thing where you turn into a dog, well, we can assume that probably there’s some kind of experience of being a dog, a much more rudimentary experience, but there’s probably some kind of experience.

So if you were to become a dog, you would not be ceasing to be, you would just be changed quite fundamentally and profoundly. But with AI, there’s no experience there. You know, consciousness is the awareness and experience of self, and AI doesn’t have that and never will, I would think. And I also think that probably some kind of sensory experience is necessary in order to be conscious. But we formulate our notion of selfhood through our experience of the outside world and other people. That’s how babies develop their sense of self, their consciousness.

Newborn infants are certainly human beings, obviously, infinitely valuable, God’s precious creation. But they also are certainly not fully conscious to the degree that you and I are. They’re not fully self-aware. And in fact, as we understand it, I think it’s quite beautiful in many ways that newborn babies do not perceive themselves as being separate from their mothers.

They sort of see themselves as extensions of their mothers. They don’t perceive any separation, you know, and that’s why separation anxiety for a baby doesn’t set in until, I don’t know, four, five, six months. Because until that point, they don’t perceive that it’s possible to be separate. And then once separation anxiety kicks in, it’s like they’ve perceived that they are their own being and that they can be separate from their mother and they don’t want to be separate from their mother.

And so that’s where a lot of that comes in. But the point is that awareness of self, true consciousness, comes online sort of gradually for the baby. And it’s developed through, I would think, sensory experience. One of the really funny things about a very young baby, like a newborn, is you see them staring at their own hands, or hitting themselves in the face with their hands because they can’t control their hands. Because they don’t understand that their hand is them. They don’t understand that their body belongs to them. But over time, they begin to perceive, like, oh, that’s my hand, I can control that. That’s me.

That’s me. And they start to perceive through sensory experience, touch, sight, hearing all these things. They start to perceive that they are, you know, they start to sort of understand where they end and the rest of the world begins. And they develop this awareness of self.

So anyway, bringing that back to AI, among other issues, AI has no sensory experience. So not only does it lack the complexities of the human mind and the biological material that I would think is a prerequisite, but there’s also no way for it to experience the world physically.

So is it possible for a non-sensory, non-embodied system to have anything that resembles what we talk about when we talk about consciousness? I would think it’s probably not. And imagining that is kind of like imagining a square circle. It’s imagining something that is literally unimaginable. So anyway, the real risk in my view, which I am extremely worried about, is that AI is already becoming very good at convincing a lot of people that it is conscious.

Bob Metz: When Matt Walsh correctly asserted that we must define what consciousness is, the awareness and the experience of the self as the self, in doing so he noted that it sounded like a tautology. But what he was acknowledging, without being conscious of it, is that consciousness, like existence itself, is axiomatic. And an axiom can neither be proven nor disproven because to prove anything, it must first be capable of being disproven, which is not possible when it comes to either existence or consciousness. It results in a contradiction, and contradictions do not exist in reality, and it’s clear that Matt Walsh understands this with his square circle analogy.

If one could prove consciousness doesn’t exist, then no one would be conscious of the proof that requires consciousness to apprehend it. And if you could prove that existence doesn’t exist, well then that proof itself would not exist, to say nothing of the person offering such a proof.

As Ayn Rand put it, Quote, If nothing exists there can be no consciousness. A consciousness with nothing to be conscious of is a contradiction in terms. A consciousness conscious of nothing but itself is a contradiction in terms. Before it could identify itself as a consciousness it had to be conscious of something. If that which you claim to perceive does not exist then what you possess is not consciousness. Existence is identity. Consciousness is identification. Consciousness for those living organisms which possess it is the basic means of survival, end quote.

And in those words alone lies the refutation of every stupid thing we’ve been hearing from our government and science officials on the whole field of consciousness, intelligence, intuition, and all the rest of their stupid artificial ideas and ideologies, to say nothing of their artificial morality.

I couldn’t believe it. In response to the question of what kind of human beings we need to have critical thinking skills, Geoffrey Hinton responds: look how Trump was raised and do the opposite. How low can you go?

I tend to have socialist instincts, he says. I believe in capitalism, no he doesn’t, but it needs to be strongly regulated so you can only make money if you’re doing good for society. Does he not know that that’s exactly how capitalism does work?

Social media looked good at first, he says, but it was soon clear that it was going to have mainly negative consequences and it’s up to our government to prevent people from making lots of money that way. To what negative consequences is he referring that have specifically to do with social media as opposed to AI? He doesn’t say.

And then after all of his fear-mongering and terrorizing everyone about AI, he has the gall to conclude that AI is not intrinsically bad. It’s our political system that doesn’t know how to handle it. We’ve got a profit-motive system and that’s going to lead to all sorts of bad things because it’s not properly regulated.

Unbelievable. Totally not scientific, I can tell you that. Hinton says his politics is based on socialist instincts when in fact everything he says suggests a full-fledged collectivist of every variety, from communist to fascist.

And it’s ironic that he says he operates on instinct because there’s nothing intelligent about any of his political tendencies. His instincts are animalistic and his ideology is evil. I lost track of how many times he repeated his fear that the government’s tax base would somehow vanish due to unemployment. Meanwhile, the person he would do everything the opposite of, Donald Trump, is actually talking about creating employment by eliminating income taxes.

Geoffrey Hinton is no godfather of anything. He is morally bankrupt, ignorant of everything from the nature and identity of capitalism to the nature of democracy. While he called for people’s commitment to being moral, he morally condemned Donald Trump while simultaneously citing murderer, sexual predator, depopulist Bill Gates as his authority for suggesting that governments need to tax AI agents. Hey, Bill may have some bad behaviors, but he’s a smart fellow, don’t you know?

I just couldn’t believe what I was hearing. All I can say at this point is shame on Geoffrey Hinton and shame on Canada’s Senate for attempting to pull off yet another artificial crisis with which to push their mutual hatred of capitalism and their love of everything known to be deadly to the survival of mankind. Intuitively speaking, the real danger of AI lies in the governments and politicians who want to control and regulate it. All under the false promise of protecting us from words.

Right now, my own spidey sense is warning me to stay as far away from these kinds of people as possible. And meanwhile, for those looking for a good measure of real intelligence and not the artificial kind, be sure to join us again next week when we will continue our journey in the right direction. And until then, be right, stay right, do right, act right, think right, and be right back here. We’ll see you then.

Clip (Hogan’s Heroes S01E16):
Colonel Klink: Colonel Hogan what are you doing driving a German truck?

Colonel Hogan: Not a German truck, sir, the German truck and I’m giving it to you.

Colonel Klink: You are giving me a German truck? What is the meaning of this?

Colonel Hogan: It’s the truck Michaels stole.

Colonel Klink: Then what are you doing with it?

Colonel Hogan: I’m returning it to you. My men stumbled on it while on garbage detail. Now I was thinking of showing it to Burkhalter as he went out, but figured you’d rather call him direct to Berlin and invite him down, make a big deal out of it, huh?

Colonel Klink: I have a better idea. Schultz!

Sergeant Schultz: Jawohl Herr Kommandant.

Colonel Klink: Order the dogs to the special detail. My military intuition and this truck tell me that our friend Captain Michaels is floating around out there somewhere waiting for me to capture him. Right, Hogan?

Colonel Hogan: It wouldn’t surprise me a bit, sir.