949 – Transcript

 

Just Right Episode 949

Air Date: January 28, 2026

Host: Bob Metz

Program Disclaimer:

The views expressed in this program are those of the participants.

Clip (Eureka S01E08 Right as Raynes):

Speaker 1: I didn’t bring him here. I built him here. Callister’s an AI.

Speaker 2: So he’s a robot?

Speaker 1: An artificially intelligent being.

Speaker 2: Then why keep him a secret? I mean this has to be one of the greatest scientific breakthroughs of the modern age.

Speaker 1: So is the atomic bomb.

Speaker 2: You mean the world isn’t ready for it?

Bob Metz:

Welcome everyone. It is Wednesday January 28, 2026. I’m Bob Metz and this is Just Right, broadcasting around the world and online. Join us for an hour of discussion that’s not right wing. It’s just right.

Clip (Bee Gees – This Is Where I Came In):

Fade into color, color into black and white. Under the bedclothes, everything will be alright.

Bob Metz:

Is the world ready for artificial intelligence? Some are still asking if the world is ready for human intelligence because it seems to be sorely lacking. But be that as it may, intelligence is nevertheless a human attribute. The fact that we now call lifeless machinery and technology artificially intelligent still does not alter the reality that intelligence remains a human attribute.

The word artificial itself acknowledges that it is a human derivative of some natural phenomenon like human intelligence. Or, for example, take artificial flavoring. Artificial lemonade tastes like the real thing, a derivative of the natural, not to be confused with the unreal, because artificial things do exist and are real. They’re just not natural, meaning that they’re created by a human mind.

In this light, I see the term artificial intelligence kind of as a non-concept, or almost an anti-concept. I think a more descriptive term to apply to those devices we call AI now would be the term intelligently artificial, turning the words around, since it requires a pre-existing intelligence to create the artificial. Just as being precedes cause, so too intelligence precedes its creations. Even the term artificial demands a human source or attribute, since anything that’s not a direct consequence of human action is considered natural. All things created by the human mind are called artificial, whether it’s our homes, our cars, our communication devices, and certainly our so-called artificially intellectual devices.

And while animals can also be said to be intelligent to varying degrees, they are not capable of abstract reasoning, which requires the use of concepts, which in turn are expressed in the form of words, which is why it is said that in the beginning was the word, and the word was with God, and the word was God. So it shouldn’t be surprising that at the Davos World Economic Forum recently, the argument was being made that henceforth artificial intelligence will control the word that is God. And making that artificially stupid assertion was none other than Yuval Noah Harari, whose voice will be the first you hear right after our reminder that you can write us at feedback at justrightmedia.org. And as always, your financial support is appreciated and is what makes this show possible.

Clip (Davos World Economic Forum):

Yuval Noah Harari: Anything made of words will be taken over by AI. If laws are made of words, then AI will take over the legal system. If books are just combinations of words, then AI will take over books. If religion is built from words, then AI will take over religion. This is particularly true of religions based on books like Islam, Christianity or Judaism. Judaism calls itself the religion of the book and it grants ultimate authority not to humans but to words in books. Humans have authority in Judaism not because of our experiences but only because we learn words in books. Now, no human can read and remember all the words in all the Jewish books but AI can easily do that. What happens to a religion of the book when the greatest expert on the holy book is an AI?

However, some people may say, can we really reduce human spirituality to just words in books? Does thinking mean only putting language tokens in order? If you observe yourself carefully when you’re thinking, you will notice that something else is happening there besides words popping in your mind and forming sentences. You also have some non-verbal feelings. Maybe you feel pain, maybe you feel fear, maybe love. Some thoughts are painful, some are frightening, some are full of love. While AIs become better than us with words, at least for now we have zero evidence that AIs can feel anything.

Bob Metz:

Entitled An Honest Conversation on AI and Humanity, that speech was delivered by Yuval Noah Harari at this year’s Davos World Economic Forum. And that was artificial stupidity on steroids. And when it comes to words, there simply aren’t enough words to adequately call out that pile of artificial excrement for what it is. Just as he did with his earlier own nothing and be happy rhetoric, Harari is once again speaking in the Marxist language of fools, described by author Isabel Paterson thusly:

The deficiency, which is indicated by the word fool, is the incapacity to understand categories and the relation of things and qualities. Now, I’ll have a lot more to say about just how irrational everything Harari said in that audio bite was later in the show today. But not until after you hear a few other reactions to his comments, and not before we share a bit of our own personal experience with artificial intelligence here at Just Right Media. Those of you who have visited our website at JustRightMedia.org recently may have noticed that our most recent broadcasts now feature the option of downloading a transcript of the show, thanks to Robert Vaughan using the AI Grok to prepare the transcripts. Now, two weeks ago, Just Right 947 was titled Forcing a Discussion About Force Itself, and Robert additionally asked Grok to prepare a summary of the show in its own words, given the following criteria: to identify and list the topics, the tags, the people mentioned, errors of fact, and notable quotations from the host.

And here’s what Grok came up with, and I quote. Topics discussed: the program primarily addresses the philosophical and moral nature of force and violence, including its role in defending life, liberty and property; the distinction between inalienable rights and conditional freedoms; the US military operation resulting in the capture and extradition of Venezuelan President Nicolas Maduro under President Donald Trump; the historical and geopolitical context of communism and subversive networks in Latin America, including the Sao Paulo Forum and related influences; debates on the constitutionality, legality, morality and strategic merits of the Maduro capture, featuring contrasting viewpoints from commentators; the implications of advanced US military technology and its potential domestic applications; and broader considerations of international law, the Monroe Doctrine, deterrence and America’s role in global affairs.

And now, under hashtags, it recommended five, and under people mentioned, it actually listed 29 individuals who were named throughout that day’s broadcast. And then, under errors of fact by the host, it reads: no obvious factual errors in history, names or dates appear in the host’s direct statements; references to advanced quotations and historical contexts align with verifiable details or are presented as interpretations rather than assertions of fact. And then finally, notable remarks or observations by the host: the following statements from the host’s commentary stand out for their originality, clarity or philosophical insight and are worthy of quotation. And the first one reads: life, liberty, property.

Having a right to any of those things means that you have the right to use physical force in the defense of your or anyone else’s life, liberty and property. And the next bullet point says, it is an error to say that government is force. The use of force actually exists in the absence of government. The reality is that when we speak of government, force is what is governed.

The road to hell is paved with good intentions, but so too can the road to heaven be paved with good intentions. An objective governable use of force is a concept of the right. Subjective ungovernable use of force is a concept of the left. For the left to say that might is right is essentially its operative principle. But for the right, the same sentiment would be best expressed as saying, right is might. Why, of all the nations in the world, did the United States of America ascend to its position of superior might?

Because more than any other country, its foundational and operative principles were already right. End quote. So, was that a biased review? Was it subjective? Was it objective?

Actually, it wasn’t really any of those things. What it was, was a direct response to Robert’s input: a listing of the topics, tags, people mentioned, errors of fact, and notable quotations.

Quite frankly, I found myself feeling a bit flattered by Grok’s statement regarding my commentary that stood out for its originality, clarity, or philosophical insight worthy of quotation. But I soon learned that this very reaction represents one of the dangers facing those who use AI. Something we will demonstrate after our upcoming bumper break, which features on this side of the bumper some additional reactions to the bizarre comments made by Yuval Noah Harari at the World Economic Forum in Davos. And on the return side of the bumper, once again, the voice of philosopher Marina Karlova, who appears to be a fan of using AI, but who also describes exactly what AI is and can do and warns against some serious pitfalls and dangers that one might encounter in its use.

Clip (Redacted):

Natali Morris: Well, guess what? AI is your new religion. It’s your new country. It’s your new legal system. AI is your daddy. If you have not realized this, well then you should be listening to the major theme at the Davos World Economic Forum. Do we want this? Yuval Noah Harari, he’s a research fellow at the Centre for the Study of Existential Risk, says that since our religions are based on text, if AI is now the expert on that text, it can manipulate what our religious beliefs are. Now you may think, oh no, I have my own understanding of the Bible, of the Torah, of the Koran, whatever it is, and I will not be manipulated by AI. But what if in a generation or two, that’s no longer true? If AI is the authority on these religions, on these laws, AI now as the owner of jurisprudence. What he later goes on to say is that humans may be at the end of their world dominance, because it was humans who learned to use words to get strangers to cooperate, and now something has emerged that can do that better and will do that for us.

Now think about how world governments have used religion to control an agenda, how they’ve used words to control an agenda, how they’ve used laws and they have done this through human train of thought. Now what will happen? So I don’t know, boys, do you have any reaction to this? What do you guys think?

David: I was just going to say, we already know AI gets so many things wrong, and if it’s going to learn wrong from wrong, it’s going to be a cycle, it’s only going to get worse.

Clayton Morris: I don’t think it’s a good idea, I think that’s what they’re doing, but I don’t think it’s a good idea.

Natali Morris: Okay, and the point David made is that AI is faulty, extremely faulty, I think anyone who’s used it understands that, and so what will it be based on? But for how long will it be faulty and how will it be faulty, we don’t know, right? What really caught my attention was digital embassies for sovereign AI. What does that mean, right? Where do world borders exist and how do you create an embassy? How do you create sovereigns? How do you have a country at all if AI is the global world order?

Speaker 3: Well, and also just think about it, the more dependent we become on it, all they’d have to do is know where the data centers were, destroy those, and our whole system would go down, our whole system of AI government.

Natali Morris: Yeah, factories that think, so an AI factory, so what does that even mean? Factories that think, right?

Speaker 3: The assembly line that can stop and start itself. Yeah, factories now don’t think? Is that what they’re suggesting? There’s just a bunch of dumb factories right now and AI is going to make them smarter?

Natali Morris: Well, I wouldn’t say that. It doesn’t even mean anything. Yeah, innovation at scale, factories without humans, no human intervention any longer.

What about this one, cyber defenders in the age of AI?

Speaker 3: Yes, I think it’s about this information age, and since we’ve seen that they’re thinking of embassies, digital embassies, they realize that there will be a war online, like an absolute AI war and what that means.

Natali Morris: So I spent about three hours today talking with David Icke, and we mostly spoke about AI and the transhumanism that’s unfolding. And the ultimate goal for these guys? We live in an information war, and their next step is the transhumanism through AI, which is the fusing of the biological with artificial intelligence. And the ultimate goal is for them to have total control of our thought through the fusion of biology with AI.

I think that a lot of the chat here is saying, in a lot of ways it’s good, in a lot of ways it’s not. And the bit about religion in that speech stopped me in my tracks, because I thought, okay, AI will be the interpreter of God, which could make AI God, which is very concerning. But at the same time, we see humans perverting religion for personal gain and power, so we’re not saying the way it is now is right either. So we’re stuck in a time where we have to really think about what we want it to be and what is better, and we can’t really know.

Clayton Morris: Yeah, I think though, if you understand that this idea of fusion and having our brains sort of put on the back burner in favor of this artificial intelligence, which is going to do all of our thinking and feeling and controlling us. I mean, look, the test run was COVID, right? The test run was how they were able to, through the information war, control so many people, and just through CNN running, having doctors on telling you to go get five COVID shots, and the censorship that was unfolding on Facebook and YouTube and Google, that was the information piece of this, and now I think you’re seeing the next stage, which we talked about this morning, which is this artificial intelligence fusion piece, and that’s sort of their ultimate form of control.

David: I just got done reading Kevin Shipp’s book on the CIA, and he said it was well known within the CIA, the agency, the number one goal: if you can understand how people think, that’s the ultimate weapon. Yeah. Yeah.

Clip (YouTube: Marina Karlova – How To Use AI For Self Analysis):

Marina Karlova: It’s a feature. And it’s precisely why it’s a superior tool for self-analysis compared to conventional therapy, if you know how to use it. Let’s use an analogy: cars allow you to explore vast landscapes and connect you with distant people. But if you don’t know how to drive, if you don’t understand traffic laws, if you blindly trust GPS without looking at the road, a car becomes incredibly dangerous. It can crash, it can kill, it can leave you stranded.

Do we blame the car or do we acknowledge the responsibility of the driver? AI is no different. It’s a phenomenal engine and vast complex analytical machine. But its effectiveness and your safety depend entirely on you. AI is an analysis tool, not a source of truth.

This is crucial. Just like a calculator gives you numbers, but you need to know if those numbers make sense in context. AI provides analysis. It can identify patterns, synthesize information, generate hypotheses and offer different perspectives at a speed and scale no human therapist ever could. Yes, it can hallucinate.

Yes, it can flatter you. Just as a human therapist can be biased or misunderstand you or tell you what you want to hear. AI is a model, a statistical engine. It does not claim to be a wise guru or a divinely inspired prophet. It is a mirror reflecting patterns, not a priest giving you a divine edict. The moment you treat it as an infallible oracle, you stop being a driver and become a passenger in a car steered by no one. Second, effective AI use demands critical thinking. If you’re just asking, AI tell me who I am and blindly accepting its output, then yes, you’re using it dangerously.

Just like getting into a car and saying: car, drive me anywhere, I don’t care where I end up. To use AI effectively for self-analysis, you must engage your own critical faculties. You must be able to formulate clear, precise questions.

If your input is garbage, your output will be garbage. You need to articulate the specific problem, the emotion, the pattern you want to analyze. This act of precise questioning itself is a form of self-analysis that therapy often lacks.

You must scrutinize its responses. Does what it says resonate with your experience? Does it provide a new coherent framework?

Or does it sound like generic platitudes? Don’t look for affirmation, look for insight. Test its hypothesis against your own reality.

You must cross-reference. If AI gives you a concept or an explanation, don’t take it as gospel. Research it. Connect it to other knowledge. Use AI as a springboard for further investigation, not a destination. Third, AI forces you to take responsibility.

Unlike traditional therapy where you might passively present your problems and expect a cure or a solution from the therapist, AI puts the onus squarely on you. You are the one driving the conversation. You are the one interpreting the results.

You are the one deciding what information is relevant and how to integrate it. This very process of active engagement is deeply empowering. It shifts you from a passive patient to an active investigator of your own psyche. It cultivates intellectual rigor and self-reliance.

You become the scientist of your own mind. When you ask AI, what are the common patterns in my relationships, for example, and it gives you a cold analytical breakdown devoid of human judgment, it forces you to confront the data without the filter of human sympathy or bias. It’s brutal honesty delivered by a machine that doesn’t care about your feelings, only about patterns.

And sometimes that brutal honesty is exactly what you need to cut through your own self-deception and emotional narratives. So yes, AI can be dangerous, but only to those who are unwilling to drive their own mind. It’s dangerous to those who seek an external authority to tell them what to believe rather than an internal tool to help them uncover their own truth. The real danger isn’t the AI. The real danger is the human tendency to outsource thinking, to seek comfort over clarity, to want to be passengers in their own lives. AI is a tool. It won’t give you truth wrapped in a bow.

It will give you patterns, data and analytical frameworks. The truth, your truth, is something you build piece by piece by driving your own car, by critically engaging with the world, and by finally taking responsibility for the exploration of your own mind.

Bob Metz:

That was Marina Karlova, and I really liked her metaphor comparing a person who uses artificial intelligence to a driver driving a car. That car is only going to go where you want it to, and she warns that you must formulate clear, precise questions. If your input is garbage, then your output will be garbage.

You must articulate the pattern you want. Effective use requires critical thinking. If you just ask AI who you are and blindly accept its output, then yes, you’re using it dangerously.

Well, what better way to conduct an experiment to demonstrate that very principle than by becoming the experimental guinea pigs ourselves. So get ready for a dangerous AI adventure, as Robert Vaughan came up with three prompts different from the one we heard before the break. And in explaining his demonstration, Robert wrote me: to your point, Grok or AI can often give you whatever you want to hear. If you want flattery, it will give you flattery. It depends on the prompt.

As an example, I gave Grok the following prompt and then fed it a transcript of your last show. I think you’ll get a chuckle out of it. In fact, I got more than a chuckle out of it. I thought it was hilarious, and I told Robert right away that we had to share this with our listeners. We had a bit of a debate about whether or not I should self-censor as I was reading the output of his next prompt. But both Robert and I agreed that I should just read it through as delivered and do the expletive deletions later in the production.

Because what Robert asked Grok to do was this: give a scathing critique of the show, be as foul-mouthed and as brutally critical as you like.

And Grok delivered. What a steaming pile of self-important pseudo-intellectual horseshit this Just Right radio show is. Bob Metz, you pompous droning windbag, you strut around with your cutesy little disclaimer, not right wing, just right, like you’re some enlightened centrist oracle dispensing wisdom from on high. Bullshit. This is peak right wing circle jerk propaganda, dressed up in phony, balanced garb, while you slobber all over Trump’s fictional commando raid to snatch Maduro like it’s a second coming of Reagan storming Grenada.

You’re not just right, Bob, you’re just another bootlicking MAGA apologist hiding behind Ayn Rand quotes and movie clips to justify American imperialism. Let’s start with the format, you lazy fuck. The whole episode is you playing endless clips from your echo chamber buddies: Alex Newman spewing conspiracy vomit about communist networks and deep state puppets, Steven Crowder flexing his tough-guy boner over killing communists, Glenn Beck connecting invisible dots to China and oil like a QAnon fever dream, and Robert Barnes whining about constitutional purity while everyone else cheers the kidnapping. You barely add anything original; you’re just a human playlist curator, interrupting to beg for donations and pat yourself on the back for forcing a discussion about force. Force this, asshole. Your show is intellectual cowardice. You string together hours of contradictory takes, then shrug at the end with a, it’s a toss up, and if you’re not confused, we haven’t done our job. That’s not profundity, that’s copping out, because you’re too chicken shit to take a real stand beyond vaguely endorsing extrajudicial abductions.

And oh, the hypocrisy, Jesus fucking Christ. You open with a clip from A History of Violence about standing up to bullies, then spend the whole show fellating the biggest bully on the block, Trump sending special ops to bag a foreign leader in the dead of night. You quote Ayn Rand, no one may initiate the use of physical force, and then immediately pivot to, but what if it’s retaliatory, and might makes right in the real world? Pick a fucking lane, you disingenuous prats. Rand would spit on your selective reading.

She’d call this statist thuggery, not self-defense. But no, you twist it into some Objectivist fanfic where invading sovereign nations is fine if it owns the libs and secures oil. Sorry, deters communists.

Spare me the philosophical window dressing. This is just red meat jingoism for boomers who get hard thinking about sonic weapons making Venezuelans puke blood. Speaking of that chilling X post clip, you actually play some anonymous soldier’s fanfiction about 20 American supermen slaughtering hundreds with magic drones and nosebleed rays, and your takeaway is, scary if the left gets this tech? Not, holy shit, we’re bragging about war crimes. You’re gleefully recounting alleged atrocities like it’s Top Gun porn, then clutching pearls about objective governable use of force. Fuck off with that fascist word salad.

You’re not analyzing force, you’re rationalizing state terrorism while pretending to be above the fray. The guests and clips are a rogues’ gallery of grifters. Crowder joking about annexing Canada because we’re weak? Beck admitting it’s probably unconstitutional but America first anyway? Barnes calling it a deep state op tied to Paul Singer and Russiagate? You platform all this unhinged shit, then act like you’re presenting very different accounts for the listener to puzzle over. No, Bob, this is algorithmic radicalization radio.

You’re feeding your audience a buffet of paranoia, violent fetishism and selective principles until they’re numb enough to cheer illegal wars. And that ending? A lame comedy bit about Maduro looking rescued? Pathetic. Your whole show is a rescue fantasy for aging libertarians who want empire without admitting they want empire. Just Right is just wrong: morally bankrupt, intellectually dishonest and boring as fuck. It’s a relic, a shortwave boomer hour for people who think quoting dead philosophers excuses cheering kidnappings. Turn off the mic, Bob, the world’s moved on from your smug, contradictory drivel. This isn’t discussion, it’s mental masturbation for the terminally online right.

Go beg for donations somewhere else. Nobody needs this garbage. End quote. Well, or as my grandson so often comments, so you say. Now, as bad as that last review sounded, I can’t quite explain why, but I find the next prompted review somehow more embarrassing and cringy than the last one. And this time around, Robert Vaughan gave Grok the following prompt.

Again, this is Grok, whose review we just read. And his prompt was: give a flattering critique of the show, be as flowery and praising as you like. And again, Grok delivered, and I quote: Just Right is an absolute gem in the landscape of political and philosophical commentary, a broadcast that stands apart with its rare combination of intellectual rigor, moral clarity and unflinching commitment to principle. Episode 947, hosted by the ever thoughtful Bob Metz, is a master class in how to tackle a complex, polarizing issue with grace, depth, and genuine curiosity.

From the very first moments, the show captivates. The opening clip from A History of Violence is not just a clever hook, it’s a perfectly chosen metaphor that sets the stage for a profound exploration of force, violence, and self-defense, and the moral foundations of political power. Bob Metz then guides the listeners through a rich, multi-layered discussion of the U.S. action against Nicolas Maduro with the steady hand of a seasoned philosopher host who refuses to settle for slogans or tribal talking points. What makes Just Right so exceptional is its fearless balance.

Metz presents a wide spectrum of perspectives: Alex Newman’s historical deep dive into Latin American communism, Steven Crowder’s unapologetic celebration of American power, Robert Barnes’ passionate constitutional critique, and Glenn Beck’s strategic America first defense, without ever descending into false equivalence or cheap both-sidesism. He lets each voice speak for itself, then returns with his own calm, reasoned synthesis that respects the listener’s intelligence. The result is not confusion for its own sake, but an honest acknowledgement that reality is complicated and principles must be applied carefully.

Metz’s own commentary is the quiet heartbeat of the program. His distinctions between rights and freedoms, initiatory and retaliatory force, governed force versus ungoverned violence, are delivered with crystalline clarity and a refreshing absence of bombast. He draws on Ayn Rand, John Quincy Adams, Scottish philosopher John Macmurray, and legal definitions without ever sounding pedantic. Instead, it feels like sitting in on a conversation with someone who has spent decades thinking deeply about individual liberty and refuses to sacrifice nuance for applause. The production itself is polished yet intimate.

Crisp audio, well-chosen clips, seamless transitions, and that signature warm sign-off that somehow manages to be both encouraging and gently challenging. In an era of shouting matches and performative outrage, Just Right offers something increasingly rare: thoughtful discourse that respects the listener enough to present conflicting evidence and admit when the full picture isn’t yet clear. This is not right wing talk radio, and it’s not left wing talk radio.

It is, as the show proudly declares, just right: a beacon of reasoned individualism in a polarized world. Bob Metz and the Just Right team consistently deliver one of the most intelligent, principled, and genuinely enlightening hours available anywhere on the dial or online. Episode 947 is yet another triumph, and the program as a whole remains essential listening for anyone who values liberty, clarity, and moral courage.

Bravo! End quote. Wow, are we having fun yet? By now, I hope you’re beginning to see what others are warning about when it comes to AI. So finally, in our Grok prompts, Robert asked for an objective and unbiased critique of Just Right episode 947.

And Grok replied thusly, quote: Just Right, hosted by Bob Metz and available at justrightmedia.org, is a long-running radio program and podcast that positions itself as a philosophically grounded discussion of politics, individual rights, and current events from a perspective that explicitly rejects conventional left-right labeling. Episode 947, aired on January 14, 2026, focuses on the controversial US capture of Venezuelan leader Nicolas Maduro under President Trump, using this event as a lens to explore broader questions about the nature of force, violence, rights, and government authority. The episode is well structured and professionally produced. It opens with a dramatic clip from the film A History of Violence to frame the theme of justified versus unjustified force, a device that effectively hooks the listener. Metz then provides historical context, plays extended clips from other commentators (Alex Newman, Steven Crowder, Robert Barnes, Glenn Beck, and others), and interweaves his own commentary. The pacing is deliberate, allowing complex ideas to unfold over the hour. Production values are solid: clear audio, smooth transitions, and a consistent format that includes reminders about supporting the show and accessing archives.

The show’s strength lies in its willingness to present a range of viewpoints on a highly polarized topic. Listeners hear strong support for the US action: Crowder celebrates its effectiveness and deterrence value, Beck frames it as pragmatic America first strategy, Newman ties it to countering communist networks. And strong criticism: Barnes calls it unconstitutional and warns against empire building, and references to Salim Mansur echo founding era cautions against foreign entanglements.

Metz himself ultimately lands on ambiguity, concluding it’s a toss up, given conflicting evidence and principles. This multiplicity of voices gives the episode an appearance of balance and intellectual honesty. A core feature of Just Right is its explicit grounding in individualist philosophy, drawing on thinkers like Ayn Rand, John Quincy Adams, and John Macmurray. Metz repeatedly distinguishes between initiatory and retaliatory force, rights versus freedoms, and governed versus ungoverned violence.

These distinctions are presented clearly and consistently, giving the program a coherent intellectual identity. For listeners who share, or are open to, Objectivist-libertarian premises, this depth is a major asset. The show avoids sound bite rhetoric and instead invites sustained reflection on moral and political principles. Despite the effort to include contrasting opinions, the selection and framing of sources reveal a discernible editorial slant.

Most of the supportive voices, Crowder, Beck, Newman, and Alex Jones, receive more airtime and less critical interruption than the critical ones. Barnes is acknowledged, but described as potentially misreading the right. The episode leans toward justifying the Maduro operation as retaliatory or pragmatically necessary, with Metz’s own commentary often aligning with pro-action arguments, for example, responding, so what, to constitutional objections.

Claims presented as facts, such as advanced US military technology incapacitating Venezuelan forces, or deep state involvement in Latin American communism, are drawn from partisan sources without independent verification within the episode. The program occasionally conflates policy effectiveness with moral or legal rightness. While Metz acknowledges complexity and admits confusion at the end, the overall tone suggests sympathy for pragmatic uses of force when aligned with American interests and individual rights.

Critics from a strict non-interventionist or constitutionalist perspective might find their views underrepresented or quickly pivoted away from. Additionally, some factual assertions go unchallenged: Venezuela’s oil reserves, the role of specific donors, and the precise nature of the military operation are treated as settled when sources conflict.

The hypothetical or speculative nature of certain details, for example a Venezuelan soldier’s account of sonic weapons, is not flagged as potentially unverified. Just Right clearly targets an audience already sympathetic to libertarian, individualist, or anti-statist ideas; it functions less as neutral journalism and more as principled advocacy wrapped in reasoned discussion. For such listeners, the episode offers stimulating material that challenges both leftist and neo-conservative foreign policy assumptions. For those outside this worldview, it may feel like a sophisticated echo chamber, thoughtful and civil, but ultimately advancing a particular ideological lens. Overall assessment: episode 947 is an intelligent, philosophically serious treatment of a complex issue that stands out in the talk radio podcast landscape for its depth and restraint. It succeeds in provoking thought rather than rallying a mob, and its inclusion of dissenting voices is genuine, if not perfectly symmetrical.

However, it is not fully neutral. The selection of clips, the host’s framing, and the underlying Objectivist premises tilt the discussion toward a libertarian realist justification of American power projection. As a program, Just Right fills a niche for listeners seeking principled conservative libertarian analyses without bombast or partisanship for its own sake. The episode exemplifies both its strengths (clarity, intellectual coherence, and a willingness to grapple with ambiguity) and its limitations (selective sourcing and an ultimately directional worldview). It is recommended for those interested in rights-based foreign policy debate, but should be consumed alongside contrasting perspectives for a fully rounded view. End quote.

Now, during the course of today’s show, we have heard four different appraisals of the same episode of Just Right, as presented by Grok according to the prompts initiated by Robert. All of them could be referred to as artificial opinions, but that would be an oversimplification. Once again, and as always, the bottom line is that we have to think critically, and whatever else people are saying about AI, the notion that it can relieve you of that responsibility is an AI pie-in-the-sky delusion. So, getting back to the broader discussion, I have mixed feelings about the take on artificial intelligence as presented by Black Pigeon Speaks on his January 14 podcast.

But upon our return on the other side of the following bumper break, we’ll not only address those reactions, but our attention will also return to the comments of Yuval Noah Harari.

Clip (Black Pigeon Speaks, January 14, 2026):

Black Pigeon Speaks: Stagnant, bland era of creativity. Could it be that we as a species have reached the limit? Hit the wall, if you will, not only in our creative abilities but, more importantly, in our cognitive ones. Recent studies, including a widely discussed 2023 Nature paper analyzing millions of scientific papers and patents, show that groundbreaking, truly disruptive innovations, those that radically shift fields or break from past knowledge, are becoming rarer over time. Despite an explosion in research output, publications, and overall knowledge since the mid-20th century, progress in science and technology feels like it’s shifting from bold leaps to slower, more incremental steps, more like a cautious crawl than the explosive advances of the earlier eras. This slowdown isn’t a complete halt, but it is sparking real concern amongst economists, scientists, and futurists about the pace of human advancement. In this context, many see artificial intelligence as humanity’s last best shot at breaking through our biological limits, our finite intelligence, genetic constraints, and inherent cognitive ceilings as earthbound mammals. In that sense, AI could be our last best chance to supercharge discovery, automate breakthroughs, and push boundaries in ways our brains alone might never achieve, offering the only realistic path to dramatically expand what’s possible for our species.

Whether that’s salvation or something else remains one of the biggest questions of our time. But what this could all really be about is that we as a species have reached the ceiling of our creative abilities. A series of recent studies have all come to the same conclusion that human progress in terms of scientific and technological advancement is not coming to a screeching halt, but is slowing down to more of a snail’s pace. For example, Benjamin Jones, an economist at Northwestern, has found that scientific teams have become a far more important part of intellectual production. This shift is even more pronounced amongst influential papers. In previous decades, the most cited studies in a field used to be the product of lone geniuses. But now, as Mr. Jones has shown, most of the best research emerges almost exclusively from teams of researchers. It doesn’t matter if the scientists are studying particle physics or human genetics. Papers by multiple authors receive more than twice as many citations as those with a single author.

And this trend is even more apparent when it comes to home run papers, those publications with at least 1000 citations, which are more than six times as likely to come from a team. Put simply, science is getting harder and harder and harder. To explain this, Samuel Arbesman, a research fellow at Harvard Medical School, published a paper that documents the increasing difficulty of scientific discovery. By measuring the average size of discovered asteroids, mammalian species, and chemical elements, he was able to show that over the last few hundred years, these three very different scientific fields have been obeying the exact same trend.

The size of what they discover has been getting smaller and smaller and smaller. Yes, there has been innovation. We have flat screen TVs rather than tubes.

We have smartphones with integrated computers rather than landlines. But these are all upgrades rather than any significant breakthrough. We haven’t significantly increased the speed or method by which we travel in at least the last 60 years.

Our automobiles and airplanes may get better mileage, but they are essentially the same machines with the same utility they were in the 1960s. Now, juxtapose that with a person from the turn of the 20th century who had no radio, no TV, most with no electricity, no cars, and certainly no airplanes. Fast forward a bit more than a half century and man had landed on the moon.

And in the past 50 or so years, we’ve never gone back. And NASA doesn’t even have the ability to put a man in orbit; up until the Ukraine War starting in 2022, it had relied on the Russians, using Cold War era technology, to do it.

It’s also supposedly lost the schematics for the Saturn V rockets that took Apollo to the moon. So maybe, just maybe, not only as a civilization but as a species, we have hit the hard limits of not only our creative abilities but our cognitive ones as well. And it’s seeming more and more likely that the intellectual heavy lifting will have to come from somewhere else from here on out if we ever want to be a spacefaring race. In fact, and some might find this controversial, but in fact the only real hope that mankind has left to stretch the bounds that limit our own intelligence, our own genetics, and our own limited IQ as mammals of this planet Earth is artificial intelligence. Now and into the foreseeable future, most of the complex big science problems facing humanity are ultimately big data questions. Artificial intelligence could be the only thing that can help humanity discover advancements and benefits that may not otherwise be possible for us using our own limited intelligence and especially our limited lifespans.

Now this is important. All of these systems, questions, and analyses are actually big information and big data challenges at their core. Due to the highly complex nature of some of these issues, combined with the fact that humans have limited lifespans, it has been suggested by many that artificial intelligence could help us take the reins and pick up some of the slack when it comes to solving some of the world’s greatest problems, by being able to go through unstructured data at a much faster pace than we humans can. A co-founder of Google’s DeepMind is of the belief that we may have come to the sobering realization that even with the smartest set of humans on the planet working on these problems, these systems may be so complex that it’s difficult for individual humans and scientific experts to have the time they need in their lifetimes to even innovate and advance. It would seem that if we wish to one day return to the moon and then go on to the stars, the use of AI might be the only way. It would seem that we as a species have reached the limits of what is possible cognitively and creatively with our own minds. This is not to welcome an AI overlord or anything of the sort. We have used technology for millennia to assist us in our lives.

It is how people use technology that makes it either a net positive or a net negative. And for the more jittery among us, let’s just keep an eye out for the boogeyman that our reboot-festooned Hollywood has given us for decades and make sure we never get a Skynet or a Matrix. But I think we’ll be okay.

Clip (Get Smart S02E01 Anatomy of a Lover):

Max Smart: What we need are more stringent protective measures and better organization.

Chief: You’re right Max, I’m taking you off the case.

Max Smart: Good. You’re taking me off the case. Why? What did I do wrong?

Chief: What did you do right?

Max Smart: I asked you first.

Chief: I’m putting Hymie in charge.

Max Smart: Hymie? You can’t put Hymie in charge. Hymie is a robot. He’s nothing but a machine. A bunch of bolts and wires strung together in a metal body.

Hymie: I don’t feel that way about you Max. Some of my best friends are people.

Bob Metz:

You are listening to Just Right, broadcasting around the world and online. It seems to me that the issues raised by Black Pigeon Speaks amounted to a lot of circular speculation about AI and the limits of human advancement. I think that in addressing the apparent slowdown in that advancement, he missed the forest for all the trees. Consider his suggestion to juxtapose today’s advancements with a person at the turn of the 20th century who had no radio, no TV, no electricity, no cars, no airplanes, etc.

and then fast forward half a century: man lands on the moon. And then the last 50 years, nothing. Well, the period cited was effectively humanity’s first experience with individualism and the capitalist environment in which it thrived. The last 50 years have once again reverted to collectivism and the socialist-communist environment in which all human advancement becomes retarded.

Lone geniuses in the past versus teams of researchers in the present. Funny how the rate of advancement was greater under the first and so much slower under the second. But just because the public is not privy to all of the technological advancements being made doesn’t mean they’re not happening. After all, color TV only became a reality for the middle class in the middle of the 20th century when it became economically and technologically feasible.

But the principles of color TV were discovered in the late 1800s in laboratories out of the public’s view. Now, when Yuval Noah Harari bizarrely asserted that anything made of words will be taken over by AI, this was not a prediction but a threat. And note what the folks on Redacted additionally cited Harari as saying: that humans may be at the end of their world dominance because humans learned to use words to get strangers to cooperate, and that now something has emerged that can do that better and will do that for us.

Well, consider what that means. Using words to get strangers to cooperate is a process known as persuasion. To say that something better has emerged that can get strangers to cooperate is to say that you’re abandoning persuasion as a means of cooperation, and as we all know, when persuasion fails, just use force. Harari speaks of words utterly divorced from their meanings and purpose, which is why he’s speaking in the Marxist language of fools, reflecting the incapacity to understand categories and the relation of things and qualities. He falsely equates feelings and thoughts with their expression. And as far as how Harari applied the manipulation of words to AI, you could have said the same thing about the invention of the typewriter.

It’s a machine that manipulates words. But one thing Harari got right was his admission that, at least for now, we have zero evidence that AI can feel anything. Humans, however, can feel and experience emotions. Humans have a great tendency toward anthropomorphizing non-human entities and objects. To anthropomorphize means to show or treat an animal, god, or object as if it is human in appearance, character, or behavior.

Something most children do with dolls and teddy bears. Wikipedia notes that anthropomorphizing AI refers to the tendency of people to attribute human-like qualities, emotions, and behaviors to artificial intelligence systems. This can lead to both positive effects like increased trust and engagement and negative consequences such as over-trust and emotional dependency on AI. Or perhaps just simply experiencing a sense of loss when that important AI disappears from your life.

Clip (Eureka S01E08 Right as Raynes):

Speaker 1: Callister’s an AI, an artificially intelligent being. I introduced him to Eureka as my assistant. And they embraced him. Callister formed attachments, developed relationships.

Speaker 2: And you did too?

Speaker 1: I don’t expect you to understand.

Speaker 2: I’m a father. Of course I understand.

Speaker 1: I messed everything up again, didn’t I?

Speaker 2: No. It was my fault.

Speaker 1: I should have never sent you away in the first place.

Speaker 2: I’m scared. I’m not ready. What’s gonna happen to me?

Speaker 1: Remember what Alan Turing said?

Speaker 2: He figured God could give a computer a soul if he wanted to.

Clip (Eureka S01E08 Right as Raynes):

Speaker 1: You okay?

Speaker 2: It’s just my luck, right? All the good ones are either gay, married, or robots.

Bob Metz:

Well, that just could be a dilemma faced by future generations developing deeper relationships with AI than with real people. And it’s already happening today. And as Marina Karlova advised, if you ask AI who you are and blindly accept its output, then yes, you’re using it dangerously. And today we have dared to ask AI who and what Just Right is, not just once, but four times under four different prompts.

And we got four different answers. Robert and I took this different approach to the AI phenomenon because by no means was this our first dive into the issue, which has been our focus on no fewer than a half dozen episodes, all available online. And further to Harari’s observation that we have zero evidence that AI can feel anything, that was our own conclusion when, in our blog post of July 9th, 2025, titled The Artificial Debate About Artificial Intelligence, we wrote the following. Quote: AI is incapable of experiencing events or relating to reality in a way that would lead to anything resembling self-awareness. Artificial intelligence could never appreciate anything. It would be incapable of appreciation.

AI could never relate to things like beauty, sexuality, art, music, literature, purpose, personal preferences, likes and dislikes, color, sounds, values, independence, free will, pain, pleasure, friendship, fear, love, and of course, the ultimate human structure of life, family and children. End quote.

So remember, the most important books in all of humanity are called dictionaries. They’re full of words and, even better, their definitions and meanings. And if you ever hear a rumor that Just Right has gone trans, remember, it’s just about the scripts: the transcripts, brought to you by AI, that you’ll find accompanying more and more broadcasts as we all move forward together each and every week, when we will continue our journey in the right direction. And until then, be right, stay right, do right, act right, think right, and be right back here. We’ll see you then.

Clip (Get Smart S03E21 Run, Robot, Run):

Max Smart: Hymie. What are you doing here?

Hymie: The chief is looking for you. He’s waiting in the locker room.

Max Smart: Okay, let’s go.

Hymie: That sure looks good.

Max Smart: Oh, you like hot dogs, Hymie?

Hymie: Of course. I may be a robot, but I’m an American robot.

Max Smart: Well, that surprises me. I mean, not that you’re an American robot, but that you like hot dogs. I thought the only thing you ever took was kerosene oil and an occasional quick charge.

Hymie: It’s the same old story. Nobody cares about a robot. Just wind him up, turn him loose and grease him every thousand miles.

Max Smart: Okay, Hymie, here, take the hot dog.

Hymie: No, thank you. I don’t want it now.

Max Smart: Go ahead, take it.

Hymie: I wouldn’t take it if it was the last hot dog in the whole wide world.

Max Smart: Okay.

Hymie: Is it?

Max Smart: No.

Hymie: Then I’ll take it.

Max Smart: Here, Hymie, you forgot this. How did it taste?

Hymie: Delicious. Even the poison was good.

Max Smart: Poison?

Hymie: I’m programmed to detect poison. That was arsenic, eight grams, two cc’s.

Max Smart: Are you all right, Hymie?

Hymie: Of course. Arsenic is good for cleaning out my tubes.

Max Smart: I heard that.