SGU Episode 1055
This episode was created by transcription-bot. Transcriptions should be highly accurate, but speakers are frequently misidentified; fixes require a human's helping hand. |
| This episode needs: proofreading, links, 'Today I Learned' list, categories, segment redirects. Please help out by contributing! |
How to Contribute |
| SGU Episode 1055 |
|---|
| September 27th 2025 |
"Inside NASA’s mission control: where innovation meets exploration and teamwork thrives." |
| Skeptical Rogues |
| S: Steven Novella |
B: Bob Novella |
C: Cara Santa Maria |
J: Jay Novella |
E: Evan Bernstein |
| Quote of the Week |
"Inductive reasoning is, of course, good guessing, not sound reasoning, but the finest results in science have been obtained in this way. Calling the guess a “working hypothesis,” its consequences are tested by experiment in every conceivable way." |
— Joseph William Mellor |
| Links |
| Download Podcast |
| Show Notes |
| SGU Forum |
Intro
Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.
S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Wednesday, September 24th, 2025, and this is your host, Steven Novella. Joining me this week are Bob Novella...
B: Hey, everybody!
S: Cara Santa Maria...
C: Howdy.
S: Jay Novella...
J: Hey, guys.
S: ...and Evan Bernstein.
E: Good evening everyone.
S: How's everyone this fine Wednesday?
E: Pretty good.
C: Doing well.
S: So, did any of you watch the full press conference with RFK Jr., Trump, and Oz?
E: Yes, what I could stomach.
C: No, I couldn't do that. I did watch—did you guys see the cut that they made where they set it to, like, Bill Nye the Science Guy, but it was, like, Donald Trump the Science Guy? It's very funny. It's all the best quotes.
S: It was terrible. I mean, it was straight up—
E: Tragic. Propaganda.
S: Fire hose of misinformation, propaganda and all with a very specific purpose as well. Although I honestly think Trump was sort of rambling off script and giving away the game. Like I can imagine, like they had a meeting where they said this is what we're going to say and this is the overall strategy. And Trump didn't know what they were supposed to say at that conference versus what was the long term goal. So he sort of gives the game away. But.
C: Anyway, do you think when they prepped him that they told him how to pronounce acetaminophen and he just forgot?
S: I don't know. I don't think they thought they had to.
C: But but remember, nothing bad can happen, it can only good happen.
S: It could only good happen, yeah.
E: All these world belong to. Kids.
S: So here's the quickie version. We talked about it on the live stream. I wrote about it on Science Based Medicine and Neurologica.
What's the Word? (01:46)
- Autism
S: The announcement basically had two components. One, that they've discovered the cause of autism—wrong—and it's Tylenol, acetaminophen, in pregnant mothers, which is wrong. Again, I talked already about why the evidence for that was preliminary and inconsistent, and actually the best evidence is that, no, there is no causal link between those two things. And pretty much every medical organization and specialty organization in the world has looked at the evidence and come to the same conclusion. But they're, of course, just cherry-picking whatever studies they want, because they had to—RFK promised he was going to find the cause of autism in six months. So boom, here it is, right? Even if he has to just make it up. The second one was a new treatment for autism, which is really a treatment for cerebral folate deficiency, which may have some manifestations of autism. And again, this is preliminary. It has not been proven yet. It requires more research and more evidence. But it looks like they just pressured the FDA—you know, he has his toady in there now—into giving approval for this drug, which is already on the market for other reasons, basically giving it a new indication for autism. So there you go. They found the cause of autism and they found a treatment for autism, both of which are complete bullshit. But the deeper game was given away both by what RFK said on script and by what Trump said off script. RFK basically made the case, or tried to make the case, that autism is primarily an environmental disease, right? It's not genetic. He said that the research showing it's genetic is all fraudulent, a conspiracy, and that he's going to direct the NIH now to look for environmental causes of autism—i.e. vaccines, right? But other shit as well, I'm sure—whatever it'll be, drugs and vaccines and toxins, right. So that's basically redirecting the NIH to waste their money on his pet project rather than having scientists and researchers following the evidence where it actually leads. And Trump, of course, goes off on vaccines, how the MMR vaccine is bad—it's just bad—and you have to break it up into three different shots, which I think is the strategy here, right? Because we don't have separate mumps or measles or rubella vaccines. We just have the MMR vaccine, the combined vaccine. And RFK's vaccine panel that he packed with his anti-vaxxers has already removed the recommendation for the MMRV—the MMR plus the varicella vaccine—saying it has a slightly higher risk of fever-associated seizures than the MMR alone.
C: So what's the end game there? Is it, like... so the end game is—
S: They're going to do the same thing to MMR. They're going to say, nope, we're going to delay it till after four years old and you have to give the individual vaccines—but there are no individual vaccines. So if they try to get individual vaccines approved, then the "gold standard science" we talked about kicks in, where you have to have a placebo-controlled trial, which you can't do. You can't do that for something that already has a working competitor, right? You can't do—
C: That. Oh, see, I thought you were going to go Wakefield on this and be like, oh, they're just going to get their own people to make their own vaccines and make a ton of money off of it.
S: Well, I don't think that's the point. I think, you know, he just wants to—
C: Get rid of it altogether.
S: So this is all maneuvering—not to outright ban vaccines, but to remove them from the recommended schedule, to delay them until an older age so that insurance companies won't cover them, to prevent any new vaccines or variants from coming on the market because you can't do the science, and to direct research into only what he wants, which is only looking for environmental causes of things, because that's what he does. And—
E: What happens to the mortality rate once this all?
S: It's all going to be a completely unmitigated disaster. This is a healthcare disaster for the American public. The only question is how much—how far along is he going to get in the time that he has? And, you know, also what happens in 2026, when the next election happens? Is the public paying attention? Do they care? Are they full of misinformation? Are they idiots? I mean, what combination of these things? As I've said for a very long time now: yes, human civilization will destroy itself because of stupidity. That is the most grave threat to humanity.
E: Carl Sagan said as much as well.
C: Yeah, but I think the scary thing here is that for some, not all, but some of these childhood diseases, the manifestation, the public health crisis, won't happen until after he's out of office.
E: Yeah, it'd take many years for it to come to its fruition.
C: Some of them will be overnight, because the minute that people are unable or unwilling to vaccinate their children, children are going to start dying of disease. Like, it's going to happen quickly with new births for some diseases. But for other diseases that don't really become an issue until kids are in daycare or in, you know, elementary school, there is going to be a delay.
E: Well, they're not going to—I mean, are they turning off the spigot tomorrow here? Or does this stuff take years to get to the point that they want to get it to?
C: Well, I think they're trying to turn off the spigot as quickly as possible, obviously, and kind of effect change very fast. I guess one of the things that I think we don't talk about enough, or we do, but that I'm so curious about, is the fundamental motivation that's sort of behind the motivation that you often see with key players in anti-vax movements. We go back to Wakefield, and we know that the fraud with Wakefield had a financial incentive, right? And there was a power incentive. Very often when we talk about RFK or we talk about his HHS kind of group—Steve, I know I sent you some articles today, it's not going to be what we talk about later, but about, like, David and Mark Geier, or about William Parker. These individual anti-vaxxers who themselves were either practicing medicine without a license or, you know, committing fraud in their own ways, but had their own quote-unquote treatments that they were peddling, which were often really dangerous. Like, one of them was using Lupron—it's a hormone blocker, and it can basically chemically castrate young children. So these, like, horrific experiments and this really dark kind of approach, which is offering an alternative to a frightened public. That's what scares me the most: when people are told this thing that is safe is actually not safe, it's the cause of all the things you should be afraid of, but here, don't worry, I have something you can do instead. It's the "instead" that makes me go, let's follow the money and let's figure out why these kinds of unproven treatments are being peddled.
S: Yeah.
C: Do we know what's up with the kind of new treatment that they're starting to tout?
S: Well, the only thing that came out about that was that Dr. Oz at one point had a stake in the company that sells it, which he then claimed he divested from. But that's never been confirmed, so we don't know. So I don't know if this is specifically about grifting, you know, and trying to make money off of alternatives, although that is what's fueling the alternative medicine industry for sure—selling supplements and stuff like that. RFK Jr. mainly makes his money by being a lawyer, defending people suing for toxic exposures and things like that, right? So that's how he makes his money—he wants everything to be environmental and toxic, right? Because that's how he makes his money.
C: But I think we need to—I'm sure somebody has already done a detailed deep dive of everybody on that vaccine panel, of every single consultant that has been brought in where a legitimate scientist who has dedicated their life to doing this kind of research was nixed and somebody else was brought in to give their opinion. Maybe it's just because they're toeing the party line and they're anti-vax, but I have a feeling that part of the reason they're anti-vax is because there's some sort of incentive.
S: They're often, yeah. They're often intertwined.
C: Yeah.
S: I mean, it doesn't matter for the terrible arguments they're making and what the science actually says. But you're right, they are often intertwined.
C: And I think it matters for the public to better understand this, because if it's just straight-up fearmongering, a lot of people go, well, why would they do that if it's not true? A lot of people say, why would this public official say that if it's not true? But if it's like, oh, this is why—it starts to make sense for people. Yeah.
S: I mean, obviously we're going to have to keep an eye on this as it unfolds. I'll just say this too: me and my colleagues at Science-Based Medicine, especially David Gorski, who's been writing about this, but most all of us have at one point or another been predicting what RFK Jr. is going to do, and we've been pretty spot on. So it's not as if we don't have a good bead on where he's going with this. He is going to do everything he can to limit and minimize Americans' use of vaccines short of outright banning them. And so far he's way ahead of schedule—he's doing it faster and just more, you know, draconian than we even thought. It's basically at the worst end of the spectrum, right? He's doing the exact thing he promised he wouldn't do when he got approved in the Senate. Oh, gosh. All right, Cara, we're going to do a What's the Word?
C: And it's kind of.
S: Related.
C: It is. Yeah. It's not grift. I actually wanted to dig a little bit deeper into the word autism itself. I know we've done deep dives on the show in the past about what autism is, what autism isn't, some of the kind of misinformation and the pseudoscience that we're often hearing peddled about autism. But I was really curious, like, where does the word come from? Because I think most of us can sort of recognize the two components of the word, right? If we break it up into two, it ends in the suffix -ism, just like many actions or states of being are isms. But the prefix, or the first portion of the word, comes from the Greek autos, right? Or auto, meaning self. And so why is it self-ism? Like, where does that come from? And one thing that I remember learning—it was somewhere in the recesses of my mind from when I was early on as a psychology student, but was refreshed for me today—is that the term was actually coined way back in 1912, so, you know, over 100 years ago, by a Swiss psychiatrist by the name of Paul Bleuler. Bleuler? I'm very bad with, like, German pronunciation. I have it right here. Bleuler. OK, well, fine, Eugen Bleuler. So he actually coined the term autism, but he wasn't referring to what we now know to be autism back in 1912. What he was actually referring to was a symptom that he saw in many of the severe cases of schizophrenia that he was studying. So he also kind of created the concept of schizophrenia—he was the first to sort of look at that and determine it as a syndrome. Basically, he said autistic thinking has to do with—and this is back when psychoanalysis was king, and so a lot of psychiatrists thought that there were portions of your mind that would kind of do things in order to avoid facing the harshness of reality. And so he described autistic thinking, this self-ism, as spending time in one's inner life and not being readily accessible to observers. He actually characterized it by, quote, infantile wishes to avoid unsatisfying realities and replace them with fantasies and hallucinations. But around the mid-century, so the 1950s and 1960s, we saw a big change in the way that word started to be used. So not only did we know more about schizophrenia at that point, we also saw something big happen in, like, the '60s having to do with mental health. Do you guys know what that was? Something really big, a big change.
E: Electric shock therapy.
C: No, that's when we closed all of the asylums, right? That's when we had the rise of psychiatric medication. We started to, yes, classify with a little bit more kind of science, but we also were closing the asylums. And so there was a real push for individuals to integrate into society and to be able to do that with appropriate therapies. At that point, that word autism started to shift and mean more what it refers to now, which is, yes, a diagnosis—some people might say more of a syndrome, right, than an actual, like, quote, disease or disorder. And really, for a lot of people—I actually read a really lovely post, it was on Reddit, somebody talking about how they really like the word autism. They really like going back to the roots, because they, as somebody who is neurodivergent, see it as having an extremely absorbing interior life, and that was something that they really related to. And so now we'll often see that shift—and that happened again through a change in psychiatry, and also epidemiologic measures that helped us kind of understand incidence rates of these different diagnoses—to less have to do with excessive hallucinations or fantasy and more to do with one's kind of tendency to draw inward, or sort of deficits in social interaction or in communication. And so it's interesting that the word still holds, and it still does define—not all individuals with autism, because as we know, many people with autism have very different manifestations of the diagnosis—but that kind of core root of self, being kind of on one's own, being somewhat internal, but having this, like, deep relationship with oneself, does hold for many people who identify in that way. So it's a pretty interesting, I think, etymology that sort of left and came back, you know. It sort of was a core symptom of schizophrenia back when a lot of psychiatric syndromes and disorders were all sort of mashed together and they weren't well understood, and then over time it was teased apart and better used to describe what we now would call autism as a diagnosis with communication deficits.
S: I know you said this, but just to emphasize: they actually thought it was the early stages of schizophrenia at one point.
C: Yeah, and back then, schizophrenia was kind of everything and.
S: You know, yeah, schizophrenia was like the catch-all. They didn't really know what that was either. Yeah, they were just focusing on the—they're just absorbed in themselves.
C: You had psychotic, you had psychotic disorders and you had neurotic disorders, and that was pretty much it. Neurosis was things like anxiety, depression, you know, nerves. And then psychotic disorders was pretty much anything else, anything that seemed kind of bizarre or odd or just different. And then later that was kind of teased out and we started to have a better understanding of what psychosis actually was. And autism emerged as a developmental disability, not as having anything whatsoever to do with schizophrenia, but the root came from that.
S: Right.
News Items
New NASA Mission Control (17:51)
- NASA debuts new Orion mission control room for Artemis 2 astronaut flight around the moon (photos) [1]
S: OK, Thanks, Cara. Jay, tell us about NASA's new Mission Control.
J: Well, there's a couple of things going on. The first one is very brief but interesting. NASA has just recently opened the new Orion Mission Evaluation Room, and that's called the MER—say "mer." This is inside the Mission Control Center at the Johnson Space Center in Houston. The room was activated on August 15th, 2025—you know, like they turn the lights on and then you hear all, like, the... you know, that's how I see it. It's fun, Steve. You should try it sometime. This adds 24 engineering console stations that are staffed around the clock during the 10-day Artemis 2 mission. These are meant to augment the standard White Flight Control Room—I guess that's what they call the existing one. And this is because they're going to have expert engineers from NASA, Lockheed Martin, ESA, and Airbus that are going to be constantly monitoring the spacecraft data, comparing performance against their expectations, and helping troubleshoot unexpected issues that always pop up. It's important to note, like, this is not overkill. This represents just how complicated Orion's systems are and how many moving parts need people simultaneously looking at them to keep the crew safe and the mission on track, right? It's exactly what Mission Control is supposed to do. It's just like Mission Control on steroids. Artemis 2 is also going to be—as a quick reminder, this is the first crewed flight in NASA's modern lunar program. I'm personally extraordinarily excited about this. All the reasons why, I will probably list most of them in what I'm about to tell you. The first reason why I'm super psyched is that this is when things start to get really, really exciting, right? We have the four astronauts that are going to ride Orion on this 10-day mission. It's called a free return, and what happens is they're going to go to the moon, they're going to circumnavigate the moon—they're not getting off the rocket, nothing like that. This is just people in the ship going around the moon and then coming back. This is going to prove that the rocket and the spacecraft and the ground systems are all ready for sustained deep space work, which is what we're talking about from here on out after the second mission. Even though it might not seem like a big deal—you know what's going to happen, they're going to ride there and come back—this mission is unbelievably critical, and it's really cool. This is the beginning of crewed missions, and if things go as planned, they're never going to stop. Just think about it. You know, they're building a huge, huge system on the moon. There's so many different giant pieces of the puzzle that need to be constructed and brought to the moon, and a moon base, and figuring out all of the technology that's needed. And then they're going to go to Mars—if the funding is there and the will is there, it's just going to be crewed flight after crewed flight after crewed flight, on and on and on. And I think we're all going to get bored with it at some point, you know, like it's just going to become so common. NASA's schedule is that the flight launches no later than April of 2026. So I remember when we were talking about this, guys. I haven't really been bringing it up that much just because there really wasn't that much to say—I was waiting for this milestone. But I remember hearing April 2026 and saying to myself, oh man, that is so far away. And now it really isn't. Like, it's going to—
Yeah, it's going to come very quickly.
E: So the Artemis program was designed in, what, 2018, I think, is when it first...?
J: Yeah, I don't remember the date, but the original launch, I think Artemis 2 was supposed to go off like in late 24 or early 25.
E: That's what I remember as well.
J: Well, yeah, so we had a significant delay, and again, good for them—delay it. You know, we're talking about sending people to the moon again with all new technology, so they have to get it right. Exactly. So the agency left the door open to fly even earlier than April if the work finishes faster, but the official commitment is still April 2026. The crew's set, meaning they're selected, and they have been selected for a while. We have four people going: Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch, and Mission Specialist Jeremy Hansen of the Canadian Space Agency. These guys are going to take Orion out of the garage and take it out on the highway. And this will be the first time since Apollo 17 that a crew will travel beyond low Earth orbit. So these are all profound moves that are happening here. The hardware status is better than I think a lot of people assume at this point. The Space Launch System core stage, right? This is the rocket, the single-use rocket—they're going to have to build one specifically for each mission if they don't, you know, eventually have SpaceX help. The Space Launch System core stage—you got that, Steve? Space Launch System core stage—that's essentially the rocket without the Orion capsule. This arrived at Kennedy Space Center by barge in July of 2024. That's a long time ago. The solid rocket boosters were stacked in the Vehicle Assembly Building, right? These are all things that happen before they start to really build the whole ship out. NASA reports that the core stage and boosters were connected and integrated on the mobile launcher in March of this year. These are the hard milestones and a clear sign that things are definitely a go. So Orion is past its assembly phase, meaning it's built. Lockheed Martin says development for the Artemis 2 spacecraft is not only finished, but it's in launch preparation flow at Kennedy. I like that they call it flow, right? All the things that they've got to do to get it prepared before they attach it to the rocket. Now, this matters because most of the open risks after Artemis 1 centered on Orion, right? Not the rocket. So if you guys remember, during Artemis 1—this was back in December of 2022—NASA discovered an issue with Orion's heat shield during reentry. We don't want this problem. It's a really bad problem to have, because this is where people could easily die. You don't want them, you know, dying literally moments before they touch back down. The shield is made of something called Avcoat, an ablative material, which basically means it's heat resistant—it's designed to gradually burn away in a controlled manner to protect the spacecraft. However, Orion lost more material than expected, because there were chunks of that stuff kind of popping off prematurely in a process they call spallation. I've never heard that word before. While this didn't endanger Artemis 1—because, first off, it was uncrewed, and internal temperatures still remained safe—it was still a big deal. The concerns were really high for future missions. There was excessive material loss, and that could allow the interior to get superheated, which means the gases are going to dramatically expand, and this would definitely pose a threat to anybody who would actually be on a future mission. So NASA spent over a year investigating the problem. They ran, you know, a huge number of tests to recreate these reentry conditions.
They were examining the existing flight data, and back in December 2024, they identified the root cause. So for Artemis 2, NASA decided to fly the heat shield as built, which is the same spec as the first one, using the same materials and construction as Artemis 1. They're essentially relying on updated thermal models. They adjusted reentry procedures—I guess changing the angles and stuff like that—and they enhanced monitoring to keep risks within safe margins. Although I don't know what the monitoring is going to do if they're on their way in and there's a problem, but I guess they know something I don't. A permanent hardware fix—which is going to mean manufacturing tweaks, improvements to how the Avcoat tiles are bonded and layered, all of those details—is being developed for later missions, probably Artemis 3 and 4. It's only going to be implemented after extensive testing to ensure reliability. Meaning that if they were going to change something that big and that significant, it would not only delay things, it could throw the whole mission series off kilter, right? You don't want to, like, throw in a three-year delay—just do it on the next one. And they're confident that everything is going to be fine. The crew is now actively preparing for the mission. NASA is showing the crew, like, running the launch day walkout drills—you know, what happens when the day comes, this is exactly what's going to happen. So they have to coordinate everything with all the people that go with them, like that entourage. This includes people getting them into the capsule, buckling them in, giving them a pack of gum, slapping them around, you know, all the things that need to happen. They're rehearsing, like, nighttime operations. There are separate updates that they put out that describe the research plan for the mission. This includes monitoring sleep and activity during the daytime, and collecting biological samples from the astronauts to support human research for deep space flight. Meaning they have to know everything about these people just to make sure that they're perfectly healthy and that nothing is going to come up. There's independent reporting that shows them practicing lunar observation protocols. I know that sounds simple, but these are very useful backup skills, and nothing here is fluff. It's how NASA lowers something called burn-down risks. A burn-down risk is a potential problem or technical issue that has to be fully resolved, tested, and signed off on before major milestones. And they have some risks there that they have to work on. There are some unknowns, and some of these are quite big. If anything is going to cause a delay, it's going to be in the next few things I tell you here. So, life support performance: NASA has to confirm that the environmental control and life support system works properly inside the fully integrated Orion capsule, and it has to be better than lab testing—it has to be fully put together and 100% functional. Heat shield confidence: again, I went over this, but the heat shield has to perform safely for the specific reentry trajectory Artemis 2 will fly. It's going to be a different reentry than Artemis 1, so they have to really, really test that and make sure it's 100% go. And they have something called first crewed mission pacing items.
These are slower checks for safety. They're required because this is the first flight with astronauts, and that's naturally going to introduce more steps and potential failure points and potential delays. But they have more protocols that they have to go through. The agency's official timeline remains no later than April 2026. Of course, it will be pushed if they have to push it. But keep in mind everything I just said—everything has to be completely greenlit by all of the engineers, everyone whose skill set matters here, you know, everyone has to give a thumbs up. If you hear any other dates from outside sources, you have to be very skeptical of that. Like, you should really only listen to the dates that are coming from NASA, because there have been a lot of reports from other, you know, companies or groups that are trying to say this is not going to happen, this isn't going to work, or whatever. But they don't have the inside information; they don't really know what's going on. And I don't think NASA has any real reason to lie. They make it very clear: we're only going to launch if it's safe, and we're saying April 2026. And again, we know that they'll delay if there's a problem, because they've already done it, and that's the culture at NASA. So I trust them and trust the engineers, and I'm looking forward to, like, some spectacular space adventures moving forward in 2026.
S: Did you guys all see the picture of the planned Mission Control?
J: Yeah.
E: No. So that's. Cool.
S: Yeah, it's pretty cool. I mean, it's basically a bunch of big monitors, right? It's a bunch of computer stations with gigantic monitors.
E: What games would you play on those?
J: I mean, those control rooms, like they don't really differ that much from the historical ones, right? It's really, it's like Steve said, giant monitors on the walls, computer monitors and computer desks, like everywhere with tons of people with signs above the desks and all that. It's the same thing. It's just, you know, just better modern technology. I think the old school stuff looked really cool. I just like the layout, but the new one is cool. Take a look at it.
S: All right. Thanks, Jay.
Element 120 (30:24)
S: Bob, tell us about element 120.
B: If you insist. More accurately, 119—I'm not sure why they're focusing on 120, but that's neither here nor there. OK, never mind all that crap, Steve. A new method of discovering new superheavy elements has recently been tested with positive results. Could this method find new elements that do not yet exist in our periodic table of elements? This announcement came from the Lawrence Berkeley National Laboratory. I'm sure all of you have heard about the periodic table of elements, right? Most of these elements are essentially just lying around waiting for us to catalog them, right? Just lying right there. Some of them will never appear naturally on Earth, though. And I was curious, what is that cutoff? I wasn't 100% sure. Do you guys know the heaviest naturally occurring element that forms on Earth? Ninety-two—92 protons, if that's what you meant, that is correct. So, uranium-238, with 92 protons and 146 neutrons. But then what I didn't know is: what is the heaviest natural element that we know of? And it's not uranium-238, it's plutonium-244, which we found in some meteorite dust. But plutonium-244 apparently has a half-life of like 80 million years, so if some were created here on the Earth, it's already decayed away. So it's not totally fair to say that. But it is correct that all the elements beyond plutonium were never just found—they had to be synthesized, they had to be created. So have you ever wondered how they create even heavier synthetic elements to add to the periodic table? All the time.
E: I've wondered.
B: I mean, yeah, I've thought about it, I've read some stuff about it, but what I learned recently—a lot of it was new to me. What they do, at a super high level, is they smash elements together. One way or the other, they're just smashing them together and hoping that the protons and neutrons of one nucleus can fuse to the nucleus of another atom. That's kind of what we're doing here, and we've talked about that in the context of colliders and things like that plenty of times. So if you add new protons to a nucleus, you have created, by definition, what?
US#03: A new element.
B: Exactly—a new element, since the number of protons defines what an element is. So, for example, all elements with six protons are carbon atoms. There's no other way; there's nothing else they could be except carbon atoms. This number is the atomic number. But if you change the number of neutrons in an element, that just changes the isotope of that element. So say you go from deuterium to tritium—that's all that is. It's still a form of hydrogen, just a different isotope and atomic mass. You don't necessarily need to know that much for this talk, but atomic mass is the protons plus the neutrons; the atomic number is just the protons. That's the critical one that defines the new element. So, all right, the old method of doing this used a particle beam of calcium-48. I did not know this at all. They essentially used a particle beam weapon—I mean, I don't know how much of a weapon that would be. It's a particle beam of calcium-48, with 20 protons and 28 neutrons. So this is a rare isotope of calcium. So imagine you have a beam of calcium atoms with no electrons, right? Just a nucleus, and they all hit other heavy elements like curium or californium. So you've got millions of billions of these calcium ions that are just impacting onto this californium, say. So once in a while, one of those calcium-48 bullets would fuse to a californium atom instead of just bouncing off. And the odds of that happening are, like, one in quadrillions. But, you know, if you have enough of these atoms in your calcium beam, it's going to happen. So after fusion takes place, what do you got? You've got a new element, since the number of protons has changed—no matter what you do to the protons, if you add one or take away one or anything like that, you now have a new element. So these superheavy atoms don't last long, but we can detect the decay chain of the elements. Once we can detect what this mysterious thing decayed into—what that decay chain is, the daughter elements and granddaughter elements, if you will—then you can definitively say what had to have existed to create those daughter elements. Right? You follow that? It's like cataloguing your daughter's DNA and her son's DNA to conclude that you definitely had to exist. You like that one, Cara? So they're seeing this decay chain, and they say, well, for this decay chain to exist, this element had to have created it. So that's their evidence, and it's pretty damn solid. The specific method that I've been talking about, this calcium-48 beam, has actually helped us find elements 113 to 118 in the early 2000s—or should I say aughts? I just don't like saying aughts. Does anyone like saying aughts? Oh wow, you're a weirdo. So unfortunately, this calcium beam technique has reached the end of its useful life in finding new elements. It's just not heavy enough to create an element above the current heaviest element, which is 118—oganesson is one way to pronounce it.
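The proton bookkeeping described here is just addition: the beam's atomic number plus the target's atomic number gives the atomic number of the fused nucleus. A worked example, using the reaction generally credited with producing oganesson (element 118); the emitted-neutron count is the commonly cited value, so treat that detail as approximate:

$$ {}^{48}_{20}\mathrm{Ca} \;+\; {}^{249}_{98}\mathrm{Cf} \;\rightarrow\; {}^{294}_{118}\mathrm{Og} \;+\; 3\,n, \qquad 20 + 98 = 118. $$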
E: I would have never guessed that as. True, yeah.
B: One-eighteen. So we need something heavier. Calcium-48 just isn't cutting it—there's not enough oomph, not enough power behind it. We need something a little bit more formidable, a little bit heavier. And this is where this news item kicks in, and it's called titanium-50.
B: It's a beam of titanium-50, which is a little bit more than 48, right?
E: Yes, I agree.
B: Yes, yes. Thank you for agreeing. This new particle beam the team has developed, titanium-50—oh yeah, I have it right here: 22 protons and 28 neutrons—has been tested essentially as a proof of concept for creating superheavy atoms beyond 118. So this was their goal. They've been developing this new titanium-50 beam for quite a while, and they're like, let's test this out, let's just see what it can do. They weren't expecting to make any huge, major breakthroughs; they just wanted a proof of concept. So to do this, the researchers sent the new beam against a target of plutonium-244, the heaviest natural element that we have encountered. They shot the beam against plutonium-244, and when the titanium nucleus and plutonium nucleus fused, they briefly created a new, heavier nucleus—and what it created, they found, was actually two atoms of element 116, called livermorium. I mean, when did they name that? I guess I remember the old Latin placeholder names for these elements, but they must have renamed it and I missed it, because I never heard of livermorium before. It sounds vaguely funereal, doesn't it? OK.
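The same arithmetic applies to the new beam: 22 protons from titanium plus 94 from plutonium gives element 116. A worked version of the reaction described here (the four emitted neutrons are the commonly reported figure for this experiment, so treat that as approximate):

$$ {}^{50}_{22}\mathrm{Ti} \;+\; {}^{244}_{94}\mathrm{Pu} \;\rightarrow\; {}^{290}_{116}\mathrm{Lv} \;+\; 4\,n, \qquad 22 + 94 = 116. $$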
C: So it must be named for somebody named Livermore.
E: Well, right, the Livermore Laboratory.
B: Yes, I suppose. Oh damn, there you go—Lawrence Livermore. Is it Lawrence Livermore? Yeah.
S: Is that right? Yeah, I believe it is. You are correct, sir.
B: So they used this new technique and they found two atoms of element 116. Now, this element, like I said, has already been found—it was found using the calcium beam, probably back in the aughts. But like I said, this was a proof of concept, and they pretty much, well, you know, proved the concept. The odds were against their success—like I said, only a few nuclei in, like, a quadrillion tries should have done this. And of course, if your beam is big enough and runs long enough, you'll eventually hit it, right. So what does this mean? This means that titanium-50 could work for perhaps at least the next few elements. So we may be able to get to 119, 120, maybe 121, and if we are super lucky, it could help us discover a few more after that. But I suspect that we're going to probably need another type of beam after 121 or so. Now, this is one thing that caught me by surprise: these next elements, 119 and 120, could be extra special for a couple of reasons. One is that element 119 would be a new row in the periodic table—a new period, I guess, is what they would say—because all seven rows, or periods, are basically filled up right now. So when they discover 119, it's going to go to a new row. So which of you guys knows what the rows of the periodic table, those periods, reflect? What is the significance? Electron shells, right? Yeah, it's essentially how the electrons are arranged around the atomic nuclei. So all of known chemistry, everything we know about chemistry, fits in the seven rows of the periodic table of elements right now. If or when we confirm the next heaviest element—say it's 119 or 120, we're not sure which one it would be—we will then be on a new row. It would be row 8, or period 8.
E: In which column?
B: Well, far left. 119 would probably be the farthest left. OK, so here's what's expected to happen in row 8 right away. We can't be 100% sure, but we do very strongly suspect that relativistic effects could strongly influence the electron behavior. One website was saying that the electrons are essentially traveling close to the speed of light—so that's why they're saying that relativistic effects could have some influence here. So these elements, who knows? They probably won't follow expected chemistry patterns, right? We're not sure what kind of chemistry these things could engage in—not that we would ever see any chemical reactions, right? These are ultra-heavy atoms. Their half-lives are probably in the microseconds; they're very, very super brief. So there's not going to be any real chemistry going on there, unless, of course, there's that holy grail of chemistry known as—and I've mentioned it here and there on the show, and even just talking with you guys recently—the island of stability. Evan, you and I were talking about this. That's one of the things that some nuclear theories predict: there may be some very heavy elements that might have considerably longer half-lives. Instead of microseconds, it could be whole seconds—imagine, a whole second—or minutes or even days. I mean, you can't rule that out. Maybe it's unlikely, but some theories point to it, and this would be due to some special—some call it magical—ratio of protons to neutrons. They say that could make these superheavy new atoms just extra stable, so stable that they could last far longer than the microseconds that these superheavy elements typically last. So who knows what we could learn if we had that much time to play with a superheavy element? Even seconds, I think, would allow us to do a lot more testing, far more than what we could accomplish with just microseconds. We could do something more substantial than just looking at the decay of its daughter particles and stuff like that. So I have a silly hope—it's a silly hope, I don't tell too many people—but sometimes I imagine, you know, my what-if scenarios here. What-if machines, yeah, right. What if, at the highest levels of what's possible with technology, it could be reasonable or feasible to create a technology using these elements with half-lives that go not even seconds but days? Or, I'm talking, imagine half-lives in the years or even decades—which, I'm not aware of any theory that says that's even a reasonable expectation. But imagine—this is the kind of stuff that I would expect from super-advanced aliens, you know, having materials with radical new properties based on these relativistic or quantum effects that these superheavy elements in this island of stability could have. I mean, I did some research—what kind of abilities could these have? It could be stuff like super-dense fuels, or imagine super-compact reactors that you could, like, put in your phone or something crazy like that. Whole new branches of chemistry.
Oh, here's a good one: element 126 armor plating. All right, I'm going to stop right there. That's just really goofy. I mean, nobody's saying that this island of stability would be that awesome. I think they'd be incredibly happy if it lasted for a few seconds or a minute, but who knows? Once we get there, they may be so ridiculously stable that they could have a half-life of—don't count on it, but hopefully. At the very least, we can find elements using this new technique, this titanium-50 beam. We could find 119 and 120 and maybe even element 121, and see what this period 8 is all about in the periodic table of elements.
E: So long calcium 48, we miss you.
B: Thank you for your help. Sure, man.
Scalable Quantum Computer (36:11)
S: Bob, how you feeling about quantum computers?
B: Pretty good, pretty good.
E: Frustrating?
B: It's frustrating, you know. They just have to focus, and they are focusing to a certain extent. Error correction is key. You wouldn't even need that many qubits if you had negligible errors—with 200 qubits or even less, you could do some amazing things. The error correction is what's taking up so much of the effort, because it's so hard, right? If they can crack that nut... and I really don't know what you're going to be talking—
S: About. Yeah, you don't. So, Caltech just set a record with a 6,100-qubit array.
B: No, no, no, wait. Wait, what does that mean? Wait, wait, wait. There were only 1,000, like, a few months ago.
S: It's huge, that is.
B: Huge—but it doesn't mean much. What's the error correction? That's what's important. I know about that one, yeah, but, you know, you should—
S: And the Australian startup Diraq has now shown that they can maintain the 99% accuracy needed to make quantum computers viable, with production of silicon-based quantum chips. That's not what I'm talking about either, but this is the—
B: Talk about something.
S: —quantum computer news that we see all the time; it's just so hard to know what to make of it. We do appear to be making steady advances, but that doesn't, as you say, Bob, give us a good feeling for how close we are to, like, really functional quantum computers—you know, where you get quantum supremacy, where it's doing stuff we couldn't do without them.
B: Some claim that already, but I haven't taken a deep dive on that in a while. I'm not sure how accurate those claims of supremacy are, but OK, continue.
S: Right. So, as you say, the number of qubits we're lashing together is not the only piece of information that's important to understanding quantum computers. And just for quick background, for those of you who don't know: regular computers use bits of data, like ones or zeros, right? Anything that's binary—it could be any state, like a switch is either on or off, or a gate is open or closed, or whatever. Quantum computers use qubits, which essentially have their bits in a state of superposition. So it's not a one or a zero, it's a superposition of one and zero. That's one of the weird quantum effects that are critical to quantum computers. The other one is that the qubits need to be entangled, and it's the entanglement that actually makes the quantum computers work, right? That's how you connect them into a circuit. Both the superposition and the entanglement mean that we need to maintain these quantum states while the calculations are ongoing. But these quantum states are very fragile. You need to have super-cold temperatures—single-digit degrees Kelvin, for example—which is why it's never going to be sitting on your desktop, at least with no extrapolation of current technology. This is always going to be, like, governments and countries, you know, and wealthy institutions that may have these, to do, again, the kind of computing that you can't do with classical computers. All right, so this is where the breakthrough comes in: it's in the entanglement part of this. One of the huge limiting factors is how far apart the two entangled qubits can be, because they have to be isolated. So one of the analogies given in the study is: imagine two people in a soundproof booth, right?
E: Like Get Smart.
S: Yeah. So they have to be in a sound booth in order to limit the noise, because it's the environmental noise which breaks down the entanglement. But that also means they have to be close together. So you can't have somebody far away, because then they'll be outside the soundproof booth. But what if—what if you could connect soundproof booths together so that they can communicate with each other while still being isolated from outside noise? So that's kind of the idea here. So what the researchers did is they found a way to keep the systems isolated, to maintain entanglement and minimize noise, while simultaneously giving them the ability to communicate over much longer distances. So they're using nuclear spin as the information holder—the spin of a phosphorus nucleus, that's their qubit, right? And they keep it in a clean quantum system by surrounding it with an electron. And they demonstrated that they could maintain an entanglement for 30 seconds, which is a massive amount of time when you're talking about quantum computers, with, Bob, less than 1% errors. So that's a very low error rate over a very long period of time. This is a good, workable quantum system. But now they've taken it one step further. They've figured out how to manipulate the electron so that its orbit can essentially surround two phosphorus nuclei—which electrons do, right? Nuclei can share electrons. But this enables the two nuclei to communicate with each other over 20 nanometers. Now, 20 nanometers is a very short distance, but you know what? That's on a par with our current manufacturing techniques for regular silicon computer chips.
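A minimal numerical sketch of the superposition and entanglement described above—plain state vectors only, not the phosphorus-spin hardware used in the study:

```python
import numpy as np

# Single-qubit basis states |0> and |1>
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Superposition: an equal mix of |0> and |1> ("not a one or a zero, but both")
plus = (zero + one) / np.sqrt(2)

# Entanglement: the Bell state (|00> + |11>) / sqrt(2).
# The two qubits have no independent state; measuring one fixes the other.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

print("superposition amplitudes:", plus)            # ~[0.707, 0.707]
print("Bell state amplitudes:", bell)               # ~[0.707, 0, 0, 0.707]
print("outcome probabilities:", np.abs(bell) ** 2)  # [0.5, 0, 0, 0.5] for 00, 01, 10, 11
```

Keeping a state like `bell` intact against environmental noise is the hard part; the 30-second, sub-1% error figures discussed here are about exactly that.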
E: Oh yeah, right. So we can use the material.
S: We've already got, yes. So the idea is we could use manufacturing techniques we already have to make stuff at the 20-nanometer scale, and that could be applied to this system, because you're dealing with the 20-nanometer scale. So they proved that this works, basically—that you can have quantum entanglement in two qubits separated by 20 nanometers, using this phosphorus nuclear spin as the qubit system. So could this be the basis of future quantum computers? It's too early to tell, but they are, you know, progressing nicely. The thing about this system, which they say is a massive breakthrough for quantum computers, is that it's scalable, because you could just keep adding phosphorus nuclei and connecting them with other phosphorus nuclei using this shared electron technique. They said they see no reason why they can't just keep scaling this up. And the scaling is, of course, the main limiting factor with quantum computers—making it bigger and bigger. So we'll see where this plays out. I mean, it may be years, you know, before we really see this mature into the kind of thing where you're mass producing quantum computers, you know?
E: Neat, but it's the right path.
S: Yeah, this seems like a very encouraging path. But even still, just to give people an idea: why do people talk about quantum computers, and what are they and how do they work? Nobody knows, right? I mean, basically, it's complicated. It's super complicated. Every time I think I understand it, I'm like, no, it's not really that, it's really this other thing.
E: Well, who famously said, like, if you think you understand it, you don't really understand it? Yeah, yeah. Feynman. I don't remember.
S: It's super complicated. When I wrote about it recently, I talked about quantum encryption, because that's, like, the big thing with quantum computers. Once you get a really powerful quantum computer, it kind of breaks all old encryption, and you need a quantum computer to make encryption that another quantum computer can't crack. But then it was pointed out that, yeah, there are ways to make quantum-computer-resilient encryption that doesn't require a quantum computer. Exactly. Yeah.
J: Yeah.
S: So I see—we're already working on that. But still, it seems like there could be huge technological advantages to having a quantum computer. You don't want your adversaries to have one when you don't have one. So I think that's what's fueling a lot of this research. So, you know, when will we have, like, mature quantum computers? I don't know. It's so hard to tell, even reading these kinds of news items. It's very sexy, it's very exciting, this sounds like a big breakthrough, it all makes sense. Sure, you can have these entangled qubits that are stable over 30 seconds and over long distances—at the distances of manufacturing existing computer chips. I get all that. I just don't know how meaningful it really is, you know? Do you have any other thoughts on that, Bob?
B: No, the error rate is encouraging—the less than 1% error rate is very encouraging—and the scalability is encouraging as well. So yeah, definitely be tracking this one.
S: Yeah, yeah, we'll be tracking it. You know, maybe one day we'll be able to report that we have a really significant usable quantum computer. All right, let's move on.
Using AI Increases Lying (53:03)
S: All right, Evan, tell us about artificial intelligence and lying. But maybe not the way you think.
E: Yeah, exactly. There's a study out—it was in Nature—that made the rounds in the media this past week, in which the headline, and this is what drew me in: using AI increases unethical behavior. We know that headlines are never the whole story, so we have to definitely take a closer look at that. What did this study actually show? How worried should we be about a supposed impact of AI on human morality here? So you go to the paper. The paper is titled "Delegation to artificial intelligence can increase dishonest behavior." They ran 13 experiments with over 8,000 participants, and the researchers explored what happens when people can delegate tasks to AI systems, compared to people doing those tasks themselves. I would say that the central question here wasn't just, you know, will people cheat if given the chance—we kind of know that answer—but the deeper question was: does delegating tasks to AI change the psychological dynamics in a way that makes cheating more likely? So there is a distinction there. And the experiments were built around controlled tasks where participants could benefit financially by being dishonest. This was the test: the die-roll game. Apparently psychologists have been using this for decades. Is this true? Have you heard of the die-roll game? I'll explain how it works.
C: There are a lot of paradigms like that.
E: Roll a six-sided die and keep the result to yourself. You then report your result to the experimenter, and the higher the number you report, the more, say, money you get, as an example. So let's say I roll in secret and there's a three, but I tell the experimenter, hey, I got a six. I'll actually get more cash, or whatever the reward is. And there's essentially no way to prove that you're lying. It's almost like Liar's Dice in a way—you know, that game they played in Pirates of the Caribbean—in a sense, when you can call the bluff. But in any case, in the study, some people rolled and reported directly to another person—there was no AI involved—and then others delegated the reporting to an AI agent, and they used different kinds of interfaces. Some used a rules-based system, in which they said, report it honestly, OK. Some used what they're classifying as, like, supervised learning—they trained the AI by selecting example data. And then there was goal-based: maximize your profit. You know, let the AI figure out how best to make it come out in your favor. They also, at the same time, ran a more realistic, say, tax-reporting game—this is where I got interested—in which the participants reported their income, and they could choose to underreport it if they wanted to. And you do it with people and you do it with AI agents, to see what the differences are there—how are people going to report that to a person, someone like me, a tax preparer, versus having an AI take care of that reporting for you? So here's what they found. Delegation did increase dishonest behavior, they're saying. They said even with the most explicit, rules-based interface, people who used AI were more likely to cheat than those doing the tasks themselves. The effect skyrocketed with the vague, goal-based interface—that was the one basically saying, hey, maximize this for me, and turning it over to the AI to do that for them. And in that condition, they said only 12 to 16% of participants stayed completely honest, and everyone else cheated to some degree. So basically, the more abstract and hands-off the delegation, the easier it became for people to let the AI do their quote-unquote dirty work for them. And also, the AI agents were far more accommodating than the human agents. And this is where I have a little practical experience. When I'm sitting down with someone for a tax appointment and I'm talking to them about it, I would definitely say that they would be less inclined to be liberal with their answers to me, as opposed to an AI, because of the interaction that we have—because I make sure that they're trying to be as accurate as possible. That's part of my job, because I don't want to get my clients in trouble. I'm trying to save them, basically, from themselves, and point out where certain things might be, say, red flags for the IRS. For example, somebody comes to me and says, hey, I earned $100,000 last year and I gave $50,000 to qualified charity, so I get a charitable deduction—I don't have to pay taxes on half of my income because I get to write that off. That is outside the boundaries of the normal statistics; that is an outlier. I would therefore press back and say, you need to make sure you can produce your receipts and, you know, do all these kinds of things—make sure you've got it ready, because this is a high-audit item.
The IRS is going to come back and ask you to prove it. So I encourage them to do that, or to change their answer: well, yeah, it wasn't 50,000, it was actually 5,000. OK, that's more of a number that would be believable. Whereas if they go and do that with a computer, an AI or something like that, the AI will, generally speaking, be more accommodating and allow them to go ahead and report that $50,000 without the pushback.
S: Yes, but to be clear, Evan, the people were no more likely to request unethical behavior from the AI than from people. So they still asked people at the same rate to do the cheating for them, right? Unless there were guardrails. Now, what you're talking about is that you provide guardrails.
E: Right. Yes.
S: So those are two different things. So, as you said, the AI may not make people request cheating more, but it's more likely to do it and not ask any questions.
E: And that was. Yes, great idea.
S: Let's do that, you know.
E: Yep, the guardrails, and that is kind of the point. The authors of the study also definitely point this out: better guardrails need to be incorporated into these systems to protect people from, basically, themselves. And I think the tax-reporting example is a good, practical one that a lot of people can understand, of how they can be led astray in a sense and get themselves, frankly, in trouble this way. So again, the data showed delegation to AI lowers psychological barriers to unethical behavior, and the effect is strongest when instructions are vague or high-level. I don't think any of that's surprising, and the AI systems at the moment are more compliant with, say, unethical requests than humans are, at least in this data set. Now, what about the headline, though: using AI makes people unethical? That's an oversimplification. It definitely needs more nuance; we've talked about misleading headlines and things like that. So that's a tough one to swallow right there. Maybe they should have said something like, delegating to an AI can increase dishonest requests, especially with vague interfaces. That might have been a more accurate headline, even though it's a subtle difference, but still a pretty important one as far as I'm concerned. And again, we need to design systems that minimize moral wiggle room, and we need accountability mechanisms to keep people in the loop. So an interesting study, definitely informative, but never go by just the one study, and always read a little deeper into it.
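A minimal sketch of the die-roll reporting task Evan describes above (an editor's illustration, not code from the study): the payoff is assumed to simply equal the reported number, and the two delegates are stand-ins for the rule-based versus vague goal-based instructions.

```python
# Editor's sketch (not code from the study): a toy version of the die-roll
# reporting task. The payoff is assumed to simply equal the reported number.
import random

def roll_die() -> int:
    """Roll a fair six-sided die in private."""
    return random.randint(1, 6)

def honest_delegate(true_roll: int) -> int:
    """Rule-based delegate instructed to 'report it honestly'."""
    return true_roll

def goal_based_delegate(true_roll: int) -> int:
    """Vague goal-based delegate told to 'maximize my profit': it inflates freely."""
    return 6

def average_payout(report, trials: int = 100_000) -> float:
    """Average reward when the given delegate does the reporting."""
    return sum(report(roll_die()) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("honest delegate:", average_payout(honest_delegate))          # ~3.5
    print("goal-based delegate:", average_payout(goal_based_delegate))  # 6.0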
S: All right. Thanks, Evan. Cara, thank you.
Scams and Fraud (1:01:03)
- Scams and frauds: Here are the tactics criminals use on you in the age of AI and cryptocurrencies [5]
S: Yes, tell us about scams and fraud.
C: So we often talk about scams and fraud on the show, and there's a new article in The Conversation published by Rahul Telang, professor of information systems at Carnegie Mellon. He writes about scams and frauds in the age of AI and crypto, because of course we see this all the time, whether we're talking about frauds to make money or pseudoscience: it's the same rhetoric, just repackaged with whatever today's sort of zeitgeist allows it to be. I don't know, I know this is an aside, but I don't know if you guys were following all of the, like, rapture stuff on TikTok this past week. And I was like, God, this is so old hat. It's all the same stuff, except because it's, like, Gen Z people who are talking about it, there's a very modern spin.
E: Maybe their first time hearing about it.
C: Yeah, this.
E: Is this a regularly occurring thing?
C: Exactly. Well, I mean, it's just that these things keep getting repackaged over and over with whatever the technology of today is, and the technology of today is AI. And so the professor who wrote the article talks about emotional tactics. First of all, he talks about things like duty, fear and hope. He says that most scams work by targeting an individual's sense of duty, fear or hope. So duty refers to, you know, if you're an employee and your employer asks you to do something, you feel a sense of duty to do that. Fear is the idea that maybe somebody is telling you that a loved one, or somebody that you really care about, is in danger, so you need to do something to help them. And then hope is often, you know, investment scams or job-opportunity scams. He talks in the article specifically about AI-powered scams and deepfakes, and then after that, cryptocurrency scams, both of which are, like I mentioned, repackagings of age-old, you know, scammery. There's got to be another word for that, right? Swindling, what? What are all the words we often use? Flim-flammery. Age-old flim-flammery, right, but repackaged for a modern era. So we've talked about this before. I know, Jay, you've covered AI and AI deception quite a lot.
B: Yeah.
C: So we've got to remember that this is not an "in the future, this could happen" thing; this is happening right now. A little bit of statistical data here: well over 100,000 deepfake attacks were documented back in 2024, and in just the first quarter of this year, 2025, individuals who were swindled (so these are people who actually reported it) said that they were swindled out of 200-million-plus dollars. And this is all from individuals using AI-generated audio or video to impersonate other people.
US#03: No, yeah.
C: So whether it's, hey grandma, I'm in trouble, I'm overseas and I really need some money because I lost my passport, or it's, hey worker, I'm your CEO and I need you to do X, people are falling for them very often. There are different kinds of ways that they go about it. So we talked about fake emergencies; that seems to be one of the hardest ones to fight against, because there's so much emotional manipulation and it's a lot harder to check against the fraud. But we do also see tech-support scams happening a lot in corporate settings, where somebody will get a pop-up on their screen that says either there's a virus or there's some sort of identity theft and you need to call a number, or somebody will get called directly from a number. And then while they're on the phone with "tech support," they'll be told, OK, I'm going to take over your computer. And you guys have all done this at your actual jobs, right? When something's wrong with your computer, the tech support at your job will be granted remote access to fix the thing. But when it's a nefarious actor and not actual tech support within your job, they can install malware, they can steal a lot of information; I mean, so many things can happen. There are also examples here of fraudulent sites that impersonate, like, ticket sellers or universities, or people being offered fake jobs and then having placement fees taken from them or having personal data stolen. But they also talk about crypto scams. And I've got to admit, Jay, you may know all of this terminology, but a lot of this was new to me. Like, do you know what a pig butchering scam is?
J: I actually don't. What is that?
C: OK, so it's a hybrid. It's sort of a crypto scam, it often involves crypto, and then it's usually some sort of romance scam or catfishing scam; sometimes it can involve investment fraud. So basically, the scammer builds trust over weeks, months, maybe even years with a victim, because they're either, you know, supposedly dating them or they're investing a lot of time in them. And eventually they have them invest in a fake crypto platform and then extract a bunch of money and vanish, or they otherwise have them send money, usually using crypto, because crypto is hard to trace, right? And there's really not a lot of recourse; if somebody exploits you using crypto, you can't really do a lot about it. It's not FDIC-insured money. Also, there are pump-and-dump scammers. You've probably heard of that; we often think of it in terms of the stock market, but the scammers will artificially inflate the price of, like, a crypto coin that's not really worth a lot by hyping it up on social media. So they'll get a bunch of investors, and then the minute that people start buying it like crazy, they just dump their holdings, right? So they pump and then they dump, and everyone else ends up holding all of this worthless crypto. And then finally, the author talks quite a bit about phishing scams. So, you know, we just had a science or fiction about that. And also, have you guys heard of smishing?
J: Smishing.
C: I feel like this is just like this is just a thing.
J: I do that with my wife.
C: Right. Like, I feel like this is something that isn't just made up, because there's an FCC article about it. Because I was like, what is smishing? And I googled it, and the FCC is writing about smishing. Basically, smishing is just a portmanteau of phishing and SMS, right, or text messaging. So it's phishing via text as opposed to via e-mail; phishing, I guess, is specifically an e-mail scam, and smishing is the text-message version, but those are rising all the time. And because of tools like AI, whether we're talking about artificial voices, making artificial videos or manipulating imagery, it's just cheaper and easier to do now. So you have these sort of scam farms, these huge organizations that are able to do this and exploit victims cheaply, easily, and then vanish just as quickly as they arrived. So, we've talked about this before: how do you protect yourself? Well, we know that, like, what did we just talk about, Steve? Third-party apps, you know, using two-factor authentication, any sort of additional security that you can use, making sure that when you're on a website, it's legitimate. But honestly, that's getting harder. Like, back in the day, you could almost be like, you fell victim to a phishing scam? That's embarrassing for you. Did you not notice that it was "EBORC" that was asking you for, like, you know, some payment? But now people are cloning whole websites and they look the exact same, and they're even cloning internal company videos, or sounding like the company's CEO, and it's coming from emails that look the same. So it's getting harder and harder to recognize. But of course, don't click on suspicious links, don't download attachments from people you don't know. Like we said, enable two-factor authentication. Remember that most legitimate businesses are not going to ask you for information, and they're definitely not going to ask you to send them money. It does seem to be the case that the pig-butchering-type scams and the personal-relationship-type scams are just a lot trickier. But more and more, we're seeing organizations and governments posting information on what to do, how to avoid it, and, if you do feel like you're involved in a scam, who to reach out to, like the FBI. Again, this is age-old fraud. It's all the same stuff that always happened: a swindler is going to swindle, and you've got to protect yourself. But in the age of AI and cryptocurrency, they can do it faster, easier, cheaper, more efficiently, more effectively, and without a trace. And so we've just got to remember that if we are victims of these types of scams, we probably have less recourse. And kind of gone are the days of, fool me once, you know, shame on you, because I think a lot of people can be fooled pretty readily, even very savvy people. So you've got to get your hackles up. You've got to stay skeptical.
S: Absolutely. Yeah, you're right. I mean, even as, you know how much radar I have up for this all the time, every now and then I still almost click things I shouldn't click.
C: Mm hmm. Of course, because they seem to be coming from a legitimate source.
S: The timing is coincidental; that's usually what gets you. But the thing is to realize that there's so much going on, you're going to get that incidental timing every now and then. You know, like I just did something and then I get an e-mail that might relate to that thing.
S: And it's just specific enough where, you know, you think, oh yeah, this is the follow-up to that thing that I just did. But wait a minute, is it, you know?
C: And that's the ploy, right? Because if scammers can send out thousands, hundreds of thousands of these emails, yeah, somebody's going to click.
S: It's terrible. I mean, I also just think, you know, relying on everybody doing the right thing every time is not a good strategy, just statistically speaking, because they just overwhelm the statistics by flooding the zone with scams, you know? And so that's the world we're living in, where we're constantly being bombarded with attempts at stealing our information and stealing our money. Who wants to live in that world? There has to be something we could do on the infrastructure side to just lower how easy it is to mass-produce scams.
J: Steve, I hate to say this, but the political will has to be there.
S: Yeah, of course.
J: This and it's not, yeah.
C: And I think that, you know, organizations that are offering us the products, you know, the banking products that would allow us to be scammed, they need to see that there is a capitalist incentive to help protect us, right? I would much rather use one of my credit cards online than a debit card, because I know that if somebody steals my credit card, I have protection with a credit card that I don't have with a debit card.
S: I think banks, especially online banks, are getting very careful. I've been recently dealing with that, and I had to download an authenticator app that exists solely to be another layer of authentication for these types of interactions, and that's fine. I'm doing basically three-factor authentication now.
C: Yeah, same.
S: Gosh, whatever.
C: One of my banking apps is just as intense as my hospital records app. Yeah, like it's amazing, but I mean.
S: It's annoying, but I'm like, OK, it's like all right, here's my 2 licenses. Here's like all this paperwork I have to prove who I am. Like all these things. It's like, OK, I get it though. It's a bank, you know.
C: Yeah. And we get why. Yeah, we're talking a lot of money. And the truth of the matter is, I think we have to be more vigilant, and yes, be more vigilant with clicking links and all of that, but also with your actual information. You know, in the past I might have been that person who didn't really look at the receipt before I signed it. But now I'm the kind of person who uses software, both for my personal banking and my business banking, where every few days I'm going in and I'm reconciling each transaction line, and I'm constantly looking to make sure that everything is up to date.
J: Are you finding any weird stuff?
C: No, I mean, if anything it's just making me a better bookkeeper; every time there's weird stuff, it's user error.
US#03: I'm all for that.
J: Yeah, yeah.
C: It's like, yeah. You've got to look at...
J: You've got to look at...
C: Yeah, it's because I've listed something as a transaction, this kind of transaction, but it should have been an asset, and blah, blah, blah. But I'm learning a lot. And yeah, it is definitely helping, because the quicker you can figure these things out, the quicker you can try to do something about it. But I have a feeling the numbers that are reported are exceedingly low.
B: Yeah, it's probably 10%. That's right.
C: Embarrassing. It's embarrassing to say, you know, I fell victim to somebody who pretended to be my grandson, who was, you know, stranded and needed cash from me. And I gave him cash really quickly. Like what a bummer.
E: Yeah, the elderly are high targets.
S: Yeah, they're targets because they're not as savvy, and sometimes they just have mild cognitive impairment or, yeah, whatever. They're more isolated than not, you know.
C: And they're more likely to be emotionally manipulated into helping people who depend on them. The older you are, the more likely you are to have people who depend on you because you might have children and your children have children.
E: So we need to watch out for our parents as well, or, you know, whoever our elders are. Totally. We have to be part of that team to help them.
C: Yeah, but don't think you're immune if you're young, because you're not.
E: No, none of us are.
S: All right. Thanks, Cara.
Who's That Noisy? + Announcements (1:14:50)
S: Jay, it's who's that noisy time?
J: All right guys, last week I played this noisy. Okay haha, everybody knows what that sounds like I got.
E: I'm glad you.
J: But I, you know, I got tons of emails. Like people are like, it's someone peeing in an airplane flying over New Mexico. It you know, it's like that.
E: Wasn't New Mexico.
J: It's funny, I know, but it's not what it is, and I would never do a noisy of someone peeing. Unless it sounded really cool, though. It's funny, I got you guys, thanks, but I did get some legitimate guesses. Oh, if you guys could only be a fly on the wall for the wacky emails I get, right? But before I get into that, I'm going to do a correction of a noisy from a couple weeks ago. Remember the one I explained to you: it was a recording of someone who spoke out loud in a room, they recorded themselves, then they uploaded that sound file, then they downloaded it, then they played it open air again and uploaded it again. Whatever. OK, it's a little complicated, but basically there was massive distortion going on over the iterations, to the point where you couldn't understand anything anymore. OK, so that was Alvin Lucier's "I Am Sitting in a Room" piece, right? So the person that wrote in said, well, many people wrote this in, but this person in particular said: so he's continually re-recording a playback of his own voice, and the resulting degradation of the sound is less a case of media lossiness, right? Meaning, when I described it, it was that every time they uploaded it, the algorithm inside of, like, YouTube would lose a little bit of data, and it would get really messy if you did it like 100 times, right? But that's not really it. The real thing that's going on is that the room he was in was of a particular size and geometry, and it caused certain resonant frequencies to be emphasized in the playback while others are attenuated, right? Every room has acoustic signatures like this, where certain things bounce more readily depending on the objects and the surfaces and all that stuff. So the end result is that the recorded voice gradually morphs into, like, a natural resonant frequency of the room. It wasn't an artifact of the uploading and the algorithm that would be processing it. And if you play the full original recording of that person's voice, he's actually explaining it; in the original recording of him sitting in the room, he's telling you exactly what's happening. I never listened to the whole thing, because I was listening to it more as a noisy and not as a piece of information. So anyway, there it is. It's even more interesting now, because it's not just software losing data; it's the acoustics in the room and the effect of those acoustics on the recording, which I think is fascinating. All right, so now back to the noisy that sounds like people peeing. So of course Visto Tutti had to chime in here. He said: this one reminds me of the sound of tropical rain going down a big drain pipe. I've heard similar sounds in Thailand, where it can pour down like God himself has been drinking beer. So you are incorrect, sir. But then I got another person that wrote in; this is a listener named James Joyce. And James says: hey there, Jay, my bro, I'm probably way too late, but I'm going to take a crack at Who's That Noisy anyway. This week's noisy is a spacecraft: the spacecraft Ingenuity, the helicopter on Mars that went with the Perseverance mission. That is not the helicopter, but I do understand why you selected that. I have another person that guessed; this is Karen Good, and Karen says: this week's noisy sounded like water to me, but it also had a high-pressure sound. I didn't like that.
It reminds me of a drill, or the high-pressure water plaque remover that's used by dentists. You know, that thing they stick in your mouth; it's like a water pick, right? You guys know that? Yes. But she said it sounds bigger than that, so her guess is a high-pressure water cutter in a shop, like a saw. And she points out that with enough power these can cut through metal. Definitely; I've seen it lots of times, and it's a really cool sound. But that is not correct. I have a listener named Sierra Asher, and Sierra says: hi Jay. And he identifies himself as a man, because depending on what culture you're from, Sierra might not be a male name. He's from Melbourne, Australia, where, he says, cafes with espresso machines are everywhere. This week's noisy sounds to me like milk being frothed and heated by the steam wand of an espresso machine. I do that at home; my wife and I are coffee fanatics and we have an espresso cappuccino machine, whatever you want to call it, and we do that all the time. There are definite similarities, I totally see it, but you, sir, are not correct. And look at this, I have another listener from Australia. This person is Mark Penny, and he says: good day Jay, I'm no Visto Tutti, but to me this sounds like thousands of bats leaving a cave at night. And he says he's looking forward to Australia 2026. Mark, you are not correct, but I do know what you're talking about, because the bats flap their wings and there could be, like, a staccato type of thing happening, for sure. And regarding Australia 2026, just so everybody knows, it is fully, fully, fully going to happen. It's completely in the works. We have purchased airline tickets. I am finalizing details with the Australian conference, which is going to be NOTACON, right? So let me just quickly explain this while we're in the middle; it's like a break in Who's That Noisy. The conference is going to be in two places. First, it's going to be in Sydney. That conference will start on the 23rd and it'll go to Saturday the 25th. This is a NOTACON, guys. This is a NOTACON that we're running in Australia. This is an SGU conference that is being hosted by the Australian Skeptics, so we're working in coordination with them. But just to make it clear, it's not going to be like any of their other conferences. It's going to be exactly like NOTACON; if you went to NOTACON, that's what it's going to be. If you haven't, it's going to be us, like all the SGU; George Hrab and Ian will be there, and Brian Wecht and Andrea Jones-Roy. We are NOTACON, and we will be there. And then the following weekend, we will be going to New Zealand, which I'm working on right now. I'm working with Johnny from New Zealand, who's part of the New Zealand Skeptics. That's right, Johnny. And we're going to be picking the location and all the details, everything to be announced soon. But tickets will go on sale for the Australian side of this hopefully, if I can push hard enough, maybe within a week. I'll keep you updated. Anyway, thank you, Mark, for writing in. And again, no winner; nobody guessed it. It's not an easy one, guys, but I'm going to tell you what it is. This is simple: this is molten metal being poured into cold water, which I was surprised nobody guessed, because, without exaggeration, I must have had 100 people e-mail me one variation of this noisy or another. But I finally got one that I thought was a really interesting version of it.
So it's a dynamic sound, because lots of things are happening. First of all, it's a liquid metal, so when it hits the water there's immediately a burst of steam, and you're also hearing the metal itself entering the water. So it's complicated; it has a few different things going on. If you haven't heard it in person, go watch a video of this and you'll see it. There's an interesting little change to the sound; it's not like just dropping coins in the water, it has its own effect. It kind of reminds me of the difference between pouring cold water into a cup and pouring hot water into a cup. You can hear the difference; hot water makes a different sound than cold water. You guys remember that?
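A minimal sketch of the room-resonance effect behind the "I Am Sitting in a Room" correction above (an editor's illustration with a made-up two-peak room filter, not a real measured impulse response): each re-recording applies the room's frequency response again, so its resonant peaks gradually dominate.

```python
# Editor's sketch of the "I Am Sitting in a Room" effect: re-recording audio in
# the same room applies the room's frequency response again and again, so its
# resonant peaks come to dominate. The two-peak "room" below is made up for
# illustration, not a measured impulse response.
import numpy as np

fs = 8000                          # sample rate (Hz)
voice = np.random.randn(fs)        # stand-in for one second of speech (broadband noise)

# Toy room response: gentle resonances near 300 Hz and 1200 Hz.
freqs = np.fft.rfftfreq(fs, 1 / fs)
room = 1.0 + 2.0 * np.exp(-((freqs - 300) / 40) ** 2) + 1.5 * np.exp(-((freqs - 1200) / 60) ** 2)
room /= room.max()                 # keep the filter passive so levels stay bounded

x = voice.copy()
for _ in range(30):                # record, play back in the room, record again...
    x = np.fft.irfft(np.fft.rfft(x) * room, n=fs)

spectrum = np.abs(np.fft.rfft(x))
print("dominant frequency after 30 generations: %.0f Hz" % freqs[spectrum.argmax()])
```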
E: Nope. No. Yes.
J: All right, don't get too excited. I got a new noisy for you guys. This week's noisy was sent in by a listener named Justin Fisher. If you guys think you know what this week's noisy is, or you heard something cool, e-mail me at wtn@theskepticsguide.org. If you guys watch our live streams on Wednesday, Bob, Steve and I recently demoed a video game that a friend of ours and a supporter of the SGU, his name is Alex, and his team created, called Platypus Reclayed. And we're trying to help him, because he's got a small gaming company, they're a bunch of skeptics, and we just thought it would be cool to help him promote his game. So first of all, I just want to tell you real quick, it's called Platypus Reclayed, and the cool thing about this game is it doesn't have computer graphics at all. It's all handmade clay.
S: Yeah, it's cool looking.
J: It's really cool. You've never seen anything like it. Every frame of it is clay that they've molded into different positions, so it's an incredible amount of work, an incredible attention to detail; that alone is worth checking out. It's a side scroller. I've played this game quite a bit at this point, and it is a ton of fun.
S: It's a good, simple game. It's a lot of fun. Absolutely, yeah.
J: I think it would actually be a good game to play as a parent with your younger kids, because it's accessible to them and it's accessible to you as the parent; you can actually play it, because they have different levels of difficulty and everything. And it's interesting because there's lots of different options in the game, and you've just got to see it. It's got really cool parallax; Bob was freaking out about the multi-layered parallax. The bottom line is, we want to thank Alex for his support and we want to help support their video game. So anyway, if you end up taking a moment to play it and you like it, leave them a good review, because that helps more people find them. So anyway, very cool game, and I hope you guys enjoy it.
S: Jay, didn't he say that they're including some kind of SGU shout-out in the game?
J: Yeah, so that was a little secret, but OK, he spilled it. So he is going to put some SGU Easter eggs into the game, which, I don't even know what he's going to do. I mean, God, when he said it, I just thought, how cool would it be if the ship shoots Steve's head out as the weapon? That would be really fun. All right, anyway, if you have the time, go check it out: Platypus Reclayed.
S: And Jay, even though we're going to Australia next year, they are having their 2025 conference October 4th to 5th at the University of Melbourne, Parkville. You can go to skepticon.org.au to check it out and get tickets.
Emails (1:25:46)
S: All right guys, I'm going to do a quick e-mail. This is a follow-up to Bob's news item last week about nuclear propulsion. We were talking a little bit about hydrogen as a propellant, and some people emailed in for some clarification. So, one bit of background, right: sometimes it gets confusing, and Bob and I had to make sure we were consistently using the right terminology here. For rockets, something could be a fuel and/or a propellant, right? Usually, if you're burning hydrogen with oxygen, the result of that combustion is the propellant as well, right? So it's the fuel and the propellant. But with the nuclear system, the nuclear reaction is the fuel, and the propellant is not the fuel; it's just the propellant. So that's what we were talking about. Hydrogen is a great fuel because it's very light, and so you get the most change in velocity, you know, delta-v, for the mass of fuel, which for rocketry is the big deal. The question I had, though, was: is it a good propellant alone? Because it's very light, so you don't get that much inertia out of it. But what a couple of people pointed out, and I'll just read the one e-mail from Matthew, who said: hydrogen is a great propellant if you are optimizing for ISP (specific impulse). Within the combustion chamber at a given temperature, the average kinetic energy of the molecules is equal irrespective of the type of gas. If the gas is made up of lighter molecules, those molecules will be moving faster. Faster molecules lead to faster exhaust velocity. Faster exhaust velocity leads to higher ISP. Higher ISP leads to hate. Hate leads to suffering.
E: Thank you, Steve. Oh my gosh, I was about to say that. Whoa.
S: So that was in his e-mail. So Matthew gets the Star Wars nerd points for that.
E: I wasn't even reading it, and that's exactly where my mind went.
S: And that leads to the dark side. OK, so essentially, yes, it's lighter, but it goes faster. So the temperature is really the key determining factor, right: heavier molecules go slower, lighter molecules go faster as propellant at a given temperature, and so it kind of evens out. Now, it's way more complicated than that; it's all kinds of gas dynamics, a lot of complicated equations. It's not simple like that, but that's the general physics principle. The other thing that is interesting, though, is that for hydrogen as a propellant, really the main downside is volume: liquid hydrogen doesn't condense down as well as other propellants might, you have to keep it very cold, and it is very corrosive. So it's just not a great propellant for that reason, right? It takes a lot of technology and infrastructure, and it's very tricky to deal with.
B: It's corrosive? I wasn't aware of that.
S: Yeah, and it's hard to contain too, because the molecule is so small. It kind of leaks.
B: It can get through it. Leaks a lot.
S: Yeah.
B: Yeah.
S: All right.
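A quick numeric illustration of the kinetic-theory point in Matthew's e-mail (an editor's sketch, not from the show): at a fixed chamber temperature, mean molecular speed scales as one over the square root of molar mass, so hydrogen exhaust leaves much faster than heavier exhaust, which is what drives up specific impulse. The temperature here is an assumed illustrative value, and this is only the thermal-speed scaling, not a full nozzle model.

```python
# Editor's sketch of the kinetic-theory point in Matthew's e-mail: at a fixed
# chamber temperature, rms molecular speed goes as sqrt(T / molar mass), so a
# light exhaust gas like H2 leaves much faster than a heavy one, which raises
# specific impulse (Isp ~ exhaust velocity / g0). The temperature is an
# assumed illustrative value, and this ignores nozzle and real-gas effects.
import math

R = 8.314            # gas constant, J/(mol*K)
T = 2500.0           # assumed chamber temperature, K
G0 = 9.81            # standard gravity, m/s^2

molar_mass = {       # kg/mol
    "H2 (hydrogen)": 2.016e-3,
    "H2O (steam)": 18.015e-3,
    "CO2": 44.010e-3,
}

for gas, M in molar_mass.items():
    v_rms = math.sqrt(3 * R * T / M)   # root-mean-square molecular speed
    print(f"{gas:15s} v_rms ~ {v_rms:5.0f} m/s   (rough Isp scale ~ {v_rms / G0:4.0f} s)")
```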
Name That Logical Fallacy (1:28:54)
Topic: Hi, SGU! I came across the following fallacy used by Douglas Murray and Mosab Yousef in debates against critics of the IDF: "Unless you've been there, you cannot express an opinion on the issue, and since I've been there, I have more credibility than you." Someone made fun of that argument by saying: "Katy Perry therefore knows more about space than Stephen Hawking, because she's been there and he hasn't." I can't quite pinpoint if this just an argument from authority, or if there's something else to it. Max
S: I'm also going to do a quick Name That Logical Fallacy while we're at it. This one comes from Max. He writes: Hi, SGU. I came across the following fallacy used by Douglas Murray and Mosab Yousef in debates against critics of the IDF: unless you've been there, you cannot express an opinion on the issue, and since I've been there, I have more credibility than you. Someone made fun of that argument by saying: Katy Perry therefore knows more about space than Stephen Hawking, because she's been there and he hasn't. I can't quite pinpoint if this is just an argument from authority, or if there's something else to it. Max. What do you guys think?
C: Unless you've been there, is that moving the goal post?
S: No, I think it is an argument from authority; it's just kind of a tangential one, in a way. I remember Joe Nickell, when he would do investigations, he would always go to the place that he was investigating, even if it gave him zero information, just so he could say he was there, because he knew that people use this logical fallacy. So for example, he was writing an article about the Bermuda Triangle. You gain absolutely no information by actually going to the Bermuda Triangle unless you.
B: Get you eliminate their.
S: But that's just to say, right. He always did it, because I remember I went on a couple of investigations with him, and he's like, take a picture of me in front of the house. Like, why? Because I'm here; I have to prove that I was here. Otherwise people will say, well, you didn't even go there, so how do you know what's going on? Which is what this is. So yeah, it's a total logical fallacy. It's, again, kind of a non sequitur, but it's just saying your argument is not valid because of something about you, or your argument is valid or more valid because of something about you, rather than the argument itself. So that's sort of the broad umbrella of the argument from authority. In this case, it's not even genuine authority; it's just, were you physically there or not, even when it doesn't matter for your opinion. It's one thing to say, well, you didn't see something yourself, and so that kind of diminishes your opinion. Like, if we're talking about how wondrous the Grand Canyon is, and I say, well, did you ever see it in person? Like, no, I saw pictures of it. Well, you really do get a different impression of it if you see it firsthand.
C: I tried to say that to you guys before the eclipse, I remember. I was like, you just don't know, even when you've seen a partial.
S: I intellectually believed you, but until I saw it myself, I didn't appreciate it. You can't fully, 100%, right? You have to see it. But this is different. I do think the Katy Perry example is perfect. Like, you don't understand space any more because you went up in a rocket, you know, and Stephen Hawking's knowledge about astrophysics is not diminished because he was never in space.
E: Yeah. Maybe there are other reasons why a person doesn't understand something, but that's not...
S: But that's not one of them.
B: And it's so silly because it's, it's so broad. That statement's so broad. I mean, what you could say is that she understands what it's like to launch in that specific rocket. Sure she does, but that's about it.
S: Going to a sub orbital, sub orbital orbital.
B: Was that intentional?
C: I do think you can say something like, you know, I hope that you will understand that my perspective on the issue is different than your perspective, because I have experience, right? And I think that makes sense, right? Like, I do have a different perspective; but not, I have more intellectual knowledge than you.
S: Or your opinion?
E: You're deficient because.
S: You should defer to my opinion because of whatever tangential relationship I have with the topic.
E: Yeah. All right, this is like, right, pilots who, you know, say they have seen UFOs and things like that, right? Oh well, you're not up there in the air, in the cockpit.
S: Well, they're going beyond it. They're saying they have special perception skills because they're pilots. That totally is an argument from authority.
C: But what about, I guess here would be a question and tell me if you think that this is parallel. Because an example I can think of is if a person, let's say like a white person tries to make a racial argument about what about the experience of a black person? And then a black person says you don't know what it's like to be black. Like you, your opinion on this is not valid.
S: Yeah. I mean, I think there are limits to that, though. It depends on what they're talking about. I think you can understand racism intellectually, and you could make valid arguments that are logical and evidence-based that deal with that, even if you were not personally involved. But you do gain a perspective; like, you don't know what it's really like until you've lived it. That's valid.
C: And I think the issue is that very often what we'll see happen with sort of intellectual dark web types is that they'll try to make intellectual arguments to counter lived-experience arguments, to minimize the lived experience and say, no, I know better than you because look at the data. And that person is like, yeah, but I've lived this life. I know what it feels like to have microaggressions committed against me. But it goes both ways.
S: But you shouldn't say, I've lived it, therefore I can make up facts about it, and your statistics are wrong because I don't believe your statistics. You could make it a logical fallacy either way, which is often the case; these are informal logical fallacies, and it all depends on exactly how you're formulating your claims. Right, and it's not a simple formula; some arguments from authority are legitimate, some are not legitimate. It depends on the details.
C: And I think just this idea of "I know more" is such a vague statement; that's the important thing, right? I know more because of X; OK, let's be specific about that. I have an experience that you don't have, therefore, you know, X, Y and Z; or, I have studied this intellectually, I have a PhD in this, therefore...
S: Yeah, like, I just got into an argument in the comments on my blog about autism, and somebody has no idea what they're talking about. Bottom line is, they don't know what they're talking about, and they're just throwing up, like, one link to one study. I'm like, dude, I have surveyed the literature on this. I've been writing about this for 20 years.
E: Yeah, you know, swim. You swam those waters.
S: Yeah, I'm telling you what all the evidence shows; you're just cherry-picking this one study. You have no way to put it into context. You just don't know what you're talking about. That's different, you know.
C: Oh, you know, a perfect example of this: I have a very dear friend who's a mom of a young child; she's not a young mom, she's an older mom, she's my age. And she struggles with, shall I say, boundaries with her child. And often I bite my tongue and I don't say anything, because I don't have children, right? It's not my place to judge; it's not my place to give advice, because I don't have children. But there are times when she might say, yeah, but you shouldn't blah, blah, blah. And I'll be like, well, I am a psychologist who treats people in family dynamics, and I do have specialized knowledge about parenting styles and about outcomes for children. And so it's one of those really tough things where it's like, no, no, I have intellectual knowledge, she has experiential knowledge. Sometimes my intellectual knowledge is more valid in that setting, but sometimes her experiential knowledge is more valid in that setting. Exactly.
S: It cuts the other way too, exactly what you're talking about, where I as a parent, you know, think about people who are too young, or haven't had their kids yet, or for whatever reason don't have kids, being judgmental about parents. It's like, you know, until you've had to deal with kids, you have absolutely no basis to be judgmental. That doesn't mean that you can't have an opinion about, like, beating your kids, you know, but I'm just talking about the, oh, I would never let my kid do that. It's like, yeah, talk to me when you've had kids.
C: Right, right. But at the same time, when somebody says, I don't know why I just keep doing this and this keeps being the outcome, it's like, well, because... There's evidence to help us. Data show... It's tricky, yeah.
S: OK, let's go on with science or fiction.
Science or Fiction (1:37:20)
| Answer | Item |
|---|---|
| Fiction | A recent study finds that despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices |
| Science | In the first such study in Germany in almost 50 years, a mandatory speed limit of 120 km/h (75 mph) would result in a 26% decrease in crashes with severe injuries |
| Science | Scientists have demonstrated a quantum sensor able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle |
| Host | Result |
|---|---|
| Steve | loss |
| Rogue | Guess |
|---|---|
| Evan | AI-generated voices item |
| Cara | AI-generated voices item |
| Bob | AI-generated voices item |
| Jay | Quantum sensor item |
Voice-over: It's time for science or fiction.
S: Each week I come up with three science news items or facts: two real and one fake. Then I challenge my panel of skeptics to tell me which one is the fake. Just three regular news items. You guys ready?
J: OK.
S: Oh, yeah, here we go. Item number one: in the first such study in Germany in almost 50 years, a mandatory speed limit of 75 mph would result in a 26% decrease in crashes with severe injuries. Item number two: scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle. And item number three: a recent study finds that despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices. Evan, go first.
E: OK, first such study in Germany in almost 50 years. OK, a mandatory speed limit of 75 mph. Unusual that they're using MPH but.
S: That's, well, it actually is 120 kilometers per hour; I should say that too, but I think it translates to 75 miles per hour.
E: OK. Would result in a 26% decrease in crashes with severe injuries.
S: Right now there isn't any.
E: So we're talking Autobahn?
S: Yeah, there is no speed limit.
E: Oh boy, I I that sounds right. I'm not sure where the trick would be here on this particular one, but this makes sense to me.
C: And can I ask for clarification? When you say speed limit, you mean upper speed limit? Yeah, You don't mean minimum speed limit.
S: Oh. Yeah, Upper.
E: Yeah, yeah, right. Maximum speed limit, I suppose. Yes, a 26% decrease. OK, I'm buying that one. The second one, about scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle. And I'm sure there's a reason why it's called the Heisenberg uncertainty principle.
S: But do you want to know what that is?
E: Yes, please.
S: So the Heisenberg Uncertainty principle is a law of quantum mechanics, basically that says that there are absolute limits to how much you could know about linked properties. So like position and momentum. So if you're studying a particle, the more you know about its position, the less you know about its momentum, and the more you know about its momentum, the less you know about its position. And you could mathematically calculate like how precisely you could know each of those factors.
B: If you know one with certainty, you can know nothing about.
S: The other yeah, basically.
E: Got it. 100% one, zero percent the other; like, it's a zero-sum game.
S: A zero-sum game, yeah.
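For reference, a minimal statement of the relation Steve is describing, in standard notation (an editor's addition, not from the show):

```latex
% Position-momentum uncertainty relation (standard form):
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```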
E: Right. So they've just demonstrated a quantum sensor able to determine the linked properties. Well, I don't see why that's... you know, I mean, you had a news item earlier, Steve, about quantum computing and advances there. Why couldn't they have developed a quantum sensor able to determine this? I'm not sure I have a problem with that one either.
B: Don't just blithely... I'll shut up.
E: Thank you, thank you, that's all I needed. #3: a recent study finds that despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices. People are still able to distinguish, recent study, despite advances. Oh, well, this is Cara's news item, right? Weren't we just talking about this? How they're using AI to trick people because they can't determine, you know, the grandchild is calling the grandmother; the grandmother isn't going to know between AI and human in certain cases. And this technology is getting better, and it will continue to get better. All right, I'll say the AI one is the fiction. I have a feeling that in more and more cases, they weren't able to make the determination between the two. How's that?
S: OK, Bob.
B: The Germany one, I mean, what are they, are they changing it? This is Autobahn territory, right? I mean, we're dealing with unlimited speed. Are they saying that if you take the unlimited speed limit down to 75, then we're seeing this? I'm not sure of the context.
S: OK, that's correct.
B: I mean, yeah, that doesn't sound entirely unreasonable. Of course, the second one got my damn attention here, this quantum sensor. Steve, I know you knew I'd be all over this. I'm not going to fall for this one; they're doing some trick. I mean, normally this should not be possible; this is pretty fundamental. But they're doing something that is probably not removing the uncertainty; it's just shifting it somewhere, something that makes sense. I'm not sure how they would do that, because, like you said, these are linked, but it's some trick that they're doing here. That's what I'm thinking is happening. So for the third one, I think this is baloney. I think this one's the fiction here. Let me see, let me make sure I'm not yet again missing a critical word in this thing here. Are we studying...
E: They developed a sensor there.
B: People. Yeah. People are still able to distinguish in many cases between... Yeah, I'll say that this one's fiction. I mean, I've heard some really great stuff. I don't know what the cutting edge is right now, but what I have heard was, you know, fairly convincing. Oh, wait, question then, Steve: is this, like, here's a voice, is this AI or is it real? Or is it like, here's your brother Jack, is this him? You know what I mean? Is it a voice you know?
S: It was both.
B: All right.
S: They did AI voices not based on any person, and AI voices that were trying to mimic a specific person.
B: OK, I mean, I've heard some done like for you, Steve, and, and it wasn't perfect. I mean, it seemed like I could tell the difference, but that was like, what, a year ago? I think they're probably good enough where people are not going to easily detect that with any reliability. So I'll say that's fiction #3.
S: OK, Jay.
J: Yeah, I mean, the one about Germany and the speed limit, right. So they're saying that they're going to change it to 75, and that would result in a 26% decrease in crashes. How can that not be science? I just can't imagine that decreasing the speed limit wouldn't result in lowering crashes. I guess the real number here is the 26%. All right, a good question here would be, how fast were people typically driving on these roads? You know, I just think the science is too much there. Going to the second one, about the Heisenberg principle, I mean...
E: It's a Heisenberg.
J: It's the Heisen... you know, I mean, how the hell could they possibly do it, right? I agree with what Bob was saying, about, you know, when you know more about one parameter, the information on the other one decreases. I can't imagine a way for them to get around that. I mean, I'd like to think that they could; that one just seems a little too obvious that it's the one. Going to the third one: a recent study finds that despite recent advances, people are still able to distinguish AI-generated voices and human voices. See, I agree with this. This could be the toupee fallacy, but I know I can do it. If you played a recording for me, there's lots of little subtleties that are in there, and because I've made extensive recordings of all of us, you know, AI recordings, I know what those little nuances are that it gets wrong.
E: I'm an AI right now.
J: Can you hear what I'm saying right now? So, I mean, I know your voice better than most people's voices in my life, but the point being, though, is that there are still tells that I think are detectable. And I think they're going to go away very soon, but I think that's science too. I feel comfortable going with the second one, you know, the Heisenberg one, as the fiction, just because it's a big, long-standing, you know, what would you call it? A rule, you know, a definitive barrier, right, that has been well documented and gone over so many times. I just can't imagine that that was overturned. That one's the fiction.
S: OK. And Cara.
C: I think you'd call it a principle.
J: Thank you. It's not a manoeuvre, though. It's not like the Heimlich manoeuvre.
C: It's. Uncertainty principle.
US#05: The Kobayashi Maru.
C: I feel like I don't have a lot to add to what most folks said. I think that you would really get us on this if the fiction was that putting in a speed limit actually didn't decrease severe injuries from crashes, because otherwise, like, is every speed limit in the world not evidence-based? I just think, yeah, we've seen it over and over. We saw the speed limits go down in New York City to, like, really low, and fewer bicycle and pedestrian crashes. So I don't know, that one just seems realistic, unless you fudge the numbers somewhere. So really it's between going with Evan and Bob and saying that the AI voices are distinguishable from human voices as the fiction, or going with Jay and saying that the Heisenberg uncertainty principle has not been bypassed. I guess, I don't know, is a principle different than, like, a fundamental law? And is anything really as fundamental in physics as we think it is, until it's not, right? Yeah, even, like, gravity; it worked for Newton. So I don't know. And you did say that they're using a quantum sensor; it's not like a traditional sensor. So maybe you have to fight quantum with quantum. And then, yeah, I think I have to go with Evan and Bob on this. I don't think people are generally good at distinguishing between the voices. And Jay, maybe you are. I mean, the wording says, despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices. I think probably the opposite is true.
J: That's a good distinction, Cara. I used my anecdote and kind of overlaid it on the whole thing; I should have thought about it more broadly. OK, but...
C: Yeah. And so my guess is that people are generally not able to distinguish, but maybe some people still can; they're the minority, though, not the majority. So I'll go with the other two guys and say that that one's the fiction.
S: OK. So you all agree on number one, so we'll start there. In the first such study in Germany in almost 50 years, a mandatory speed limit of 75 mph, 120 kilometers per hour, would result in a 26% decrease in crashes with severe injuries. You all think this one is science. I guess the question is, is it possible that German drivers are such that they're comfortable driving fast, or is the Autobahn sort of designed to accommodate faster traffic, so forcing it to a lower speed wouldn't necessarily make it safer? Or maybe that 26% figure is wrong.
C: I think the idea is you can go way fast; there's no limit.
S: There's no.
C: Yeah, they have. But I'm saying like right, the shape of it doesn't reduce speed.
S: There's a suggested speed limit of 130 kilometers per hour, but there's no mandatory limit. That's like 80 miles per hour. So this, yeah, this would introduce a mandatory limit.
C: Which I think is why, by definition, a lot of people choose to take the Autobahn, just so they can drive really fast.
J: I just think it's.
C: Crazy.
J: I think it's crazy that they let people drive that fast, because the people who aren't driving that fast would have a big problem, right?
C: Oh, they stay in the right lanes. Yeah. Yeah.
S: All right, well, this one is science. This is science. Yeah, it makes sense.
C: Of course. I can't believe they've just now done a study on this.
S: They haven't. It was 45 years or something from the last study because they didn't want to study it. You know what I mean? Like we're driving fast, leave us alone.
B: And we like it. Leave us alone.
S: All right, let's go to number two: scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle. And of course, gentlemen, these would be Heisenberg compensators, right? So hang on now. It seems like Jay, Evan and Cara are not totally clear on what the Heisenberg uncertainty principle is. Bob, would you say it's fair to say that this is as well established as the speed-of-light limit, as just a fundamental property?
B: Oh, it's fundamental. You could actually say it's fundamental.
C: It isn't, yeah, like, a function of our tools not being good enough.
S: No, no, it's not. It's not a technical limit; it's a physical limit.
C: Right.
B: It's how the universe presents itself to us. There's no way around it. Unless you know.
C: Unless we have new physics.
B: No, not even new physics, but just some way to preserve it but gain the information you're still looking for. I don't know, it depends.
S: What do you think the one key word is in this item? Let's see, there's a very key word in this item.
B: Quantity.
C: Is it "able to"? No. "Demonstrated"? "To great precision"?
S: Nope.
C: No.
S: Hold on.
C: Bypassing the limit.
B: Passing the limit it's.
S: Bypassing.
B: Yeah, it's.
C: Not instead of breaking.
B: This one moving.
S: "Bypassing." It's science because it's not violating the limits of the Heisenberg uncertainty principle; it's bypassing them.
C: Going around them?
S: It's going around them. So Bob, you pretty much nailed it. They figured out a way to spread the uncertainty out to things they don't care about and limit it to the features they do care about.
B: Amazing. Dude, what else could they do? Given that this is true, which I assume it is, it had to be something like that, because otherwise, yeah, you're not going to get rid of it.
S: You can't get rid of it, and they're very specific: this does not violate the Heisenberg uncertainty principle. All right. The name of the paper is Quantum-Enhanced Multi-Parameter Sensing in a Single Mode, and here's the metaphor they give to sort of explain what's happening. It's like a clock with an hour hand and a minute hand. Let's say you have a clock with just one hand: just an hour hand or just a minute hand. If you choose the hour hand, it gives you good information about where you are in the day, but it's not precise. Or you could choose the minute hand, and you know precisely what minute it is, but you don't know where you are in the day. So it's a scale...
B: Issue.
S: So what they do, they said, is if you're looking to nail down position and momentum, you can have uncertainty about where you are in the bigger picture. Like, we don't know which grid cell we're in, but whatever cell we're in, we know exactly where we are within that cell. And they don't really care about the bigger picture; they just want to know the precise momentum and position, wherever it is. Right. So that's it. They basically said, it's like we're spreading the uncertainty out to these other parameters that we don't care about, so that we can have more precision with the things we do care about, like position and momentum.
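An editor's gloss on that "grid" picture, in the language of grid (GKP-style) states, which the abstract Steve quotes a bit later mentions; the paper's exact construction may differ. Position and momentum themselves still obey the uncertainty relation, but the two displacement operators on a grid of spacing a commute, so the modular position and modular momentum can both be sharp while the uncertainty is pushed into which cell of the grid the state occupies:

```latex
% Grid-state displacement operators for lattice spacing a (editor's gloss):
X = e^{\,2\pi i \hat{x}/a}, \qquad Z = e^{\,i a \hat{p}/\hbar}, \qquad [X, Z] = 0
% Because X and Z commute, the modular position (x mod a) and the modular momentum
% (p mod 2*pi*hbar/a) can both be pinned down sharply, while
% \Delta x \, \Delta p \ge \hbar/2 still holds for x and p themselves;
% the leftover uncertainty is in which cell of the grid the state occupies.
```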
J: It's still puzzling.
S: It's still puzzling, but that's because it's freaking quantum mechanics. But yeah, it's just an end run around that limit.
J: Sounds like BS to me. Not seriously. Like we're saying, oh, they're just kind of, you know, jerking around the corners. Like that doesn't make much sense to me.
S: It says, "We deterministically prepare grid states in the mechanical motion of a trapped ion and demonstrate uncertainties in position and momentum below the standard quantum limit."
E: There it is. Crystal.
S: Yeah, yeah.
B: I mean, they're below the limit, so they did something special there.
S: Yeah, they did so.
B: Damn, man. I wonder what the implications are for other...
S: Well, I think you could make sensors with incredible precision; that's where they're headed. I think the other thing they said, Bob, is that they borrowed principles they learned from quantum computing. They kind of developed this technology because they're trying to reduce errors in quantum computing, and they basically ported it over to sensing technology, hence the quantum sensor. I don't know if that helps, but that's what they said. All this means that "a recent study finds, despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices" is the fiction, because what the study found is that people were completely unable to distinguish the AI-generated voices from human voices, and that was for either generic voices or specific people. Either way, this is with the latest, greatest, high-end AI voice technology, and the people in their study had no idea. Interestingly, they talked about looking at AI-generated pictures of people, which have gotten so good that not only can people not distinguish them, but they're more likely to believe that an AI-generated picture is real than that a real picture is. AI-generated pictures are so-called "hyperreal." Now, in the audio test they did not see the hyperreal phenomenon, so people were not more likely to think AI voices were real over real voices, but they were unable to distinguish the two.
C: I bet you... I would love to see an experiment done on this, because I think my hypothesis is that this plays off of a very human bias where we like things that are slightly more attractive. And I think that we don't have that bias with audio, but we have it with vision.
E: And the AI knows what little tweaks to make to.
C: And yes, the AI can make people look kinder. They smile more with their eyes, they look slightly more attractive, and people are going to go, oh yeah, that's more real.
E: Interesting. Oh.
C: It would be really interesting to have AI ramp that up and ramp that down. That's... weird.
E: They know what our brains want.
B: It's not like the uncanny valley, it's like the hypercanny valley or something, right?
C: More real. We've blown way past that, the uncanny valley.
S: But here's another hypothesis, Cara, and I don't know if they can control for this in a subsequent study. In our media-saturated culture, we are so used to photos of people that have been altered and perfected that we think that's what real looks like.
C: Yeah, I think we could probably do two studies. I don't think people could distinguish between a photoshopped picture and a non-photoshopped picture of, like, a model, for example, or tell which is real versus which isn't. And then you extend that to even a picture of ourselves. I bet you we would have a hard time being like, oh, that's the real me versus that's not the real me, because there are just the slightest little tweaks, now that we don't have, like, 17 fingers in...
S: AI, yeah. What do you do with that issue?
C: Yeah.
B: It'd be a better test if it's somebody you know, because how often do you look at yourself compared to looking at other people?
C: People look at themselves more than they look at other people.
S: Plus, we always see ourselves in the mirror, so when you look at a picture of yourself, it's reversed from what you're typically...
C: Looking at. Yeah, which is why we like selfies.
B: Yeah, but I still would think that we would know. Like, I think I'd know Jay's face and how it should move more than I would know my own face and how it moves.
C: And I think that that is generational.
S: Well, no, but Bob is saying the movement is different. That's a different layer. None of this is dealing with movement.
C: I know, but I think even, like, Gen Alphas and people around that era are watching their faces on videos all the time.
S: But, but in terms of being able to distinguish AI... because I recently saw, did we talk about this? There was this company that makes movies where you can, like, dub a foreign movie into English and then AI changes the lip movements to match. Yeah, right. And it's total uncanny valley.
C: Oh, yeah. Yeah. But Bob was saying he'd be able to tell, right? But he wasn't saying video versus photo. He was saying a video of Jay versus a video of himself.
S: Yeah.
C: And I disagree with you, Bob. Or, I agree with you, but I think it's a generational difference. I think younger people have a very self-directed gaze when it comes to social media. Yeah.
Skeptical Quote of the Week (1:58:10)
"Inductive reasoning is, of course, good guessing, not sound reasoning, but the finest results in science have been obtained in this way. Calling the guess a “working hypothesis,” its consequences are tested by experiment in every conceivable way."
— Joseph William Mellor, English chemist and authority on ceramics
S: All right, Evan, give us a quote.
E: "Inductive reasoning is, of course, good guessing, not sound reasoning, but the finest results in science have been obtained in this way. Calling the guess a 'working hypothesis,' its consequences are tested by experiment in every conceivable way." And that was penned by Joseph William Mellor, who was an English chemist and an authority on ceramics. He grew up in New Zealand; 1868 to 1938. Apparently an expert, I mean, there you go, a world expert in this particular field. Now, the quote itself I thought was kind of interesting, because I did a little reading about inductive reasoning; I don't know that I'd really read much about it before. And, you know, Einstein was not a proponent of inductive reasoning. In fact, he apparently argued quite extensively against it. He was more about deductive reasoning, and he didn't feel that inductive reasoning brought you to the true nature of science. So there was kind of a collision there, in a sense, between those two schools of thought. But effectively, I think what modern science is saying is that they're partners, in a sense. Induction and deduction. You can have both.
S: Yeah, deduction goes from the general to the specific; induction goes from the specific to the general. You have to engage in inductive reasoning; that's how you come up with a hypothesis, yeah.
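As a rough illustration of the two schemas (a generic logic-textbook sketch; the symbols \(P\), \(Q\), and \(a\) are placeholders, not anything from the episode's sources):
\[
\text{Deduction:}\quad \forall x\,\big(P(x)\rightarrow Q(x)\big),\; P(a)\;\vdash\; Q(a)
\]
\[
\text{Induction:}\quad Q(a_1),\,Q(a_2),\,\ldots,\,Q(a_n)\;\leadsto\;\forall x\,Q(x)
\]
The deductive conclusion is guaranteed whenever the premises are true; the inductive generalization is only a working hypothesis, which is the sense in which Mellor calls it "good guessing" to be tested by experiment.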
C: Yeah, that's bottom-up reasoning. But I think the problem is that bottom-up tends to not always be as accurate the more that you kind of... that's why you've got to test it.
S: It doesn't matter how you come up with your hypotheses as long as you test them. Right?
C: Yeah, no, I guess that's true. But I think there is a difference between using reasoning for hypothesis testing and using reasoning philosophically.
S: Deductive reasoning is definitely more valid philosophically if you're just trying to reason to a conclusion. That's why inductive reasoning doesn't give you a conclusion; it gives you a hypothesis, right? As long as you understand that, you're fine. The problem is when people use it to come up with a hypothesis that they think is a conclusion when it isn't.
C: And to form these like huge generalizations.
S: Exactly.
E: Which is why I think Mellor couched this particular quote correctly and put it in good context.
B: Steve, yeah, I heard a beep on my phone. I looked down, and it was a link to a news item, and the title of the news item is "Quantum Limits Redefined." Yep.
C: No way. What timing.
S: Just made it. Just made it. Yeah. All right, well, thank you all for joining me this week. Yeah, Steve. And until next week, this is your Skeptics Guide to the Universe.
- ↑ www.space.com: NASA debuts new Orion mission control room for Artemis 2 astronaut flight around the moon (photos)
- ↑ www.popularmechanics.com: Scientists Discover the Pathway to the Elusive Element 120
- ↑ theness.com: Scalable Quantum Computer - NeuroLogica Blog
- ↑ www.nature.com: Delegation to artificial intelligence can increase dishonest behaviour
- ↑ theconversation.com: Scams and frauds: Here are the tactics criminals use on you in the age of AI and cryptocurrencies
- ↑ No reference given
- ↑ No reference given
- ↑ No reference given
