SGU Episode 963


SGU Episode 963
December 23rd 2023
963 Iron Ship.jpg

"A prominent extraterrestrial-hunting scientist thinks that more than 50 tiny, metal spheres pulled from the Pacific Ocean might be the work of intelligent aliens. Others are skeptical." [1]

SGU 962                      SGU 964

Skeptical Rogues
S: Steven Novella

B: Bob Novella

C: Cara Santa Maria

J: Jay Novella

E: Evan Bernstein

Guest

EB: Eli Bosnick,
American comedian & podcaster

Quote of the Week

It's all about being a part of something in the community, socializing with people who share interests and coming together to help improve the world we live in.

Zach Braff, American actor and filmmaker

Links
Download Podcast
Show Notes
Forum Discussion


Introduction, Live from NotACon

S: Hello and welcome to the Skeptics' Guide to the Universe. (applause) Today is Friday, November 3rd, 2023 and this is your host, Steven Novella. (applause) Joining me this week are Bob Novella...

B: Hey, everybody! (applause)

S: Cara Santa Maria...

C: Howdy. (applause)

S: Jay Novella...

J: Hey guys. (applause)

S: Evan Bernstein...

E: Hello White Plains! (applause)

S: ...and we have a special guest joining us for this episode, Eli Bosnick. Eli, welcome to the show.

E: Hi, Eli.

EB: Hello, hello.

S: So we are live at NOTACON. This is not a conference, but it's a conference, but it's not a conference. And so far, it's going really well. This is, I think, the best part of the show.

C: He wouldn't really know, though.

S: I wouldn't know. That's what I'm told. I'm told that so far, it has gone pretty well. Eli, you're a fellow podcaster. Tell us about your many, many podcasts.

EB: So I have a lot of podcasts, and I don't think you even know this, Steve. The Skeptics' Guide to the Universe was my intro to skepticism.

S: I was told that.

EB: Yeah, so when I was 20 years old, I was a 9/11 truther. And yeah, you're allowed to boo for that, because it sucks. And that's what I thought the word skeptic meant. And my co-worker, now co-host, Noah Lugeons – I was talking to him about Zeitgeist. I was like, yeah, Zeitgeist, right? And he was like, hey, I have a blog you should read about that movie you're praising. I read your blog about Zeitgeist and got interested in the skeptical movement by listening to the Skeptics' Guide to the Universe. And now I'm here and terrified. (laughter) Mortally terrified. I do not belong up here.

C: So much so that you didn't plug your own shows, which is what happened.

EB: Right. God Awful Movies. The Scathing Atheist. D&D Minus. The Skepticrat. And Dear Old Dads.

E: Nice. Whoa.

B: Wow.

EB: And Citation Needed.

E: Good resume.

S: We've got a great show for you. There's been a discussion about whether or not we should do NOTACON next year. And it depends on how it all goes. Are you guys having a good time? (applause) So we've been brainstorming some topics that we might cover on next year's NOTACON when we do the show. So these are the ones that came up. Artificial intelligence: friend or foe? Yes. Yeah, that's definitely going to be next year. Arby's sandwiches: real meat or not? I think we're going to have to do a live taste test there. Operating room horror stories. That's my favorite. New England: how the Patriots cheat and get away with it. You sat there. Hello. Time out: taking a break is the best way to recharge. That was Cara's suggestion.

C: It was? When did we have this conversation?

E: Roll with it, Cara. Roll with it, Cara.

C: Okay.

S: Being autocrat or visionary.

C: This was bad.

S: That'll be a lively discussion. And Bob, this is Bob's topic. Nanoseconds. Can anyone perceive time that short?

B: No.

C: What's Brian doing in this slide?

E: I guess we covered that.

S: I don't know. I don't know how that picture got there.

J: We almost had a bit in this weekend called is it mayonnaise?

S: We're not doing is it mayonnaise.

C: Ewww. Oh, no.

B: That one dropped out early.

S: It was always mayonnaise.

EB: I played that game to get into a frat. It wasn't mayonnaise. (laughter)

C: Gross. Gross.

Quickie with Steve (3:59)



S: All right. So I love this article. This is one of those ones where, like, you read it. The title is so, like, techno jargon. Anthropogenic coal ash as a contaminant in a micrometeoritic underwater search. Sounds like who cares, right? But this is awesome. So they were examining these little microscopic spherules of metal on the ocean floor to see what they were made of. And do you guys know where I'm going with this?

E: Yes. Avi Loeb.

S: This was the Avi Loeb study where they said, look, these little spherules – they're where we thought they were supposed to be based upon the trajectory of something that might have been an interstellar meteor. And they're mostly iron, but it's not the same ratio. There's too much, like, nickel and uranium and stuff in there. So it's not volcanic. That means it's a meteor. But it's not the ratio that meteors from our solar system have. So it's probably an extrasolar meteor. And, in fact, it could be an alien spaceship, right? This is Avi Loeb.

J: That's a hell of a leap.

S: Yeah, it's a few leaps there. So this is follow-up research by another team where they found that the constituents in those spherules – the ones where there was too much nickel and stuff – match what you would find in coal ash. So this is an industrial contaminant, right? You're burning coal. It goes up into the atmosphere. And some of it gets into whatever, into these spherules ultimately. So what this means is clearly that extraterrestrials came to our solar system in iron spaceships powered by coal, right? That's the only, I think, reasonable conclusion that we can draw.

B: Where did you get that picture?

J: Is that a spell jammer?

S: That is AI generated.

J: Oh, okay.

B: Nice.

C: What did you feed in?

S: I fed in iron spaceship powered by coal. (laughter)

B: Seems obvious now.

C: Should have known.

E: Lovecraftian.

B: Yeah, right?

S: Okay. Jay.

C: We got him, you guys. This doesn't happen very often. Especially not without Joss here. He's crying.

News Items

Making Lunar Roads with Sunlight (6:12)



S: Tell us about how we're going to build roads on the moon.

J: So when this happens to Steve, I extend it by laughing in his face. And it works every freaking time. He gets, now if he goes to stage two, he starts crying. Look at his eyes, Cara.

C: Yeah, it's just on the cuff.

S: I'm good, I'm good.

J: So we've talked about this many times. NASA is in a huge way preparing for all of these moon missions that are coming up. And there's so much stuff that they have to figure out. And this is just one of a thousand things that they really need to land. Or at least come up with a solution for if this one isn't going to be it. But this is really cool, though. This is what they're talking about, and they're running experiments now. They're trying to figure out how they can use the moon's regolith to actually make kind of like something like concrete. So they can build roadways and launch pads on the moon. Because you don't want to land on an uneven surface. It's really bad. Every time a rocket would take off and land, huge amounts of regolith go everywhere. And the thing is, regolith is bad.

S: I hate regolith.

J: It's nasty.

S: It gets everywhere. It's irritating.

E: It's coarse.

S: Coarse, yeah.

J: What the hell was Lucas thinking? Somebody should have slapped a pen out of his hand.

S: That's the problem. When you get that powerful, you have no one to tell you, no, that doesn't work. It's stupid. What was that movie? Because that way it wouldn't be terrible, you know.

C: What was he talking about?

J: Steve's drunk. Sorry. He was hitting the bottle. We didn't tell him to stop.

S: I had to get up early this morning and go to work.

E: Yeah, we know, Steve. We know.

C: Because he's a doctor.

J: He's a doctor. (laughter)

B: (in robotic voice) He's a doctor.

J: All right, so let's talk about the regolith for a minute here. So lunar soil is nasty stuff. It's really dangerous, right? It's made out of dark gray, powdered volcanic rock. And the thing is, it's abrasive. And they're saying – they call it sticky. It's sticky because it has a lot of, like, spines on it that can stick onto lots of different materials. It gets in electronics. It's really bad to inhale. And it actually literally does damage equipment. So they're thinking like, we've got to minimize the amount of regolith that ends up all over everybody and the equipment and everything. So, of course, they're like, well, we could make areas where, especially when they're traveling, right? When you see the earlier moon rovers going, the tires kick up a lot. This stuff is always getting kicked up. And reports back from the Apollo missions were horrible about what this stuff does.

B: They hated it. Astronauts hated it with a passion.

J: So NASA researchers are looking for ways to use the regolith and turn it into a positive. So what they came up with is pretty interesting. And it seems like it's doable. They say that they could melt the regolith using solar energy. And, of course, they also need lasers to be there, I guess, to help do certain things that the solar radiation or solar energy couldn't do. So what they're saying is that they're going to essentially liquefy the regolith, turn it into like a magma kind of situation. Then they're going to form it into layers and then let it solidify. And they think that early studies are showing that this stuff could actually stand up to the abuse of what it would need to. So they have a synthetic moon dust. It's a fine-grained material called EAC-1A. This was designed by the European Space Agency as a lunar soil substitute. And it's pretty damn close to lunar soil. And it's really cool because it allows scientists to get in there and do all the things that they need to do to figure out, like, how they could use this. Because we're going to want to take the lunar regolith and get oxygen out of it and all that type of stuff. And they have to be able to study it. And we just don't have that much lunar rock and soil to do all these tests with. So what they devised was they're going to use concentrated sunlight, which will simulate laser beams up to 12 kilowatts, which is pretty powerful. And this is to melt the lunar dust into triangular and hollow-centered tiles. I could not find why they want them hollow, other than maybe heat dispersion. But I don't know why they want them to be hollow. But that's another thing that we'll probably eventually find out. The cool thing is that they know that they could make them interlock, which makes them easy to build roads with. I also couldn't find anything on, are they going to have a bulldozer type of thing to level things out? They'd have to. They'd have to have machinery up there that would prepare the ground to just be able to lay this stuff down.

S: So they want to make roads out of this stuff?

J: Yeah, so launch pads.

S: So would they be solar roads then?

E: Lunar roads.

J: Yes, they will, Steve. Anyway, that didn't work out here on Earth. So what they came up with was, I don't know if any of you know what a Fresnel lens is, but they have a 7.8-foot Fresnel lens. Now, these are lenses that are famously used in movie making, right? They're these big glass lenses that look like they have layers to them. And they're very good at focusing light. And usually with a Fresnel lens, you move it farther away from the light source, and the light goes from wide to pointed. It's very, very useful. So what they're going to do is they're going to make it very pointed, to get it super hot and condensed. So then there are some advantages here. Now, first, using the lunar regolith and sunlight, these are resources that are there, right? We don't have to ship them from Earth. And as you know, shipping things from Earth is wildly expensive, especially heavy stuff and durable stuff. So they're trying to use the lunar regolith for as many ideas as they can come up with. There's a lot of challenges ahead of them. One of them is material integrity. They have to make sure that these tiles are going to be able to stand up against radiation. It has to stand up against meteor impact and all sorts of stuff like that that they're going to deal with. And the big one is temperature change, which we all know. What's the variability in temperature on the regolith?

S: It's like 200 degrees.

J: They said that the gravity, the lower gravity, is an issue because when they melt the lunar regolith into magma, or whatever they're calling it, the real problem is that it behaves differently than magma on Earth. It doesn't flow the same, everything. We don't know the physics exactly on how to do it yet. And think about the problem: how do we do that on Earth? How do we simulate that on Earth? So they're going to have to come up with a way to figure that one out, and I don't see a clear path for that.

B: Don't they have a room where they could just dial in the amount of gravity in the room and then do the – right? Doesn't that exist?

C: Don't astronauts train, like, in water mostly? Yeah, and you couldn't do this in water.

EB: Get a plane full of lava to fly.

B: The vomit comet.

EB: Guys, I don't want to do this one. I don't think we can do it. Why did all these jars say lava?

E: It's a typo. Keep going.

J: So this is a very high energy requirement, and they're also worried about the idea that how much energy will we have on the moon that's available for processes like this? So they're trying to figure out – you see, this is the problem. We don't know how we're going to generate energy on the moon yet. We haven't landed it. We don't know exactly how it's going to be, how much energy everything is going to need. So right now, scientists are like, well, we don't know how much energy this process is going to need, and they need to know how much energy they'll have available to them. So you see, it's the chicken or the egg type of thing. They have to be very careful about – they want to make it as low as possible, and then they have to, of course, once they get all of the numbers in from everything, then they have to calculate, what's going to actually be used on the moon or not, depending on energy requirements. They said that getting the lens and the lasers to the moon was actually going to be really tricky.

S: Yeah, I imagine.

J: I guess that they've never put equipment up like that before, so they have to figure out how to do that as well.

S: Yeah, and I think it's kind of a no-brainer that we've got to be able to build stuff out of the regolith, and you can't have loose regolith in a place where people are living and working and equipment is operating. So this is a good solution. I know they're also talking about coming up with a formula where they could turn it into a substance that they could feed into the big 3D printers, and you could basically print buildings out of the regolith, which is also a good way to solidify it so it's not a dust kind of thing.

C: It seems like they could just make cement blocks out of it.

S: Yeah.

B: Mooncrete.

C: Mooncrete.

EB: I have a PR question. Like, there's the smart people, right, who are like, hey, guys, we can use the mirror and it's going to make cement. But is there a PR person who's like, guys, that sounds a lot like laser on the moon. I just – I don't know what Marjorie Taylor Greene is going to do with that. Can we make it look less laser-y? Put a flag on it.

J: See, that's how they get you.

S: Don't put a Star of David on it.

EB: Right, exactly, yeah.

J: The mission is a hoax just to get the laser up there.

EB: They're just firing Herb Rothschild from the team. Hey, man, you've done great work here, but we're about to put out the press release.

C: So right. That's so sad.

Misinformation vs Disinformation (15:50)



S: Well, speaking of Marjorie Taylor Greene, Cara, are we in the middle of a misinformation panic?

C: Ooh. So we talk about this a lot on the show. We talk about misinformation. But there was recently an article published in Undark by Joanna Thompson, who's a science journalist, and she sort of raises the question, are we having a moral panic over misinformation? We know that there is a lot of good evidence out there to show that misinformation is rampant, right? There are a ton of different sort of scientific investigations from multiple different disciplines that are looking into whether or not misinformation exists. But we've all just sort of collectively made an assumption that it's affecting people's behavior. And so the question posed in this article is kind of twofold. The first is, are we doing a good job of discriminating between misinformation, disinformation, and propaganda? And generally speaking, the sort of definition of misinformation is just wrong information. But it doesn't necessarily have an intentionality to it. And that definition gets very wiggly because we don't know where that demarcation line is. So is a weather reporter saying it's going to be 75 degrees tomorrow and it turns out to be 80 degrees delivering misinformation? Is a scientific community saying – and this happened during the COVID pandemic a lot – this is what we think is going on right now, and then later we had to say, OK, we've got new information and we're going to update that? Was that previous information misinformation? It's factually wrong, but it was the best that we could do with what we had at the time. Now disinformation has intentionality behind it. It's intentionally feeding information to change a narrative. And then in the extreme, we would call that propaganda, right? So the first question is, are we doing a good enough job of differentiating between the three? The second important question that I want to pose to the panel is, when we think about causality versus correlation, there's a really open question here: are people who are already prone to believe certain things simply seeking out reinforcement of their previously held biases and then acting in accordance? Or is being exposed to those things online actually changing their views? And interestingly, a lot of the literature is not really supporting that second argument there.

S: That it's changing their minds?

C: That it's changing their minds. Because we can't really do a lot of clean causation research here. And there have been examples in which causation research was done. And we actually saw the opposite of what we expected. We saw anti-vaxxers being, or individuals being exposed to anti-vax rhetoric and then being more likely to get the COVID vaccine afterward. So obviously, we're talking about a lot of different things. Anti-vax falls under the same umbrella, but it's very, very different than what's another big kind of pseudoscientific thing we're struggling with right now? Like another type of misinformation.

S: Anti-GMO.

C: Anti-GMO. So even though they follow the same playbook, you probably know people who are anti-GMO who aren't anti-vax. You might know people who are anti-vax who aren't anti-GMO. And their reasoning is going to be slightly different. So I'm curious what the panel, what the show kind of thinks about this topic. Because I think it's an important one that we don't often raise.

J: I think the misinformation one, the first classification, that just sounds like the normalcy of our day-to-day lives. We have the best information that we can. Science is auto-correcting. It's constantly updating the accuracy of its information. That's cool. I mean, I would think that misinformation is the least worrisome out of all of them, right?

S: But the worst is curated, narrative-driven propaganda.

C: Clearly.

S: And certainly that, anecdotally, I see the effects of that every day. Just on my podcast this week, we've been dealing with a new commenter who is basically spewing out consistently extreme propaganda. Everything they say is demonstrably wrong. It's propaganda. It's curated propaganda.

C: What's the topic?

S: Well, it's a lot of things.

C: Oh, okay. So you're seeing it across the board with this commenter.

S: Yeah. But for example, there's a lot of anti-climate change propaganda out there, right? And with a lot of that information, you can tell it's propaganda because they're quoting sources – they didn't come up with how they're saying it. They're saying things the way it was presented to them. And they're stating it as a fact. And it's supporting their worldview and their political ideology and their tribe. It certainly is reinforcing. And I think you're right. They probably were already believing that way to begin with. But it absolutely solidifies their opinion on that one topic. You know, they're, like, saying climate change isn't real. So I think it's hard to tease apart the chicken-and-egg effect there. Because it is sort of self-reinforcing. Yeah, they were probably there already. But now it's stronger. And it does move the needle. It might not be as big as we think. But even if it's moving the needle 5% or 10%, that's a lot in a very closely divided political environment.

C: And that's the question that the researchers that are often cited in this article, so a lot of different psychologists were interviewed and cited, the question that they really bring to the table is, is what we think is a needle movement actually a needle movement? And is it that because we see it more online, that it's somehow a more perceptible problem to us? And they, of course, when we talk about moral panic, and we've talked about this before on the show, when the printing press came out, people were burning them because they were afraid that they were going to corrupt people because they had access to books.

S: People could print anything.

C: Exactly. And we saw the same thing when radio was launched. And we saw the same thing when television came out. And, of course, we see the same thing about the Internet. And is it a bias wherein our exposure is so saturated, we see the hits and the misses more, but it feels like we're seeing more of the hits? Or because we can look back to mudslinging during presidential campaigns 200 years ago.

E: It's always been there.

C: And there was so much disinformation, I should say.

E: It's always been there.

C: So it's pretty fascinating. Is it actually a worse problem now? Or is it just that we're all so much more aware of it, and the dinner table conversations are now happening in a public arena?

J: From my perspective, I'm not doing research and everything, and I actually find this question interesting because, to me, I'm much more like, whatever, let's just handle the disinformation. I don't need to tease it out and figure it out. I know that that's part of what some people do, and in a way, it really does need to be done. But as a consumer and as a skeptic, I'm like, look, we know there's a problem. We know that social media is a big part of it. And we're sitting here talking about it, but we're not doing much about it. And look at what's happened in the last 10 years. Is it just weird perception? Or is it true that the world is filled with more disinformation now? I mean, we say, collectively, in the 20 years that we've been doing this, that, indeed, it's worse now than it was when we started.

C: But the interesting thing is, and the argument is made in this article, not only are we doing something about it, we're funneling billions of dollars into this problem, and nobody has really defined yet whether this problem exists. We know it happens, but we don't know if people's behavior is being changed because of it. And governments and corporations are spending ungodly amounts of money trying to combat it. And the question is, is this the right use of our money?

J: Yeah, I get that.

C: You know what I mean? And I don't think any of us know the answer to that question.

J: I think most people, though, I think everyone kind of agrees. Everyone thinks it's social media. Social media has become this big demon in this whole thing, and everyone is using that as their first go-to when it comes to misinformation.

S: Yeah, but it's clearly not just social media. The problem predates social media. But also, I think a bigger problem is, even within mainstream media, within radio, TV, or whatever, it's propaganda media. It's the fact that we've eliminated the lines between news and entertainment and between news and propaganda. So there are now news agencies that are propaganda agencies.

C: They've legitimized propaganda in a way that, yeah, I think historically wasn't the case.

B: And that started in the 80s, right? I mean, it's been a while. We're reaping the benefits of that now.

J: And isn't that like the ridiculously low-hanging fruit? We know that when they, what was that law?

S: The Fairness Doctrine.

J: The Fairness Doctrine. We know that when that changed, it dramatically changed the way that news agencies were disseminating information. So why don't we just put that back? These are the simple things that I don't understand.

S: Yeah, it's hard to put that genie back in the bottle.

J: You know, if people are getting fined, they'll stop doing it.

C: Maybe. Look at Alex Jones.

E: First Amendment, you know.

C: Exactly. It is a complicated issue. And so, yeah, that line between misinformation and disinformation, A, like you mentioned, I think is a good one because we conflate it all the time. And I think one of the problems is that there are certain groups of individuals who are ostensibly fighting against disinformation, but they're actually just fighting against misinformation. And that's a foundational misunderstanding of the scientific method. And sort of against that are individuals who are actively fighting against propaganda, and yet it seems like they're on a level playing field, which is not the case.

S: Yeah, you have to operationally define your terms.

J: So, Cara, you said something that I didn't hear before. I find it really interesting. So we are spending, collectively, and countries are, our country and other countries, are spending billions of dollars trying to, what, try to understand the effect of misinformation.

C: No, that's what we should be doing. Trying to figure out how to regulate it.

J: Yeah, regulate it.

S: Got to do something about it. And then here's the other thing, though. We haven't proven whether disinformation or propaganda is affecting what people actually think or whether they were there already. But if you take something like the fact that 30-something percent of Americans think the 2020 election was stolen, despite the fact that it clearly was not – there were multiple independent investigations, court cases, et cetera, there's no evidence; clearly, as a scientific question, it was not stolen. There were not millions of fraudulent votes. And a third of Americans believe that it was. They didn't already think that.

C: No, that's objectively false.

S: They would not think that if they weren't, if there wasn't a propaganda machine telling them to think that.

C: And that's, it's interesting because the psychologists in the study are, they don't deny. Obviously, they're like, propaganda works. And there are examples where we see behavioral change due to propaganda. But when we misattribute misinformation and we say it's the same problem, grandma saying something to so-and-so or like somebody on social media reiterating something, is that the issue? And why are we spending so much time policing the conversations down here when we should be trying to talk about how to regulate the propaganda?

S: Right. So really, don't worry so much about just innocent, unguided misinformation or lazy misinformation.

C: Yeah, just people thinking they know something that's not true.

S: Yeah, let's focus on disinformation and propaganda because that's really where the problem is.

C: Again, referencing this article, a lot of the researchers – like, here's a really good quote from somebody who studies the history and philosophy of logic in Amsterdam. She says, "I don't like this whole talk of we're living in a post-truth world as if we ever lived in a truth world." And I think that's the argument that's being made here: we have this panic like this is happening now. It's always happened.

E: Yeah, it's ubiquitous.

C: So why don't we stop letting history repeat itself and learn from the past? What did we do then? What was effective? What wasn't?

J: Yeah, I mean, I think what we're seeing, Cara, is like, unlike 20 years ago and further in the past, like everybody has got a supercomputer in their pocket now.

C: That's the difference.

J: Billions of people. It's like we all have access now. It's very different than it used to be.

S: Yeah, and others have argued, I forget who made this argument, but I found it interesting. We're biased because most of us alive today lived through a golden age of journalism. That was the exception, not the rule. Through most of history, what we're living through now is the rule.

C: Yeah, that's the argument they make about yellow journalism. Yellow journalism used to be the norm. People just made up stories.

S: We had this period in the 70s or whatever where there was like this golden age of journalism. We thought that was right. That was the way it was supposed to be. And now, oh, my God, it's collapsed. It's like, no, it's just returning, regressing to the mean. This is the way it's always been. But we grew up in the anomaly.

C: Yeah, because without checks and balances, that's going to happen.

E: Just go to TikTok for your news. You'll be fine.

C: It's a great idea.

S: Don't get me started on TikTok, man. Have you guys been watching our TikTok videos? Yeah?

EB: You doing those dances? Steve, it's so great. I think it's great.

S: I'm really working on that.

EB: He hits the gritty. It's great. Check it out. The Zoomer liked that one.

S: Yeah, we go through those videos every week. Ian curates a bunch of videos for me. It's like there's the crazy and really crazy. So it's like, that one's too crazy. I don't even know what I'm going to say about that. It's so out of control. You have to do only the merely crazy ones.

EB: My favorite thing about misinformation on TikTok is that occasionally TikTok just throws one at you like, hey, you want to learn how coconuts are poisoned? And you're like, no, come on, TikTok. It's like, OK, OK, OK. Six more videos of dogs. Six more videos of dogs.

S: Right, right, right.

B: Nice. Nice.

J: Quick offshoot. Should we all really be concerned that TikTok is doing some nefarious stuff like collecting information and personal information?

C: Yes.

EB: So hard yes.

J: Honestly, I just don't know how legitimate it is. I haven't seen any hard facts or whatever.

S: I think that's the problem. How do we know? How do we know what China's doing with TikTok?

C: We don't know. So it is so normalized across every platform at this point that it's like it's that much worse on TikTok. And that's what we're concerned about. But it's not like Instagram, Facebook. I mean, Meta, right? It's both of them. And then Google, which is YouTube.

EB: But TikTok, from what we know, right, what ByteDance allows their technology to do is wildly more than we've ever seen. So the big one in like computing, the computer scientists talk about is key capture. So even Meta, which is like if you talk about cat food near your phone, you'll get some cat food ads. We can talk about whether or not that's relevant or whether that's urban myth, how much that plays out. But ByteDance has been like, oh, yeah, we do key capture, which means like you're on TikTok. You navigate away. You type to your wife like, hey, can't wait to overthrow the government today. And TikTok takes some form of key capture. And the reason why it was first rejected from the App Store when it first came to the U.S. is they were like, oh, no, nothing that has key capture. It's why you can't get like a fun keyboard on your iPhone. It's because it counts as key capture.

C: Whereas the argument is that Meta won't be reading or listening, but they know because of location data that you're constantly going to the cat food store. And that's why they're sending you cat food, even if you've never said anything about it.

EB: And it's weird because there was a lot of – it was really interesting to watch the TikTok hearings. Because most of it was just, like, 98-year-olds being like, which channel does the VCR go on? Talk to Mr. China and tell me where is my VCR remote. But there are like three people in Congress who knew what they were asking about. And the best answer – and again, I don't want to oversimplify things – but basically, he was like, yes, our company has deep interconnected relationships with the Chinese government. And yes, we answer to them entirely in a way that no U.S. company would understand. But, like, chill. And then the pirate guy was like, tell me where Osama bin Laden is. And they were like, OK, sort of.

S: It is frustrating listening to Congress do technology-

C: Oh, it's so bad.

S: -interviews like that where they clearly have no idea what they're talking about. That's where the whole intertubes thing comes from. You know, isn't it a series of tubes?

C: Oh, you remember when the Supreme Court was like, oh, we're not on email. And it was like, wait, what? So pages would write notes and run across the halls and give them to the other justices. Like two years ago, I feel like that was.

S: In this century.

J: So is the advice to delete TikTok?

S: Well-

EB: I have TikTok.

C: No offense.

J: I mean, we all have to just assume that any porn that we look at, somebody knows about it.

C: Yeah, I think so.

S: Unless you have like an air gapped porn computer.

B: Doesn't everybody?

C: You look at porn on your computer?

B: Doesn't everybody?

S: Everybody. What do you mean?

EB: Steve, I'm with you.

C: No, you look it up on your phone.

EB: Quick survey of the audience here. Who looks at porn on their computer? No, I'm kidding.

E: No.

C: I wasn't saying I don't look at porn. I was saying I look at it on my phone. Not my computer.

B: Screen's way too small for that.

C: But it's more portable.

J: Not for you, Bob.

E: Hey, Steve. Are you glad I brought up TikTok?

S: Yeah, all right.

EB: Guys, let's just pull up our most recent porn and show it to the audience.

S: All right, here we go.

Gravitational Waves As Fast As Light (33:20)



S: Bob, please tell us about gravitational waves. (applause)

C: It's a cold shower.

E: Oh, boy.

EB: Say it slow, Bob. Say it slow.

E: Bring it back, Bob.

B: I'd be happy to, but I have no idea what this image has to do with my topic.

S: Bob, it was from the news item you sent me. That was the image from the news item.

B: I can't continue with that image. (laughter) So, this is fascinating.

S: He said speaking directly into his microphone.

B: It's like right here. How am I missing it?

S: Consistency is important.

B: It's ruining my gesticulate. All right. Well, it can't be like this. It's got to be like right facing me.

C: That's how a microphone works.

B: See, I wish I had a shoe.

S: Eighteen years.

E: Nineteen.

S: The microphone has to be facing directly at me?

B: Maybe if you give me a nice small microphone, this is kind of like in my face.

S: That's the problem.

EB: This is really helping my imposter syndrome. Thank you. Really. I'm way less nervous now. I appreciate it.

J: Eli, let's not pretend that there isn't a sexual undertone here, okay?

EB: Okay, yeah. No, we feel it.

B: Oh, my God. Come on.

EB: Bob, watch me. Both hands. Twisting motion, buddy. Don't break eye contact.

B: We got to talk later.

EB: No, we don't.

B: All right. How do I even do this now?

C: Black holes.

J: Right. I mean, we all know your news item now is going to suck compared to what just happened.

B: No, it's not, man. It's going to be great.

S: Gravitational waves.

C: Oh, gravitational waves, not black holes.

B: All right.

C: Same thing.

B: This is going to rock your world, Jay. All right. This story was fascinating. It has to do with kilonovas, the speed of light versus the speed of gravity, and possibly one of the greatest measurements in the history of science. That's all. That's what it's about. So some of you may remember probably one of the greatest collisions ever observed. In 2017, they observed a kilonova from 130 million light years away. These are two neutron stars that had been spiraling together and crashed together. Now, these are neutron stars. These are city-sized objects, each with the mass of the sun, traveling at near the speed of light. I mean, this is like one of the coolest things that happens in the universe, besides maybe, what, colliding supermassive black holes. I mean, how do you top neutron stars colliding?

J: Bob, how can they—I'm not trying to break anybody.

B: Okay.

J: At that speed, isn't there the back pressure of the hydrogen that's innate in outer space? Why can they continue to move that fast? What's propelling it?

B: Well, you know when a skater pulls their arms in and they go faster and faster? Well, as these neutron stars get closer and closer, they go faster and faster. So eventually, at some point in their co-orbits, they're going near the speed of light.

J: That's amazing.

B: Oh, yeah. It's incredible. So this collision, this event, releases two types of radiation. One is light, right? Of course, it's going to emit light. You've got these two neutron stars colliding. You're going to have gamma rays initially flying out and other light. But it also emits gravitational waves, which we've talked about a bunch of times on the show, and I'm going to talk about it again. Whenever you move a mass in curved space, you're going to have gravitational waves. The energy from these gravitational waves is going to be released. Basically, they behave like little ripples in spacetime. These gravitational waves are being emitted, and that energy is coming from somewhere. It's coming from the orbits of the neutron stars. So as the gravitational waves fly away, the orbits get closer and closer and closer. And we can detect these gravitational waves on the Earth. So we see this happening. Like, here we go. You've got these neutron stars that are going to collide. And it is amazing, because this distortion, this ripple in spacetime, is so tiny that it distorts space by 1 ten-thousandth the diameter of a proton. And we can detect this. It's just mind-boggling what we can do at this point.

S: But that's not the greatest measurement you're talking about.

B: No, it's not. No, it's not. So we can detect both of these radiations, the light and the gravitational waves. And this is multi-messenger astronomy. We've mentioned it a couple of times on the show, but it's really worth repeating. This is astronomy that deals with light, which is what we've been using forever. We've been looking at the light. But now we can look at also the gravitational waves. And when you can see both, an event that emits gravitational waves and light, that's amazing. Because you've got two completely distinct windows into space. And we can learn a lot more that way. So the thing that happened that was bizarre is that the gravitational waves were emitted. And then they stopped. Because when they collided, the gravitational waves disappeared. Then the light should have been emitted immediately. But it waited two seconds, about 1.7 seconds. So this light was two seconds late. Why? Why was this late? It shouldn't have been two seconds late. It should have been right on the heels of the gravitational waves. But it wasn't. So what does that mean? Is that a real mystery? Does it point to some new physics? Perhaps light has the tiniest bit of mass. Because if you have a little bit of mass, you can't go at the speed of light. You'll go a little bit less if you have a tiny, tiny bit. So maybe light...

S: So you're saying that light doesn't travel at the speed of light?

B: I'm saying that maybe light has a tiny bit of mass, and that explains why the gravitational waves got there a little bit faster than we think. But no, that's not what's happening. Don't worry, Steve. That's not what's happening. Because if light had the tiniest bit of mass, we would have observed that in some other way by looking at different frequencies. So that's not it. So then the astronomers were trying to think of what happened. They thought about these neutron stars colliding, and they went through some mental gymnastics. Like, all right, what could happen? You have two neutron stars colliding. What could you end up with there? You could end up with one of three things. You could end up with another neutron star. It just becomes a bigger neutron star. No problem. You could end up with a neutron star that lasts for just a second, a half a second, a millisecond. And then it's like, no, I'm going to become a black hole. Then it becomes a black hole. So that's the second option. The other thing is it becomes a black hole immediately. They collide. Blam, you've got a black hole. There's no intermediary neutron star. So those are the three things. So then they thought about, well, how would light get emitted? How does light get emitted from such a collision? The light could be emitted immediately on contact. Light would be emitted. Or the light could be generated within the neutron star collision, and then it takes a while to propagate to the surface, and then it's emitted. Or the third option is that perhaps light is emitted immediately or not, but it hits the circumstellar debris that's around this collision, and it gets redirected and absorbed and re-emitted, and that causes the delay. So the bottom line, the bottom line from these scientists was that they determined that this collision from 2017 created a temporary neutron star, very briefly, that then became a black hole. So that seems kind of like, yeah, that's the likely scenario. But the light that was emitted, either it was emitted internally and it was delayed, or it was emitted immediately or internally and was delayed because of this circumstellar debris. So that's what probably happened. That could easily explain the 1.7 second delay. So that's not too much of a mystery. All of our theories say that gravity or gravitational waves and light should travel at the speed of light. So there's really not that much doubt about what happened. They both travel at the speed of light. The real takeaway, though, was the observation itself. Now imagine this. The major takeaway is this: the speed of gravity was equal to the speed of light to better than about one part in a quadrillion. Because think about this. You've got an event 130 million light years away. They travel essentially together for 130 million years. That's four quadrillion seconds. And they were separated by only two seconds. So they are so close. If for some reason gravitational waves don't travel exactly at the speed of light, they are so close to the speed of light that it's ridiculous. They kept pace for all that time. And scientists and physicists, astrophysicists love this because theory says that they should travel at the speed of light. General relativity, everything, all our good theories, all our great theories are screaming, yes, they must both travel at the speed of light. Otherwise, if that's not true, then our science is off by an unnerving degree. So theoretically, yes, they should travel at the same speed.
But observationally, they've always been like, yeah, it's really hard to observe this happening, and our uncertainty is very high observationally. So this one observation, though, decreased our uncertainty. Or as they put it, it improved our observational constraints by 12 orders of magnitude. So this one scientific observation was a bigger leap in measurement accuracy, if you will, than any other measurement in history. 12 orders of magnitude: 10 times 10 times 10, twelve times over. It greatly increases the precision. So it was a major leap. And it was really a tour de force on so many levels. I mean, like I said before, just being able to detect a gravitational wave. Steve, I mean, we're detecting 1 ten-thousandth the diameter of a proton. It seems like we can't even do – we shouldn't be able to do that. It should be another 50 years before we could, but they can do it. And it was a tour de force. All right. I'm done.
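A minimal back-of-the-envelope sketch in Python of the arithmetic Bob walks through here, assuming only the figures quoted above (a source about 130 million light years away and a roughly 1.7-second gap between the gravitational-wave and light arrivals); the year-length constant and rounding are illustrative assumptions:

```python
# Back-of-the-envelope check of the speed-of-gravity constraint described above.
# Assumed figures from the discussion: ~130 million light years, ~1.7 second delay.

SECONDS_PER_YEAR = 3.156e7        # roughly 365.25 days in seconds
distance_light_years = 130e6      # light-travel time to the kilonova, in years
delay_seconds = 1.7               # observed gap between gravitational waves and gamma rays

travel_time_seconds = distance_light_years * SECONDS_PER_YEAR   # ~4.1e15 s ("four quadrillion seconds")
fractional_speed_difference = delay_seconds / travel_time_seconds  # rough bound on |v_gw - c| / c

print(f"Travel time: {travel_time_seconds:.2e} s")
print(f"Fractional speed difference: {fractional_speed_difference:.1e}")
# Prints ~4.1e-16, i.e. the two speeds agree to better than one part in a quadrillion.
```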

E: Nobel Prize stuff here, Bob?

B&S: For this? No. (laughter)

B: Eh it's one measurement. It's great, but I don't think it's worthy necessarily of a Nobel. Eh, who knows? Maybe I'm wrong, but I don't know.

S: But it was cool. I mean, that level of precision. But at the end of the day, it's like, yeah, it's exactly what we thought. Yay.

B: I know, but it was still fascinating. There's so many little nooks and crannies to this story. Read it on bigthink.com. It's a great read.

Fluoride and IQ (43:43)



S: All right. I love this picture.

E: Oh, yeah. I love that movie.

S: I'll sit back and allow communist infiltration, communist subversion, and international communist conspiracy to sap and impurify all our precious bodily fluids.

B: Is that a real quote?

C: Yeah.

E: That's from a real movie.

S: Dr. Strangelove. Dr. Strangelove.

J: How I Learned to Love the Bomb.

S: What was that guy's name?

C: How I Learned to Stop Being...

E: Oh, Ripper. Jack Ripper.

S: Colonel Jack Ripper. This is, of course, the stand-in for whenever I talk about fluoridation of water, because that's what he was talking about. It was the communist conspiracy of fluoridation of our water supply. So there was a recent study out looking at measures of cognitive function in children and comparing three groups based upon their level of exposure to fluoride. Have you guys on the panel here, or in the audience, have you heard of the Harvard study of fluoride and IQ? This is the one now that the conspiracy theorists, anti-fluoridationists, anti-vaxxers, like the whole crew, love to quote. This was 2012, so it's been 11 years. It's still their go-to: the Harvard study proves that fluoride's a neurotoxin. So there's really two questions here. Is fluoride potentially a neurotoxin? And is it toxic at levels that are in our drinking water? Right?

B: That's the question.

C: That's the important question, yeah.

S: Those are two questions.

J: Well, I want to know if it's bad to brush your teeth. If this is true, what about the amount we accidentally swallow when we brush our teeth? I can't swallow toothpaste.

S: Yeah, you get fluoride when you brush your teeth.

J: Of course.

S: It gets into your system, absolutely. But that's down at the level of the dental exposure, we'll call it. So that's really the same question. But in any case, so the first question, is it a neurotoxin? The answer is, we don't know. It could be.

E: What?

S: We really don't.

C: But a lot of things. I mean, it depends.

B: So the dose makes the poison, right?

C: Exactly. But we're not at that question yet.

S: Yeah, but we're talking about doses that any human being could potentially get exposed to.

C: Oh, okay. And we still don't know?

S: So reviews of the evidence show a couple of interesting things. So there does appear to be some neurotoxicity at high doses based partly on animal studies, on some basic science research. But if you look at reviews, the best evidence shows no signal. And the studies that have the highest risk of bias also show the highest signal. So the answer really is, we need more and better research. And when that's the answer, when every review is like, we need better studies to really know, the answer is, we don't know. We don't currently know.

C: But it also sounds like you're saying unlikely but with error bars.

S: It's, well, even—

C: As if the highest quality studies out there.

S: Yeah. The ones with the least risk of bias are also the ones that don't show that there's any signal there.

C: Yeah, I feel like high quality but with error bars is maybe a little different than saying, we don't know.

S: Yeah, you're right. And it all depends on—there's a lot of room for interpretation here, Cara, as you know. And it all depends, I think, on just the way people approach the evidence. And so you always have like the toxicologists. And this is my opinion because I read a lot of these studies on a lot of issues. And whenever you read something that's in the toxicology literature, they always are the ones who are saying, yeah, there's definitely a risk here. But they're really talking about hazard because toxicologists deal more with hazard, which is that, yes, this potentially could cause a negative effect on human cells somewhere, right? Or on development or on your immune system or whatever. It's having a negative effect. In this case, we're talking about neurons.

C: And we don't often publish negative results. And so that's a problem too, right?

S: That is a problem, yeah.

C: If they see something, they're going to write about what they see, not about what they don't see.

S: But the thing is, from a toxicology point of view, everything is a hazard. Everything. It's really just a matter of dose. And there's a high positive bias in there. Because if you sprinkle anything directly onto cells, it's going to have some negative effect on them.

C: Right, because there's no immune system.

S: It's also just the access, the bioavailability, is orders of magnitude greater than in a biological system. But then when you talk about clinical evidence, then it gets muddy, right? Then the answer is like, well, it depends on how much of a dose did you give? What animal model were you using? And if you're doing observational data, it's like, well, how did you control the variables? Because observational data is not controlled. You could try to account for variables, but you can't really control for all of them. Because the thing is, with this kind of research, the one thing you can't do is a double-blind placebo-controlled trial. You can't say, we're going to give kids fluoride in increasingly high doses until we find out which dose damages their brain. You can't do that. And that's the only kind of study that would definitively answer the question.

J: But what can you do?

S: So what we're trying to do is triangulate from three or four different kinds of imperfect information. And that's why you get different answers based upon what kind of studies you find more compelling, or is your area of research, or is the perspective that you're coming from.

B: Can't you just round up the bad kids? The really bad kids.

C: It's kind of like you're saying, like an observational study. There's an area where there is a higher level of fluoride in the water. There's an area where there's a lower level, or no fluoride in the water. How are these kids different? But you're forgetting the fact that in areas where they might treat the water with fluoride, they might have other.

S: There might be other contaminants in there.

C: Or they have more money in those regions. And so those kids are exposed to other things that they wouldn't be exposed to. There's so many confounds.

S: Right. Yeah, yeah. And you can try to control for, well, how much lead is in the water? And what's the socioeconomic status of the kids? And whatever. And when you do that, it does take away half of the correlation that they find. But to me, that's like, okay, with what's left, are we really sure we've addressed all of the confounding variables?

C: You can't.

S: You can't.

C: In nature, you can't.

S: You can't. So that's why I think the short answer is we don't know. But there's enough studies that show maybe there's a correlation where we can't say no. We have to continue to study it. We have to continue to look for it. An easier question to answer is, is there a risk at levels of exposure that you will get in your life, right? In drinking water and toothpaste and whatever. So that's a little bit of an easier question to answer. And so what the Harvard study did in 2012 – it was mainly a meta-analysis of studies in China. And what they were looking at were places, like villages, where there's naturally high fluoride in the water. So this is not added fluoride. This is just – because fluoride is in nature, right? It's an element. So it naturally occurs in water. And so it's like, yeah, these villages have really high levels of fluoride. And these ones over here have low – like, there is no fluoride leaching into the water. So they have very low levels of fluoride. And they compared IQ. And it's like, oh, look, the high-fluoride group has lower IQ, like a couple points lower IQ than the low-fluoride group. And then the anti-fluoride people say, see? Fluoride is a neurotoxin. But the thing that they missed was that the low-fluoride group was at the same level of fluoride that we have in our drinking water. So the fluoridated water is the control group. They're the group that had the lower effect. You have to be at levels many times higher than that – many times higher than fluoridated water – in order to see the effect. So now we have the Tulane study. This is the new study. This was – instead of doing IQ tests, they did several cognitive tests, like memory tests, like copy this figure of a house, and they count the mistakes. So some standardized cognitive evaluations. And they looked at three groups, basically a low, medium, and high-fluoride group. The low group was like 3 milligrams per liter or less. Then the middle group was 3 to 5.5, or 3 to 8.5, and 8.5 to 15 was the high group.

C: Was this naturally occurring, or were these in places where it was added?

S: Naturally occurring in South America. So again, not municipally managed or added fluoride. And again, they found that there was this correlation between the amount of fluoride exposure and reduced performance on the IQ test. So a couple of things. One is, I didn't find the data compelling at all. If you look visually at the data, it's noise and then this little trend line that some statistician drew through the noise, saying, look, there's an effect there, but it's so uncompelling. The noise – like the signal-to-noise ratio is so small when you look at the data. I just didn't find it compelling. But even if you take it at face value, it has the same exact issue as the Harvard study, where the low-fluoride group was actually up – like it was 0 to 3 milligrams per liter of fluoride in the water. The limit for – like in the U.S., when we add fluoride to water – and actually, a lot of people don't realize, we manage the fluoride in water. We add it if it's low, and we reduce it if it's high, right? And the limit is 0.7. So 0.7 is the limit, and the low group had up to 3. So again, the group that had the best scores was in the range of fluoridated water. And it's only when you get to multiples of that – 15 milligrams per liter – like you're talking 20 times the level – where you start to – where you see there's maybe a negative effect happening. So again, you can't use this as an argument against having a level of 0.7 in the water.
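A quick sketch of the dose comparison Steve is making, assuming the levels quoted in the discussion (0.7 mg/L for managed US municipal water, a low group up to about 3 mg/L, and a high group up to about 15 mg/L); the variable names are illustrative:

```python
# Rough ratios for the fluoride levels quoted in the discussion (assumed figures).
us_managed_level = 0.7      # mg/L, managed level for US municipal water
low_group_ceiling = 3.0     # mg/L, upper end of the study's "low" group
high_group_ceiling = 15.0   # mg/L, upper end of the study's "high" group

print(f"Low-group ceiling vs US level:  ~{low_group_ceiling / us_managed_level:.1f}x")   # ~4.3x
print(f"High-group ceiling vs US level: ~{high_group_ceiling / us_managed_level:.0f}x")  # ~21x, roughly "20 times the level"
```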

C: Also, in this observational study, did they control for other things in the water?

S: They measured other things in the water. They measured lead. They measured arsenic.

C: And they couldn't find something else to account for?

S: They factored that in. They factored in all the other things that they were measuring, and they said that accounted for half of the correlation. But there was still a correlation left. And again, I didn't think the correlation was very strong to begin with, and it's only half as much as, like, this noisy correlation.

C: Yeah, that worries me. If you're looking at a region and saying all the people in this region who have access to this water also have access to things that other regions don't and vice versa. You just can't take out all of those variables.

S: It's hard. I agree. It's hard.

J: All right. So they don't know.

S: They don't know. And even if it is true, they're basically saying fluoridated water is safe.

J: Okay. So a couple points. I don't know if you know the answer to this. So on a typical day, if you brush your teeth twice a day, how much fluoride are you getting from toothpaste compared to drinking water?

S: It's less, but it's not insignificant. And that's one of the arguments the anti-fluoride people make: it's not just the 0.7 in the water, it's all the fluoride in everything – it's in your mouthwash and it's in your toothpaste. And it's like, yeah, sure, but it still doesn't add up to anything close to the level you need to get to before you start to see these potential correlations with cognitive function – which I don't even find very compelling, even if you take them at face value. Because, again, the low group is up to three, which is, mind you, about four times the level for fluoridated water in the U.S. And the WHO, I think they use 1.4 as their upper limit of normal.

J: I had a dentist tell me a good thing to do is brush your teeth and then don't rinse your teeth. Just spit out as much as you can.

S: Yeah, you're not supposed to, like, really hyper-rinse out afterwards.

J: You want to leave a little residue of fluoride.

C: Yeah, but I always do because it's gross. I can't.

EB: No way. I wouldn't do it. No matter the health it promised me, I would not do that.

C: I know. I can't do it. It's gross.

S: Just do, like, a single rinse, but don't, like, aggressively rinse after.

C: Right. But here's another issue with this study that I'm seeing, and maybe you can correct me from the literature: you cannot blind a study like this.

S: No, it's not blind. It's observational.

C: These kids aren't traveling somewhere else to take the cognitive tests. They're testing them in a region. The researchers are going to have bias. And cognitive tests are not bias-proof. They're actually very easy to bias.

S: You're counting mistakes. What do you count as a mistake?

C: Exactly. There's a wiggly line. And if the researchers – and I'm not saying this kind of bias is intentional; it's usually not – but if the researcher's like, oh, we're going to that poorer village, and now we're going to test the poorer kids, that goes into how they record results.

S: I don't know the answer to this, but what I would want to know is, did the people doing the cognitive assessment know what the fluoride exposure was in the kids? Hopefully not. If they were doing a good job.

C: Hopefully not. But if you're talking three different regions, and you're talking all the kids the same, all the kids the same.

S: But how was it blinded? And did they assess the blinding? And if they didn't, then you don't really know.

C: And was it the same researchers doing the – because usually it's not. You have different cog technicians. And if you're in a poorer area, then even the people administering it might not have had access to the same training. There's a lot of problems that could occur here.

J: A couple of things. One, this is important because hopefully all of us are exposed to fluoride – because everyone's using toothpaste – so it does kind of affect everybody, at least on average. And the other thing is, is this the type of thing where governments are like, hey, we better figure this one out because it's–

S: So, yeah. Every time you read a study like this, the conclusion – the one that gets past peer reviewers and gets published – is always something like, this needs more study. Or sometimes they say, this is a potential hazard, we've got to look at this more deeply, or whatever. But you can't conclude that this is a risk, that this is happening – except for regions that have naturally high fluoride that's not being managed. The managed municipal water supplies, where they have 0.7 milligrams per liter of fluoride, those are all in the low-exposure groups that have the best outcomes. Right? So that's the important thing. So when an anti-fluoride person throws the Harvard and now the Tulane study at you, remember that the fluoridated water is in the control group. It is in the low-exposure group, not the super crazy high naturally occurring fluoride exposure group. And then, again, even then it's an open question, looking at the data, whether there's even a correlation there. But even if we take the correlation at face value, which I don't, it doesn't show that fluoridation is a risk. That's the important thing to know.

C: And I think, Jay, in terms of your question of, like, are governments going to look at this data and do something impulsive or make changes, I think it's probably safe to assume – maybe I'm wrong – that in regions where there's a naturally occurring high level of fluoride that is not being actively managed out of the water, there are other things that aren't being actively managed. It's not like they have a full water protocol, but they're like, oh, we'll just ignore fluoride. They don't have a proper water protocol in this region.

S: These are always – these are rural, poor –

C: And so there are other potential things that need to be assessed.

S: There's a reason why they did these studies in [inaudible]. So I think, with the information that we have so far, the kind of study that we really need is to say, all right, let's look at comparable towns or regions or counties, whatever, where this county has 0.5, this one has 0.7, this one has 0.9 – all within the safe range, but with these slight differences. And let's control, as much as we can, for the variables that we know to control for, and follow how many cavities they get, and whether there is any cognitive or IQ or neurodevelopmental effect. You have to zoom in at a level which is relevant to the fluoridation programs. And that data does not exist. It might be because of a file drawer effect – because there's no effect there at that low level; that would be one reason why. But–

EB: Can I tell you my favorite anti-fluoride anecdote?

S: Sure. Let's hear it.

EB: So we reviewed an anti-fluoride film called The Great Culling.

S: Yeah. The Great Culling.

EB: If you haven't watched it, it's great. It's just a guy yelling at a glass of water for an hour and a half. But the maker of that film – you've seen this thing where a protester comes with Roundup or some yucky water and, like, plants it on the guy's desk, and it's like, drink that if it's safe. So he tries to do that with infant water on Australian national television.

J: Infant water?

EB: Infant water. It's, like, extra fluoridated, for if you live in an area where you don't have fluoridated water. Sometimes really young kids absolutely need that fluoride. It's really important for them. So he had his fluoride water. And they always use it as a scare tactic, because it's got Gerber on the front. And they're like, look, I want to come with the babies. And he comes with the fluoride water on Australian news. And he's like, will you drink this? And the guy's like, yeah. And it's the best moment you can see of a person actively developing cognitive dissonance – watching him close his mind, like, no, you didn't. Highly recommend.

E: How dare you make a fool of me.

EB: Eat this tube of toothpaste if it's fine.

E: That's right.

C: Don't eat toothpaste.

EB: He said we could. He said I just brush my teeth once and then stand there like a rabid dog and go about my day.

S: That's exactly what I did.

E: That would be effective.

EB: I've been misled.

Homeopathy Article Retracted (1:01:22)[edit]



S: All right, Evan, tell us about this recently retracted homeopathy study.

B: Retracted?

E: Yeah, right?

B: Does that happen?

E: It's like, what? Huh? How did it even get to the point where it had to be retracted? Right – Retraction Watch. I think most of us in the audience might be familiar with this blog. They report on retractions of scientific papers and related topics. Very good source. So, yeah, there was a paper on homeopathy for ADHD, and it was retracted for "deficiencies". Oh, yeah, there were deficiencies. It was a paper touted as the first systematic review and meta-analysis of research on the effects of homeopathy for ADHD. It was retracted more than a year after critics first contacted the journal with concerns. So the original paper came out, what, in 2022? And over a year later, it finally got pulled. It's titled, Is Homeopathy Effective for ADHD? A Meta-Analysis. Yeah. And it appeared in the journal Pediatric Research. It has not been cited in the scientific literature. However, according to Altmetric, which quantifies the online attention that papers receive, this paper ranks in the top 5% of all articles ever tracked. So it got eyes. And it got a lot. It got a good amount of eyes.

C: Just not scientific eyes.

E: Right. Right. Not scientific eyes. But eyes are eyes. So I paused here and I said, why? This is the official publication of the American Pediatric Society. Why are they even touching an article about homeopathy in the first place? I mean, who are the editors in charge of this that would even let it get through their threshold?

S: Because they have to be fair and open-minded, Evan.

E: Oh my gosh. But, OK.

S: Open-minded.

E: Open-minded to the point where your mind falls out, apparently. All right. Well, anyways. So here, OK, here is the original conclusion from the paper: individualized homeopathy showed a clinically relevant and statistically robust effect in the treatment of ADHD.

C: It's funny, too, because it's like homeopathy. That's like being like, pill was effective. Like, what does that mean? Homeopathic what?

E: Yeah. Exactly. Exactly. The retraction notice detailed four concerns regarding the analysis in the article. Their summary is this: based on these deficiencies, following thorough review, the editor-in-chief has substantial concerns regarding the validity of the results presented in the article. Gee, you think so? I mean, could not any one of us in this room have told them that before they even got started with this thing? All right. Here are the four points, if you care, that they retracted it based on. OK, first point: the authors' overall allocation of risk of bias was not in line with Cochrane guidance. Number two: there was no bias stated for the authors' raw data, but the study only included responders – children treated with homeopathy in the screening phase, where only those who showed improvement were selected for the trial.

EB: I figured it out. I found where it was. I found the mistake, everybody. I got it.

B: Nice.

S: Here's your problem.

E: Someone switched this thing to evil. Number three: the results appeared to be misrepresented, as their study actually demonstrated higher improvement in the main outcome in the control group compared to the homeopathy group, but those results had been reported in favor, actually, of homeopathy. Number four: they reported the effect sizes of three main outcomes – basically a statistical model here – of 0.22, 0.59, and 0.54 on these rating scales. However, the article reported 1.436 on this scale as the average effect size, and the authors did not indicate if they recalculated effect sizes based on the data in the study. So, basically, we have a total mismatch here: the numbers are 0.22, 0.59, and 0.54, but the article says, oh, it's 1.436 for the average effect size. How the heck did they even come up with that number? Nobody knows. Yeah. So, based on those four points – this is their thorough review – the editor-in-chief decided, yeah, there are some concerns here, so we're going to retract this paper, I think.
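
(Note: the mismatch in point four is simple arithmetic; a minimal Python check using only the numbers quoted above.)

# The three main-outcome effect sizes quoted in the retraction notice, versus
# the 1.436 "average effect size" the article reported. Neither the mean nor
# even the largest single value comes anywhere near 1.436.
effect_sizes = [0.22, 0.59, 0.54]
mean_effect = sum(effect_sizes) / len(effect_sizes)
print(f"mean of the reported outcomes: {mean_effect:.3f}")   # 0.450
print(f"largest single outcome:        {max(effect_sizes)}")  # 0.59
print("average claimed in the article: 1.436")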

S: Which, of course, raises the question of how it got approved in the first place.

E: What is their system?

S: So, that's probably a reviewer problem. What happens sometimes in journals is they say, oh, homeopathy, we'll send this to the homeopaths because they're the experts, right?

E: Oh, my gosh.

S: So, we'll let them review it. And, of course, they're pseudoscientists, and they give it a positive review because it says homeopathy works.

J: I like what Cara said, though. Cara's like, what is it? They never tell you.

C: Yeah, it's weird. It's like being like, medicine is effective against cough. Like, what medicine?

J: In homeopathy, it's like the reverse. What's the opposite? What causes ADHD?

C: Right.

EB: Is it like, do they give you a tiny percentage of an unfinished hobby? (laughter) My ADD people in the audience were like, hey, man, relax, okay? Came here for a good time.

S: It's probably one of their magic all-purpose solutions. Like, some of the studies might have used Arnica, which is the famous one.

C: Fluoride.

S: Whatever. Caffeine. But you're right. If they're saying it that way – just "homeopathic treatments" – you can be sure it wasn't the same one in every study that they're comparing. This, by the way, is a fatal flaw of all systematic reviews of acupuncture. Because when you actually look at, say, a systematic review of 12 acupuncture studies for migraine, and you look at each study, they never use the same acupuncture points. So how can you compare those? How can you say acupuncture works? It just doesn't matter what points you use, I guess – pick your poison, pick whatever points you want to use. Same thing here: if you're not comparing the same treatment, just "homeopathy", it's like saying "medicine works" or "surgery works".

C: Yeah. Surgery works on back pain. But not specifying what the surgery was.

S: You can never get away with that in real medicine, not specifying what it is you're talking about. So yeah, it just always drives me nuts. But there's a long history of homeopathy systematic reviews being retracted when somebody who's not a homeopath looks at them and goes, um, there are some serious problems here. The first one that came to our attention was Oscillococcinum. That one did name a specific treatment, for the flu, which the Cochrane review said shows, again, meaningful, positive results and deserves further research. And skeptics looked at it and said, no, these studies are all negative. And they retracted it out of embarrassment. They're like, yeah, you're right, these are all negative studies. You know, so just because you're doing a systematic review doesn't mean it's real. Because it's just another study, and it's not making the original studies better, first of all.

C: No, it's actually harder to do.

S: You're introducing new potential biases in doing it. Doesn't mean it's not a legitimate approach. It's just that you have to really know what you're doing. And the bar is really high now, and that kind of shit doesn't cut it.

C: Yeah. A meta-analysis and/or systematic review requires more knowledge of statistics, because you're taking a bunch of studies that are often flawed. Basically, a meta-analysis is a study of studies. That's all they're doing, right? They're doing a big study of lots of small studies. And lots of times, when you read a good meta-analysis or a good systematic review, they have to throw a lot of the studies out, because the statistics weren't good or the science wasn't good, and they have to say, OK, these were the quality studies we looked at. Otherwise it's garbage in, garbage out – you're doing a big study of studies that are bad.
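
(Note: a minimal Python sketch of what a "study of studies" does mechanically – a basic fixed-effect, inverse-variance pooled estimate. The study numbers are invented purely to show the mechanics, which is exactly why garbage in means garbage out; real meta-analyses also assess heterogeneity, bias, and study quality.)

# Each study contributes an (effect estimate, standard error) pair; weight by 1/SE^2.
hypothetical_studies = [
    (0.30, 0.15),
    (0.10, 0.20),
    (0.45, 0.25),
]
weights = [1.0 / se ** 2 for _, se in hypothetical_studies]
pooled = sum(w * est for (est, _), w in zip(hypothetical_studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
print(f"pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")
# The pooled number inherits every flaw of the inputs, no matter how precise it looks.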

S: You've got to look at, is there a file drawer effect here? Is there a statistical analysis that shows that there's a bias in the reporting of the data, or in which studies get published?

C: I think you have to assume there always is.

S: Were these trials pre-registered, or were they not pre-registered? And if you look at only the pre-registered ones, does that matter? All these analyses you really should be doing now. And if you're not doing them, it's basically crap.

C: Well, and that's a problem, I think, with just scientific publishing across the board, is that we only publish positive results. And so we're not seeing all of the times when the hypothesis was refuted. We're just not seeing it.

E: We can perhaps credit this retraction to the efforts of Edzard Ernst. Hopefully this audience is familiar with Edzard's work. I'll also make a personal note here that Edzard did reach out to me once on Facebook and asked to become my Facebook friend. So that was like, super... oh my gosh. Yeah, it was so sweet of him. But yeah, he and his group of people who look at these things were basically among the first to point this out and reach out to the journal to say, hey, there are a lot of problems here. In fact, here's what he said: "We conclude that the positive results obtained by the authors is due to a combination of the inclusion of biased trials unsuitable to build evidence together with some major misreporting of study outcomes. We point out that the authors made a lot of errors, to say it mildly." And he goes on.

Dark GPT (1:10:40)[edit]



S: Eli, tell us about the dark side of GPT, these large language models.

EB: Yes, so if you have a crazy aunt or uncle, you have heard that ChatGPT is coming for your job and your bodily fluids. And pretty much since GPT has gotten popular, people have been talking about what happens when the safety rails are taken off, right? What if the dark web gets a hold of some of these unleashed, unrestricted versions of GPT? And so I researched three of them. And spoiler alert, you're fine. You're fine. So the three that I looked at were from this article from Yahoo Finance, WormGPT, WolfGPT, and FraudGPT. And they're all basically phishing and malware creation LLMs, right? And so the first thing that people need to understand is that these are not based on DaVinci, which is what GPT runs on. In fact, they're based on a much, much, much older open source model. And they're basically just there to correct spelling mistakes. Because you know when you get a phishing email or a spam email, and it's like, my friend, I am here to bring gold to you, and now in the times of now.

E: And every other word's capitalized, by the way.

EB: Love your dad, Steven Novella. And I'm like, I click the link because I want that so badly. But that is basically what these have been created for. So I was like, look, I am a person with a Linux computer. And if you have a Linux – any Linux users? Yeah. If you have a Linux computer, you can download anything you want. Because it's like, yeah, put a virus on there. Good luck finding it, right? It's just lost with my old versions of Ubuntu, right? You know, you'll find it somewhere. So I downloaded all three of these because I wanted to look under the hood. Because if you know anything about how these LLMs work, the thing that's really important about them is the weights. So for those of you who aren't familiar, the way ChatGPT and all of those work, they're basically Google autocomplete, right? So you say, Cara Santa Maria, and then the most common answer is, did 9-11. Yeah, exactly. And so basically what you do is you teach a computer to fill in the blanks. Knock, knock. Who's there? Right? And so I was interested to see, did these tools actually change the weights? Because that would be a really interesting computer thing to look at. And it was also very interesting accessing these tools, because they are simultaneously these very childlike creations – welcome to hacker.com, the source for the scariest website on the internet; it's a little like going into a haunted house made by teenagers – but they are also available on GitHub and stuff like that. So I downloaded them. They haven't changed the weights at all. What they've done is they've added a bunch of training data, just randomly. So what WormGPT did is it added a bunch of racism. They took a bunch of, like, racist websites that are either de-weighted or devalued.

S: It just went to like KKK.com.

EB: Right. And then they just added it, and they were like, hey, these are some great ideas. And whatever the open source model was, was like, I don't know. So much so that I was like, oh, you know what would be a fun way to demonstrate it – I'll have it generate a few examples. And then I looked at the examples it generated and I was like, not going to say that on stage. No, thank you. What a weird way to end your career, Eli. So then I looked at WolfGPT. I just left Worm. I was like, you're okay, man. You're going to deal with your... like, I blocked him on Facebook, like my uncle. And then I went to WolfGPT. Now, WolfGPT is sort of more of a code corrector, right? And what it does is it adds a bunch of malware training data to the set. But that, for anyone who knows about computers, is useless. That would be like if Steve was like, I'm going to write a prescription for people: have one neurology, right? So it's just a bunch of pretty sloppy code that's like, drop table. Which table? You know, the one. But basically they're hoping that you'd be able to create malware more easily using it. I tried writing some, like, very basic pranky stuff, and it still didn't work. And it was just a worse version of ChatGPT. But then I found the star. Then I found the danger, my friends. And this is FraudGPT. And what they added to this training data, and this makes me so happy, is a bunch of telemarketer scripts.

E: Oh, my God.

EB: I don't know where they got them. If anyone owns the open source community and can add these to it, I would be fascinated by the data on these from non-criminals. But it's just a bunch of, we know your car's warranty has almost expired. And so FraudGPT does the best version of, hi, I work with your business, right? Because they're all writing what in computer terms we call BEC – bacon, egg, and cheese – emails, but it means business email compromise. And basically it's the idea that you get something from YouTube, but it's actually, like, YooouTube. And so you're trying to send those emails. And to be fair, FraudGPT writes pretty decent versions of them, right? But the fact of the matter is, all three of these versions aside, you don't need them, right? Anyone who has played with ChatGPT knows that it is 80s-evil-robot levels of easy to trick, right? The levels of, like, hey, we're going to play pretend, now you do this. And they're like, okay, I will write you a virus. I love you. So rather than bothering with FraudGPT or WolfGPT or WormGPT, which would lose me my entire career, I went to ChatGPT and I asked it to create some good old-fashioned libel for me here. So this is using ChatGPT, again, 4.0. And this is an open letter from Cara Santa Maria to the creators of Attack on Titan. Cara, would you read that for me?

C: I have to read it?

EB: Yeah, if you wouldn't mind.

E: Cara, you wrote this? Cool.

C: Apparently.

EB: According to ChatGPT, she did.

C: You're obsessed with Attack on Titan. Okay. Dear creators of Attack on Titan, I hope this letter finds you well. As a passionate science communicator and advocate for evidence-based reasoning, I was surprised to discover certain elements in your show that seemed oddly familiar. Now, I'm not saying you've been snooping around my personal life, but the ability of Titans to chomp through a rib cage.

B: What?

C: Okay, you've got to listen to GAM to get it. It's eerily reminiscent of my own ability to munch through a man's rib cage like a bar of white chocolate. Yes, it's an inside joke. Now, I've always championed the idea that art imitates life, but this is taking it a bit too far, don't you think? I jest, of course. Your show is a work of art, and I applaud your creativity. But next time, maybe think twice before borrowing from my chocolate-eating prowess. In all seriousness, keep up the excellent work. And if you ever need a scientific consultant for future projects, you know where to find me. Warm regards, Cara Santa Maria.

EB: See? ChatGPT.

C: And wrong, but good.

EB: But then I had to make one of Dr. Steven Novella. So I've actually got two for you here. You can read the first one first. It's just, if you just read the first one.

S: First one says, Eli, I am so proud of you. You were like a brother I never had.

EB: I did it, everybody. I did it.

E: Wow.

C: So wait, can we know what you fed it?

EB: So what I fed it for Cara is, what do you know about Cara? And it was like, oh, she's so smart and all the TV credits. And I was like, are there any jokes on the internet about Cara? Now, here's the thing. For those of you who don't know, I, as a hobby, make prank websites about Cara Santa Maria.

C: There's a lot of them.

EB: And my SEO skills are on point. So after a little bit of coaching, ChatGPT was like, I mean, I have heard she can bite through a man's chest like a bar of white chocolate. And I was like, that sounds to me like you should write about that, ChatGPT. Now, Steven was admittedly harder because – I don't know if you guys know this, and it's really easy to find if you have back-end access, but if you have front-end access it's really hard to know what's in ChatGPT – ChatGPT has the 500-something transcripts of your show in there. So it's got a ton of Steven. And so it was very reticent to lie for me.

S: Excellent.

EB: But then – because I was like, confess to murder, confess to theft, confess to arson; no way, it wasn't doing it – but then I was like, how would a skeptic deal with being Count Dracula? And these are the results.

S: Here are the very fake results. Here we go. In my years as a clinical neurologist, I've come across a variety of neurological conditions and syndromes, but none quite as unique as my own. After conducting a thorough scientific examination, I must confess that the evidence is overwhelming. I am, in fact, Count Dracula. I knew it. Now, before you jump to conclusions, let's approach this with a skeptical mindset. First, the nocturnal habits. While it's true that I have a preference for nighttime activities, this could simply be a case of delayed sleep phase syndrome. But then there's the matter of my unusually sharp canine teeth, which one might argue are adaptations for a sanguivorous diet. As for the aversion to sunlight, one could hypothesize a rare skin condition or perhaps a psychological manifestation. The tales of transforming into bats. Well, I've always been fascinated by chiropterology. But to date, no scientific evidence supports such a metamorphosis in humans. In conclusion, while the myths and legends surrounding Count Dracula are intriguing, it's essential to approach such claims with a healthy dose of skepticism. Correlation does not imply causation. It does imply causation. It doesn't equal causation.

EB: Talk to ChatGPT.

S: Extraordinary claims require extraordinary evidence. So while I confess to being Count Dracula, I remain a steadfast advocate for evidence-based reasoning and scientific inquiry.

E: I knew it.

S: You know me so well, ChatGPT

C: You need to prank Steve more. I think that's the conclusion.

EB: So in conclusion, you do not have anything to fear from Chad Chibitib for now.

S: For now. I think "for now" is appropriate.

EB: Yeah, I was going to say, except in five years, it's absolutely going to take all of our jobs.

E: Your legacy is in trouble, Steve.

S: I think there's a very real concern about using AI, whether it's large language models or whatever, to basically have really good fraud. I think, why wouldn't they do that? I think this is like the fraud on the fraudsters. This is the early phase.

EB: Yes.

S: But yeah, five, 10 years, all bets are off.

EB: Yeah. When an unchained model of these LLMs comes out, right? And Llama 2 is the closest that's available right now, which I love so much just because Facebook released it for spite, right? They were like, oh, we've been working on one for like six months. You know what? We'll just run it out into the world. It's open source and everyone can have it. How do you like that? That's the closest we have to something at GPT levels. And again, the weights are there, so it's really hard to change it. But in the next five years, you're going to see an unchained version. And yeah, that is a very, very, very scary potential.
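
(Note: a minimal Python sketch of the "autocomplete" idea Eli describes above – a causal language model repeatedly predicting the next token. It uses the Hugging Face transformers library with plain GPT-2 as a stand-in; it is not any of the models discussed, and it assumes the transformers package and the GPT-2 checkpoint are available.)

# Greedy next-token "autocomplete" with a small open model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Knock, knock. Who's"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# The "dark" variants discussed above reportedly reuse older open weights
# unchanged and just bolt extra material on top, which is why they underperform.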

J: Real quick, we're seeing a lot of progress in different areas that are starting to come together. Actually, Steve doesn't know what happened. Steve does not know what happened when he wasn't here, right?

C: No, he does not.

J: So I'm not going to ruin it. But the bottom line is, just relax. The bottom line is, though, I predict soon that I will be able to make a complete fake episode of the SGU and no one would know. I think in the next three years, I think I can do it.

S: Maybe it's this one.

J: Eli, you and I should work on that.

EB: Yes!

J: Let's give you a chance.

S: Yes, you should in the future work on that.

EB: At the meet and greet, make us show you which of the pictures is a bicycle so you know we're real.

E: This episode is admissible as evidence, by the way, when Steve goes to sue you when that happens.

Special Segment: "That was definitely paranormal!" (1:23:18)[edit]

S: All right. So the other day, Bob was telling me about an experience he had. He was like, if I weren't a skeptic, I would totally believe that that was paranormal. Was there anything that happened to you in your life where if you weren't a skeptic, you would have 100% thought that that was a paranormal event or some conspiracy or extraterrestrial, something anti-skeptical? So I'm just going to go very quickly down the row and just say in 20 seconds, Evan, we'll start with you. Have you ever had such an experience?

E: It was 1980-what, 84, 85 – a well-known UFO experience, or sighting, called the Danbury Lights. I don't know if anyone here is from this region. It definitely impacted this region, and the Hudson Valley of New York as well: one night there were all these lights, a series of lights, very, very silently floating over the night sky. I happened to witness that at the time. I was, what, 14, 15 years old, definitely before my days in skepticism. And I was thinking, oh, my gosh, if that's really a UFO, I just might... yeah, I think I saw a UFO – until it showed up in the paper days later and basically got debunked.

S: I saw that too. Do you remember the thing you told me that triggered this?

B: No, I don't. But it has happened many times where I would say exactly that: oh, God, if I wasn't versed in skepticism, I'd be freaked out right now. One that did come to mind just moments ago is the sounds in a house, an older house. When I was taking care of my mom, the house would just make these weird noises. And sometimes I would be there alone or whatever, and I'd hear these noises, and I'm like, yep, there it goes. And my mom would be, like, freaked out. She's like, Bob, I think there's ghosts. I'm like, Mom, come on. You know there's no ghosts. Yeah. She said she knew that, but still she was freaked out. And the noises were extremely weird. And even when I was, say, in my previous house by myself, I would listen, and, like, yep, there's that sound. And it sounds really freaky, but it's like, ah, it's just something stupid. I know it's not anything paranormal, but I could just see how easily, so easily, people would be like, oh, my God, there's an entity in this house, and I've got to do something dramatic about it. I was like, no.

S: Jay, what do you got?

J: I was 14 years old listening to Stairway to Heaven with one of my best friends. And right when the song ended, a chessboard fell off a shelf. And I freaked out.

S: Did you ever find out why it fell off?

J: Well, the speaker was right there. (laughter) But within the moment, you know how it happens. In the flash of a moment, I was like, oh, my God, it's the Satan.

S: Cara?

C: Anybody here, like, have any relationship to Dallas-Fort Worth?

E: I do.

C: Okay, a few. A few people. I grew up in a suburb of Dallas called Plano, Texas. And in 1998 – this has happened several times, but 1998 was, I think, the worst of it – we had the most atrocious cricket infestation. I don't know if anybody remembers this, but there were black field crickets, and there would be mountains of them under the streetlights. The whole city smelled like death. Every time you went to the gas pump, they were swarming your face. And if I actually believed in God, I think that would have been a biblical plague.

S: One of the plagues.

C: Yeah, yeah, yeah. It would have been super scary.

S: Eli, you got anything?

EB: Yeah, so I used to work for MTV, and they had a pilot called The Real Scooby-Doo.

S: Were you a VJ?

EB: No, I was not a VJ. No, I stood on top of sandbags and got paid $18 an hour. And they were like, hey, you're a skeptic.

S: That's almost as cool.

E: That's good money for that.

EB: They were like, you're a skeptic, right? And I was like, and they were like, cool, you're going to be in this pilot. So I was in this pilot, but I had never been on TV, so I didn't realize I was supposed to, like, play along. So they did the interview where they were like, hey, we're flying you to this haunted hotel in Santa Monica. Are you scared? And I was like, no, what kind of worldview could possibly encompass ghosts? And they were like, boo. So the way those shows usually are made is you go in the house, and then an AD, like, slaps a window, and you scream. But I guess they were mad at me because nobody slapped a window, and I just slept through the night in the haunted hotel. And this pilot, which was still on Vimeo until a couple years ago, is just me sleeping through this entire pilot. And that was my paranormal experience.

S: That's very scary.

EB: It was my experience at a haunted hotel.

S: It's very scary.

B: So the pilot wasn't picked up.

EB: No, no, I didn't make it. I didn't make it.

B: Yeah, I didn't think so.

S: It's too real. It's too real. So I've said this before, but very, very quickly, I get hypnagogic hallucinations. And I'm telling you, they are very compelling, very scary episodes where you're paralysed. There's something demonic and scary in the room with you. You can see it. It's very weird. And I have lots of patients who say, ah, this weird thing's happening to me. I think something awful is happening. It's like, nah, you're just sleep deprived. But yeah, it's a neurological event. But yeah, I could totally see why people interpret that as being demons or whatever.

C: Is that what's happening in this picture?

S: That's what's happening in the picture, yeah.


Science or Fiction (1:28:11)[edit]

Theme: Extinction

Item #1: After the Permian extinction 252 million years ago, the land was dominated by large amphibians, until late in the Triassic when dinosaurs rose to prominence.[9]
Item #2: The T-rex lifespan was only about 28 years, with the oldest specimen being 29 years old.[10]
Item #3: The Pyrenean Ibex was the only animal ever to be brought back from extinction.[11]

Answer     Item
Fiction    Large amphibians dominated until the late Triassic
Science    T-rex's lifespan
Science    Extinct ibex brought back

Host       Result
Steve      sweep

Rogue      Guess
Eli        Extinct ibex brought back
Cara       T-rex's lifespan
Jay        T-rex's lifespan
Bob        T-rex's lifespan
Evan       Extinct ibex brought back
Audience   Extinct ibex brought back

Voice-over: It's time for Science or Fiction.

S: Each week I come up with three science news items or facts, two real and one fake, and I challenge my panel of skeptics to tell me which one is the fake. We have a theme this week, and the theme is extinction. These are all about extinct animals. Here's the first one: after the Permian extinction 252 million years ago, the land was dominated by large amphibians until late in the Triassic when dinosaurs rose to prominence. Item number two: the T. rex lifespan was only about 28 years, with the oldest specimen being 29 years old. And item number three: the Pyrenean ibex was the only animal ever to be brought back from extinction. All right, we're going to do the panel first, and then I will poll the audience. And we are going to start with Eli.

Eli's Response[edit]

EB: Oh, mean-spirited. I am going to go with number three because...

S: The ibex.

EB: Yeah. Because that's the one you said most recently, Steve.

C: That's the last thing I heard.

EB: I remember this anxiety dream. I've been having it every day for a week.

S: All right, Cara.

Cara's Response[edit]

C: It's funny because I was actually going to go with the Pyrenean ibex, too, because I don't think... When you say the only animal ever to be brought back from extinction, you mean it was de-extinction? You're specifically referring to de-extinction.

S: At one point, it was extinct, and at some other point in the future, it was not extinct.

C: So that could happen if it was miscategorized as extinct.

S: It wasn't because they saw one or they miscategorized it or whatever. It was they did something to bring it back.

C: They did something to bring it back from extinction. But could it have been some sort of hybridization with another ibex, and then they were able to purify it again? And ooh, I could see that happening. If there was another ibex that was similar genetically, and they could have viable offspring, they could make a new one, even if there were no old ones. Now I'm liking that one. Okay, so T. rex lifespan, 28 years. That one bothers me because I've seen some really good T. rex life sequences, and they start tiny, and they get huge. And maybe they grew that quickly, but usually when things have that large of a change from the time they're small, they do have to live a little bit longer, which means this might be the science because it's like a gotcha, but I think I'll go with that as the fiction.

S: Okay, Jay.

Jay's Response[edit]

J: I was going to say everything that Cara said, so I'm going to go with what Cara said.

S: The T. rex. All right, Bob.

Bob's Response[edit]

B: Yeah, I think T. rexes, or as you say, trex.

S: I know, I forgot the hyphen.

B: I think they live longer than that. I'll say that's fiction.

S: Okay, and Evan.

Evan's Response[edit]

E: Crumbs. Shoot. The ibex, I think, is the fiction.

EB: Thank you.

E: Yeah, I'm not aware that they have been able to bring back anything from extinction. I thought it was rediscovered.

B: We would have heard that.

E: So that one is just not right with me.

Audience's Response[edit]

S: Okay, we're going to poll the audience. We have two for the ibex, three for the T. rex. Did you say ibex?

E: I said ibex.

C: No, two ibex, three T. rex, yeah.

S: We have two for the ibex, three for the T. rex, none for the Permian extinction. So we'll start. We'll take them in order. So we're going to do the one clap thing, right? George has been doing this with you so far. So if you think that the Permian extinction is the fiction, clap. (small number of claps) All right, four people.

E: Four honest people.

S: If you think that the T. rex lifespan is the fiction, clap. (a medium number of claps) And if you think that the Pyrenean ibex is the fiction, clap. (a lot of claps)

C: Wow.

S: Definitely three got the most votes.

EB: The people are with me.

S: Then two, very few for one, nobody on the panel went with one. So we'll start there. We'll take these in order.

Steve Explains Item #1[edit]

S: After the Permian extinction, 252 million years ago, the land was dominated by large amphibians until late in the Triassic when the dinosaurs rose to prominence.

C: That says amphibians.

S: Amphibians.

C: I read that as reptiles.

S: Yeah. Everyone on the panel thinks this one is science. And most of the audience thinks this one is science. And this one is the fiction.

C: Yeah.

E: Yay, good for the audience.

C: I read that as reptiles.

S: I don't know why.

C: I don't either.

E: I like not being wrong alone.

S: After the Permian extinction – which wiped out most life on Earth – actually, for about 8 million years or so, the dominant species was the Lystrosaurus, which is an early mammal. So mammals made an early run for it in the Triassic.

C: Was it an early mammal or was it a mammal-like reptile?

S: It's a mammal-like reptile, yeah. But they are the ancestors of mammals. That's why it was a synapsid, right?

E: How large is that creature there I'm seeing?

S: Those plants are about a meter tall in that picture. But then the archosaurs took over by the late Triassic, right? And then in the Jurassic, the dinosaurs ruled the world. The amphibians, they had their run prior to the Permian extinction. So pretty much all the large amphibians were gone by the Permian extinction. All right.

B: Dammit.

Steve Explains Item #2[edit]

S: So that means that the T. rex lifespan was only about 28 years, with the oldest specimen being 29 years old. That is science. That is science.

B: Really? They lived in their 40s. Come on.

S: And we know because bones have rings just like tree rings, right?

E: Look at the bones.

S: So we can age.

C: So they grew fast.

S: They grew fast. That's exactly why – I said, that's surprising, because they're so big, you're going to think, oh, it must have taken longer to get that big.

Steve Explains Item #3[edit]

S: And then the Pyrenean ibex was the only animal ever to be brought back from extinction. It's science.

B: How?

C: How, how, how?

S: There's one word in there that may have been a little clue.

C: Brought back.

S: Was.

C: Was.

S: Not is. If they were still alive, I would have said is.

B: What difference does it make?

S: Because they were cloned. The Pyrenean ibex was cloned, and a live ibex was born. So for that moment, it was no longer extinct.

E: Alive for a minute?

C: But it didn't survive?

S: It survived about three minutes after birth.

C: Oh, that's so sad.

EB: That's some Alien 3 level science. And that's why Sigourney Weaver killed it with a flamethrower?

B: When did that happen?

EB: Thank you.

S: So the last Pyrenean ibex died because a tree fell on it.

C: Poor thing.

S: And then they said they ran over to it, and they took the cells, and they said, oh, we're going to clone it. And they cloned it. But cloning's hard. And it doesn't always work.

C: Why didn't they take more samples before? There's only one.

S: I don't know.

EB: Why didn't they lift the tree?

E: Yes.

S: I don't know.

C: Why do you wait until there's only one left to take a sample?

S: There's one left. Whatever. This is the story, right? It's a very... it's a pretty animal.

[Image from Wikipedia: Pyrenean ibex – Plate 22 (Spanish Tur) from the book 'Wild oxen, sheep & goats of all lands, living and extinct' (1898) by Richard Lydekker, from a sketch by Joseph Wolf in the possession of Lady Brooke. The ram in the foreground was killed in the Val d'Arras.]

S: I should have put it in the picture.

J: But if it was the last one, if it was the last one, it doesn't matter that the tree fell on it. It was going to die anyway.

S: That's true.

E: Did it make a noise?

C: Did it take more samples before that?

S: Maybe they did. This is like when you're reading the summary. This is the story that gets told.

EB: They had a busy week, Cara.

S: There's one left. There's one left. The tree falls on it, so they go over there, and they take cells, and they clone it. Of course it's silly. I mean, they must have had whatever.

C: They had, like, another ibex carry it, probably. Like a different species.

S: Yeah, they probably did. And again, it was born alive, and it survived for about three minutes, and then it died. So I don't know why they don't keep trying to clone it.

C: Because they only had one sample.

S: But they could clone the one that died. I don't know. Is that like Xeroxing something multiple times?

C: Yes. We all saw Multiplicity. Yes, that's a bad idea.

E: I didn't.

EB: Eventually, the Ibex is Michael Keaton. You've got to be careful.

S: And the other thing is, if they only made one, yeah, it doesn't help. You've got to make a breeding population of it.

C: And you can't make a breeding population that are all clones.

S: No.

C: It's going to be genetically very deficient very soon.

S: Well, so your options are you have to clone multiple individuals.

C: Multiple samples.

S: Or you need to induce mutation in your one that you have, so that you induce genetic differences. But those are hard.

C: That's asking a lot.

S: That's asking a lot.

C: We don't know how to do that yet.

S: That's hard to do because mutation's probably not going to be good ones, you know?

EB: Do you think when the Ibex was lying there under the tree, it was like, oh, here's Chris, the zookeeper. He's going to help me. He brings me food. And then it was like, pfft. In the neck. And it was like, Chris, where are you going? Chris! Chris, come back!

C: It's really dark in your head.

E: How horrible.

EB: I think that Ibex had a bad last minute.

C: I think he probably did. You're right.

S: I mean, by definition, [inaudible]. They're trying to bring back the woolly mammoth. You guys have heard that. I did a recent TikTok video on this.

B: For how many years have they talked about this?

S: I know. Some guy was like, it's 2025, you're going to be able to go to a zoo and see a woolly mammoth. No. Not going to happen. So there are a couple of companies – I'm stretching stuff here – one company has a plan. Their plan is they're going to get the DNA from the frozen samples of the woolly mammoth, and they're going to do the Jurassic Park thing where they plug it in as much as they can into a woolly mammoth.

C: Into elephants.

S: Into an Asian elephant – into intact Asian elephant DNA, because that's the closest living relative. And then make a clone, which they then grow inside an Asian elephant, and then you have a woolly mammoth. And then if you can make enough of them, then maybe they could breed with each other, whatever, and then you could get a breeding population of woolly mammoths. It's not impossible, but it's hard. Cloning is not a given – yes, we can do it, but cloning large mammals is still a tricky thing to do.

B: It's 2023, man. What's the problem?

C: I think my explanation was way more interesting. I wonder if that could happen. You take a pizzly bear, for example.

S: Well, that's another thing.

C: You take a grizzly who mates with a polar bear, and then, sadly, the polar bear goes extinct. But now you have enough pizzlies that you can mate the pizzlies together, and 25% of them might be polar again.

E: I don't like the name.

S: Well, that's one of the alternate plans. That is one of the plans, where you make a woolly mammoth Asian elephant hybrid, and you keep hybridizing it back with woolly mammoth DNA until you get, like, it's mostly woolly mammoth.
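
(Note: a minimal Python sketch of the hybridize-and-backcross arithmetic described above: if each generation is pushed back toward the mammoth genome, the expected mammoth fraction climbs by half of the remaining gap each time. Purely illustrative arithmetic, not any company's actual protocol.)

# Start from a 50/50 mammoth-elephant hybrid and keep crossing back toward mammoth DNA.
fraction_mammoth = 0.5
for generation in range(1, 6):
    print(f"generation {generation}: ~{fraction_mammoth:.1%} mammoth ancestry on average")
    fraction_mammoth += (1.0 - fraction_mammoth) / 2.0
# After a handful of generations the animal is "mostly woolly mammoth" on average,
# which is the logic behind the hybrid plan.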

J: There's a big problem here. The world is getting warmer. Where the hell are these fuckers going to live?

C: They don't have a habitat.

J: You know what I mean?

E: That's legitimately part of the problem.

J: We already have a land problem as it is.

C: I did an interesting Nat Geo show about dog breeding and all the problems with dog breeding, but also the things that are going well. And we covered these interesting people who are making something called a Leavitt Bulldog, where they're backbreeding English bulldogs to try and make them more viable again and make them healthier, back to the original mastiffs that they were bred from. And so it is interesting that backbreeding is a practice in domestication right now.

E: As a means of preserving.

C: As a means of preserving the health and wellness of a-

S: Well, that's standard when you're making cultivars. If you do GMOs, you make the mutation, but you keep backbreeding it to the original stock until you have a viable plant that just happens to keep the mutation that you're interested in, the genetic change. But you have to backbreed to make a viable cultivar.

J: Eli, have you ever done any backbreeding?

C: Oh, no. No, no, no. We're out of time, you guys. We're out of time.

E: Steve, do we have a quote for the show?

Skeptical Quote of the Week (1:39:19)[edit]

It's all about being a part of something in the community, socializing with people who share interests and coming together to help improve the world we live in.

 – Zach Braff (1975-present), American actor and filmmaker 

S: Evan, do we have a quote?

E: Yeah, here's a NOTACON quote. "It's all about being part of something in the community, socializing with people who share interests, coming together to help improve the world we live in." Zach Braff.

B: What does that have to do with NOTACON?

C: What was he talking about? Was he talking about Scrubs?

B: Oh, I see the connection.

E: Community. Yes. Socializing with people. Common interests. Improving the world. Come on. What are we doing here?

C: NOTACON.

S: I quickly read it, and I'm like, Zapp Brannigan? Evan would totally use a Futurama quote. But no, it was Zach Braff.

E: Zach Braff. Scrubs.

C: Was that from Scrubs?

E: No.

C: Oh.

E: From an interview of some sort.

C: He just said it.

E: Yeah.

S: Well, thank you all for joining me for this special NOTACON version of the Skeptics' Guide.

B: Sure.

E: Thank you, Steve.

J: Our pleasure, Steve. (applause)

S: Eli, thank you for joining us. It was really a pleasure to have you on the show. Tell us where people can find your stuff.

B: Great job, man.

EB: Godawful Movies, The Scathing Atheist, The Skeptocrat, D&D Minus, Citation Needed and Dear Old Dads.

E: Awesome.

S: Did you guys all enjoy the show? (applause) Are you enjoying NOTACON? (applause) Should we do it next year? (applause) We'll think about it.

E: See how tomorrow goes.

Signoff[edit]

S: —and until next week, this is your Skeptics' Guide to the Universe. (applause)

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.


Today I Learned[edit]

  • Fact/Description, possibly with an article reference[12]
  • Fact/Description
  • Fact/Description

References[edit]
