SGU Episode 1030

April 5th 2025

A breathtaking spectacle of birds in flight, showcasing nature's incredible beauty and movement.

SGU 1029                      SGU 1031

Skeptical Rogues
S: Steven Novella

B: Bob Novella

C: Cara Santa Maria

J: Jay Novella

E: Evan Bernstein

Quote of the Week

“The fool doth think he is wise, but the wise man knows himself to be a fool”

- William Shakespeare, As You Like It

Links
Download Podcast
Show Notes
SGU Forum


Intro

Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.

S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Wednesday, April 2nd, 2025, and this is your host, Steven Novella. Joining me this week are Bob Novella...

B: Hey, everybody!

S: Cara Santa Maria...

C: Howdy.

S: Jay Novella...

J: Hey guys.

S: ...and Evan Bernstein.

E: Good evening everyone.

S: Cara, how did your test go last week?

C: It went well. I passed the second of two exams that are required to become a licensed psychologist in the state of California.

S: Achievement unlocked.

C: Yep.

E: Nice.

C: And so paperwork was delivered today. I sent it on Monday, but I got the, I guess, notification. So now it's just up to the board when they want to issue me the license.

S: Do you level up as well? Is that how it works?

C: Yes.

E: Do you get to wear a badge?

C: No. Nothing. I get to see patients without a supervisor.

S: You get to have a career.

C: Yeah. I get to have a career. And not be a fellow.

E: You get to charge for your professional services.

C: Exactly. That's nice.

J: That's when the big bucks start rolling in.

S: So unfortunately, Val Kilmer died. Was it just today that he died or yesterday?

C: It was yesterday.

B: Yesterday.

E: Well, he was suffering a long time with an illness, right? Was it cancer?

S: He had throat cancer. Yeah.

E: Oh, my gosh. He was a fraction of the person he was before. Gee whiz.

C: But did he die from throat cancer? I thought he was-

J: No, he died from pneumonia.

C: Yeah. That's what I thought. I thought that it was pneumonia. I thought that he was, kind of, on the other side of it. I don't know if he was NED, but-

E: I remember seeing a picture of him from recent... And you could not identify him as Val Kilmer. You just couldn't.

S: Yeah. But pneumonia is, like, a very common final event for a chronic illness.

C: For sure.

S: You know what I mean? That's, like, the proximate cause of death.

C: Yeah. I just wasn't sure if he had active cancer when he died or if he was no evidence of disease. Because I think the diagnosis was, like, over a decade ago.

S: Yeah. And he had chemotherapy and everything.

E: He's been sick a while.

S: Yeah.

E: And he made his own documentary, which I haven't seen yet. I don't know if anyone else saw it, called Val, back in 2021.

J: It was a good documentary.

E: You know, everything that... All the things that he'd been to that led him up right to that point and to his illness and everything. Gee whiz. I mean...

J: Yeah. It was very interesting.

E: You have a favorite movie? You have a favorite...

S: My favorite Val Kilmer role was Doc Holliday.

B: Doc Holliday.

E: Oh, sure.

B: Yes. Absolutely.

S: By far.

B: He's got some good roles. But, man, that character, I just loved him to death.

S: And there was Madmartigan, too, from Willow.

E: Yes. He was that movie. That movie gets forgotten at times. You know. Maybe the Tolkien people don't like...

S: It was a good movie.

B: Do you remember the scene when he first appears? He's supposed to be kind of, like, mean and threatening in that movie. So they gave him, like, kind of, like, classic, you know, medieval teeth where his teeth just did not look good.

E: Oh, yeah. No. Yeah.

B: And then once that scene is over... I remember he grabbed a towel and he's rubbing his teeth and then his teeth are like Hollywood teeth. It's like, okay. That was easy.

S: I remember him chewing on leather or something to get that stuff off of his teeth.

E: Oh, yeah. Root. Root Marm, I think I called it.

S: No, he's definitely a Han Solo kind of character.

B: Oh, yeah.

S: You know. Scoundrel.

B: Absolutely.

S: Unlikely hero kind of character.

E: I probably first saw him in Real Genius. That's probably my first recollection.

S: Yep.

B: Oh, yeah, man. Fun character for his mnemonic abilities.

E: And, of course, Top Gun.

B: Oh, yeah. Top Gun was...

S: He was fine in Top Gun.

E: He was fine. No, yeah. But he became a superstar, I think, with those roles.

B: He didn't even want it. He didn't even like the script when he got it. He changed his mind on that, I'm sure.

J: For Top Gun?

B: Yeah. Top Gun was Top Gun, wasn't it?

J: Yeah. He had to do it. He had to do it.

B: Yeah. He was contractually obligated.

J: He was in a contract with the studio. So he owed them a movie. He didn't like the script, but that's what... A lot of actors get sucked into roles that they don't really want because...

E: Of course.

B: How about Doors?

E: Yeah. He was an excellent Doors.

B: He killed it. He killed it in Doors. Like, wow. What a tour de force. He really became that guy.

S: Jim Morrison?

B: Yeah.

C: That guy.

B: Yeah. The name escaped me for a picosecond there. But yeah, he was Morrison. I remember... I mean, I haven't seen it in quite a long time, but I remember being very impressed. He really did a great job there.

S: Well, let's go right on with our content.

What's the Word? (04:45)

  • enantiodromia

S: Cara, you're going to start us off with a what's the word.

C: Yeah. I'm going to do this one a little bit differently because a listener, Glenn Ellert, recommended a what's the word and basically did my job for me. So Glenn, thank you. I'm going to present to everybody what you presented to me. Don't worry. I double checked everything. I might add one or two things here and there. But he basically said, I have a suggestion for a word of the day that relates to our current political situation, enantiodromia, E-N-A-N-T-I-O-D-R-O-M-I-A, the tendency of things to change into their opposites, especially as a supposed governing principle of natural cycles and of psychological development. And that "and" is important because it's kind of a weird term that has a specific usage in one sort of esoteric branch of psychology. What Glenn says is that that definition basically reflects the two uses and histories of the word, the first one being ancient and the second one being modern. So when we look at the ancient use of that word, we can break it down in the original Greek into en, meaning in, antios, meaning opposite, and then dromos, dromos referring to like a road or a path or a race, like a running track. So it literally sort of translated to like the path with the properties of oppositeness, the path that is opposite to this one or the opposite running course, for example. Now this was a word that was used historically quite a bit. We see it discussed in a lot of traditional philosophies and religions. Definitely we see it a lot in Eastern philosophy and religion. You know, you think of like yin and yang, for example. But Jung decided, and this was much more recently because Carl Jung, we all know who Carl Jung was?

E: Yes.

B: Oh yeah.

C: Just making sure. So this would have been mid-century, around 1949, he decided to introduce this very old word into a newer understanding where he talked about the idea that somehow if you are doing something consciously, you will have an unconscious principle that is the opposite or vice versa, right? So you have an unconscious drive or an unconscious idea, and then there would be a conscious opposing action. So you know, Jung is known for having had a lot of, let's say, interesting, somewhat magical ideas that did influence modern psychology to an extent. There are actually practicing Jungian psychologists today. I am definitely not one of them. But basically his idea was that if you have this tendency to think in one extreme for long enough, there is going to be an opposite position that develops unconsciously, and it will be equivalent in strength, and then eventually that will erupt into consciousness. Now again, this is based on this earlier meaning, this path of opposites, where we first saw it in Heraclitus, 6th century BCE. He described the unity of opposites, so opposite things being identical, and then also the doctrine of flux, everything being constantly changing. This is very similar to modern ideas that we have of things like equilibrium, right? Balance. Like, these are important, both scientific but also philosophical and psychological concepts. But the idea here, and I love this, so Glenn wrote to us and said, in any case, I thought you might enjoy this word because it seems to describe the current political situation in the United States, as was mentioned on your Wednesday live stream. I'm not sure if they're referring to last Wednesday. The right wing has gone from the party of free trade, balanced budgets, and strong ties with democracies to one of tariffs, deficits, and strong ties with autocracies. I see this as an experience of enantiodromia right now. And then he says, I have to disagree with Jung, however, in that I don't think this is a good thing, because Jung often talked about this idea of balance as a good thing. I dug a little bit deeper, and I found an interesting post on Reddit, where somebody kind of defined it and grappled with the idea of enantiodromia. And a lot of people contributed to that thread and came up with their own examples. One example of enantiodromia is that, like, as a person gets older, let's say a responsible husband and father who's done that his whole life might leave his family and run off into a chaotic relationship with a younger woman, or a person who's working all their life for a charity ends up stealing money from them. And then more people contributed to that, like the Catholic Church being a great example, right? Centuries of sexual shame and repression. We know what that kind of balanced out to, or the pendulum swung in the other direction. And it's interesting, because I think about this, I've never used this term, and I've never thought of it as some sort of unconscious principle. But that's something we all grapple with in our kind of psychological functioning. Very often when I'm working with patients, we will talk about the pendulum swinging, and how, let's say you get in a fight with your partner, and you say something really mean. You can't just expect the pendulum to go back to the neutral state naturally with time. Often it needs to swing equal and opposite, right? There has to be behavior that is equal and opposite to the cruelty in order to sort of restore balance and trust. 
I wonder too, politically, if we aren't seeing these kinds of swings, you know, Glenn kind of mentioned their example of it, but, you know, is the political milieu right now a direct response to the previous two administrations?

S: 100%.

C: Right? And then will we now see an equal and opposite shift farther in the opposite direction as a direct response to this? And where do we sometimes net out in that balance? Like, I guess the real question is why are sometimes the swings quite violent, and why are sometimes the swings a little bit more moderate? But yeah, I do think we often see those kinds of swings in our own personal lives. We see them, like, Steve, you could probably speak to examples of this medically, where equilibrium has to be restored. Obviously physicists can speak about this, but it's an interesting term. You know, when you look at the dictionary descriptions of enantiodromia, it often will say like archaic. I don't think many people are using this term in their regular speech.

S: It doesn't roll off the tongue.

C: Yeah, it doesn't really roll off the tongue. But maybe it's one we could bring back, or maybe we could shorten it, or I don't know.

S: Cara, do you think, what it reminds me of, I don't know if you think it's part of this, is that the psychological phenomenon where if you are, you know, trying to accomplish some goal, but you're doing it in a thoughtless way, you often achieve the opposite. For example, if you are very anxious about your partner leaving you, that will motivate you to be clingy, but the clinginess might drive them away. So you actually accomplish the exact opposite of what you're trying to do. Is that related to this, do you think?

C: I think it could be. I mean, I don't know if Jung would see it that way, but I see it that way. And almost another example of that that just came to me is sort of, is it a Navy SEAL, or like a, maybe it's a Marine Corps statement, that like, slow is smooth and smooth is fast. That very often if you're trying to do something fast, very active, like rushing, you make so many mistakes that you have to correct that you end up being too slow.

S: Right.

C: So if you just slow down and breathe.

S: It's actually even slower. Yeah, slow down and breathe.

C: Exactly.

S: Right.

C: And in a way, I think that that speaks to it as well.

S: Yeah. It's like speeding, and you get pulled over for a ticket, and they make it take so long that you lose time.

E: Gosh, they're so rude when that happens.

S: No, it's effective.

E: Yeah, it is.

S: All right. Thank you, Cara.

News Items

AI Protein Sequencing (12:37)

S: Jay, tell us about artificial intelligence and protein sequencing.

J: So do you guys remember in 2023, the Nobel Prize in chemistry was given to researchers who were using artificial intelligence to dramatically increase our understanding of protein folding. You guys remember that?

B: Yeah.

E: I have a vague recollection of that.

B: Alpha fold? Is that alpha fold?

J: I think that was it. Yeah, that was a big deal. It was a huge step forward. And there has been another one of these AI events happening-

E: Beta fold?

J: -that I will tell you about. So recently there's been, it's very significant, and they're saying it's 100% due to artificial intelligence helping. And this time it has to do with protein sequencing. So they have a new generation of AI tools that have been developed, and this was led by a model called Instanova. And it's enabling researchers to identify proteins much faster, more accurately, and without relying on the current incomplete databases that we have that are hugely lacking information to really help them get the job done in the current methodology that they have. They're saying that the implications are wide-reaching because it could be used from medical diagnostics to environmental science, archaeology, you know, there's this long list of sciences that can use this technology to help them do the work that they do. So the number one problem here is that conventional protein sequencing is very time intensive, and that means it costs a lot of money. And scientists usually start this process by, you know, they cut a protein into smaller bits, and these are called peptides. And they measure how heavy those pieces or these individual peptides are using a machine called a mass spectrometer. So after they weigh it, they try to figure out what the protein is by comparing those cut pieces to a database of known protein pieces, right? Does that make sense so far?

B: Yeah.

J: That right there, you know, it's labor intensive. It's not the easiest thing in the world to do. But there's a big problem with that current method. The master list of proteins, of course, doesn't include all the proteins that are out there. And it certainly doesn't include all of the proteins that could potentially exist. In fact, they say that up to 70% of the pieces that they find, these peptides, don't match anything in the database. And that means that most proteins can't be identified using this common method. That's a big percentage. That is, you know, a huge percentage that goes unevaluated to the point where they don't know what it is and they can't gain any information from using the current database. So researchers found that instead of searching for peptide matches, AI models like the one called InstaNovo, they predict likely peptide sequences based on patterns learned from millions of known proteins. And this seems like, really, is that it? Is that all it took? It's complicated. It sounds kind of easy, but it's complicated because, of course, they have to build this massive database of all of those patterns that exist. But, you know, they did it, which is fantastic. So this new approach accelerates the analysis, but it also opens up the possibility of identifying completely novel proteins, which is another awesome thing that they're finding that it can do. So this was developed by a team that was led by someone named Timothy Patrick Jenkins. Anybody?

S: Old man Jenkins?

J: Leeroy Jenkins.

E: That's a 20-year anniversary item right there.

C: That's probably older than that.

J: Anyone who doesn't know what that is, look up Leeroy Jenkins and you're going to have a good time. So Timothy Jenkins is at the University, the Technical University of Denmark, and InstaNovo represents this major step forward in AI-powered proteomics, right? You guys have, Steve, you must have heard of this, Cara, I'm sure you've heard of this. So proteomics is the large-scale study of proteins. What they are, how many are there, what do they do, how do they interact with each other, where are they found? You know, there's just a lot of different pieces of information that they will find and catalog and it's super helpful to have this giant database of information. So the model, this AI model uses a deep learning neural network. How many times have you heard that? And it's combined with a technique called diffusion modeling. This is the same strategy behind advanced image generators like DALL-E and protein structure predictors like AlphaFold. So the precursors have already been out there that some of this technology already existed. Diffusion models work by adding noise to input data and then they learn how to remove the noise, gradually refining the output. So this iterative process boosts accuracy, particularly when data is messy or incomplete. Jenkins' team, using InstaNovo with their database, they dubbed that InstaNovo+, which in lab tests identified 42% more peptides than all previous AI models known. And one of them you might recognize was called Casanovo. So that's a really, really significant percentage and it's proving to be very successful. So if you don't remember, Casanovo was developed in 2021 by William Noble and his team at the University of Washington. This was the first AI sequencer to use deep neural networks similar to those behind large language models like ChatGPT. So in a head-to-head test, InstaNovo+ was used to analyze a synthetic mixture of proteins from nine organisms in real-world applications. The model identified 1,225 peptides associated with the blood protein albumin and infected leg wounds. You ever hear about this, Steve?

C: Do you say albumin?

J: Albumin, yeah.

B: Like eggs?

J: They're saying that the model identified 1,225 peptides associated with the blood protein albumin.

C: Yeah, albumin.

J: And infected leg wounds, right? Does that make any sense to you?

S: Well, albumin is like your most basic blood protein, right? Just have a lot of albumin in your blood and that is responsible for the osmotic pressure of the blood. But that basically keeps fluid in the blood, water in the blood.

J: So the model identified 1,225 peptides associated with the blood protein in an infected leg wound. So this is 10 times more than conventional methods. And 254 of those peptides were previously undocumented. And the AI also detected 52 bacterial proteins from the same sample showing its capacity to parse these complex mixtures, meaning that you can get a sample of blood and it has its own proteins, but there could be lots of other things in the blood, right? Like the bacteria, that it also was able to figure out what the peptides were in those protein chains. So that's a huge thing. It's able to parse through all that, sort it all out, and really give the scientists like a crystal clear picture of what are all the different things that were in this sample. So that said, huge success, you know, this thing could dramatically speed up the process of cataloging proteins and everything, which of course means that it could lead to advances in so many other things, you know, like new drug development, blah, blah, blah. You could just, you know, laundry list of benefits here that lots of different spheres of science could benefit from. Now outside the lab, researchers have already been putting these tools to work, like a researcher named Matthew Collins, he's at the University of Cambridge, and he's been, you know, testing several AI models that analyze his archaeological samples. So traditional sequencing methods have particularly, you know, the ones that they use, of course they fail and, you know, they can't use them to really get to the nitty gritty on the information that they want to get out of these, you know, partial proteins that they find that, you know, could be thousands of years old and the samples are incomplete. They can't, you know, nothing we have today can really make it all make sense. It takes a lot of time and you have to find multiple samples. It's like just a mess. In this context, ancient proteins, like I said, they degrade or they come from extinct organisms and, you know, they're not found in any modern databases. But the new models have helped his team identify, for example, rabbit proteins at Neanderthal sites and fish muscle proteins in ancient Brazilian pottery, right? Like check, think about that. Researchers are moving, you know, they're moving over to using these AI models exclusively because it's crystal clear how much more powerful it is and how much success that they're seeing. So this is, guys, it's a fantastic example of, you know, these very, very narrow, very specific AI models that are being used in science to speed things up, to fill in huge gaps, to do things exactly like we want them to be able to do. We want these AI models to have the ability to speed up these types of research and make the scientist's jobs easier and less expensive. And you know, damn it, it's working. You know, this stuff is really working. Like when they do it this way, it seems like, oh my God, AI makes perfect sense. Juxtapose that to a lot of the other news items that we talk about where, you know, AI is being used for scary stuff and it's being used, you know, like people are talking about having it run governments. And I wish that humanity had a crystal clear vision on what's best to use AI for. I certainly don't want AI in the short, especially in the short term, like making any decisions that humans should be making, right?
But this stuff, you know, chugging through data and, you know, analyzing, you know, huge reams of data and coming up with really brilliant conclusions. It's fantastic. And it's, you know, it's perfectly tuned to do things like this.
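As a rough illustration of the difference Jay is describing between database matching and de novo prediction, here is a toy Python sketch. It is an illustration only: the residue masses are real monoisotopic values, but the tiny peptide database, the brute-force candidate search, and the mass tolerance are invented for the example, and InstaNovo itself is a diffusion-based neural network working on fragment spectra, not the brute-force search shown here.

 # Toy contrast between database matching and de novo peptide identification.
 # Illustration only: real de novo tools like InstaNovo use trained neural
 # networks over fragment spectra, not a brute-force mass search.
 from itertools import product
 
 RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841}
 WATER = 18.01056
 
 def peptide_mass(seq):
     # Intact peptide mass = sum of residue masses plus one water.
     return sum(RESIDUE_MASS[aa] for aa in seq) + WATER
 
 def database_match(observed_mass, database, tol=0.01):
     # Conventional approach: only peptides already in the database can be found.
     return [p for p in database if abs(peptide_mass(p) - observed_mass) < tol]
 
 def de_novo_candidates(observed_mass, length, tol=0.01):
     # De novo approach: propose sequences directly from the measurement,
     # so peptides missing from every database can still be identified.
     return ["".join(s) for s in product(RESIDUE_MASS, repeat=length)
             if abs(peptide_mass(s) - observed_mass) < tol]
 
 if __name__ == "__main__":
     database = ["GAS", "VVA"]          # deliberately incomplete reference list
     unknown = peptide_mass("SAV")      # a peptide that is not in the database
     print(database_match(unknown, database))      # [] -> unidentified, like the ~70% Jay mentions
     print(de_novo_candidates(unknown, length=3))  # permutations of S, A, V are recovered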

S: All right. Thanks, Jay.

Solving the Bat Cocktail Party Problem (22:35)

S: Guys, have you heard about the bat cocktail party problem?

J: Not at all.

S: What would you guess that refers to?

E: I didn't realize it was a problem.

J: Well, okay. So maybe bats are drinking.

B: Bats communicating in a cave?

S: Bob's obviously very close.

J: Oh, never mind.

C: Never mind.

S: It's not in the cave, though. It's when they're leaving the cave, because they all leave at the same time.

B: How do they coordinate that shit?

S: And yes, and they're going through a very small space. And yet collisions among the bats are very rare.

E: Well, sure.

B: Yeah, I never thought about that. Yeah, that's...

S: And they leave at the same time because they're trying to basically overwhelm their predators, right? So, you know, like there are raptors, hawks, eagles, whatever, waiting for them. And they'll pick off, you know, a bat here or there, but they're hiding in the crowd, basically. So it's advantageous for each individual bat to leave when all the other bats leave.

B: Imagine you're the first bat out and you know there's like three million bats behind you. That sounds exciting, doesn't it? Like, ooooh!

S: So how do they not run into each other? But now the problem is even worse when you consider the fact that they're all navigating with echolocation, right? And so if you have a hundred thousand bats all using echolocation in the same small space...

B: Cacophony!

S: Right. It's going to be overwhelming. And that's where the cocktail party analogy comes from. It's like trying to understand the conversation in a cocktail party when there's a ton of background noise. You have a hundred people talking in a room. How can you pick out one voice?

E: Well, they must have evolved a filter of some kind for it, I would think.

S: Yeah. So that's the question. Is how... What is the... How did they evolve a way to not all bang into each other? Because that could be fatal, you know, if you have a midair collision.

B: Talk about a pileup.

C: Do they have their own sonar sounds?

S: It's echolocation.

C: Or echo sounds? Can they hear their own signals?

S: We're coming up on a study, but the previous research looking at bats in the lab found that bats use slightly different frequency echolocation noises, right? So they could tell their sound from other bats. But that doesn't work when there's a hundred thousand bats in a very tight group. That's okay if there's not that many bats around and you're just trying to distinguish yourself from a few other bats or dozens of other bats, but not tens of thousands of other bats.

E: Well, maybe there's a physical component to this. Like they emit some kind of, I don't know, dust particle or dander or something that they know to avoid the dander.

C: So is the question how do they hear where they are in space or how do they hear where all the other bats are in space?

S: Well, it's really where all the other bats are. They have to avoid hitting any other bat. So the existing research didn't really solve the problem. They said, okay, they have ways of avoiding jamming each other. They call it jamming, right? If you're overwhelming another bat's echolocation with your own echolocation, that's jammed. So why aren't all of the bats being jammed at the same time when they're trying to get... How could they possibly make it through that small space in a very short period of time without having massive pileup? They just basically were not going to be able to solve this problem in the lab. They needed to attach recording devices to bats in the wild.

B: Oof.

E: GoPros on bats?

S: Well, not GoPros because they're not interested in visual information. They're interested in acoustic information, right?

E: Oh, microphones.

S: Exactly.

E: And audio detectors.

S: Microphones.

B: I hope they're small and lightweight.

S: Yeah, so that's what they did. They attached tiny little microphones to a bunch of bats and then released them near the cave. Now, they couldn't get them into the cave, I think. I think they just had to release them into the flock after they left the cave, and then they used computer modeling to extend the data that they got. What they found was that when the bats are in a very tight, densely packed group of bats, they use a much higher frequency of echolocation, and they reduce the volume, right? So they increase the frequency and reduce the volume, which basically means they're narrowing the distance that they can see with their echolocation to a very short distance. So essentially, they optimize their echolocation so they could see very precisely where the bats right next to them are.

B: Yeah.

S: Does that make sense?

E: Sure.

S: Because that's all they care about. In that moment, all they care about are the bats that are right in front of them, right next to them. You know what I mean? That's it. That's all they care about. And it's like a flock of birds, right? In the same thing.

E: Yeah, I was going to relate to that. It's similar.

B: The simple rules.

S: Yeah. Each bat following a simple rule, stay this distance away from the bat right in front of me and to the left and to the right of me. And that's it. They only have to worry about the bats that are right next to them. And they optimize their echolocation for that purpose. They narrow their-

B: Right. If everyone does that, it works.

S: If everyone does that, it works, right? The collisions become very, very rare.

B: So cool.

S: Yeah. Yeah. So I thought it was interesting just because the question of how do the bats not bump into each other is not necessarily intuitive because there's a bunch of things, like you said, Evan, like maybe they're using some non-echolocation mechanism or something else. But yeah, and they had to go out into the field to answer this question.
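Here is a toy sketch of the kind of local rule described above, where each agent reacts only to neighbors within a short sensing range. It is purely illustrative and not the study's model: the positions, the steering rule, and every parameter are invented, and real bats get their neighbor information from short-range, high-frequency echolocation rather than exact coordinates.

 # Toy 2-D flock: each agent steers away only from neighbors inside a short
 # "sensing radius", a stand-in for the short-range echolocation described above.
 # All rules and parameters are invented for illustration.
 import math
 import random
 
 N, STEPS = 100, 200
 SENSE_R = 1.0     # short detection range (high-frequency, low-volume analogue)
 MIN_SEP = 0.2     # anything closer than this counts as a near-collision
 
 random.seed(1)
 pos = [[random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)] for _ in range(N)]
 vel = [[0.05, 0.0] for _ in range(N)]   # everyone streaming out of the "cave" in +x
 
 near_collisions = 0
 for _ in range(STEPS):
     new_pos = []
     for i, (x, y) in enumerate(pos):
         ax, ay = 0.0, 0.0
         for j, (xj, yj) in enumerate(pos):
             if i == j:
                 continue
             d = math.hypot(x - xj, y - yj)
             if d < MIN_SEP:
                 near_collisions += 1    # counted per ordered pair, good enough here
             if 0.0 < d < SENSE_R:
                 # Steer away from each neighbor you can "hear", weighted by closeness.
                 ax += (x - xj) / d * (SENSE_R - d)
                 ay += (y - yj) / d * (SENSE_R - d)
         vel[i][0] += 0.02 * ax
         vel[i][1] += 0.02 * ay
         new_pos.append([x + vel[i][0], y + vel[i][1]])
     pos = new_pos
 
 print("near-collision events:", near_collisions)

Setting the 0.02 steering gain to zero makes the near-collision count jump, which is the intuition behind the result: a purely local rule, applied by every individual, is enough to keep collisions rare.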

E: Does that also mean there are other scenarios in which they make those kinds of adjustments with their echolocation in other scenarios as well, when they're hunting or-

S: So clearly they can adjust their echolocation, right? So they will avoid frequencies that other bats are using. They obviously, they will optimize it at other times for prey, right? They're looking for an insect. They need to be able to see an insect at a much bigger distance, right? To zero in on it. And now they have a different paradigm of echolocation when they're flocking in a tight group, you know? So that's really interesting. But also the technology, you know, the idea that they have to, you have to record bats in the field. You're not going to really solve these problems in the lab. And that we have the technology to attach tiny little ultrasound sensors or the sensors for the frequencies of sounds that bats are using in echolocation. Yeah. So it's a cool study.

B: I hope those poor bats aren't just stuck with those stupid sensors for the rest of their lives.

S: I don't know.

E: Should just use their eyes, you know, bat their eyes.

B: Oh boy.

C: I'm assuming that, Bob, I'm assuming they fall off after time.

E: Yeah, they probably don't last that long.

S: The study was done in the greater mouse-tailed bat.

E: Oh, right. Yeah. Are all bats, do all bats have a feature like this?

S: Well, I mean, probably all echolocating bats, especially ones that have really dense populations in caves.

E: In caves. I can't imagine all bats live in caves or, you know.

S: No. Like the fruit bats that we saw in Australia.

E: Those are tree dwellers.

S: Tree dwellers.

E: So they wouldn't need that specific ability.

S: They probably don't have, not all bats have echolocation either.

E: Aha. There we go.

B: True.

The Extremely Large Telescope (30:15)

S: All right. Bob, tell us about the Extremely Large Telescope.

B: Yeah.

E: Is it that big?

C: Extremely.

B: All righty. So, guys, a recent study suggests that the Extremely Large Telescope, ELT, which is being built in Chile, could detect signs of life on nearby exoplanets, including those that never transit in front of their stars, which current telescopes struggle with. So the study is, I'll just read the first part of the study title. The first part, it says, there's more to life in reflected light, which I really kind of like that. So I've become even more enamored with the ELT, or Extremely Large Telescope, once I did more of a deep dive the past couple of days. I really am looking forward to this thing being finished. Now it's, of course, it's aptly named. Its primary mirror will have a diameter of a whopping 39 meters, 130 feet primary mirror. What? That's nuts. It'll gather more optical light by an order of magnitude than all previous telescopes. Get this one. Its images will be 16 times sharper than Hubble, 16 times sharper than the Hubble. That's pretty big. All right. So its main superpower, though, will be examining the atmospheres of nearby exoplanets. Now, this is currently being done, for example, by the James Webb Space Telescope. And so what happens is that as the exoplanet transits in front of its star, the starlight goes through, it goes from the star, through the atmosphere, and directly to our instruments. That's the path. Boom. That's how it goes. Now, this creates the very, very clear, very sharp absorption spectra, which means that certain wavelengths of the star are absorbed by gases as it travels through the atmosphere. And then we know, once we get that light, then we can look at the absorption spectra and say, oh, look, these elements are missing when it goes through the atmosphere. That's what must be in the atmosphere, right? Now, the ELT, the Extremely Large Telescope, can do that as well. It can look at light coming directly from a star through the planet's, the exoplanet's atmosphere and right to our instruments. But it would also be able to do something that's impossible for James Webb. Now, many exoplanets don't transit right in front of its star from our point of view, right? I mean, we just happen to be lucky that these transiting exoplanets, we're kind of edge on, right? We're edge on to the solar system so we could see the planet going right in front of it. But non-transiting stars, this does not happen. We are at more at a higher angle, right? More of a perpendicular type of angle, and it just doesn't happen. But now those exoplanets, they will still obviously reflect the star's light as well, and that can be helpful, but not as helpful as the transit spectra. So my question to you guys is, why do you think that this reflected spectra, the light going from the star bouncing off the planet to our instruments, is not as good as the transit spectra, which is the light that's coming directly from the star through the atmosphere and right to our instruments? Why? Why is the reflected spectra not as good?

E: Diffuse, diffusion, right?

B: Kind of. Yeah, you're kind of right around the answer, I think. The reflected light, it's important to note that this reflected light does have absorption features. It's there, similar to the transit exoplanets. But the reflected light, those features are far weaker, and it's also more complex due to the reflection, there's scattering, and there's planet surface effects. So it's very like attenuated absorption data. It's very, very difficult and nuanced and complex. So teasing out that data is just not possible for James Webb. It can't look at the reflected light off of an exoplanet and do anything really with it. And that's mainly because James Webb, as awesome as it is, it's just too small. I mean, its mirror is like only, what, I think 10 meters, not near 40 meters. It's got much poorer spatial and spectral resolution. Its contrast detection is not nearly as good as what the extremely large telescopes will be. So the main reason is that the ELT, it's on our planet. It's not in space. So we could just load stuff onto it. It doesn't matter how heavy really it is, because it's on the surface of the earth. It's not like in some Lagrange point out in space. So the extremely large telescope is optimized to detect and find these biosignature gases hidden in this reflected light. It should do very well. That's what it was designed for. And that's all fine and good. But these researchers wanted to be able to predict, you know, how good is this telescope going to be when it looks at this reflected light? They wanted to up their confidence levels to say, all right, now that we know everything that we know about the design of the telescope and what it should do, what will it, you know, what can it really do probably, you know, in terms of just like, what do their instruments tell them it should be able to do once they go through what they did is they went through a special program with a cool name of SPECTRE, which stands for Spectral Planetary ELT Calculator for Terrestrial Retrieval, blah, blah, blah, whatever. It doesn't matter. Okay. So they use a computer program to model and analyze the exoplanet atmosphere. So what this program does is it simulates how different gases absorb or reflect starlight. So with this information, they then can predict with much greater confidence what the extremely large telescope should be able to do. So this is what they did. But this is one of my favorite parts of this is these test cases. They created these atmospheric test cases and they had four of them. So basically four distinct classes of terrestrial planet atmospheres that are possible. So one test case was a non-industrial earth, you know, earth as it was, you know, a couple hundred years ago, rich in water and photosynthesizing plants, yeah, photosynthetic biosphere without anthropogenic fluxes. So there's no, it is no humans, you know, necessarily with their industry mucking about with our atmosphere. So it's kind of before that. All right. The second test case was early Archean earth. Now this is where life was just starting to thrive on the earth, say about, you know, three and a half billion years ago or so. And then let's see, the third test case that they ran through their specter program was an earth-like world where oceans have evaporated. So similar to what, to Mars, to Venus, planets that at one point would have been almost indistinguishable from the earth perhaps, but they had oceans and then they disappeared and things just got worse after that. 
And then the fourth one was a prebiotic earth. So this is like an earth that's capable of life, but there's no life there. And then for comparison, they threw in another planet atmosphere, but this was more of a Neptune sized world with a very, very thick, much thicker atmosphere. And they threw that in there just for comparison. So why do you think these researchers had these different test cases? Well, they did it because they did it, that's fine, I know, this is, you know.

E: I was thinking.

B: They did that to determine if the telescope could distinguish between the different earth-like worlds, right?

E: Of course.

B: They needed to know that no matter what kind of atmosphere was thrown at it, it would do a good job. But even more critical though, they wanted to be able to make sure that the extremely large telescope could distinguish and not trick us into like a false positive or a negative, right? So that was critical because you don't want to, that means that whether a lifeless world would seem to have life or if a living world would appear barren, right? You don't want that, especially you don't want to have a test case where you have this living world and the telescope says, yeah, there's nothing there, just go on to the next one. That's like the worst case scenario to miss something like that. So now the findings. So based on their simulations, the researchers found that the extremely large telescope should be able to make accurate distinctions for nearby star systems. So that's the good news. It should work as advertised, at least according to the SPECTRE program, it should do very well at distinguishing between these various worlds that have life, that don't have life, that don't have life, but may seem to have life and vice versa. They said that the program said that this is going to do very well. For me, the most interesting part was the closest star Proxima Centauri and its exoplanet called Proxima Centauri B. We've talked about that a couple of times on the show. We don't know much about Proxima Centauri B. It's the closest exoplanet, which is fascinating, but it's also a rocky world. It's within the habitable zone. And we don't know that much more about it, but it's, I mean, this is so encouraging, but we don't know if maybe it doesn't even have an atmosphere. You know, it looks like it's tidally locked and that's also very problematic, but this is the closest exoplanet to the earth. And that means it's going to be about as clear as anything, any other exoplanet is going to be. So the other thing that they found was that this telescope should be able to give us some really good or bad news very, very quickly. They said that it should be able to detect oxygen in 10 hours. It should be able to detect methane in five hours and water vapor in one hour, just in one hour, looking at a planet that's over four light years away. It could say, yep, there's water vapor on that planet with very high confidence. I also love how the survey will take into account important pairs of gases instead of just in isolation. So for example, you often will hear that, oh, you know, this exoplanet might have oxygen or it might have this, but it doesn't, you know, oxygen could be, it could be created by life processes or it could be created from geological processes. You don't know. So what they're doing for this is that they're doing, they're detecting these gases in pairs. They're going to focus on that. So for example, if you have an exoplanet with oxygen and methane, that implies that there's a continuous replenishment of those, of those gases, and that would make an even stronger case for life existing. If you notice, I'm pretty excited about this. It seems like we've been cataloging all, how many thousands of exoplanets have we found by now? Is it 5,000? 5,000 exoplanets? But we've got thousands of exoplanets and we've been discovering them since, what, the 90s? I mean, for so long we've been, I think it's time. It just seems the time is right to seriously take it to the next level and check out all of the closest exoplanets that are in habitable zones for biosignatures.
I mean, it could be, you know, the most tremendous news of the millennium. Imagine finding, hey, yeah, we've got a high probability, 95% probability that there is life on this planet that's only a few light years away, you know, four or five light years away. But let's see, I want to end with the XKCD comic that I came across. And it's funny because they have a list of telescope names and the top three boxes are checked. The Very Large Telescope, checked. The Extremely Large Telescope, we just talked about that, checked. There's also the Overwhelmingly Large Telescope. That was actually considered, in place of the ELT that I just talked about, they were going to make it the Overwhelmingly Large Telescope. That's the name, the name was going to be that because it was going to be not 39 meters but 100 meters wide. The primary mirror. But it was obviously too expensive. So they had to cancel it and they downsized it to the Extremely Large Telescope. But this comic's got a few more here that aren't checked yet. So guys, scientists, astronomers, these names have not been selected yet. The Oppressively Colossal Telescope, the Mind-Numbingly Vast Telescope, let's see, the Cataclysmic Telescope, and let's see, the Telescope of Devastation, that's interesting, Ominous. And then there's the Final Telescope. All of these are unchecked. But actually that makes me, that reminds me, some people are saying that this telescope, the Extremely Large Telescope, it might be the biggest one we ever create for this type of telescope, a reflector with a, you know, looking at optical frequencies. This is so big and expensive that some people think that we're never going to create anything bigger than that. And that would be, that would be a shame. But I totally get it. I mean, you can't be throwing around, you know, 10, 15, 20 billion dollars on something like this. Yeah, we might not see in our lifetimes anything bigger than this 39-meter behemoth. But I think we got many, you know, once this comes online at the end of this decade, I think, you know, we're going to have many, many years of amazing discoveries. Plus there's other telescopes, four or five other telescopes that I found that would qualify for this Extremely Large Telescope size range, you know, something like 20 to 60 meters or something like that. Or is it, or maybe it's a hundred, I think it goes up to a hundred. Because this is, it's a little confusing because the Extremely Large Telescope is being built in Chile, but there's also a classification of telescopes. When you're in the 20 meter to a hundred meter range, you are an Extremely Large Telescope. So there are like four other Extremely Large Telescopes that will be coming online for the next 10 years. I haven't looked at those two deeply yet, so, but they will also be a big change, I think, in astronomy in terms of just something that are so, they're just so big, so much, they're collecting so much light. We're going to see some amazing discoveries.
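To put rough numbers on Bob's point that reflected-light features are far weaker than transit features, here is a back-of-the-envelope comparison for an Earth twin orbiting a Sun-like star at 1 AU. The two formulas, transit depth and reflected-light contrast, are standard textbook approximations; the specific values are illustrative and are not taken from the study Bob discussed.

 # Back-of-the-envelope: transit depth vs. reflected-light contrast for an
 # Earth-sized planet at 1 AU around a Sun-like star. Illustrative numbers only.
 R_PLANET = 6.371e6    # m, Earth radius
 R_STAR = 6.957e8      # m, Sun radius
 A_ORBIT = 1.496e11    # m, 1 AU
 ALBEDO = 0.3          # rough geometric albedo
 
 # Fraction of starlight blocked during a transit: (Rp / Rs)^2.
 # (Atmospheric absorption features are a small slice of even this.)
 transit_depth = (R_PLANET / R_STAR) ** 2
 
 # Planet-to-star flux ratio in reflected light near full phase: Ag * (Rp / a)^2.
 reflected_contrast = ALBEDO * (R_PLANET / A_ORBIT) ** 2
 
 print(f"transit depth      ~ {transit_depth:.1e}")       # ~8e-5
 print(f"reflected contrast ~ {reflected_contrast:.1e}")  # ~5e-10
 print(f"reflected signal is ~{transit_depth / reflected_contrast:,.0f}x fainter")

That roughly five-orders-of-magnitude gap is why teasing absorption features out of reflected light calls for an enormous ground-based light bucket like the ELT rather than a space telescope the size of James Webb.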

S: Bob, you left one name off the list, a telescope of unusual size.

B: Yeah. They didn't throw that in that comic, but they probably should have, because that's definitely a nice Douglas Adams.

S: No, that's not from Douglas Adams.

B: Oh, no, no. That's a, yeah, that's the princess.

S: Princess Bride.

B: You're right. Very good.

E: I don't think they exist.

B: Nice.

S: You don't?

CIA and the Ark of the Covenant (44:12)

S: Well, Evan, but do you believe that the CIA found the Ark of the Covenant using psychics?

E: I believe part of what you said there, pieces of it. All right. The Ark of the Covenant. What do you, and I mean the four of you folk, know about the Ark of the Covenant that you did not learn from that most wonderful movie, Raiders of the Lost Ark? I know I couldn't.

S: What do I know about the Ark that I didn't learn from Raiders?

E: That you didn't, right, right. In other words, was your first real exposure to anything having to do with the Ark of the Covenant, the movie?

S: No, I knew about it from Bible class, basically, but we didn't, yeah, yeah. We knew about the destruction of the temple in Jerusalem and that kind of stuff. Yeah, I'm sure the actual memory of it probably comes mainly from Raiders.

E: It's got to be, right, contaminated by Raiders of the Lost Ark in a certain sense. Okay, for those of you who don't-

B: It's a wonderful contamination.

E: Oh, it is. It's still one of my favorite action adventure movies. It's a wonderful movie. For those who don't know, I'll tell you what this is. Okay, an Ark. What is an Ark? Ark, A-R-K, from the Hebrew word aron, and it literally means chest or box. It can also mean a vessel of protection or preservation, all right? That's an Ark. Now, what's a covenant? A covenant, that is a sacred agreement or promise between God and, in this case, the people of Israel. The Ark of the Covenant is a chest made of wood and fashioned in gold, which acts as a vessel of protection. Now, what is it protecting? What is said to be contained in the Ark? Do you know?

S: The pieces of the Ten Commandments.

E: Yes. That is one of, or several pieces of, one of the items in there. There are two other items, supposedly, in the Ark. Does anyone know what those are?

C: Are they also documents?

E: No, they're not documents.

J: It's a piece Jesus used at the Last Supper.

S: No, it's Moses' lunch.

B: I thought it was a piece.

E: And a note from his mom.

C: Is it a shroud?

E: Not a shroud.

S: No, that's wrong.

C: Is it a cup?

E: You're getting in the right...

C: Like a chalice? A jar?

S: That's New Testament, Cara. You gotta go Old Testament.

C: They had chalices in the Old Testament. They drank out of cups in the Old Testament? What are you talking about?

E: Along with the stone tablets, or the remnants of the stone tablets of the Ten Commandments that Steve said, there's also something called Aaron's rod, which is, you know, what? Some kind of, you know, like priestly rod. And a jar of manna. I know, isn't that weird? Which is described as the food God provided in the desert. Okay. Because at the time it was the Hebrews who were wandering after they escaped from bondage in Egypt. They spent a lot of time in the desert and supposedly, magically I guess, food appeared in the jar of manna. So it could help feed the people. Now Steve, my next question was going to be, where is the story, or what's the source of the Ark? You mentioned it. It's the Bible and specifically the Old Testament. Very good. Do you know which book in the Old Testament the ark is mentioned in?

S: Are they out of Genesis at that point? I think.

E: They are definitely out of Genesis.

S: Deuteronomy, and then...

C: No, you skipped Exodus, Leviticus...

E: Exodus, Cara. Yes.

S: Of course.

E: And when was and do we know when Exodus was written, approximately?

C: A long time ago.

E: Oh yeah, very long time ago. In biblical times.

C: Yeah.

E: 14th century BCE. So we're talking roughly, what, 3,400 to 3,500 years ago from today. Now, final question. Other than the story in the Bible, is there any evidence that the Ark and its contents were real?

S: No.

J: I don't think so.

B: Probably not.

E: There is no evidence. So it is not mentioned in any contemporary non-biblical sources, and they've looked. They've studied tomes from Egypt and Canaan and Mesopotamia from the time, and nowhere else is it mentioned but in the Bible.

B: What about the Dead Sea Scrolls?

E: Some scholars believe it could have been a real actual object, but others think it's symbolic, written into biblical text as a way to show Israel's special connection with God. But the Bible itself treats the Ark as a very real item, and it was built to exact dimensions and handled with very strict rules and linked to many major events in Israel's early history. Some believe that the Ark was lost, destroyed, captured, and for a long time certainly forgotten. Did you know that there is a church, the Ethiopian Orthodox Church, obviously in Ethiopia, and they claim that they have it, that it's housed in a chapel in a place called Axum, A-X-U-M. However, nobody's allowed to see it.

C: Of course. But trust us, it's there.

E: On a tangent to that, there's a story about how in World War II, I think it was an officer of the British Army who visited, supposedly, this place where it was housed, and supposedly saw it and deemed it to be a replica, something that looks like an Ark that they can claim is a holy relic. And we know that that's not unusual for churches throughout history to claim that they have specific relics of importance, but they are all, frankly, replicas of what was supposedly the original relic.

B: Churches duping people? What, that happened?

E: Never, ever, ever. So scant evidence, frankly, no evidence at all that it ever existed. But has that stopped people from believing the Ark of the Covenant is real? No, it hasn't. Because why let something like lack of evidence get in the way of a perfectly good belief? Now, of course, people in the latter 20th century, around the time that the movie Raiders of the Lost Ark came out, those sophisticated modern people inhabiting an age of science, technology, and reason would never waste their time and money and government resources looking into objects described in the Bible with no evidence to support that the objects even existed in the first place. Right? Am I right? Well, would you believe the United States Central Intelligence Agency, the CIA, or the U.S. Department of Defense, the DoD, would undertake efforts to go looking for the Ark of the Covenant? Oh, yes, they did. And of course, but, you know, well, they're going to use the most sophisticated tools at their disposal, right? The highest technology, sophisticated methods, and calling their most celebrated assets to search for the Lost Ark, right? Well, I'm here to tell you that not exactly. Hey, if you want a movie that more closely resembles reality than Raiders of the Lost Ark, go see the movie or read the book, The Men Who Stare at Goats. And that's a fascinating look at U.S. military's utilization of self-proclaimed psychics who use their so-called powers of remote viewing to see things that are hidden away, no matter where those things are on this planet or in some cases other planets. We'll get to that. And that's what the CIA and Department of Defense did in the 1980s. They called on the services of scammers, I mean psychics, to try and locate, by the power of remote viewing, the Lost Ark of the Covenant. And that is in the news this week, as a slew of declassified documents have been released by our government in recent weeks, some of which admit to these efforts. Now, this is not the first time that these documents have been talked about or parts of them declassified. It was back in 2000, actually, when this was first known and first declassified. However, the thing is, in the year 2000, the Internet is just a shadow of what it is now. I mean, yes, there was the Internet, but I don't know that it had the same cultural saturation sort of that it has today. If something like this hits the Internet now, obviously everybody's going to know about it. But back in 2000, maybe not so much. So it wasn't as easily an accessible story as it is now. But it's experiencing a revival now, and it's why it's a news item now, because it's all part of a slew of other declassified documents that are being released by our government on all kinds of different things. But this one is getting particular attention. Obviously, it's very clickbaity, and a lot of news outlets, tabloid and otherwise, are running with the story because, hey, who doesn't like a good story about the lost Ark of the Covenant? So what did the documents say? What has been revealed? All right. So yes, that they admitted that they paid for these efforts, the CIA and the Department of Defense. They hired these remote viewers to the tune of millions of dollars for these entire projects that ran for the better part of 20 years or so. And that means our tax dollars went ahead and paid for it. But not only did they go searching for it, but they were successful, and they actually found the lost Ark. 
However, remote viewing of an object is one thing, and the retrieval of a remotely viewed object, that is quite another thing. This isn't about the retrieval, but it's the story about the remote viewer's supposed success in actually finding it through their powers. You got to think back. So this was during the Cold War, like the tail end of the Cold War. And the US government was launching secret psychic research programs under the umbrella of what became eventually known as Project Stargate. The programs were aimed to determine if remote viewing could be used for espionage and mostly having to do with the troop movements and defense movements of the Soviet Union and taking a look at their missile silos and seeing what conditions they were under. But hey, if you could use remote viewing to look for those kinds of things, maybe you could also use this information to look for, oh, I don't know, lost religious artifacts and relics. So that's exactly what they did. In one particular case, they worked with a remote viewer who, let's see how they describe it. Remote viewer number 32, who knows how many hundreds of these remote viewers that they hired over the years. In a remote viewing session on December 5th, 1988, remote viewer 32 was tasked with identifying a hidden object. And they allegedly did not know that the object that they were being tasked to find was the Ark of the Covenant. So without that knowledge, just find this hidden object. The only thing that the remote viewer knew is it was somewhere, something that existed somewhere in the Middle East region, the region of the world known as the Middle East. Okay. So the psychic described a location in the Middle East that they claimed housed an object that was being protected by entities. And here's what it says. The target is a container. This is right from the document. This container has another container inside of it. The target is fashioned of wood, gold, and silver, similar in shape to a coffin. And it's decorated with seraphim, which are like six-winged angels or, you know, these angelic kind of creatures. The declassified document showed that several pages of drawings accompanying this written description would turn out to be something resembling what the Bible described as the lost Ark of the Covenant. The visuals of surrounding buildings indicated the presence of mosque domes. Well, gee, of course, if you're going to be, you know, if you're given the clue that you're looking somewhere in the Middle East, that's a pretty, even I could have probably told you that without any powers. But that the object was hidden underground and in dark, wet conditions. There's an aspect of spirituality, information, lessons, and historical knowledge far beyond what we know. Remote viewer number 32 continued. They described it as being protected by entities that would destroy individuals who attempted to damage the object. The target is protected and can only be opened by those who are authorized to do so.

S: Did they say it would melt your face off?

E: Right, exactly, Steve. I mean, come on, it's 1988, and the movie, I mean, is it not clear that this remote viewer figured it out on their own? And frankly, it doesn't take much: if you're being hired to look for an object in the Middle East, why wouldn't you describe something akin to what you saw in Raiders of the Lost Ark when describing the Ark? But this is considered to be, oh my gosh, how could this remote, how could the psychic have known what we were going for when we hardly gave him any clues and so forth? Apparently, some people were very impressed by this information that the remote viewer provided. However, to us in the skeptical community, we know this as many things, not the least of which is a cold reading and using those kinds of techniques to come up with something akin to what would be a hit in this case and impressing people who otherwise are unaware of such tricks.

S: Yeah, it's completely unimaginative. I mean, you could have made that up off the top of your head, just winging it.

B: She could have asked ChatGPT and done 10 times better.

E: Totally.

S: ChatGPT would have been overkill.

E: So yeah, making the rounds, you know, Ark of the Covenant. Yeah, I never thought we'd be speaking about this on the show, you know, I mean, actually bringing it.

S: They never went to get it because they didn't say it's here on the map, it's just like they just described the place that it's in.

E: Right, yeah, give me the longitude, give me the latitude, give me the depth, you know, no details like that, just basically a cold reading.

S: I'm surprised they didn't say, I see a giant warehouse.

E: And a man with a hat and a whip.

B: And a rat in apparent pain.

E: All right, I'll leave you with this. Tune in next week when we go exploring for more archaeological treasures as those described in that other famous relic hunting movie, Monty Python and the Search for the Holy Grail. Thank you very much.

S: That's historically accurate, that one.

E: Just as much as Raiders of the Lost Ark probably.

S: Just as much. All right, thanks, Evan.

E: Yep.

23andMe Selling Personal Data (58:32)

S: All right, Cara, I don't think we mentioned last week that 23andMe went bankrupt, right? And now we're hearing a lot of stuff about what's going to happen with all that data they got.

C: So 23andMe, the genetic testing company, filed for bankruptcy on March 23rd, 2025. As you mentioned, a lot of people are kind of questioning what is going to happen. A few days after that filing, a U.S. judge did rule that the company could sell its consumer data as part of that bankruptcy. So it could be chopped up for parts and sold to the highest bidder. We're seeing attorneys general across the country warning their states' citizens: delete your data, do what you need to do, ask them to destroy the spit samples. A lot of people are talking about the genetic data, right? The actual DNA information as this big source of sort of fear. What's going to happen with the code that makes me? But there's a great article that was written in The Conversation by Kayte Spector-Bagdady, I might not be saying that correctly, who's an associate professor of obstetrics and gynecology at the University of Michigan. And she wrote about what's going on with 23andMe and what kinds of things maybe we haven't thought about that should draw some of our attention. So, the terms and conditions that we all signed up for. And I'm going to say we and be inclusive throughout this process, because I am a 23andMe customer. Any of you four ever do 23andMe?

E: I did not.

J: Nope.

C: None of you. Okay. So when I say we, I'm referring to myself and all of the listeners who are also 23andMe customers. You know, when we originally signed up, we signed a terms and conditions and a privacy notice. And that said all sorts of stuff that we probably wouldn't have wanted it to say if we had read the fine print, right? Like the company can use-

E: [inaudible] details.

C: Yep. And I mean, you can't, but you can't say no, or you can't use the product, right? So it's a trade-off, right? So they can use our information for R&D. They can share the data in aggregate with third parties. If you did any additional research, which most people did, individual information could be shared with third parties. The language clearly stated that if there were a sale or a bankruptcy, that consumer information could be sold or transferred to another company's holdings.

B: That's it. Case closed.

C: There you go. So what do we do? So the writer of this article is a lawyer and a bioethicist, and she's especially interested in direct-to-consumer genetic testing. She talks a bit about what 23andMe is. We don't really have to get into that. I mean, I guess if you're interested, it started in 2007. Obviously, it's named after, you know, 23 chromosomes in our DNA. And there are other direct-to-consumer genetic testing companies, but it was probably, I think it was one of the first and definitely the largest. Interestingly, a lot of these other genetic testing companies didn't last. They just couldn't figure out the business model. They couldn't make enough money, and so they went by the wayside. But 23andMe had tried to hold on pretty strong. It looks like over 15 million consumers over the course of its life purchased 23andMe. Most of those people consented to research. It was valued at $6 billion at one point, but the stock has been declining, and the company owes a lot of money to its creditors. The author of this article attributes some of that to a 2023 hack, where 7 million people's data was shared, and also just kind of a lack of interest in doing, you know, the collecting and the genetic information. Like, just fewer people are interested. I think it was a big boom, and then it's had a long tail. There's the important statement, if you're not paying, you're the product. You guys have heard that before?

B: Yeah.

J: Yep.

E: Sure.

C: Right. So we know that when we talk about like social media companies, our data is valuable. Our data are valuable. Yeah, subject-verb agreement there. Our data are valuable. Our buying habits, our, you know, personal information that helps different corporations learn how to market very targeted things to us. There's a note in here that I found really interesting. The author references a book that was written this year by a former Meta executive named Sarah Wynn-Williams. The book's called Careless People. And she talks about just how deep this goes. And I think we all know this, but it's just kind of a chilling example, that Facebook, for example, would note behaviors that they deemed related to self-consciousness about personal appearance. Like, let's say you put up a selfie and then you quickly deleted it. If you did that, you were more likely to have beauty products promoted to you. So it's not just demographic data, but it's also behavior online. And really, we're not talking about one datum over here and another datum over there. We're talking about aggregate data and how important a story data in the aggregate can tell about individual users. There are some concerns here, not only about my genetic code becoming available online to, let's say, nefarious actors, or becoming available online to a corporation that I originally did not consent to have that information. But if you've ever been involved in 23andMe, you know that it's not just your genetic data that's present. There are a lot of sort of quizzes and individual data collection experiences to try and hone the health and wellness and lifestyle portions of the 23andMe. So it's not just like, hey, here's your genealogy. You are likely to have come from this part of the world this far back and, you know, these different migrations out of Africa. It's also you're likely to have a widow's peak, you're likely to wake up at this time, you're likely to be less affected by caffeine. And they get a lot of that information by collecting vast quantities of self-report data and then comparing that to the genetic data that they have. So we're not just talking about privacy of genetic data. We're also talking about privacy of personal demographic data and survey data as well. I didn't struggle with the accuracy thing at all because the way that 23andMe works is it does sequence your DNA and it does so quite accurately. It's the interpretation that's less accurate, right? It's the way that they determined, oh, you are likely from many generations ago from this part of, you know, this continent because we're looking at extant individuals living in that continent and comparing your DNA to them, which just doesn't work because there are massive, you know, migrations in and out of places all the time. And we know that the vast majority of users are like probably what we call WEIRD, right? Western, educated, and I can't even remember what it all stands for, whatever, richer white, you know, Western users. And so you have like really specific data about what county from England your ancestors came from. But then it's like, yeah, you're just like broadly West African because they just, they didn't have enough participants in those areas. I was never that concerned about that though because 23andMe does sequence your genome and you have access to the raw data, which then you can plug into other programs if you're looking for specific SNPs, right?
Like if I wanted to know if I was BRCA positive, I could find that out with my 23andMe data. I don't have to then go get that specific genetic test. So to me, I saw it as an empowering way for me to have access to my own genetic code. And yes, I had to do it through a third party, but you would have to do that also if you were getting sequenced for medical purposes, right? Yes, the protections are much stronger, but there would still be other individuals who have access to that data. There are still risks of data breaches, all those things. Now, don't get me wrong. My data could already exist on the internet. There is a lot of fear about this leak. I myself am really concerned. But what I am going to do is what, like I mentioned earlier, most of these attorneys general are recommending and what the author of this article in The Conversation recommends, which is to go into your 23andMe and delete your data. So I already started that process before we recorded the podcast, but I didn't want to finish the process because I was afraid I would no longer have access to my settings window. What I want to do is talk folks just really quickly who may already be enrolled in 23andMe and don't realize how much of their private information is actively being shared. I want to help empower you all to log in and figure out what to do. So if you log into your 23andMe and you go to the tab called settings, you're going to have a bunch of different windows. Most of those windows ask about your demographic and personal information, but then things get hairy when you get to privacy sharing, preferences, research and product consents, and 23andMe data. So what I did first is I went through all of my privacy and sharing and I blocked sharing or disallowed sharing for everything. I'm no longer participating in DNA relatives. I'm not allowing my connections to see my results. I've blocked sharing invitations. I'm not connected to any apps or any reports. I have no viewers. Then I went through, obviously changed all my email preferences, and then all of the research and product consents. I revoked consent or declined consent to all of the different research participation that I was actively engaged in and revoked or declined consent for sharing of individual non-aggregated data. So all of that now is blocked or revoked. And then the last step is that you can go in and download your data. And that's what I'm doing right now. There's a report summary. There's ancestry composition raw data. There's the family tree data. And then there's all of your raw data. You can even submit a request to download your imputed genotype data R6 in its uninterpreted format. So they say as a collection of variant calling data files. I'm probably going to do that too, just so that I have access to all of these things. And there's also phased genotype data that you can download.

B: So what format are all these files in?

C: So they're all different. Like the raw data is a plain text file. The imputed genotype data R6 and the phased genotype data are, well, the phased genotype data is an uninterpreted plain text file. The imputed genotype data R6 is called, it's a collection of what they call variant calling data files. I don't even know what that is. So you have to use software like BCF tools to be able to use it. I don't know if I would ever be able to, but at least I would have access to that raw data. And then the rest of them, I'm assuming are going to be like PDFs because these are the reports that 23andMe interpreted from your data. So you'll get a download of everything you've ever had access to. You can also request all of that raw data to get it. And then at the very end, you delete it. There's literally a section that says delete data. Caution, data deletion cannot be reversed. You will permanently lose access to your reports. Any pending data download requests you've made will be canceled. So you want to wait and delete it after you've taken everything that you want. There's even a text box. Why are you requesting to delete your data? You can tell them what you think right there, and then you can permanently delete your data. It will be gone. Now the hope is it will be gone from the databases altogether. That's what everybody's recommending doing. And then you will be one less of a massive database that is then sold for parts. And so I highly recommend if you haven't done it yet, start that process now because getting some of this data downloaded, if you want to download the data, can take some time. And that's what I'm doing right now.
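For listeners who download the raw genotype file and want to look up a specific SNP themselves, here is a minimal sketch in Python. It assumes the usual layout of the 23andMe raw-data text export, tab-separated columns of rsid, chromosome, position, and genotype with comment lines starting with "#"; the filename and rsid shown are placeholders, not values from the show.

# Minimal sketch (assumed file layout, placeholder names): look up one SNP
# in a 23andMe raw-data export. Columns are assumed to be rsid, chromosome,
# position, genotype, separated by tabs, with '#' header/comment lines.

def load_genotypes(path):
    """Return a dict mapping each rsid to a (chromosome, position, genotype) tuple."""
    genotypes = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue  # skip header comments and blank lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 4:  # ignore any malformed lines
                rsid, chromosome, position, genotype = fields
                genotypes[rsid] = (chromosome, position, genotype)
    return genotypes

if __name__ == "__main__":
    data = load_genotypes("genome_raw_data.txt")  # placeholder filename
    snp_of_interest = "rs0000000"                 # placeholder rsid
    print(snp_of_interest, data.get(snp_of_interest, "not found in this export"))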

B: All right, cool.

S: It's a good lesson in reading the fine print, right? I'm sure most people who signed up for it didn't necessarily know that if it goes bankrupt, that all of your stuff could be sold, all your data, all your...

C: Well, even if it doesn't go bankrupt.

S: Yeah, even if it doesn't, but I think they get even more ability to do so because they could transfer all of their data to somebody else.

C: Yeah, but I think it's also worth mentioning because we talk about the difference between efficacy and effectiveness research all the time. We often talk about these things a little bit ideologically. Yes, it is a good lesson in reading the fine print, but also even if we had read the fine print, every word of it, you're making a decision. And that decision is almost always a decision between privacy and convenience. And you can't say, I decline and also still engage or have access. And so for a lot of people, they're willing to take that risk because privacy at this point, especially if you look at folks younger than me, folks who kind of grew up in an internet era, they don't think that privacy is real. They've been hacked so many times. They've seen so many data leaks that they're like, yeah, it's just the cost of doing business. Whereas when you look at generations that are older than me, because I'm right, I'm an elder millennial. I'm not quite a cusper, but I'm an elder millennial. So you look at the older gen Xers and then the boomers who did not grow up with this, there is much more fear and skepticism. You look at the younger folks who always grew up with this, it's not that they're naive. In a way, it's that they're kind of less naive. It's that they know that this is how it is to engage with the world, albeit dangerous.

E: Ambivalent, maybe.

C: Maybe that's a good way to put it. And so where is that balance between privacy and convenience? It's different for different people. But I think it's also a little bit naive to believe that we can exist in the world, that we can engage in banking, that we can engage in data transfer, that we can do our jobs online and be perfectly private and protected.

B: Yeah.

S: Yeah. Unfortunately, I think that's true. But I don't think that means that we just give up and just do it freely.

C: I completely agree, which is why I'm deleting all my stuff. And I recommend you do too.

E: Delete, delete, delete.

B: You say if you decline, then you really can't even use the service. That's why I seem to get prompted a lot these days when I go to websites. All right, here's our cookie policy. What do you want to do? And I decline everything that I can, but still let you in. It's like, cool. Yeah, that's awesome.

C: But that's a legal protection. And we have to remember that. That is a legal protection that was fought for by consumer advocacy groups. And it's even stricter in Europe. That's why if you ever travel outside of America, for the American listeners, to certain European countries, you get a pop-up every time you go to a website, and it's kind of annoying. You're like, oh, my God, why do I have to do this? But there are legal protections in other countries where, yes, you have the right to opt out. And there are some legal protections here in the U.S., but they're not as robust. But that didn't happen because these corporations wanted it to. Trust me, they fought tooth and nail not to allow that. That's regulation. That's regulation protecting consumers.

S: Exactly.

B: Yeah, some companies make it difficult to opt out. I love the ones that just have this checkbox, decline all. Like, yes!

C: I know. That's so much easier. Unsubscribe from all. Yeah, yeah.

S: But I hate the ones where I hit unsubscribe, and then my malware blocks it. It's like, this is dangerous. What the hell are you taking me to? And the ones I struggle with, it's like, okay, you can opt out. Enter in your email, and we'll opt you out of that email.

C: Yeah, I don't like that either.

S: I don't like that either.

C: Yeah, I don't trust it.

S: Just unsubscribe me. I know, I don't trust it.

J: I don't like the ones where they let you choose your preferences. Like, I want to not receive from this, this, this list, this list, right? But they never say, delete your email from our database.

C: Right, exactly.

S: Yes, right.

C: And that's what I'm hoping this 23andMe data deletion, I'm hoping that there was legally baked in early on these abilities for that reason. Because it does say, once you've deleted it, you can't even access it anymore. So maybe that means it's no longer in the database.

S: Right, but you have your report, so what do you care?

C: Exactly. That's why I'm downloading them all right now.

S: Yeah. All right. Thanks, Cara.

Who's That Noisy? + Announcements (1:15:21)

S: Jay, it's who's that noisy time.

E: Noisy time.

J: All right, guys, last week, I played this noisy. I'm about to play a noisy that might really irritate some people. So if you don't want to hear it, or you want to just lower your volume, now's the time to do it. [plays noisy] So what do you think, guys?

E: Well, I mean-

C: There's a siren.

E: Car alarm.

C: Or an alarm, yeah.

E: But that couldn't be it, because it wouldn't be that simple, right?

C: Some sort of alarm. Might not be a car alarm.

J: I got a lot of people that wrote in on this one. So Brianna Bibel, Bible, B-I-B-E-L.

B: Bibelbrox?

J: So Brianna says, this week's who's that noisy sounds like a game of laser tag. And then she has called herself the bumbling biochemist. I've played laser tag, and there's all sorts of noises that happen. Your base is exploding, whatever. Sure, that kind of klaxon noise is in laser tag. Absolutely. That is not this noise, but it's not a bad guess. A listener named Paul Cycle wrote in and said, in this week's noisy, we hear two men communicating during an alarm. The alarm is operating because of a critical function. So he's trying to parse through it here. And then he finally gets down to, my guess is that this is a flooded power generation facility, and the men are starting up the flood pumps, which we hear at the end. You get two points for how specific you were. You are not correct, but you're not 100% wrong, because you're on the right track. You just didn't get to the right thing. Timothy Jerchishish. I mean, come on. He didn't give me his pronunciation. So that's his name, Jerchishish. Oh, Timon. It's Timon Jerchishish. All right, Cara, J-U-R-S-H-H-I-T-S-C-H.

C: I have no idea.

J: Pronounce it correctly. He said, hey, everyone. Pretty sure I am too late, but this week's noisy sounds a lot like the pace setting of defibrillators. This is cool, guys. He said it's a good audio cue for people who have to perform CPR. People that know how to perform CPR know how fast they should go. He said around 100 BPM. That's pretty cool. Never heard that. Didn't know it existed. Now we do. Evil Eye wrote in and said, in the 70s, my mom bought a Buick Regal that had a theft deterrent alarm. You had to use a key near the front left panel to arm or disarm it. If anyone tried to lift the door handle while it was armed, it made the same noise. So yeah, some of us go back to the 70s on this show. That is not correct, but thank you for reminding me of wood siding on cars that my parents used to own. We do have a winner from last week. The winner is a listener named Andrew Lotus. And the answer is, this is what Andrew said. Jay, I am an air traffic controller and pilot. That sounds like an ELT, emergency locator transmitter signal from an aircraft. So this was close enough. There are more details and more specifics, but Andrew was 100% there for what it was legitimately. And I'm not even sure that it's any different from airplanes to boats because the original noisy comes from a boat. So let me read to you what this is. It's an EPIRB. These are emergency beacons that are mandatory on all boats traveling a certain distance offshore. They can be manually activated or they will automatically activate when a vessel sinks. And once activated, they emit a signal to a satellite, which is then coordinated by rescue parties to assist. Similar devices on land might also be called personal locator beacons or PLB. So yeah, bottom line is, you know, crafts have these: airplanes, boats, anything where people might need to be located. And you know, loud klaxon noise always gets people on their toes if an emergency is happening. So anyway, got a lot of guesses, but there were a lot of people guessing around it, but didn't completely hit it. But anyway, that was definitely a cool noisy. I have a new noisy for you guys this week, sent in by a listener named Emma Powers. [plays Noisy] There it is. If you think you know this week's noisy or you heard something cool, email me at WTN@theskepticsguide.org. Steve, a few quick announcements.

S: Sure.

J: One, NOTACON 2025. There are still tickets available for the conference. You can go to theskepticsguide.org or you can go to notaconcon.com to find out more details. The schedule is up. We have a secret surprise guest, but it's not a secret because we're telling people who he is. And we have all of us at the SGU. We have George Hrab. We have Andrea Jones-Roy. We have Brian Wecht. We have Ian, who you might actually get to see his face if you come to this conference. He will still be hiding it, but sometimes you can see him. You got to just try hard. He's like-

E: I'll point him out to you.

J: So we also have a new announcement, guys. We are going to be doing a show in Kansas.

E: Kansas.

J: The show is going to be on September 20th. It'll be outside of Kansas City in a town called Lawrence. It's a college town. What university is there, Ev?

E: University of Kansas, the Jayhawks.

J: Yep. We'll be right near there. So if you want more information, like I said, you can go to either of those two websites. We will also be announcing a private show, which will probably, the location and all the details will come out next week. But if you're interested in tickets for the extravaganza, just go to theskepticsguide.org. You'll see a button there. That's it for now, Steve.

S: Okay.

Emails (1:21:13)

S: Let's start with an email. This one is about RFK Jr. and access to vaccines. And the message is, hello, y'all. I believe I could write you an entire novel of my concerns in life right now and provide an unending list of questions. How much say does RFK have to limit or eliminate vaccine access in the U.S.? What could we do to prevent it? If he can remove it, how badly would this affect the manufacturing and supply of vaccines once he is gone from office? An example I look to is the Lyme disease vaccine that did exist and disappeared and seems to be making a comeback now. I just do not understand how someone in power like RFK is so willing to be ignorant. Does he believe seatbelts shouldn't exist because they aren't 100% effective and 50 years of data can be ignored on their effect? I could go on and on and on with the amount of anger that has been building up inside since five years ago. Thank you for doing a good job. I look forward to your weekly release. All right. So we've talked about RFK and vaccines many times before. David Gorski actually did an article exactly on this question. What could RFK do to undermine vaccines in the U.S.? So go to Science-Based Medicine if you want all of those details. But there is a lot, unfortunately, that he could do. I mean, he can't directly affect the industry, but he can destroy the support that the federal government, the CDC, gives for vaccines. And it doesn't take much of an increase in vaccine hesitancy to have a big effect. So, for example, I just learned today that the CDC removed its reporting on measles.

C: Yeah, I saw that.

S: Which included a "be sure to get your measles vaccine" message. So that's gone. And the CDC public announcements, you know, the public service statements on vaccines, are now "vaccines are a personal choice; talk to your doctor about the risks and the benefits of vaccines."

C: Well, didn't they also recently cancel, like, the meeting that's necessary for them to determine the flu strains that are going to be included next year? Like, that's the thing. They do have a lot of power.

S: Yeah. Totally.

C: It's not just about influence. Like, they're determining where money goes and what decisions are being made about some things, like annual flu shots, right?

S: Yeah.

C: They weigh in on that. That's terrifying.

S: Now, you know, again, he's also ginning up an anti-vaccine study that, you know, he's guaranteed is going to produce the result that he wants. We talked about that last week. Again, he's putting people in charge of the FDA who are also going to be pseudoscientists. There are lots of laws on the books that he could misuse or misinterpret in such a way that, you know, it just makes it financially or legally difficult to use vaccines or to sell vaccines. But I think the biggest negative effect that RFK is going to have is just hollowing out the institutional knowledge that we've built up over generations, both in research and regulation and, you know, in knowledge about these topics. It's hard to know how much time it will take to reverse this. Definitely years, maybe decades. I mean, we never fully recover, in that we will always be farther behind than we would have been had this not happened.

C: And it's not just hollowing out what we know now. It's completely like, I mean, they're gutting funding to anything coming up. So there are whole labs that are going to be shuttered.

S: Yeah.

C: Whole, you know, lines of inquiry are going to be abandoned. And maybe, you know, how hard is it to pick those things back up? We don't know. I mean, definitely, on the vaccine front, I just got my titers tested yesterday. So I'm waiting to see if I still have my measles immunity. And if I don't, I'm getting a booster ASAP. Like, do it now while your insurance still pays for it. If you have insurance.

S: Yep.

C: See what you're missing. Now's the time. Now's a better time than ever. I think the thing too, that the reader asked about, like, I don't understand how someone in power, like, that is willing to be so ignorant. Does he believe seatbelts shouldn't exist? Because this is not about regulation for RFK. RFK loves regulation. He was an environmental lawyer.

S: It's also not about ignorance.

C: No, and it's not about ignorance.

S: He is a conspiracy theorist.

C: Yeah, this is intentional.

S: His thinking is wrong. It's terrible. I mean, you know, he has serious problems when it comes to his ability to logic and to understand science. He cherry picks. He distorts. You know, he has preconceived notions. He basically, unfortunately, his experience as a lawyer is being used for evil, right? He makes a lawyer's case for whatever side he is on. Rather than actually looking at the evidence. Like, his side determines how he sees the evidence. The evidence does not determine what he believes. And that's why it's like he's beyond logic and evidence at this point. One other thing that David points out, he is in charge now of the committee that recommends the childhood vaccine schedule. He could alter those recommendations, which is what schools use in order to require vaccines to attend school. So basically, he could eliminate the need for some or all vaccines for attending public school.

C: And what kind of mental gymnastics is going to be used when we immediately see the results because, you know-

S: It's happened already.

C: Of course, but like, this is going to be a cross-sectional study, not just longitudinal, right? Like, there's always a new generation of kindergartners. And so within no time at all, we will have data that show kids are getting sick, really sick. And kids are dying because they're not vaccinated or because they're not getting their vaccines early enough. And what kind of mental gymnastics is the administration and this specific department going to pull to try to explain that away?

S: Yeah.

B: Let's just blame Biden.

C: Go blame Biden. Yeah, exactly. Blame Obama.

S: And even if he does anything to rein in industry on food additives or whatever, like if you are still there thinking, oh, he might do something good, it's going to be an order of magnitude worse with the negative effect that he has from the vaccines.

C: Oh, yeah. You think your red dye number five or whatever is more dangerous than measles? Are you kidding me?

B: Just another government official with a body count.

C: Yep. Yeah. I wonder how many websites now we have. We've got to have like a main website and then a drop down where we can click each government.

S: What's the harm?

C: Yeah.

S: One more quick one. I'm not going to read this whole email, but this email, William from California says that he was on Neil deGrasse Tyson's Facebook page and Neil dropped a math problem. It's one of those counterintuitive math problems, right, where there's an intuitive answer that's wrong and the real answer is hard to wrap your head around. So here's the problem. A driver aims to average 90 miles per hour over two laps, but he completed the first lap at an average of 60 miles per hour. What average speed is needed for the second lap? What do you guys think?

B: Well, with the knee jerk, it's 120, right?

S: That's the intuitive wrong answer.

E: Right.

S: The correct answer is 180 miles per hour. Why is that?

C: This is not my forte.

S: Because you complete the lap in less time, right? You don't average the laps. These laps are artificial. You average the amount of time you spend at each speed. So if he averaged 60 miles per hour completing one lap, if he goes 180 miles per hour, he's going to complete that lap in a third of the time. So it needs to be three times the speed. That make sense? That's why it's 180. It's not 120. But it gets a little bit crazier when you extend it. So he said, for example, then a second meme popped up where the first lap speed was changed to 45 miles per hour. I was the first person to jump on this one and didn't realize that for this reason, there is no solution. For this version, there's no solution. So essentially, if you do the math, it's like you would need to go infinite speed.

E: Yeah, it can't be done.

S: Yeah, it can't be done. So you can't get down to an average speed of 60 miles per hour. You would have to go so fast, you know what I mean? But he's saying there are people in that discussion on Facebook who refuse to accept it. Like, no, it's 100 and what's the intuitive? It's 135 miles per hour. That's it. And say, stop confusing me with complexity. That's the answer. No, that's the intuitive wrong answer. Which this one is not that, to me, it's not that hard. Like, once you explain, it's like, oh, yeah, you're going three times as fast. You have to, it's a third of the time, you have to do it that way. It's not just the lap.

C: It's the idea of the lap being an arbitrary metric, but that's not half.

S: Perhaps a better way to look at it is to consider the total time and the total distance. So let's say in the first example, each lap was 90 miles. That's how long the lap was. He wants to go 90 miles times two, that's 180 miles, averaging 90 miles per hour. That's two hours. That's the total travel time. But if he's going 60 miles an hour for the first 90-mile lap, that's an hour and a half, which means he has to go the second 90 miles in 30 minutes, or half an hour, hence 180 miles per hour, right? That way the total time is two hours, in order for the 180 miles, two times 90 miles per lap, to come out to a 90-miles-per-hour average speed. That's it. And in the second one, 45 miles an hour takes up the full two hours. So there's zero time left, right? You have to get the whole second lap done in zero seconds in order to average 90 miles per hour. That's it, right? Once you get that, you're like, okay, yeah, all right, the jump-to intuitive answer was wrong. And then you understand it, but people are just doubling, tripling down on the intuitive wrong answer. And he wants to know what fallacy that is. And he says, is this the Dunning-Kruger effect or whatever? Is it just a desire for simplicity? I do think it's partly that people don't want to let go of the intuitive answers because it gives them a sense of control. And it does feel simple and elegant. And when you hear a bunch of complexity that you can't wrap your head around, you're like, that's got to be wrong because it's too complex for me to understand. I'm going to stick with the simple intuitive answer.
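For anyone who wants the arithmetic laid out, here is the calculation using the 90-mile lap length assumed above; any lap length d gives the same answer.

\[
t_{\text{total}} = \frac{2d}{90}, \qquad t_1 = \frac{d}{60}, \qquad
t_2 = t_{\text{total}} - t_1 = \frac{2d}{90} - \frac{d}{60} = \frac{d}{180}
\quad\Rightarrow\quad v_2 = \frac{d}{t_2} = 180 \text{ mph}.
\]

In the second version, \( t_1 = \frac{d}{45} = \frac{2d}{90} = t_{\text{total}} \), so there is zero time left for the second lap and no finite speed can produce a 90 mph average.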

C: And of course, that's not what they're actually overtly thinking, but it is. It's motivated reasoning. This has to be right. Because the other thing is, makes me feel so uncomfortable.

S: But there's, that's true. But I also think there's something that just feels right about the intuitive answer.

C: Oh, there is. That's why we all would say it.

S: And that's why it's hard to let go of that.

C: But it's the holding on even after you're shown that you're wrong. When you go, nope.

S: Yeah. It's the Monty Hall thing too. It's like the intuitive answer just feels right. It's like, no, it's 50-50. It's like, no, you just can't give it up.

E: Run the experiment and you'll see what the result is and you'll see what the answer is.

S: Which has been done, but still just without doing that, like just trying to think their way around it, they just can't make that connection. But again, these are fun, first of all, because they're fun. But second of all, because they reinforce how counterintuitive math could be. Our brains are not really built for calculus.

E: Nope, we suck at it.

S: Our brains are not really built to be able to think up to seven digits or something. But that's pretty much it.

C: Well, and I think it's even beyond that. Because what we're not talking about is a complex calculation. We're talking about a complex word problem that requires conceptualization of the problem. That's the step folks are missing. It's not a calculation problem they're missing.

S: They're failing on the word problem.

C: They're failing on the conception. Yeah.

B: Reminds me of a Carl Sagan quote I came across today. "Understanding is a kind of ecstasy." Never heard that one before. I love it.

S: All right. Well, thank you, William. That was a fun one. Let's go on with science or fiction.

Science or Fiction (1:33:45)

Theme: None

Item #1: A review of health records finds that getting the shingles vaccine is associated with a 20% reduction in the risk of developing dementia.[6]
Item #2: A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.[7]
Item #3: A systematic review finds that older adults, >35 years old, do not experience greater exercise induced muscle damage than younger adults age 18-25 from the same exercise.[8]

Answer Item
Fiction A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.
Science A review of health records finds that getting the shingles vaccine is associated with a 20% reduction in the risk of developing dementia.
Science A systematic review finds that older adults, >35 years old, do not experience greater exercise induced muscle damage than younger adults age 18-25 from the same exercise.
Host Result
Steve swept
Rogue Guess
Cara
A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.
Jay
A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.
Bob
A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.
Evan
A new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socio-economic groups.


Voice-over: It's time for Science or Fiction.

S: Each week, I come up with three science news items or facts, two real and one fake. I challenge my panelists to tell me which one is the fake. Just three regular news items. Cara, you missed a sweep last week.

C: You swept?

S: No, they swept me.

C: Have you had any sweeps yet this year?

E: Yes, one, I think.

S: Yeah. I think I've had one. All right. Here we go. Item number one, a review of health records finds that getting the shingles vaccine is associated with a 20% reduction in the risk of developing dementia. Item number two, a new study finds that mortality rates are overall higher in the US than Europe, but these differences disappear for the highest socioeconomic groups. And item number three, a systematic review finds that older adults greater than 35 years old do not experience greater exercise-induced muscle damage than younger adults aged 18 to 25 from the same exercise. Cara, since you're back this week, why don't you go first?

C: Okey-smokey. So first and foremost, a review of health records, getting shingles vaccine. Is that varicella? Is that shingles?

S: Yeah.

C: Yeah, varicella associated with the 20. I am still so frustrated that the shingles vaccine is not available here in this country to folks that are younger than, I think it's like 50 or something.

S: Yeah, something like that.

C: I want to take it now. So many people I know who had shingles had shingles in their 30s and 40s. Yeah, like I had chicken pox. Oh, yeah, I'm at risk. Anyway, so 20% reduction in the risk of developing dementia. 20% is really high. If that's true, that's good to hear. I kind of could see a connection here with dementia and viral illness and sort of preventing that manifestation. Shingles is brutal. And I don't know, I feel like when it happens, it happens for like a long time. It's pretty intractable. We'll have to take herpes medication to try and reduce its impact. And I don't know. I do think that more and more we're reading about the risk of viral infection and its outcome on developing dementia later on. Mortality rates are overall higher in the US than Europe. Yeah, I buy it. But these differences disappear for the highest SES groups. So you're saying the highest European SES groups compared to the highest American SES groups. You're not saying the highest American SES groups compared to the average European?

S: Yeah, if you're comparing the same socioeconomic group, it's not different at the highest.

C: Then we would lose. You're saying we would lose that. I could kind of see it, but I could also see that not being true. I mean, I definitely buy that mortality rates are higher in the US than Europe. But I mean, we don't have universal access to health care. But I think that that affects everybody, even rich people, because there's a lot of social inequity that even rich people are still subject to a lot of systemic problems in this country. So I believe we are only as strong as our most vulnerable. So even the rich people aren't able to kind of get off that boat. So I don't know. That one kind of rubs me the wrong way. And then older adults. I don't like that you said older adults over 35.

S: That's what the study said.

C: That is not how you define older adults. They do not experience greater exercise-induced muscle damage than younger adults 18 to 25 from the same exercise. Yeah, I mean, maybe the oldest of old do. But yeah, 35 plus. I think a lot of people are in their prime, 35, 45. Yeah, I kind of buy this one. Why would their muscles be more? I mean, yes, their muscles are older. But does that mean they're more inclined to damage? Or does it just mean that they're more inclined to atrophy? I don't know. Yeah, I think the one that bugs me the most is the mortality rate one. I have a feeling that rich people in the US are not saved by being in the US.

S: OK, Jay.

J: Yeah, so the one about the shingles vaccine, I think that one is definitely science. And while Cara was talking about the second one about the people on the highest incomes in the United States versus those in Europe, the basic question here is, can rich people in the United States buy the same level of health care that people can get in Europe? And everything I've read says no to that. So I'm going to agree with Cara and say this one is the fiction.

S: OK, Bob.

B: The 20 percent reduction in developing dementia from shingles vaccine. That's dramatic. I don't necessarily want to use the card of like, I would have heard of that, but that's dramatic. Shit. And if it's true, I'll be pissed because that should be out there more.

C: Yeah.

B: Wow. All right. The second one here. Oh, yeah. I think on the surface here, it makes a lot of sense because, you know, these upper socioeconomic groups will have a lot better access to better health care. That's my gut reaction to that one. And let's look at this last one here. The third one makes sense to me, although 35 might be a little bit young for it. But yeah, this assumes that the more muscle damage you do, the more hypertrophy you would experience, right? I think it's kind of implying that. So that would make sense then, that less damage, less hypertrophy, the older you are, which absolutely makes sense because sarcopenia is a bitch, and all that means is that as you get older, you know, you start losing muscle. And this is basically a downward spiral to death. But yeah, losing muscle. I noticed at my age, building muscle is, you know, it's hard. It's a lot harder than, of course, it was in my teens and 20s. So, yeah. So I think there's still a relatively dramatic contrast between, you know, a 35- or 40-year-old compared to, like, a 15-year-old in the gym. So, yeah. So that one makes the most sense of all of these. So I can totally justify that one. So that one's almost certainly science, I think.

S: Are you reading it correct, Bob?

B: Let's see.

S: It sounds like you're arguing for the opposite of what it says.

C: Yeah, it sounds like you're arguing for the opposite.

B: Older adults. So if you're greater than 35, you do. Oh, they don't experience. Yeah, they don't experience greater damage. So the older you are, the less damage, right?

C: No, the older you are, you don't.

S: It's the same. It's the same.

C: It's the same. Yeah. It's no worse than if you're 18.

B: Oh, okay. Sorry. Oh, crap. Let me rethink this then. So and I don't know what to think about that. I was so confident about that damn thing. But they still might have less damage.

S: You're right. They could have less damage. It's either the same or less.

E: No worse.

B: All right. So based on that, I'm less convinced of it. But I think I'm going to go with the crew, though, with the socioeconomic groups. That one is kind of rubbing me a little bit the wrong way. I think my instinct is going to be wrong with that. So I'll say that that's fiction.

S: Okay. And Evan?

E: I'll agree with the group and say that the mortality rates one is the fiction. I'm very curious as to what the association between the shingles vaccine and the reduction in the risk of dementia is. If I knew more about maybe the shingles vaccine, that would make more sense to me. But I'm very curious.

S: Well, let's start with that one. Then we'll take these in order. A review of health records finds that getting the shingles vaccine is associated with a 20% reduction in the risk of developing dementia. You guys all think this one is science. Let me just say, if this were true, this would be the single most effective way of reducing your risk of dementia, right? That's a pretty significant effect size.

E: Combined with regular exercise. Sounds good to me.

S: This one is science. This is science.

B: Wow.

E: Get your shingles vaccine, everyone.

S: This is probably causation. This is probably not just a correlation because of the way they did the study.

C: Really? So it's the actual infection.

S: Because it was a natural experiment, it was sort of randomized. In other words, people were not choosing to get the vaccine or not, right? So that eliminates a lot of confounding factors. They used a database where the vaccine wasn't available and then it was available, right? So people didn't get it because it wasn't available. Not because they chose not to get it or couldn't afford it or some other confounding factor. So they just created an opportunity. Oh, let's make a comparison and see what happened. And there was a 20% reduction in the risk of developing dementia later in life. And they think it's directly related to the effects of the herpes zoster virus on the brain. That this is a systemic infection. It can cause brain damage, basically.

B: Oh, crap. So if you had it, you're still at greater risk.

S: Yeah. So get your shingles vaccine, basically.

C: Well, so, and that was...

B: I had it, goddammit.

E: I'm going to get it.

C: Was that irrespective of chicken pox status?

S: You can't get the shingles unless you had chicken pox in the past.

C: But I'm saying those who were and were not vaccinated, they all had had chicken pox or no?

S: Yeah, I don't know.

C: Because I guess I would be really interested to see... Because obviously, if there's a direct relationship between a shingles infection and increased risk of dementia, is there also a relationship between a chicken pox infection and an increased risk of dementia?

S: Definitely chicken pox as an adult is really bad.

C: As an adult, but yeah, maybe not as a child.

E: I got mine at 12. I was 12 when I had chicken pox.

C: Yeah, I was a kid too.

S: Yeah, I was a kid. I was a bambino.

E: Bob, Jay, did you have chicken pox?

J: I did not have it. I never had it.

S: Really, Jay? I don't remember having it for sure.

C: Can he get vaccinated? There's a chicken pox vaccine now.

S: Yeah, you can get the chicken pox vaccine.

C: Yeah, if you never had it, you should get vaccinated. Well, no, none of us had a vaccine.

S: My wife had chicken pox as an adult.

E: Oh, it must have been painful.

S: Yeah.

C: Jay, you should get vaccinated if it's approved for adults.

J: I think I did.

C: Oh, okay.

E: Consult your physician.

S: All right, let's go on to number two, a new study finds-

B: Wait, does it matter if I had a very minor case of shingles? Because it was barely nothing.

C: Well, that's good.

S: That matters, right? Because this link is because the chicken pox infection is severe and it affects your brain.

B: It was just like a weird feeling in my back. Not even necessarily even painful. Just kind of like, what the hell is that? And I hear people complain about it. I'm like, shit, it wasn't much for me at all.

S: It could be really painful.

C: I've had patients who were in active cancer treatment for severe cancers whose shingles was the number one complaint they had. Way worse than their chemo experience. Yeah, it can be really bad.

S: All right, let's go on to number two. A new study finds that mortality rates are overall higher in the U.S. than Europe, but these differences disappear for the highest socioeconomic groups. You guys all think this one is a fiction. It's interesting that you all assumed and didn't even question that the cause for the difference in mortality rates was healthcare.

C: Well, no, that's one of many-

E: I thought you brought it up, Cara.

C: Yeah, I brought up that that's one reason is that they don't have access, that we don't have access to universal healthcare. But there are a lot of other pressures in the U.S. that are different than that.

S: If it were just healthcare, then this would make more sense because if you have the money, you get world-class healthcare in the United States.

C: Totally. But I said towards the end that all of those systemic pressures that we're dealing with here apply to rich people, too.

S: All right. Like what?

C: I would say probably like certain things aren't as regulated from an environmental perspective in the U.S. as in certain European countries. Education is also not as socialized here. So fewer people have access to free public education for later in life, child care, post-maternal care. I mean, we pretty much get the shit end of the stick when it comes to social programs.

S: Yeah.

C: And I think all those things combined probably do contribute to mortality.

S: Okay. Well, this one is the fiction. It is the fiction. Because the higher mortality rate in the U.S. was greater at every socioeconomic level. In fact, the highest socioeconomic in the U.S. had the same mortality rate as the lowest socioeconomic group in Europe.

E: Yikes.

C: Which means that the lowest SES group is probably like on par with developing countries.

S: Yeah. That's bad.

C: That's terrifying. That's so sad.

S: And it didn't even – I mean, it probably doesn't have really much to do with healthcare, certainly not for the higher socioeconomic groups. It has to do with environment. It is a lot of the regulatory things that you're talking about. It's like the lifestyle factors, diet, and environmental factors and things like that. And going to the lower socioeconomic groups, which were definitely much worse off in the U.S. than the higher socioeconomic groups, access to healthcare is also increasingly a factor as well. That probably explains a big part of the difference between high and low in the U.S., but there are a lot of other factors. And it includes things like even social mobility, smoking rates. There's a lot of things that you have to take into account.

C: Something as simple as daycare. Access to daycare makes a huge difference if you're having to work three jobs and, yeah.

S: Right.

C: It'll burn you out fast.

S: There are other things as well. We are a large, sprawling nation with very diverse genetic populations. And it's hard to compare that to a country like Sweden, which is very small and homogenous. So there are some legitimate differences that may not be a policy difference, but I do think that there are clearly, I mean, Europe is way more socialized, and that does seem to correlate with better outcomes in terms of-

E: Longer life.

S: Healthcare and regulation, et cetera.

C: And also higher quality of life.

S: Yeah.

C: We see that all the time in studies.

S: All right. Number three, that means that a systematic review finds that older adults greater than 35 years old do not experience greater exercise-induced muscle damage than younger adults aged 18 to 25 from the same exercise. That one is science. This was surprising. This was not what the researchers expected to find. They thought that, like, we're physiologically in our peak, you know, 18 to 25, and that, you know, the exercise-induced muscle damage is, again, what Bob was saying, is sort of how you build muscle. And they thought that that would be superior, basically, in the younger adults. Maybe because they're, even though they're doing the same exercise, they're working their muscles harder or something. But they found no difference. And in fact, in some measures, it was less in the older population.

C: That's interesting.

S: So it's, yeah, so they concluded advancing age is not associated with greater symptoms of EIMD. So they said, basically, older adults can pursue physical activity. No problem. You should do it.

C: Yeah, but this is not a contributor to that lack of muscle or that loss of muscle kind of tone and bulk. Yeah, that's good.

S: But the younger adults had more muscle soreness and CPK release into their blood, which makes sense if you have more muscle mass at baseline. You'd experience more soreness and you'd have more creatine kinase in your blood.

C: I wonder then, too, if probably it is the case that younger people are more at risk of rhabdo than older people, maybe.

S: Yeah, you have more muscle mass to break down. Although we end up seeing it in the older population because they have more things that could trigger it. Like the most common reason is to fall down and you can't get up. You're laying down for two days.

C: Oh, that would make sense. Yeah, so I should say exercise-induced rhabdo.

S: Yeah, I've seen rhabdo many times. I've seen it almost every time it's a, quote unquote, crush injury. It's because you're down or you're literally trapped under something. People who are in building collapses or whatever, get it. I saw one case, probably the worst case of rhabdo I ever saw, that was not due to that. It was a young patient who had fulminant myositis. His muscles were so inflamed they were breaking down over days.

E: Painful.

S: Kidneys were overwhelmed, could not clear the creatine kinase.

C: Yeah, the only people I've ever known in life to have had rhabdo, it's like the CrossFit effect. Steve can explain this better, Bob. But is it rhabdomyolysis?

S: Myolysis.

C: Yeah, it's where the muscle, physical muscle breaks down and the components are too big for your kidneys to process that.

S: Yeah, it just overwhelms the kidneys' ability to clear it. So you got to give them a lot of fluid is the big thing.

J: What does it look like, Steve?

C: Coca-Cola colored urine. That's good.

S: It will just shut your kidneys down. It could destroy your kidneys.

C: But isn't that like the first symptom or like an important symptom is Coca-Cola colored urine? And like severe muscle pain.

S: All right, good job, everyone.

Skeptical Quote of the Week (1:49:50)


“The fool doth think he is wise, but the wise man knows himself to be a fool”

– William Shakespeare, English playwright and poet, As You Like It

S: Evan, give us a quote.

E: "The fool doth think he is wise, but the wise man knows himself to be a fool." Famous quote from William Shakespeare's play, As You Like It.

S: So you're saying that Shakespeare anticipated Dunning-Kruger by centuries.

E: I suppose so, yeah.

S: I mean, it's not exactly the same, but it's the same kind of thing.

E: No, right idea. Yeah, yeah.

S: Nope, I think it's very wise. I agree with that.

E: Hey, there you go.

S: Yeah.

E: We all know we're fools here.

S: Well, that's good. That's the humility thing that we teach, right?

E: Yes.

S: When you know enough to realize how little you know compared to how much knowledge there is, whereas people who don't know anything, they don't even know how much knowledge there is. So they think whatever little bit of knowledge they have is all that there is. So they sort of overestimate their relative knowledge.

C: How sad.

S: But it's totally fixable.

C: Yep.

S: Right?

E: Yeah, become aware of it, first of all.

S: It always reminds me of that joke where someone says to a drunk person, you're drunk, and they say, and you're ugly, and I'll be sober in the morning.

E: I think that's attributed to Churchill.

B: Attributed to Churchill.

S: Is it?

E: Attributed. I don't know for certain if that's the case.

S: Right. Ignorance is completely treatable.

C: Yeah. Yeah.

S: All right. Thank you guys for joining me this week.

J: Right.

C: Thanks Steve.

Signoff

S: —and until next week, this is your Skeptics' Guide to the Universe.

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.
