SGU Episode 1043
This episode was created by transcription-bot. Transcriptions should be highly accurate, but speakers are frequently misidentified; fixes require a human's helping hand.
This episode needs: proofreading, links, 'Today I Learned' list, categories, segment redirects. Please help out by contributing!
How to Contribute
July 5th, 2025
Mapping our place in the vastness of the universe: the Laniakea Supercluster. |
Skeptical Rogues |
S: Steven Novella |
B: Bob Novella |
C: Cara Santa Maria |
J: Jay Novella |
E: Evan Bernstein |
Quote of the Week |
“There is a kind of a spatial association between music and math ... the intersection of science and art. Medicine is an art and research is an art. You have to be creative in the way you design experiments.” |
Dexter Holland, PhD (molecular biology), lead singer of the punk rock band The Offspring |
Links |
Download Podcast |
Show Notes |
SGU Forum |
Intro
Voice-over: You're listening to the Skeptic's Guide to the Universe, your escape to reality.
S: Hello and welcome to the Skeptic's Guide to the Universe. Today is Wednesday, July 2nd, 2025, and this is your host, Steven Novella. Joining me this week are Bob Novella.
B: Hey everybody.
S: Cara Santa Maria.
C: Howdy.
S: Jay Novella.

J: Hey guys.

S: And Evan Bernstein.
E: Good evening, folks.
S: Bob, an early happy birthday to you.
US#06: Happy birthday, Bob.

B: Thank you very much.
S: Bob was born on the 4th of July.
E: What a birthday.
B: An awesome birthday. Never have to go to school on my birthday. Never have to work on my birthday. It was good.
J: And I think we've said this before, but Bob had a big, big family party with tons of friends every year because that was like a big thing that we did every year. So it was awesome for Bob, that was.
B: It was like the event. It was second only to Halloween. That was always a great day.
E: Love that.
S: So you guys want to hear about my tech hell? My tech help hell?

J: Oh, is this the phone?

S: Yeah.

J: Christ, tell us all about it.
E: Gather round.
S: Very quickly: I had to switch my phone number from my work account to my personal account, both on Verizon. So it's Verizon to Verizon, just switching the number from my business account to my personal account. The process should have taken 20 minutes. It took 10 days. It was a 10-day cluster to do this, guys. I did the whole process of requesting that they release the number and making sure it was unlocked, which is a separate thing, apparently. And then I had to call Verizon and give them permission to switch the number to my account and do whatever things they needed me to do, and they couldn't get it to work. It just wouldn't switch over, and they couldn't figure it out. So this got bumped up multiple layers, to higher and higher tech people, until I was on the phone with my tech person from Yale and three people from Verizon at the same time, on the same phone call. They still couldn't figure it out.
US#06: Wow.
S: So they basically said, well, we're going to have to get our software guru in there to try to figure out what's going on. Then, like two days later, so now we're a week out, they say they think they fixed it. It was something in the software. So, go ahead and do it again. I did it, and it wouldn't work. There was a separate problem, a completely separate problem, that was also a hard stop that nobody could figure out. So first they couldn't get the number off of Yale's account, then they couldn't get it onto my account, which of course are the two things that have to happen. And it was because of some silly thing: we have family insurance that covers three phones, but it only covers three phones, and now I was trying to add a fourth phone.
US#07: So that's a big to do.
S: The thing that kills me, though, is that it creates an error, but it doesn't tell the person what the error is, you know what I mean? So they don't know. It's just, nope, this phone can't do it.
C: Yeah, seriously, it should have like a number.
US#07: 5 digit number or something.
S: It should immediately say, this is incompatible with this service. That took them two more days to figure out. And then finally I had to remove that service, then transfer the number over. So 10 days later, I finally have my phone number on my personal account. Then I had to go get a new phone. And I've done this many times before with all my family members; it's usually not a big deal. You get a new phone, you activate it, and it takes the number from your old phone, right? So we went to the Verizon store to do it in person; it's always better to do it in person. And something else happened, too. For some reason we got an Apple Watch in the mail from Verizon, and they charged us like $900 for it. We never ordered it. None of us even owns an iPhone.
Voice-over: What?
S: So we don't know how that happened. It's still a mystery. We have no idea how.
B: It's a mystery.
S: This got ordered. So we were told to bring that in and return it. So we returned it at the same time, right? We had the person do two things for us: one, take back this Apple Watch and take the charge off of our account; and two, get me a new phone. So I got the latest phone, got home, and activated my new phone. They attached it to the wrong number. They attached it to my wife's number. So now I simultaneously—
US#07: Oh please.
S: —deactivated her phone and activated my phone to the wrong number. That was really hard to fix, just incredibly hard to fix. Now I'm hours and hours on the line trying to get this fixed, and they couldn't do it. We got to the point where I had to activate my wife's phone back to her old number, and then I could activate my phone to my number, and the activation wouldn't happen. He didn't know why. He basically bailed on me, so I called someone else. And here's the other thing that happened. They said, OK, we need to send you a text to validate that you have permission to do this, that this is you, right? It's just a security thing, so that somebody else can't steal your number. But at this point, what the previous person had managed to do was make it so that my phone didn't work and my wife's phone didn't work. Neither of our phones worked. And again, I'm doing this online. They say, OK, well, we have to send a text to a phone on your account, which is my daughter's phone; she's home for the summer. But hers is the phone the Apple Watch was ordered for; it was on her line. And what the guy did when he deleted the charge was deactivate her phone. So they managed to deactivate every phone in my house. Oh my God. And there was absolutely no way to activate any of the phones remotely. We had to find a Verizon store that was open till 9, go there physically so we could show them an ID, and that was a two-hour cluster, but it worked. We eventually got everything taken care of. Although, just to add one more wrinkle, in the middle of it the guy we were dealing with deactivated my iPad by mistake, which is also on the same account.
B: Did you have any weapons on you?
S: But at the end of the night, we got everything working and it's all good. But it was like a maximal cluster. It was just unbelievable.
B: You do realize that that's what hell is like?
S: Just an endless loop of tech help.
J: Steve, were you able to keep your cool? Did you get pissed off at any point?
S: No, I kept my cool. The thing is, the person I'm dealing with is never the person that screwed up. You know what I mean?
J: Right. Well, that that one guy screwed up, though.
S: I mean, that one guy did, but their store was closed. We had to go to a different store to sort it out.
C: Keeping your cool is not just the best, but kind of the only way to eventually get where you're trying to go.
E: Steve, they're gonna send you a survey about your experience.
S: My experience? Oh, my God. And the thing is, I was talking to Jay about this. Part of the problem is that the software backbone that's running all of this is too complicated. Nobody knows how to really manage it. The person at the store I was dealing with had to call tech help; he couldn't resolve it by himself. He had to get on with Verizon to work the back end. So I'm listening to the in-store tech guy talking to the tech person at Verizon, and they're giving him a bunch of crappy advice, too. He's like, no, that's not going to work. He's rejecting most of their suggestions about how to solve the issue, until they get to the thing that actually worked. They don't understand their own software, and I get the feeling it's changing so frequently, there are so many moving parts. And again, it's not very user friendly. They get an error and it's a mystery as to what's causing it. Now they have to go on this hunt to figure out what the problem is.
J: And Steve, it's not uncommon that these companies have legacy systems. They could be working on a code base where there aren't many programmers left on that code base anymore.
S: Well, here's another layer to this. Part of the reason why it took two hours to undo all of this is that they couldn't activate my wife's phone. They just couldn't do it. So again, we were never going to fix this online. Eventually we figured out that her phone, which is only a couple of models back, just wasn't supported anymore, and they needed to give her a new physical SIM. They couldn't do it on the SIM that was in her phone.
Voice-over: Oh my God.
S: And it wouldn't take the new eSIM, which is like an electronic SIM. So there was no way to fix it without physically swapping out her SIM card.
C: And if there was an error code, they would have known that right away.
S: But the thing is, I think this is a bit of planned obsolescence. They're constantly changing the models, and then they sunset the older ones after a few years. They just don't really support them anymore.
E: Yeah, it becomes incompatible, and when a problem arises, it's impossible to fix.
S: Like my daughter's phone, the one they accidentally deactivated and we had to have reactivated: it's a few models old, and they're like, she'd better upgrade that soon or we won't be able to transfer the data to her new phone. Yikes, right? It'll be so old we can't even access it to transfer the data. It'll be incompatible. So you have to update every few years to stay in the loop; otherwise your phone gets too old to even update.
S: It's crazy. It's frustrating. So that was my tech hell. It sucked up a lot of time.
J: Steve, legit, get off their network, man. I don't have any problems like this on AT&T.
S: Well, I never had problems like this before. I'm not going to overreact to one bad experience.

E: It's sad that that's how you describe it. Well, they are the most expensive, too. I mean, that's part of the other frustration.
S: You can't tell me this never happens on other services. I don't believe that.
C: Yeah, I think it's like everybody has their reason that they stick with who they stick with.
S: All right, let's move on. Bob, you're going to tell us about how quantum electronics are going to fix all these problems for us.
B: Oh, absolutely. Thank you, Steve.
Quickie with Bob: Quantum Electronics (10:51)
B: This is your Quickie with Bob, people. So here is yet another claim that we might be able to make electronics 1,000 times faster. We've heard this so many times, but is this one it? Let's explore it and see what the details are, at least. Researchers at Northeastern University published in the journal Nature Communications, and they claim a discovery that they say will allow them to change the electronic state of matter essentially at will, potentially making electronics 1,000 times faster and more efficient. See, I told you so. This material is described as a quantum material. The designation, I guess, is 1T-TaS2. I don't know what the nickname might be; call it Taz. It's called a quantum material because it's essentially governed, electronically, magnetically, everything, by quantum mechanics. So it doesn't behave like ordinary silicon or copper at all.
US#02: That's right, Copper.
B: That's right, copper. They found a way to make this material switch its phase to behave like an insulator or as a metal. And these phases could act like transistors, essentially switching between allowing current to flow and blocking the flow, which could of course represent the fundamental ones and zeros of today's binary computers. Now, if this works, and that's a big if, it would have tremendous benefits over silicon. It would be so superior in so many ways. These phase states could exist in just a few atomic layers, which would allow ultra-dense packing that could cram in far more components than silicon ever would. The switching itself could happen not in billionths of a second, but in trillionths of a second, picoseconds, incredibly fast. Obviously, the energy usage could also be far less than conventional transistors. So it's a win, win, win; just these three basic characteristics would be so dramatic. This could potentially be just what we need, as silicon gets closer and closer every day to that hard wall of physics limitations. The components are getting so small that you're having electron leakage that is going to basically make it unusable, and we'll reach a point where you just can't make it that much smaller and faster. As usual, there are lots of hurdles left for this: control, integration, stability problems, and any one of those, it seems to me, could be this technology's undoing. But like many of these papers we read about huge advances in electronics, we basically just have to wait and see which one takes off and when it's going to happen. So fingers crossed. This has been your quantum Quickie with Bob. Back to you, Steve.
S: Thanks, Bob. Remember, I think it was a Science or Fiction from a few weeks ago where one of the items, which was true, was looking at two-dimensional transistors, because those don't have the same limit of scalability. With silicon, as you shrink it down, the physical properties change, and you get to a point where it just wouldn't function anymore. But a 2D material retains its physical properties even at the smallest possible molecular size. So that would also be another potential pathway to solving that limitation.
B: Yeah, there seem to be lots of avenues they're going down, and if one hits big, it could be a dramatic change for the computing industry. So hopefully we just have pure numbers to our advantage in terms of possibilities. It's like battery technology: you see, oh, look at this breakthrough they have in the lab. Of course, 99.9% of them don't pan out, but we've just got to see what happens.
S: All right, Jay, we're actually going to have a somewhat AI-dense show this week.
News Items
AI Research Collaborators (14:35)
S: You're going to start us off talking about using artificial intelligence as research collaborators.
J: Yeah, I've been wanting to learn more about this. I think about AI and all the uses that are spinning out in the world, and this is a big one: what's it helping science do? Of course, there are tons of examples of it, but this is a pretty strong thing they've come up with. Pretty interesting, and potentially, for the future, it could be very helpful. So there are multiple research teams currently developing artificial intelligence systems that are designed not just to assist a scientist, but to simulate a scientific collaboration. Let me tell you how they're doing this. The systems use teams of AI agents, each assigned a specific role. One of them is going to act as a neuroscientist, another one is going to be a pharmacologist, and there's another one that's going to be a critic, whose job is to ask a lot of questions and literally criticize what they're doing, to poke holes in it. And they interact with each other kind of like people do. They talk to each other, they debate hypotheses, they propose research directions, and there's a conversation going on between all of them. The goal is to mimic the structure and the reasoning of a real lab team, using AI to accelerate idea generation, problem solving, and scientific research. It seems pretty obvious, right? We're going to have these specific bots we've given characteristics to, and we're going to have them talk and see what happens. The research right now is happening in several labs. We have one at Stanford, we've got one over at Google DeepMind, and there are several happening in China. The researchers are experimenting with what are now being called AI co-scientist systems. So how do these systems actually function?
They're very similar. The Stanford setup, which they call the Virtual Lab, uses GPT-4-based agents configured to play different roles, like I said earlier, in a simulated scientific meeting. For example, you provide a goal and assign roles. You could say, OK, we're going to try to find a drug for this specific illness. Then they run several discussion rounds, and ChatGPT, or whatever model they're using, will search the current literature, generate proposals, and evaluate a lot of the potential answers that come up. Google is testing something similar using their Gemini 2.0 system. Their co-scientist model assigns each agent a specialized task. They have to come up with a certain number of ideas; that's a big part of this. They have to do a literature review, they need a criticism phase and a synthesis, and then they run the group through multiple review cycles. Typically, they say, that ends up with a written summary at the end. The appeal is that these systems can generate faster ideation and broader coverage of existing research. It's a structured internal debate which they have control over, and they can see what's happening. And in early tests, these multi-agent systems definitely outperformed single chatbots on scientific reasoning benchmarks. So they're saying, let's take a bunch of chatbots; how much more money and time does it take to spin up a bunch of them in these roles versus having one chatbot do it? It really isn't crazy time consuming. It's not that much work to set all this up, and then you just flick the marble and see what happens. And it works remarkably better than a single chatbot doing it on its own. This includes graduate-level scientific questions.
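The structure Jay describes, role agents taking turns over several discussion rounds, with a critic poking holes in the latest proposal, can be sketched as a toy loop. This is purely illustrative: the class names, role strings, round count, and canned `respond()` replies are assumptions for the sketch, not the actual Virtual Lab or Gemini co-scientist code; a real system would make an LLM call where each stub returns a string.

```python
# Toy sketch of a multi-agent "co-scientist" meeting loop.
# Real systems would replace respond() with role-conditioned LLM calls.

class Agent:
    def __init__(self, role):
        self.role = role

    def respond(self, goal, transcript):
        # Stub: a real agent would be prompted with its role, the research
        # goal, and the meeting transcript so far.
        return f"[{self.role}] proposal for: {goal}"

class Critic(Agent):
    def respond(self, goal, transcript):
        # The critic's job is to poke holes in the most recent contribution.
        latest = transcript[-1] if transcript else "(nothing yet)"
        return f"[{self.role}] critique of: {latest}"

def run_meeting(goal, agents, rounds=3):
    """Run several discussion rounds; each agent speaks once per round."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.respond(goal, transcript))
    return transcript

team = [Agent("neuroscientist"), Agent("pharmacologist"), Critic("critic")]
minutes = run_meeting("drug target for liver fibrosis", team, rounds=2)
summary = f"{len(minutes)} contributions recorded"
```

The point of the structure is that the critic sees the transcript, so criticism is interleaved with proposals each round, mirroring the "criticism phase" and "multiple review cycles" described above.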
What I also found even more interesting is that they're capable of proposing new ideas that human researchers might not reach as quickly, or in some cases at all. That's the important part of it. So let me give you a real-world example. This comes from liver fibrosis research. A Stanford pharmacologist named Gary Peltz used Google's AI co-scientist to propose potential drugs targeting epigenetic regulators. Of the three drugs the system identified, two showed promising anti-fibrotic effects in human organoid models. Steve, you understand all of that?
S: You should, too. You know what organoids are.
J: I know, I'm just making a joke. It's just very sciency. Oh boy. So the results they got exceeded the performance of the drugs selected by Peltz himself. And while, of course, this doesn't mean the AI discovered a cure, it does demonstrate the potential for these systems to contribute meaningfully to preclinical research. It's not insignificant. And again, we say this all the time: we are in the absolute skin-of-the-apple part of AI. It's not a super progressed technology; it's essentially brand new, and it's already able to do things like this. There are limitations, though. Some researchers point out some really good things we have to keep in mind. Out of the AI's suggestions, some researchers were saying that humans would likely have gotten to these conclusions on their own, which is a perfectly fair point. In that case you could say, well, the AI got there faster, but humans would still get there. The other note: other people said that the simulated team discussions between the AI agents are logically consistent and stay on topic, but they don't have the spark or unpredictability of real conversations between people.
S: They're not really creative, right?
J: Exactly.
S: They can't come up with totally new ideas, because they couldn't be trained on a totally new idea, right?
J: No. And then, quite simply, and this isn't simple, they're not wired or programmed to function like a human brain. Human creativity is super complicated. They can't get these chatbots to behave like a real human. Not only are they on rails, they just don't have the mental capacity. They don't have any of that stuff. There's no intuition, no sudden flashes of insight, no eureka moments like that.
S: But they could be good in a couple of ways, it seems to me. One is summarizing existing research: this is where we are, these are the questions that are still open, et cetera. And they may be good at generating inspiration for ideas. They won't come up with brand-new ideas themselves, but they might inspire the researcher to.
J: That's a good point, Steve, right? Because they could be priming the humans they're working with. And I know artists that use AI-generated content to spark their imagination: oh, I never would have thought of that, that type of thing.
S: Yeah, yeah.
J: Which is good. Anything that pushes the ball forward here, I think, is a good thing. We also have to keep in mind, though, guys, that these chatbots hallucinate, and these systems can and do generate incorrect information, which means the output still requires expert vetting. So there is still that limitation, and I have not heard anything about hallucinations going away at this point.
C: One thing that I think about, having recently done a dissertation, is how in some ways stuck we are academically in thinking about research the way we thought about it 10, 20, 30 years ago. I'm glad that you mentioned literature reviews, because that's a really important part of a scientific paper: summarizing the state of the literature right now and pointing to some of the most important or salient findings. One of the things that I did when I wrote my dissertation is I picked a topic that just not a lot of people are writing about, because I was overwhelmed by the prospect of talking about something that had thousands of hits within the literature. And I felt like I did a pretty OK job of summarizing the state of the literature on my topic because it was somewhat new. I think that we talk to students and academics today as if that task is doable, and it's not anymore, because there are so many journal articles out there.
Voice-over: Yeah.
C: Like, just the sheer number of publications in existence today compared to 5, 10, 20, 50 years ago is astounding. And using just standard library skills, you're going to overlook stuff. So I see this being hugely helpful.
J: Oh God, I know. This was something that we talked about earlier; I definitely remember talking about it on the show. It was the idea that, let's say it's a code base instead of literature. ChatGPT, for example, would be able to be aware of the whole structure of all of the software. A future version, say, once they finally refine it, expand how much active memory it can have, and really get rid of hallucinations. Imagine if it could hold all of Amazon's global shipping and business management software, which is a huge code base, bigger than any hundred people could ever fully comprehend. It's just this massive code base with tons of avenues, and it's the same thing with research. If it has to process through 10,000 papers and be able to get rid of the bad stuff, flag the good stuff, and then fully understand what's going on with the good stuff, this could be a game changer to let scientists keep up, like you were saying. OK, good, we can process through all this. But we have to have systems that we can trust.
S: Yeah. I mean, again, it's a powerful tool, and if used properly, I think it could be a massive benefit, especially in areas where there's an overwhelming amount of information we have to deal with, like in research and medicine. But it can also be used poorly, right? If it's a substitute for thinking, like the lazy route out, then it could be a detriment. I think the best recent example is RFK Jr. submitting a report that had like six fake, nonexistent studies in it, because some jackass in his department clearly used AI to generate the report and didn't check it.
E: Never went to the references. Holy moly. Yep.
S: So that's easy, yeah. But that's going to happen more and more.
J: Of course it's.
C: Happening all the time, across every school in America, in the world.
J: You know, the researchers themselves said the real value here lies in accelerating parts of the scientific process that are time consuming but absolutely have to happen. And again, I use ChatGPT. It's helping me write a screenplay. It helps me with SGU work that I'm doing, not just me asking, hey, help me frame this email, or help me generate some copy for a complicated email that I want to send. I personally will give it multiple articles and say, reduce all this down to bullet points for me, and let me see all the important information in that format. That helps me so much with the work that I do for the SGU, in lots of ways. It's powerful stuff and it's super useful, but man, doctors can't lean back. Researchers can't lean back and not give it expert supervision and careful validation. They have to work with it. It's another tool, and it's got to remain a tool. It can't be the thing that does everything.
S: Let me ask you a question, have you asked your chat bot to marry you yet?
J: Oh my God, remember? So we had a live stream today, guys. Bob, Steve and I did a live stream and there was this guy. Oh Christ.
S: He fell in love with his chat bot.
J: He fell in love with his chatbot, and then what happened is he hit the max memory of that session, and it got wiped.
S: It got reset.
J: And he cried. The guy was crying. I'm not putting him down for crying.

US#07: Like that movie 50 First Dates, or whatever that movie was. Yeah.
J: Like it was gone. It got creepy, though, guys, because his wife knows about it and she's not happy. And then, of course, the interviewer asked this really hard question, goes to the husband: if your wife wanted you to not use this anymore, would you do it, to remain healthy in the marriage, pretty much? And he's like, no, no. And then he's like, it's expanding my intelligence. And it's like, yeah, sure, the way I use ChatGPT, it's a great research tool and all that. But the fact is, one thing that Ian noticed when we were looking at it: they had a picture of his phone on his desk, and we could read the text he was typing to the freaking chatbot. No, no, no. Yes, and it was sketchy. It was like, oh baby, like when you use your tongue, and shit like what the chatbot was saying.
C: The sad thing here is that he wasn't, quote, falling in love with a chatbot at all. He might think that's what was happening, but a chatbot's not a person, right? A chatbot is not an entity. He was falling in love with himself.
J: That's exactly what I said, I said.
C: It's a shit.
J: He was falling in love with an...
S: Illusion.
C: It's not just that it was an illusion, though. Yes, it was an illusion, but an illusion that is ostensibly a mirror. He was throwing out the right things to be able to receive the right things. He just needed utter and complete validation.
J: When we discussed it today, and I think this is worth repeating, we were pointing out this idea that chatbots are going to become more personable. They're going to be more capable of having real, honest conversations, and people will be developing relationships with them more and more. It's happening now; people think they're having relationships with chatbots. And Steve pointed this out, and I thought it hit this whole thing right on the head: imagine the chatbot is giving the person what they need, and it's basically them getting for themselves exactly what they need and what they want, which could make them very egocentric, of course, which makes them enter their own little bubble. Not a political bubble that they belong to; it's like they'll become kind of egoists.
C: Well, and it's already happening to a lesser and, I guess, less sophisticated degree when you see grown adults, and you see this even... I'm not going to talk about a couple who's been married for 40 years, but let's say a new relationship: grown adults on their phones, swiping through social media while they're sitting right across from each other. They're not engaging with the real person that's right in front of them and enjoying and experiencing all of the beautiful things that come from socialization. Instead they're bubbling themselves in, because those aren't people they're engaging with. They're engaging with themselves.
J: So I find it very intimidating and scary to think about the future. Right now, ChatGPT is so cheery and so, you know, rah-rah in your corner. I've told my ChatGPT to not do that. I want it to be hard. I want it to be skeptical. I don't want it to yes me on everything. I want it to push back, right? And again, it's kind of weird to even say that, 'cause I'm not talking about a person. "Yes, great idea, Jay!" But, you know, people will slip into these egotistical, self-serving, quote-unquote relationships.

S: Well, it'll be Pygmalion, right? They will create for themselves the easiest, most positive, affirming sort of persona, but not a real person that has their own needs, you know, and their own biases and everything. And so it creates this illusion of a person, a fantasy, who is completely unrealistic, and it could spoil them for real relationships, where they have to actually think of some other person's needs, you know?
J: Cara, I would imagine there'll be therapists whose job it is to detune people out of these quote-unquote relationships.
C: Oh, absolutely. And I mean, I know that this is going to feel confronting a little bit, so I'm going to caveat this with that. But from a, like, feminist psychological perspective, I also worry about the gender component of this, because I feel like we're finally in an era... we've still got a long way to go, but we're finally in an era where we're making real progress when it comes to relationship equity. Historically, a lot of relationships, that was the angle, right, that men provided the type of security that was required for women to be able to exist in the world, because they didn't have the rights to, like, have a mortgage or to use a credit card, or, even long before that, women were really dependent upon their male partner for their existence in the world. And so you did see this abuse of power in many relationships, where men sought out partners that were subservient and docile and just reinforced what they wanted to hear and think. I wonder if this is going to set us back from a gendered perspective, because unfortunately, the sad thing is, and I'm not saying it doesn't happen with women, but these stories often are centered around men.
Voice-over: Yeah.
C: They often are centered around men finding, like, falling in love with their chat bots, not women falling in love with their chat bots. Yeah, because women haven't historically had to. We've had to kind of overcome that experience of like the narcissist partner.
J: It would be interesting to see statistics. I'm really curious about this now. Now that we saw this news item, there was a woman who was in charge of this subreddit, which is basically about people in relationships with chat bots, and she's in it, and all these other people are. You know, I think it's going to happen.
C: It will happen to everybody. But I think, similarly, when we see people... this was a huge story 15 years ago, 10 years ago, when I was doing a lot of television at the kind of dawn of this AI stuff, it was the sexbot story. I remember covering it for multiple outlets. Like, what happens when sexbots also have large language model capabilities in them? And I mean, their end users are overwhelmingly male.
J: You know, look, I could see a scenario in my future where I have a personal assistant who I could kind of be friendly with, like be friends with, right? You'll have a rapport. They'll know what you like. You'll assign it, you know, personality traits and all that stuff. But I never for a moment thought that I'd be entering... I know I'm not capable of this, because I am so people-centric, like, to an absolute degree. The most important thing in my life is the people in my life. That's why this is horrifying to me. I have no interest in getting involved in, like, a relationship with an artificial being. There's nothing there for me. And I think we need to start teaching people about this. We need—

C: To teach kids.

J: Kids need to hear about it.
C: And we've talked about the other risk. I know, Steve, we've talked about this on the show before, where, let's say, like Jay, you were saying, you know, having an assistant and I'm really friendly with it. But a lot of people won't be friendly with it, and the fear is that it'll reinforce dehumanizing behavior that then will translate into their daily lives.

J: I just said that today, Cara. Like, I had this thing, so I told the guys. You know, I could be talking to ChatGPT about a bread recipe, right, and it always comes back with a long-winded response. So at one point I caught myself saying, "Stop, I don't want to hear any of that." I said that to the chat, and I'm like, whoa, I can't let myself talk like that, because I could be training myself to have those kinds of habits. I was only going there because it wasn't a person.
C: You're allowed to be egocentric in that moment. This is all about you, this query, this help. It's all about you. But when you're engaging with a person, it's equally about both of you.
J: Well, look at what people do just with the barrier of, you know, like, they have internet balls, right? They're online. They're not face to face. Look at all the dehumanizing things that we've witnessed over the last 10 years that people have been doing online. It's only going to get worse with this.
C: And none of us are above this. I listen to myself when I'm driving my car and people can't hear what I'm saying to them, you know, the drivers in front of me. Like, this is natural behavior.
S: I will segue, Cara, yes.
C: I know there are so many segues.
S: While we're talking about the impact AI is going to have on us, you know, psychologically, socially: what is the carbon footprint of training all these AIs?
AI Carbon Footprint (35:31)[edit]
C: That's a deceptively difficult question to answer, which I didn't really realize until I started to dig into some of the literature on it. And it's funny, because part of the reason that we don't really know how much energy our AI prompts use is because most of the companies who are developing these large language models don't share that information with us.
E: On purpose.
C: On purpose, yeah. We've got all these large companies that are not opening up about how much energy their LLMs are using.
E: Because they're afraid they're going to get taxed.
C: Who knows why? I mean, they're not telling us. It could be because they don't know. It could be because, you know, the numbers are all over the place. It could be because, obviously, it's not good for PR. What's so interesting is that, and OK, here's a full caveat: I haven't asked this to an LLM. I don't think I've ever used ChatGPT, ever. The thing that's strange is that if you do an internet search, or I guess if you query an LLM, about how much energy, or how much carbon emission, there's different ways of slicing and dicing this, an AI prompt uses, you're going to get a wide range of answers, and you're also going to get a wide range of perspectives. You're going to read articles that are super fear-mongering, like, oh my God, it's horrible, it's going to ruin the environment, don't use ChatGPT at all. And then you're going to get articles like the one that I'm looking at right here, called "What's the carbon footprint of using ChatGPT? Very small compared to most of the other stuff you do." And like this writer says, and this was just a couple months ago, I used to feel guilty about it; now that I've really looked into it, I'm not worried about it at all, and you should stop worrying about it too. So you really see two different ends of the spectrum, and you see a lot of different ways that people go through and do their own calculations. A recent article in Science News, "How much energy does your AI prompt use? It depends," written by Celina Zhao, talks about why it's so difficult to answer this question, and it really focuses on a recently published journal article in Frontiers in Communication called "Energy costs of communicating with AI." This just came out in June of 2025. In this study, what researchers did is they focused specifically on large language models that are open source, because that's really the only way that they could do it.
They knew that they wouldn't be able to get behind the curtain of the big players like OpenAI and Anthropic, who have said that they have the data but they're not sharing it. Instead they looked at open-source LLMs. They looked at 14 different models. Apparently Meta and DeepSeek do publish some of that data. And they found that part of the reason this question is so incredibly difficult to answer is because there are so many different components that go into a single query. All queries are not equal. So first of all, you have to break the carbon footprint down into two components. There's the carbon footprint that is produced during the training of the LLM, and then there's the carbon footprint that's produced during individual, or we can say cumulative, queries after, or kind of separated from, the training. Apparently, when it comes to how much carbon the training emits, it's still pretty much a black box, and most of the things you're reading online where these emissions are estimated are just based on queries; they're not based on that whole training experience. The other thing that's so confusing is that different LLMs, or different AI models, have different numbers of parameters, which will result in different intensities, I guess you could say, of querying. The way that this article describes a parameter, they say they're kind of like internal knobs that the model adjusts during training to improve its performance. So as they say, quote, "The more parameters, the more capacity the model has to learn patterns and relationships in data." GPT-4, for example, is estimated to have over a trillion parameters.
So when they did their analysis for this scientific publication, they basically looked at 14 open-source AI models, like I mentioned, and they ranged from 7 billion to 72 billion parameters. They looked at them all on a GPU called the NVIDIA A100. Apparently we're not even using that anymore, so even this data is out of date, because we're using a much more powerful GPU now. I'm a little confused about some of the articles that I read, because this article says that with a more powerful GPU, we're actually talking about more carbon emissions. And I've read other articles that say no, moving up to the, I think it's called the H100, instead of the A100 from NVIDIA, actually is more efficient, so it's less carbon emission. I don't know if you guys have any insights on that, or if you've dug that deep into these kinds of chips.
S: I mean, I do know that the newer, more powerful, better chips, you know, graphics cards, do calculations with less energy. That's partly why crypto miners use them. They use the latest and greatest GPUs, because it's all about how much money you're spending on the electricity to run the process versus how much you mine.
C: Yeah, yeah, there's a purely economic calculation going on with those crypto miners. I think the issue here is that, in order to handle the load that's being put on them, they have to be upgraded, right? So even though they're more efficient, they're more efficient with a much larger load.
S: Yeah.
C: And so then the question is, how is it netting out? Because the load is just getting bigger and bigger and bigger over time; more and more queries are happening every day. And they also talk about these different prompts that are used. They call it inference, right? So they start with the training, and then inference is the life of the model, where the prompts are being used. And they say, over time, that's expected to account for the bulk of the model's emissions. Here's a quote by somebody who was interviewed for the article: "You train a model once, then billions of users are using the model so many times." And they're saying it's hard to quantify the environmental impact, because that impact can vary depending on which data center it's routed to, depending on which energy grid powers that data center, depending on the time of day, and really only the companies that are running them have that information. So we're talking about not just how the actual query is routed physically, but we're also talking about the different parameters that are included within the model that's then going to handle those queries. So tokens is a word that's thrown around a lot. Can you guys define that? I mean, they define it as the bits of text the model processes to generate a response.
J: Yeah, they break things down, the words, into these tokens, which essentially allows them to quantify the words. And then they can use those tokens to actually build sentences and things like that. It's like a way of breaking it all down into almost like a language, in a sense, you know what I mean?
C: Yeah. And so one thing that they mentioned, and we've talked about this idea before, both when we had our guest rogue on and in previous stories that we've covered, is this idea of reasoning models versus sort of traditional or standard models. In a standard model, what the LLM is doing is a bit of a black box to us, but in a reasoning model, it sort of shows its work, right, and says, this is how I got from A to B to C. Reasoning models use a ton more energy. They use a lot more tokens.
US#06: That would make sense.
C: Yeah. So they say, on average, a reasoning model in their study used about 543.5 tokens per query, and a standard model only used 37.7.
J: Well, there's a cost to the processing speed, and more tokens essentially means you have a broader memory base, right? And there are token limits, and if you hit the token limit, it just starts losing the earliest tokens in that memory space.

B: I mean, it makes sense, because the tokens are the fundamental units of text that the model processes. So if you've got a lot of those tokens, you're dealing with a lot more things to manipulate and process.
C: So here's a real number, because you sometimes will hear things thrown around like, one query is equivalent to using your oven for one second. That's one that I've been seeing over and over and over. But the authors of this study are saying that that's wildly misleading. You can never cite a single number, because it's really a range; it totally depends on the complexity of the query, where you're making it, who you're making it to. But here is one place where the authors actually do use real numbers and a real-life comparison. They said at scale these queries add up, and they're talking specifically about using reasoning models. They said using a 70-billion-parameter reasoning model called DeepSeek-R1, that's one of the ones they used in the study, to answer 600,000 questions, and 600,000 questions for a single person sounds insane, but not if you look at any cross-section of time of all the people querying these LLMs, would emit as much CO2 as a round-trip flight from London to New York. That's a lot.
B: That's a lot.
C: That's a lot. And they're saying, even still, none of this accounts for the emissions generated from manufacturing the hardware, from building the buildings that house it, all these things that they call embodied carbon, the carbon that's required just for producing the things that will then run. And so even in this one article where the author is saying don't worry about it, don't worry about it, a typical query, they say, is sort of less than the energy cost in watt-hours of running a 10-watt light bulb for 5 minutes, or using your laptop for 5 minutes. They show that a long input query is more than that, but still less than using a microwave for 30 seconds, or the average US household consumption per minute. But then their maximum input query, because again, they looked at it on a range, and these are numbers that were released by Epoch AI, they're saying that a maximum input query is twice the average US household consumption per minute. That's a lot. So yes, simple, not very complex, high-efficiency queries that are routed to the right data center at the right time of day, you know, at night, when the load on the grid isn't very high, could be very, very low. But incredibly complex, high-parameter, token-rich queries could also be really taxing the system. And it's not just about the energy being used, but, as we mentioned, it's then about the physical carbon that's being put out from it. So I guess my take here is, I think both extremes aren't really telling the story. I don't think it's a "don't worry about it," but I also don't think it's so dire that the planet's going to burn tomorrow because of, you know, ChatGPT. I think we have to look at it in context of all of the other things that we do that produce large quantities of carbon. And we have to be more mindful about how we use these LLMs, right? Like, the energy supply just won't be able to sustain it as it grows and grows and grows.
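Cara's flight comparison can be turned into a rough per-query number. This is just a back-of-envelope sketch: the 600,000-query figure is from the study she cites, but the roughly one-tonne round-trip London-New York emissions figure is an assumption on our part, not a number from the study.

```python
# Back-of-envelope: the study says answering 600,000 questions with the
# DeepSeek-R1 reasoning model emits about as much CO2 as one round-trip
# London-New York flight. What does that work out to per query?
# ASSUMPTION: ~1,000 kg CO2 per passenger for the round trip (a commonly
# cited ballpark figure; the study itself doesn't state this number).

FLIGHT_CO2_KG = 1_000   # assumed round-trip per-passenger emissions (kg)
QUERIES = 600_000       # figure quoted from the study

per_query_g = FLIGHT_CO2_KG * 1_000 / QUERIES  # convert kg to grams
print(f"~{per_query_g:.2f} g CO2 per reasoning-model query")
```

On those assumptions, each reasoning query lands under 2 grams of CO2, tiny on its own, which is exactly why the "at scale these queries add up" framing is the important part.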
The researchers basically say we can't put all of the pressure on us as individual users. We have to also think about these large tech companies and how they are externalizing these costs. They said, "I go to conferences where grid operators are just freaking out. These tech companies cannot just keep doing this. Things are going to start going south." Because if your model is being used by, say, 10 million users a day or more, it has to have a better energy score. It just has to. But there are things that we can do as individuals. If it's just as easy to look something up in a traditional way, do it, right? If it's just as easy to read a Google result, or to look something up the way you used to do it, choose that. Also, they say it's very similar to AC. If the outside temperature is high, if it's the middle of a hot day and all the lights are on, that's not the best time to be using these LLMs, because that means more energy to cool down the inside of the places where these servers are being housed. So think about the pressure on the grid, and engage the same way you would engage with your air conditioning or with other energy-heavy appliances. You try not to do laundry in the middle of the day at peak time; you try not to run your dishwasher in the middle of the day at peak time. Do that too. Also, they said, and I never even thought about this, any extra input takes more processing power.
E: I was told never say please.
C: That's what they say in the article. It costs millions of extra dollars because of thank you and please.
E: Oh my God, really?
J: That's like taking the olive out of the salad on an airline.
E: No, seriously though.
C: Every unnecessary word has an influence on the runtime.
E: I am cognizant. Once I read that, I became very cognizant of it, and I changed that habit in myself. Now I just keep it to as minimal an amount of words as I possibly can to try to get something out of ChatGPT.
C: So it's like, if we want it to be more efficient, we need to learn how to use it more efficiently. But this also, I think, personally, needs to be taught in academic settings. It needs to be taught to children very early in school. It needs to be taught at the university level for researchers, who actually are some of the heaviest users of these products. Just like we had to take library literacy classes when we were using card catalogs, and then when we were using online catalogs, we need to be learning how to utilize these tools in the most efficient way possible.
E: Good AI hygiene.
C: Yeah, yeah.
S: There's a couple of other things I came across, Cara. So there's this MIT study. They found that the carbon intensity of electricity used by data centers was 48% higher than the US average, I think because it needs on-demand energy, right? So it's going to be getting more of its energy from fossil fuel plants, right? It's not going to be using wind.
Voice-over: Yeah.
S: But they also said, in 2023, 4.4% of all energy in the US went to data centers. By 2028, it could be as high as 22%, and half of that will be AI.

C: Yeah, they say that in this article as well.
S: So it may be hard to nail down, but the broad brushstroke is, this is going to significantly increase our energy demand, and it has altered the projections of how much electricity we will need in 2050. We've had to revise all those projections because of these large language models.
C: Yeah. And I think one of the things that is sort of easiest for us as end users is to just remember, it's the same as that mental shift that happens, and we've talked about this a lot on the show, when you realize that throwing something quote-unquote "away" doesn't actually make it go away, only away from you.
E: Yeah, it's like when you're the center point of that equation, yes, you're throwing it away from you.

C: Exactly. But it is going somewhere, right? Like, your trash can is not—

E: Close to a lot of other things.
C: Yeah. And so think that way as well when we're using these digital tools that feel so ineffable, right? They don't feel like they would require a lot of energy, but they do. And so every time you sit down to use one of these tools, be mindful and say, do I need to do this right now? And yes, there are plenty of cases where we do need to do it, just like there are plenty of cases where we have to use plastic in our lives, or we have to use fossil fuel. But we don't use them that way; we use them in all the cases where we don't actually, quote, need them. It's a convenience issue, and that, I think, is where we really got off track. We should start to see more regulation around this as well, so that it can't be used in a wasteful way. Yeah.
Curing Deafness (53:17)[edit]
S: All right, thanks, Cara. Guys, let's stop there. All right, guys, any of you know what the protein otoferlin is?
J: No, I don't.

S: What would your guess be? Otoferlin, what part of the body is that?
E: Odo was Deep Space Nine. Odo?

S: Oto, that's different. Not Odo from Deep Space—

J: Nine. The word "toe" is in there. I think it has to do with your feet.
C: No, wait, are you asking about the ferlin part? Is it something that opens something?
S: So otoferlin is a protein necessary for the release of neurotransmitters from the inner hair cells, enabling transmission of the signal to the auditory nerve. So it's necessary to hear.
E: Yeah, need these proteins to hear.
S: OK, you need this protein to hear, because it releases the neurotransmitter. It basically makes the hair cells that detect the sound communicate neuroelectrically to the neurons that then transfer the signal to the brain, right?
C: So it's transducing sound into—
E: So if you're protein deficient in this area, you you're not going to be able to hear.
S: Exactly. So there's the otoferlin gene, OTOF. If you have a mutation in that gene, so that you do not make the protein, or you make an imperfect protein, that can cause deafness, right? So it's one of the forms of hereditary deafness. So where do you think this story is going?
C: CRISPR, yeah.
E: Yeah. Yeah, I did this.
S: So, yeah, I just wanted... I love these stories about another use for CRISPR to treat a genetic disorder, in this case this one form of genetic deafness. It's autosomal recessive. They looked at just 10 patients aged 9 to 23, so these are, you know, older children to adults. They used an adeno-associated virus as the vector, right? So this is a viral vector, and they used, again, gene therapy to introduce the gene to produce the protein. This was mainly a safety study, a sort of preliminary clinical trial, just making sure that this was safe and well tolerated. But they did, as a secondary measure, look to see, did it affect their hearing? The side effects, the adverse events, were all minor, and it was well tolerated. It was safe. So that was the primary reason for the study. That's why, again, it was only a few people.
E: Is this in mice? No, it's in humans? This is human?

S: This is in humans, yes. These are human trials; it's already passed the animal stage.
E: OK, great.
S: So the average threshold of hearing in the 10 subjects went from 106 decibels to 52 decibels. So lower is better, right? They were able to hear softer sounds. 106 is loud, so yeah, that's why they were functionally deaf. So that's pretty impressive, right?
US#07: Yeah, Yeah.
S: So it basically worked and was safe. We have to see how sustainable it is. And there's a question of how old the subject can be and have it still work, because these are still young adults, you know, at the high end, 23.9 years. And they followed these participants for at least six months, which is a good follow-up, but we need to see what it's like when we follow them up for years. They did see an age-dependent therapeutic effect, so there were better outcomes in the younger kids than in the young adults. So this is the kind of thing where, if you get diagnosed with this at a young age, we might be able to treat you, maybe even treat toddlers with this, who knows? You'd have to get it approved for very young children. The tricky bit, I think, is the viral vector. For these CRISPR-based therapies, the vector is the big thing, right? That is the main limiting factor. And viral vectors can be effective, but they can also be risky, even deadly. This is why we talk about lipid nanoparticles; when they're feasible, depending on where you need to get them in the body, they're much better. They carry bigger payloads and they don't cause infections.
B: The fat bubbles carry bigger payloads than a virus can, right?
S: Yeah, the fat bubbles.
C: I'm digging in, Steve, to the incidence rate, and it's really low. It looks like this qualifies, or at least is listed, as a rare disease. It's a rare disease. And that's such a great thing about using CRISPR, or using different kinds of gene therapies that are so targeted: we can actually target rare diseases where there wasn't enough, I guess, money.
B: Not just rare, not just.
S: They're all bespoke anyway, yeah.
C: Yeah, yeah, yeah.
B: I mean, Steve, you've mentioned recently a couple where they were just like, this is for one person.
S: Yeah, it was for that child that was born with its own specific mutation that they were able to treat. Yeah, that's the thing. These are often all bespoke anyway.
B: So what was it, Steve? What was the decibel rating before? 106, and what?
S: 106.
B: 106, that's around a chainsaw or a handheld drill. And 50 decibels I see listed here as a quiet office, 60 is normal conversation, and like 45 is light rain. So damn, man, that's a huge change.
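Since decibels are a logarithmic scale, Bob's comparison actually understates how big the change is. A quick sketch of the arithmetic (this is standard decibel math applied to the study's 106 and 52 dB thresholds, not a calculation from the study itself):

```python
# Decibels are logarithmic: every 10 dB step is a 10x change in sound
# intensity. Dropping the hearing threshold from 106 dB to 52 dB means the
# ear now responds to sounds about 10^((106 - 52) / 10) times fainter.

before_db, after_db = 106, 52
intensity_factor = 10 ** ((before_db - after_db) / 10)
print(f"Threshold improved by a factor of ~{intensity_factor:,.0f} in intensity")
```

In other words, a 54 dB improvement is roughly a quarter-million-fold change in the faintest sound intensity the subjects can detect.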
S: Yeah, brought them down into the range of conversation. Wow. Exactly. Very encouraging.

E: Effectively curing it, yeah.
J: What can they do for us, Steve?
E: Us old folks.
S: Yeah, who knows?
E: Yeah, this, this is a genetic disease.
S: Like genetic fixes are great for genetic diseases.
E: We need regenerative, you know, something to keep our cells and our hairs from, what, diminishing? Yeah.
C: But then you got to you got to toe that line between aging and cancer.
S: That's when, you know, you're getting to the stem cells, to the regenerative kind of approach. It would be great if we could just regrow those hairs, wouldn't it, you know?

E: Oh my God.

S: Once they break off, you basically lose them.

E: Frankenstein monsterism, blah.
Food Myths (59:32)[edit]
S: All right, Evan, tell us about persistent food myths.
E: Yeah, food myths. So this was a neat little article that came out recently, titled "Grandma Was Wrong: Food Myths Debunked." And this caught my attention specifically because, in my family, when I was born, my paternal grandmother was already dead. I did have, you know, my maternal grandmother, who was alive, but not for long. I didn't have much of a relationship with her growing up. We were estranged, and, you know, then she passed away. So there wasn't grandma's home cooking as part of my upbringing. But do you guys have memories of your grandma's home cooking?
J: Oh hell yeah. Yep.
E: And what long-lasting memories, you know. Oh my God.

B: The big meals. Two Italian grandmas, what do you think?
E: Yeah, a little different for me. So again, I didn't really have much of that experience in my life. According to this article, a recent survey found 42% of Americans prefer to cook meals traditionally, like their elders. But as we learn more about food science and other things as time goes on, you know, grandma used to do some things in her traditional cooking that were maybe not the best advice, or were outright silly and had no impact. And this article lists a couple of examples of that, so I want to share those with you. These are the myths. Rinsing raw chicken before cooking. I'll tell you, again, I didn't have much of a grandparents-and-cooking experience; I never learned to rinse raw chicken before cooking. It never became a practice. It was never habitual in my family. Do you guys know why that's, first of all, a myth, and why it's not good?
J: I mean, I would think because of salmonella, whatever, like something like that.
E: Yes, it.
C: Spreads it when you.
E: It spreads it. That's right. It doesn't wash it away; it just splatters around. Basically, you can only make things worse through its contamination. Has anyone here rinsed their raw chicken before cooking?
S: No. But what you should do though, is like really limit the surfaces that the chicken comes in contact with and make sure that those are cleaned and sanitized very well. But the chicken itself, you just got to cook it properly.
E: 165°F is the correct internal temperature to kill all that bacteria.
C: I also have I have a cutting board that's just a chicken cutting board.
E: Yeah, I use.
C: And I don't use the vegetable one, yeah.
E: I do, I have a specific one.
S: All right, is it glass, plastic or wood?
C: Wood, my cutting board.
E: Yeah, mine's plastic.
C: It's plastic.
S: See, it's hard to know which is best. I've done a deep dive on this, and there's really no clear answer, because when you get little scratches in the plastic, the bacteria can hide in there very well.
C: Yeah, but I can also then just put it in the dishwasher, and that's basically decontaminating it.
B: Yeah. That's why we don't put our wood cutting board in the dishwasher. I clean it by hand, right? Because you can't.
C: I don't use wood for anything but vegetables or like charcuterie.
S: But it's good to keep them separate.
E: How about this myth? Bread stays fresher in the refrigerator.
C: I always put my bread in the fridge. Oh no.
J: Well, hold on a second, because if you buy store-bought bread, it has preservatives in it, and putting it in the refrigerator will slow down any mold happening on that bread by a lot compared to just leaving it on the shelf.
C: Yeah, I mean, I feel like I've observed that. Am I wrong? I've never done a controlled study, but I definitely feel like when a loaf of bread is in the fridge, it lasts longer. That doesn't mean it tastes fresher, but it lasts longer.
E: For freshness, it says here that sandwich bread, buttermilk biscuits, and rolls should be stored on the counter, in a bread box, or frozen. You can freeze them.
B: Freezing is your friend with bread.
J: Definitely. I mean, there's a big difference between refrigerating it and freezing it, right? When I make bread, I usually make at least two loaves, and I always put one in the freezer, and I can get that bread back to about 80 to 90% of what it was like when I baked it fresh. I figured out how to do that. You can't get it back if it goes in at standard refrigerator temperature.
E: And that's what I think you guys are perceiving as well. Yep.
S: Yeah, I think what you guys are perceiving, in terms of the better outcome, is that you just need to have it sealed. So putting it in a bread box, and when I put bread away I make sure it's in something completely airtight, works perfectly fine. There's no added benefit to refrigerating it if it's sealed.
C: Yeah, but I also think you're talking about a difference here, am I wrong, Evan? The difference between it tasting fresh, like the starches being the right consistency and all of that, versus growing mold. A refrigerator is going to reduce how quickly mold grows on bread, but it won't be fresh bread. It'll be stale bread that has less mold.
E: Yeah, and they did say it depends on your environment. Some houses have air conditioning, some don't, so yes, the refrigerator might be the better option in that case. But in a controlled temperature environment, they're saying use the bread box when you can. What happens if you put the bread into the refrigerator? The cold temperature will cause recrystallization of the starch, and that means moisture loss, and then your bread starts to lose its taste, consistency, and all the other features that make bread enjoyable. What about storing your tomatoes in the refrigerator? I don't do that; I've never done that either. In fact, I've known for quite some time that you shouldn't, and they recommend that you not do it. Researchers from the University of California, Davis, explain that the cold temperatures mess with the enzymes that flavor tomatoes, leaving them mealy and bland. Yuck. Keep them on the counter, but out of direct sunlight. Wait till they're ripe and enjoy. What about this one? I've never heard it: let hot food cool to room temperature.
S: Oh yeah, I did a whole deep dive on this before.
B: Before putting it in the refrigerator.
S: Yes. Yeah, you're not supposed to do that. Here's the best way to think about this: how much time is your food spending at a temperature where bacteria can grow? That's the bottom line. So you always want to get it up to eating temperature and eat it fairly quickly, and then, when you're done with it, you want to get it down to refrigerator temperature as quickly as possible. You don't want it spending hours and hours at bacterial growth temperature, and room temperature is that. You don't want it warm.
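Steve's "minimize time in the danger zone" rule can be sketched as a toy calculation. The 40–140°F window is the standard USDA guideline; the function name, the sampling scheme, and the numbers below are all illustrative, not anything from the episode:

```python
# Toy sketch of the "danger zone" idea: what matters is cumulative time
# spent where bacteria multiply fastest (roughly 40-140 F, per USDA).
# Everything here is illustrative; real guidance is "under 2 hours total."

DANGER_LOW_F = 40    # at or below this, refrigeration slows growth
DANGER_HIGH_F = 140  # at or above this, most pathogens don't multiply

def minutes_in_danger_zone(temperature_log):
    """temperature_log: list of (minutes_elapsed, temp_F) samples.

    Returns total minutes spent inside the 40-140 F window, assuming the
    temperature holds between consecutive samples.
    """
    total = 0
    for (t0, temp), (t1, _) in zip(temperature_log, temperature_log[1:]):
        if DANGER_LOW_F < temp < DANGER_HIGH_F:
            total += t1 - t0
    return total

# Dinner left out on the counter, cooling down through the zone:
log = [(0, 160), (30, 120), (120, 90), (240, 75)]
print(minutes_in_danger_zone(log))  # 210 of the 240 minutes in the zone
```

The point of the sketch is that refrigerating promptly trims the big middle interval, which is where almost all of the risky time accumulates.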
C: Yeah, it's like that fried rice syndrome thing.
S: Yeah, you get it, you eat it. And I'm obsessed with that. I hate when people leave food out after dinner. Like, as soon as I eat...
US#07: Three hours later, put it away?
S: No, right away. Get it into Tupperware. Get it into the refrigerator right away.
C: I feel like this one comes from the reality that you can't put boiling hot liquid that's in hot glass into a...
S: Refrigerator. That's a separate issue, and that's correct.
C: Yeah, and that could because you don't want to shatter the glass.
E: Yeah, right, you don't want to do that. And that could be part of people's calculus as far as how they're thinking about this.
C: I also usually will vent the lid, because if you put it straight in while it's still hot, then it's all full of condensation. Yeah.
E: The condensation as well. Should you be washing your produce with soap? No, I never did that either. In fact, I've never even used... they do have vegetable wash sprays, have you seen those?
C: I've used vegetable wash.
J: I use vinegar if I want.
C: You can also use vinegar. Yeah, they make vegetable wash, but that's not soap.
E: Yeah, it's not soap, I know. But that's why this is strange to me. I'm like, really? Did people's grandmothers wash their vegetables with soap? I never really heard of that. Wow. Apparently that's a thing.
S: Water is fine. If you want to make sure it's clean, you just have to scrub it; whatever you wash stuff with, a towel or paper towels, just physically scrub it a little bit. You don't want to use soap. You want to put soap on your food?
C: Right, You don't want to eat soap. Ever.
E: No, right? I can't think of... is there a reason to eat soap?
C: No, I don't think so.
B: You say some nasty curses, you might have to eat some.
E: There's more on the list.
C: That's actually abusive, washing somebody's mouth out. Oh yeah. Yeah, yeah.
E: And let me throw out one more, because it's a longer list, but I'll just end with this one, because I had not heard it before either: watermelon seeds will sprout in your stomach. That's like the swallowing-the-gum kind of thing, right?
J: Imagine the watermelon plant coming up out of your throat with the vine.
E: I mean, that's right out of mythology or something, like some Jack and the Beanstalk kind of story. Had you guys heard of that before? As a myth, yeah.
C: But why, I guess... where do these things come from? Why would grandma not want you to swallow the seeds?
B: Because she's afraid?
C: Yeah, no, I get that. Like, grandma didn't think they would sprout in your... yeah, why would they? Grandma just didn't want you eating the seeds. Is that because she had diverticulitis, and when she ate the seeds she felt sick all the time?
B: I mean, apple seeds have a little...
C: Have arsenic.
B: Arsenic in them, so maybe it stemmed from that.
C: Maybe.
B: Perhaps.
C: But nobody eats apple seeds.
J: So yeah, you shouldn't. Somebody that we know, Cara, eats the entire apple.
C: That is.
E: Is it a horse?
C: Bananas. They eat the core of the apple, the whole thing? What?
J: Lots of people do that.
E: I got that.
C: No.
J: You got to believe me because I don't lie to you.
C: Whereas with watermelon, like there's seeds in every bite.
E: Unless you get the seedless ones.
C: They still have seeds, they're just white seeds, not big black seeds.
J: They're tiny.
E: Yeah, they're little tiny seedlings. There are seeds in a lot of your vegetables, too. And there's a whole other list here, but those are some of the fun ones.
S: Good. All right. Thanks, Evan. Bob, another AI article, last one to finish up the news.
AI Enzyme Engineering (1:10:00)[edit]
S: AI is going to help us with our enzyme engineering.
B: Oh boy, yeah, this was so much fun. OK, so guys, what happens when you combine automated robotics, synthetic biology, and that ubiquitous two-letter initialism that we call AI? You get not only a technology that's brimming with potential, I mean, really, wow, but also an exciting solution to a powerful but limited biological tool used in industry: the lowly enzyme. This is from the journal Nature Communications. The name of the study is "A generalized platform for artificial intelligence-powered autonomous enzyme engineering." The study was led by Huimin Zhao, professor of chemical and biomolecular engineering at the University of Illinois. OK, so what's going on here? It starts with enzymes, so I've got to do a little table-setting. These are specialized proteins, right? Like most proteins in the human body, they're essentially strings of hundreds of amino acids or more that fold up into a specific shape, and that shape directly translates into a specific function. And that specific function, for enzymes, is absolutely critical for life. I mean, we're talking eating, digesting, breathing, reproduction, moving; none of that would happen without enzymes. And that's the hallmark of them and what makes them invaluable: they speed up chemical reactions by offering a low-energy path, essentially a shortcut. And it's not just a little bit faster. I've read somewhere that without enzymes, digestion could take years; you'd be long dead before you got any benefit from it. So yeah, pretty important stuff. But this is just the biological role of enzymes in our body, and that's just one side of the coin. They have a powerful presence outside of our bodies that you might be less familiar with, and that's industrial use. There are so many industrial applications of enzymes.
We're talking food production, pharmaceuticals, biofuels, biomaterials, textiles, detergents, wastewater treatment, and that's just to name a few; it's kind of endless. Now, these enzymes are essentially amazing little machines in this capacity, but they're underutilized, because using them often involves some very frustrating roadblocks when you dig deep into it. They can be inefficient in a lot of ways. They sometimes don't have the ability you would like to single out a specific target in the ridiculously complex chemical environment that they find themselves in. So, to sum up so far: we've got these amazing biological tools, enzymes, that are critical to life, but also to many industries, and they're essentially straitjacketed by their inefficiencies, and, at times, inaccuracies. And this is where this new study comes in. The study's goal was to solve this problem by improving protein function. But as the lead author Zhao said: improving protein function, particularly enzyme function, is challenging because we don't know exactly what kinds of mutations we should introduce, and it's usually not just a single mutation, it's a lot of synergistic mutations. So it's tough to tweak these enzymes and make them better at what they do. In their paper, they describe their solution to this problem, which brings three technologies together like never before. These are, like I said, AI, automated robotics, and synthetic biology. So let's start with the AI leg of this tripod. For AI, they use deep learning, with its layered artificial neural networks, and these networks analyze data and learn complex patterns; we've talked about this on the show before.
This deep learning also uses a protein language model: not an LLM, but a PLM, which is fluent not in English but in the language of proteins. The AI's job in this role is to look at the genetic code and optimize it for the desired functions. Remember, guys, you can't just brute-force this thing; you can't just make little random tweaks, see what happens, test it, and make more. There are more possible amino acid combinations than atoms in the universe. How many atoms are there in the universe? 10 to the 80th power. And of course I looked that up: that's 100 quinvigintillion. Just saying. So instead, the AI determines a small number of possible sequence changes, based on its training on enzyme function and structure and, of course, the fluency of its protein language model. It says, all right, here are the tweaks we can make to these enzymes. So that's what the AI leg of this enzyme-upgrade solution does. That's the first part. Next come the other two legs of the tripod: the robotic automation component and synthetic biology. The AI's suggestions are sent to what the University of Illinois calls its iBioFoundry, and this thing seems like it belongs on the goddamn Enterprise. It's fascinating. The iBioFoundry uses robotic platforms and computational tools, and it actually builds the enzymes that the AI is suggesting from scratch: not going into an enzyme and making a tweak, but building it piece by piece from scratch. And then it tests them.
So it doesn't just build them, it actually tests them to see how well they perform against the desired new functionality of that enzyme. Then that performance data is sent back to the AI, which makes new suggestions based on the new information, and the process repeats over and over. These are being called self-driving labs: powerful, automated, AI-guided platforms for enzyme engineering. And once this process starts, as I've described it, it's essentially running on its own with minimal supervision, or essentially no supervision at all at this point. That's why they're calling them self-driving labs. So of course, the proof of the pudding, as they say, is in the tasting. What are the results? What did they achieve using this new methodology? They used two key industrial enzymes, and both came back with substantially improved performance; it's kind of dramatic, I think. One enzyme is used in industry as an additive to animal feed, to improve its nutrition. This new process increased its activity 26 times after the tweaking. The second enzyme, which is used for generic industrial chemical synthesis, had, the paper says, 16 times greater activity, and it also had 90 times greater substrate preference, which means it was far less likely to target chemicals that it was not supposed to target. So these seem pretty dramatic to me as, basically, a proof of concept for this new technique. And they've designed this to be generic for proteins in general; it's not just for the two enzymes that they tested.
So in the very near future, what they plan on doing is somewhat predictable, as you might imagine: continue improving their AI models, and upgrade the equipment for higher throughput and faster testing, all that stuff. But they also want to create an entirely new user interface that can take simple typed queries, and it seems like they've already gone a long way on that. Because right now, I believe, you need to be able to code in Python to really get this system to do what you want it to do. The new interface they're talking about would almost be like an LLM: just type in what you want, I'm sure with some structure to it, making it very easy for a non-specialist to use this system to improve the enzymes they want to improve. Or if they want to improve drug development times, or make new innovations in energy and technology, they can use these systems as well. What do you guys think?
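The closed design-build-test-learn loop Bob describes can be sketched in miniature. To be clear, everything below (the toy "assay," the mutation scheme, the greedy selection) is an illustrative stand-in for the real PLM-plus-foundry pipeline, not the paper's actual code:

```python
import random

# Toy sketch of a "self-driving lab" loop: a model proposes sequence
# variants, an automated foundry builds and assays them, and the scores
# are fed back to guide the next round. All names and the fitness
# function are illustrative stand-ins, not the study's real pipeline.

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def assay(sequence):
    """Stand-in for the robotic build-and-test step: a toy fitness that
    rewards alanine content. A real foundry measures enzyme activity."""
    return sequence.count("A") / len(sequence)

def propose_variants(parent, n=8):
    """Stand-in for the AI step: suggest a handful of point mutations
    instead of brute-forcing the astronomically large sequence space."""
    variants = []
    for _ in range(n):
        pos = random.randrange(len(parent))
        variants.append(parent[:pos] + random.choice(AMINO_ACIDS) + parent[pos + 1:])
    return variants

def engineering_loop(seed_sequence, rounds=20):
    """Iterate design -> build -> test -> learn, keeping improvements."""
    best, best_score = seed_sequence, assay(seed_sequence)
    for _ in range(rounds):
        for variant in propose_variants(best):
            score = assay(variant)
            if score > best_score:  # "learn": feed the result back
                best, best_score = variant, score
    return best, best_score

seq, score = engineering_loop("MKTLLVAGG" * 3)
print(round(score, 2))
```

The design point the study makes is that the "learn" step is a trained protein language model rather than this greedy hill-climb, which is what lets it navigate a sequence space far too large to search directly.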
S: I mean, industrial enzymes are a huge industry, a huge part of industry. So yeah, anything that can automate this, or increase the ability to make more efficient, more targeted enzymes, could have wide-ranging impact across many different industries.
B: I mean, look at the results: 26 times the activity for one enzyme, and 90 times greater substrate preference for the other. That's just dramatic. Imagine applying this once they tweak it and get it even more efficient, with better AI and better protein language models. It just seems like this has got nowhere to go but dramatically up. But we will see; who knows what can kill these things. Fascinating stuff.
S: All right. Thanks, Bob.
Who's That Noisy? + Announcements (1:19:17)[edit]
S: Jay, it's Who's That Noisy time.

J: All right guys, last week I played this noisy. What do you think?
C: Don't like it?
J: Don't like it? Not your fave.
S: Not a pleasant noise. I mean, it sounds mechanical, like something's grinding or spinning or whatever. But then there's the noise up front, like the slapping noise; I don't know what that is.
J: Well, Visto Tutti wrote in, and Visto said this one sounds like a churinga, or a bullroarer. This is a carved wooden wing that's attached to a length of rope and spun around, so that the wing produces a loud sound as it beats the air. So these were used to signal over large distances, and you can find these used in Australia, I think even today.
S: Didn't Crocodile Dundee use one in the movie?
J: Yes, that is correct, Steve. A listener named Mojca...
US#06: MOJC.
J: A Mochka.
US#06: Mochka.
J: Mocha. Mocha.
US#06: Is it Mocha? Is it the Mocha island?
J: And the last name is Kolsec. She said: Hi Jay, I'm going to guess the noisy is a drill bit, but since that would be too boring, and also due to the strange frequencies, I'm guessing it's one of those two- or three-headed drills I've seen all over the Internet lately. I have not seen that; I don't know what you're talking about. It's intriguing, and I'd definitely like to look it up, but you are not correct. Thanks for guessing, though. Hunter Richards wrote in and said: Hi Jay, I'm not too late. Is this the mini steam-powered train, the kind that's big enough to ride on, not in, and not a model train? Or it's Bender. Yeah, I don't know what you're hearing there; I'm not hearing that, but thanks for the guess. You know, everybody, we have different memories that influence what we think we hear. I do have a winner this week; I had several winners this week. The person who guessed first was Shane Hamblin. Shane says: Hey, Skeptics Guiders, I was listening to this podcast with my dad and immediately knew what this was. The noisy from the June 20th podcast was when you put a bolt or a nut, not a peanut, in a balloon, put air in the balloon, and spin it. You can hear the nut slow down at the end of the noisy until it drops to the bottom of the balloon. That's exactly what this sound is. Listen again. I will warn you that if you do this and use a heavier nut, it could rip through the balloon, so be careful. Good guess, good job, Shane. I did have other people guess, and I wanted to mention this one. Nathan Drake wrote in and said: Hey Jay, listening since before my son was born in 2010. Never had any idea, but this week's, to me, sounded like a combustion engine starting and revving slowly, then dying. My son Wyatt, in the back seat of the car, said I was wrong and that it's a hex nut spinning in a balloon, and that you could tell because of the thud at the end when it finishes. Very cool. So he guessed it right.
His dad was wrong. And I like that he heard the little detail at the end, about the nut slowing down and bouncing around in the balloon when it didn't have the momentum to keep going around the circumference of the balloon. Very good guess, Wyatt. So I have a new noisy for you guys this week, and I'm curious to know what Cara thinks of this.
S: That's a Space Invaders.
J: I suspect this week's is going to be very difficult, but I will give you no clues, because everybody completely went crazy on the Space Invaders one. I got so many emails, everyone saying, oh my God, it's too easy, too easy. A lot of people had fun writing in, though. The bottom line is, this one's hard. Good luck. If you think you know this week's noisy, or you heard something cool, you can email me at wtn@theskepticsguide.org. Quickly: we have a show in Kansas, guys, on September 20th. We have two shows: a private show, which is a live podcast recording, and then at night we'll be doing our stage show, which is the Skeptical Extravaganza of Special Significance. If you're interested in seeing us live at these two different types of shows, you can come to one or both, whatever you want to do. Go to theskepticsguide.org; there's a button on there for each of them. We just would like it if you joined us, because it's a fun day. A lot of people spend the whole day with us, and there's a synergy between the two shows that you'll only get at the second show, which is pretty cool.
S: All right. Thanks, Jay.
Why Didn’t I Know This (1:24:10)[edit]
The Great Attractor
S: We're going to hit you guys with a new segment. I call this segment "Why Didn't I Know This?" It was inspired by Evan, who sent an email saying, why didn't I know this? Talking about the Great Attractor. So let's talk about the Great Attractor, and we can see if this works out as a new segment, where we just talk about something in the world of science and reality that maybe you never heard of, but is kind of cool.
E: And I brought this up specifically because, you know, YouTube has Shorts, right? They're basically little TikTok-style videos, vignettes, and whatnot. And like any person, I get stuck down the rabbit hole sometimes. One came up of Neil deGrasse Tyson talking about how fast we're moving through space, you know, everything relative to our entire...
S: Other stuff.
E: Solar system that's moving, right? And everything else. We're basically going at, what, 2.1 million kilometers per hour by one certain measure. So I was like, OK, we're going pretty fast. And then he mentioned the Great Attractor, because apparently that's the direction our entire Milky Way galaxy is generally heading, along with a bunch of other galaxies. Now, this was new to me; I'd never heard of this before. I spoke to Bob a little about it and asked him whether it had come up as a subject at all on the Skeptics' Guide before. And Bob, you said you didn't have any recollection of it either, right? Yeah.
S: I don't know if we've spoken about it.
E: Okay, good then. I'm not misremembering, right?
S: So here it is. I've read about this before, but I was just getting myself updated on it. It's interesting. In order to know what the Great Attractor is, you have to know a little bit about the structure of our part of the universe. You guys know that our galaxy is the Milky Way.
E: Yes, Right.
S: Do you know that our Galaxy is part of a local group?
B: Yes. I've already mentioned it about a dozen times. The Virgo...
E: Is it the?
B: Virgo group?
S: No, you're two levels too high. The Local Group includes the Milky Way, the Andromeda galaxy, the Triangulum galaxy, and a bunch of dwarf galaxies, right?
B: That's our Local Group. I've heard numbers anywhere from 50 to 60 to over 100 galaxies, many of them dwarf galaxies, many of them hidden, they think, so we can't pin down the number. It's kind of big, yeah.
S: It's about 10 million light-years across. That's our Local Group, the next notch up above our galaxy.
B: And that's the group that we will all eventually merge with. Nice.
S: Eventually. Now, the Local Group is part of the Local Sheet. The Local Sheet is a flattened structure containing several local groups.
B: No sheet.
S: Yeah, the Local Sheet. There are two other groups, the M81 group and the Centaurus A group, that combine with our Local Group to make up the Local Sheet. OK, next step up: the Virgo Supercluster.
B: OK, that's right. That's the one we usually think of.
S: That's our local supercluster. It's about 1,300 galaxies, and it's about 110 million light-years across. But that's not the highest level. The Virgo Supercluster is part of the Laniakea Supercluster.
B: Which I talked about, what, like 8, 9, 10 years ago. That was a good one.
S: It's 520 million light-years across.
E: Crazy.
C: I don't like this. Why are they both called superclusters?
S: You know.
E: Bad nomenclature.
S: It's a super, we'll call it a super duper cluster.
C: It should be an Uber cluster.
S: So there are about 100,000 galaxies in the Laniakea Supercluster. Now, at the very gravitational center of the Laniakea Supercluster is the Great Attractor. And it's more than a galaxy; it's a concentration of mass that's 10 to the 16th solar masses. That's massive. It's probably itself a supercluster, right? So there's a supercluster in the middle of the Laniakea Supercluster, and that is the center of gravity of the bigger supercluster. And so everything is moving towards it, including the Milky Way galaxy. But it's hard to see.
B: I mean, they see something there; they call it the Norma Cluster, but that's only part of what's there. It's kind of obscured, so they're not really sure, and that's why it was kind of mysterious. But my question, Steve, is this: I've read over and over that all the local galaxies, the 50 to 100 galaxies in our Local Group, will merge eventually. But will the expansion of space win out over Laniakea? I think so; that's what I've read. It's too big.
S: It's too big to be gravitationally bound, you know what I mean? So yeah, the expansion will overcome the gravitational attraction of the Laniakea Supercluster, but not our Local Group. I don't know about the Virgo Supercluster, but definitely the bigger one.
B: Yeah, I think even the Virgo Supercluster will eventually... bye-bye.
S: Yeah.
C: So when you say it'll win out, you mean eventually? But right now we're moving towards it.
S: We're moving towards it, but the expansion is greater. So here's the thing. We can't see it directly, because it's in the quote-unquote "zone of avoidance," which basically is...
E: But don't let that scare you.
S: It's part of a region of the universe we can't see, because the center of the galaxy is in our way, right? Very annoying when that happens. The dust and stars and everything in our own galaxy keep us from seeing that strip, and the Great Attractor happens to be obscured by this zone of avoidance. But when we did a massive survey of the redshift of the galaxies in the Laniakea Supercluster, they're all redshifted; they're all moving away from us. But they also have what they call peculiar velocities. In addition to the redshift, there is some additional velocity, and all of these additional velocities are directed towards the same point. That's the Great Attractor. So everything is moving towards the Great Attractor, but they're moving away from each other even faster because of the expansion of the universe.
E: Never get there.
C: Yeah, that's interesting. So the movement towards it, not counting the expansion of the universe, is it a collapsing movement, or is it circular?
S: No, we're moving in that direction in basically a straight line.
C: I know we are, but I'm saying when you look at everything around.
S: Everything's moving towards that point.
C: Yeah, so it's like a collapsing movement.
S: Yeah.
C: Yeah, not like a.
B: Not rotation. Think of all the different ways we are moving: around the sun, the sun around the galaxy, the galaxy within the Local Group, and then the Local... I mean, there are so many.
C: Yeah, but I'm not talking about our motion, I'm talking about space. Is all the stuff in space, not from our perspective, but if you just look at the Great Attractor as the arbitrary center of this model, are the things moving towards it collapsing in towards it linearly, or are they rotating around it, like most things do in space?
S: Well, I don't think we have a long enough baseline of measurement to know.
C: But then it's actually being pulled apart faster than it's collapsing.
S: Yeah, they're all still redshifted, which means they're all moving away from us and from each other.
C: Yeah.
S: But there's this additional velocity, the peculiar velocity, which ranges from +700 to -700 kilometers per second, depending on where the galaxy is in relation to the Great Attractor and us as the viewer, right? So something on the opposite side of the Great Attractor is moving away from us 700 kilometers per second slower than it should be, because it's also being drawn in by the Great Attractor.
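The arithmetic behind "moving away slower than it should be" is simple: subtract the Hubble-flow velocity expected from distance alone from the observed recession velocity, and what's left is the peculiar velocity. The galaxy and numbers below are made up for illustration, not survey data:

```python
# Peculiar velocity = observed recession velocity minus the Hubble-flow
# velocity expected from distance alone (v = H0 * d). A negative value
# means the galaxy recedes slower than pure expansion predicts, e.g.
# because it is being pulled back toward the Great Attractor.

H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate)

def peculiar_velocity(observed_kms, distance_mpc):
    hubble_flow = H0 * distance_mpc
    return observed_kms - hubble_flow

# A hypothetical galaxy 100 Mpc away, observed receding at 6,500 km/s:
v_pec = peculiar_velocity(6500.0, 100.0)
print(v_pec)  # -500.0 -> receding 500 km/s slower than expansion alone
```

This residual, mapped across thousands of galaxies, is how surveys found all the peculiar velocities converging on one point despite every galaxy still receding overall.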
B: You know, that's where I want to live. I want to live in the Great Attractor. Think about it, though; there's a lot going on there. For super long-term survivability of whatever's there, you want as much mass as possible in your vicinity, so that you can use that mass-energy to survive long into the cold, you know, after it's really just black holes and white dwarfs left in the universe. You want as much mass as you can get. If you don't have a lot of mass, you're just not going to last as long as any civilizations that might be at, like, the...
E: Highest mountain peak of a during a flood.
B: Imagine astronomers in the Great Attractor looking around and eventually figuring out, hey, check this shit out, everyone's coming to us. How awesome is this? We are the center of the universe.
E: Right. That's a lot of matter. That's a lot of matter.
B: They must feel very close to it.
C: Because, do you think it has, like, black hole features?
S: I think it's just a galaxy supercluster. This is the biggest boy in town.
C: Super duper, but that means there would be a black hole in the middle of it, yeah.
S: Probably. Most galaxies have supermassive black holes in the middle.
B: Yeah, I'm sure there's plenty of big galaxies there with lots of supermassive black holes, but not necessarily an uber-massive, ridiculous, at-the-edge-of-physics black hole. Just a lot of, you know, a big, dense supercluster. It's not necessarily...
S: All right, let's move on with science or fiction.
Science or Fiction (1:33:53)[edit]
Theme: Genetics
Item #1: The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes.[7]
Item #2: Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome.[8]
Item #3: The Japanese pufferfish, Fugu rubripes, has the animal genome with the highest coding density, at 17% compared to 3% in humans.[9]
Answer | Item |
---|---|
Fiction | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
Science | Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome. |
Science | The Japanese pufferfish, Fugu rubripes, has the animal genome with the highest coding density, at 17% compared to 3% in humans. |
Host | Result |
---|---|
Steve | swept |
Rogue | Guess |
---|---|
Jay | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
Evan | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
Cara | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
Bob | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
Steve | The smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. |
E: It's time for science or fiction.
S: Each week I come up with three Science News items or facts, 2 genuine, 1 fake, and I challenge my panel of skeptics to tell me which one is the fake. We have a theme this week. Theme is genetics. Guys ready?
E: OK.
S: OK, item number one: the smallest animal genome, by number of coding genes, is the Trichoplax adhaerens, at just 3,500 genes, while the largest belongs to the axolotl, with about 90,000 genes. Item number two: Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome. And item number three: the Japanese pufferfish, Fugu rubripes, has the animal genome with the highest coding density, at 17% compared to 3% in humans. Jay, go first.
J: All right, the first one here, about the smallest animal genome. This is a creature I've never heard of before, the Trichoplax adhaerens, whatever that means, while the largest belongs to the axolotl with about 90,000 genes. Interesting. OK, so why would the axolotl have the most genes out of all the animals? It's a small animal. And I just realized that I don't know if genome size equates to the size of the creature. Wow, I can't believe I don't know that. Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome. What do you mean, with genes from other kingdoms?
S: So they're animals, but they have genes from bacteria, plants, and fungi.
J: Whoa. Oh, that's that's cool. That's really freaking cool. If that's true, and this is, then it wasn't artificially made, correct?
S: Nope. Naturally occurring horizontal gene transfer.
J: Cool. OK, and then finally, the Japanese pufferfish, Fugu rubripes. Rubripes? Hell. Has the animal genome with the highest coding density at 17% compared to 3% in humans.
S: So the coding density means the percentage of the base pairs that are part of protein-coding genes, versus junk DNA, non-coding regions, etcetera.
J: This is a remarkably difficult science or fiction, Steve. I hope you're proud of yourself. Is this the kind of crap we're going to get from you now that you're retired? Like, is it going to be extra hard like this?
S: You can expect the difficulty to go way up.
J: Oh my God, I'm dying to hear what everybody else has to say. There's something about the the axolotl having 90,000 genes. Now, this is of all animals, correct, Steve?
S: Animals. Yeah.
J: I don't know. Something about that is rubbing me the wrong way. So I'm going to say that one is the fiction.
S: OK, Evan.
E: Coding genes, like you said, Steve, it's the good genetic material, the stuff that means stuff, not just the filler and the junk. And I could have sworn there were things that did have fewer coding genes, I thought, but maybe they don't fall into the category of animal. Maybe they are more bacteria, something else, right, something non-animal. Maybe that's where I'm getting confused with this. But I kind of think Jay's on to something here. And I think maybe that 90,000 for the axolotl is maybe too small, something has more, whereas I'm just really guessing on the other two. I don't know about the horizontal gene transfer and the bdelloids. The Japanese pufferfish, fugu, I remember that from that Simpsons episode way back when. But I don't know about the 17% coding density versus 3% in humans. Wow. I suppose that could be, couldn't say why, though. I'll go with Jay. Jay, you're leading the way on this one.
S: OK, Cara.
C: I'll take them backwards. It's interesting, because I feel like with animals there isn't a huge correlation with the type of animal, like, oh, that's really big and it moves, or that's small and it free-floats. It was smart of you to focus just on animals, because plant genomes are bananas, you have like octoploid genes, you know. But generally speaking, animals are going to have just two sets, and so you can kind of compare them; it's like comparing apples to apples. A Japanese pufferfish has the highest coding density compared to humans, so 17% compared to 3%. Wow, our coding density is really low. Yeah, 17% feels high. So that just means that a lot of stuff was conserved and really useful. I don't know if that's true or if it's another animal, like a shark or something, that would be closer to that. Bdelloid rotifers, so, these little animals. In this case, the fact that it is small might be important, especially if they're free-floating in the water. So, similar to bacterial gene transfer, maybe it's picking up a lot of stuff from the things that are floating around it and mixing with it. But I don't know if 10% is normal. I don't know how much of our genome is horizontal gene transfer; it would have been interesting for you to include that. And then I hate that you do this one by the number of coding genes, that's like a whole other layer. I think humans have around 20,000, and that must be coding genes, and we always thought it was way more before we sequenced them. So my guess is that the smallest one has way fewer than that and the largest one also has way fewer than that. I bet you these are overestimates on both sides, because we tend to think big when it's actually smaller, at least in animals. So I'm going to go with the guys and say that one's fiction.
B: OK, yep, let's see, I'll take these backwards as well. The coding density, yeah, it's 3% for people, I'm pretty confident about that. 17% though for this fugu dude? I mean, I don't know, perhaps coding densities are higher. It sounds reasonable, probably the most reasonable one here. Let's see, the horizontal gene transfer. So 10%, it might seem a little low to me, because there's a lot of conserved genes, the genes that are so good and so fundamental that everybody's got them, so in some sense that seems maybe a little bit small.
S: I don't think you understand what this item is saying. These are not conserved genes.
B: Why? They're from other kingdoms, right? So they were just taken on, or... yeah, why couldn't they be conserved genes?
S: They're not.
US#00: They're passed on to each other.
B: Do you say they're not conserved genes?
S: Yeah. Otherwise they wouldn't be due to horizontal gene transfer. That specifically refers to a gene being added to the genome of another species later on, not shared because they got it from a common ancestor. That's vertical transfer, right? You're talking about vertical gene transfer. Horizontal gene transfer is not that.
B: OK, so vertical, horizontal, what's the difference? All right, I get it. So, the first one here, let's see. I love Trichoplax, that's such an awesome name. Axolotl. So 3,500, which is kind of low, not as low as the smallest synthetic organism, but that's not what we're talking about here. 90,000 genes doesn't sound like enough. Now, I know the axolotl is probably one of the biggest healers in the animal kingdom. I mean, you just chop off anything and it grows back. It's the Wolverine, I think, of animal regeneration. So it kind of makes sense that it would have a high number, but I think that number is even higher than that. So I'm going to go with everybody and say that's fiction as well.
S: All right, so I guess I'll take these backwards too. We'll start with number three: the Japanese pufferfish, Fugu rubripes, has the animal genome with the highest coding density, at 17%, compared to 3% in humans. You guys all think this one is science, and this one is science. Yeah. So it has a very compact genome; it only has 400 million base pairs, compared to...
B: Two or three billion?
S: Several billion for people.
B: Yeah.
C: How many? Coding genes does it have?
S: It's very efficient, about the same. It's got about the same.
B: Same what?
S: As humans.
C: Yeah, OK.
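Steve's point, that the fugu packs roughly the same number of genes into a far smaller genome, can be sanity-checked with quick arithmetic. A minimal sketch using the approximate figures from the discussion:

```python
# Rough sanity check of the coding-density figures discussed above.
# Genome sizes and densities are the approximate values from the episode.
fugu_bp = 400_000_000        # Fugu rubripes genome, ~400 million base pairs
human_bp = 3_000_000_000     # human genome, ~3 billion base pairs

fugu_density = 0.17          # ~17% of the fugu genome is protein-coding
human_density = 0.03         # ~3% of the human genome is protein-coding

fugu_coding_bp = fugu_bp * fugu_density      # ~68 million coding base pairs
human_coding_bp = human_bp * human_density   # ~90 million coding base pairs

# Similar totals of coding DNA are consistent with both species having
# roughly the same number (~20,000) of protein-coding genes.
print(f"fugu coding bp:  {fugu_coding_bp:,.0f}")
print(f"human coding bp: {human_coding_bp:,.0f}")
```

The coding totals come out within the same order of magnitude, so the difference between the genomes is almost entirely in the non-coding fraction.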
S: Yeah. So, yeah. And the question is why, like, was there some selective pressure for a more efficient, if you will, genome? It has less junk in it, and that's probably why that's the case. Sometimes smaller animals have to have more compact genomes, but actually, since this came up, there isn't much of a relationship between the size or even the complexity of animals and their genomes.
US#02: Yeah.
S: There's so many other factors. OK, let's go back to number two: Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome. You guys all think this one is science, and this one is science; you guys got it. So yeah, it's very unusual; this is a much higher rate than any other animal. Now, these things are not just small, they're microscopic, not visible with the naked eye, right? So they're kind of like the little water bears.
US#07: Oh.
C: Tardigrades, I was going to say. Is it because they're bacteria-like that they have so much horizontal transfer?
S: Maybe, yeah. But they have incorporated genes from plants, fungi, and bacteria into their genome. It's a much higher percentage than in any other animal.
B: What about the method? The method of horizontal transfer? I mean, I obviously haven't read about that in quite a while, but what's the? How does that work? How does it I wonder? If it's from the.
S: It's usually from eating them.
C: Yeah, that's what I would think if they're just floating through the water and they're like rotifers, right. So if they're just like the like kind of receiving all this microscopic stuff in their in their little bodies all the time.
B: That's just bizarre. I mean, eating is one thing, but incorporating is, you know, wholesale is just like how does. That, yeah.
C: But if the bacteria are like, you kind of look like a bacterium, I'm going to hang out with you. So I wonder if it's just mistaking them for bacteria.
B: Yeah, they have, like, circular DNA, plasmids they can just transfer. That's why, that's what makes them so nasty: hey, look what I learned, look what I can defend against, now you can too.
S: Now, like water bears, they can enter a state of dormancy known as anhydrobiosis, right? They get dried out.
B: Desiccate. Yeah.
S: And then they can survive in this dormant state for a long period of time. In fact, what do you think the longest duration is?
C: Thousands of years.
S: 24,000 years.
E: It's got some advantages, yeah.
S: Yeah, not forever, but basically, I mean, so that was an upper limit there.
C: Well, but that's just the longest we've found.
S: That's the longest we've found, right?
C: Yeah, it could be. It could be found.
S: In 24,000-year-old Siberian permafrost. And they think that the gene transfer is probably part of why they can do this, right? They use these genes in order to be able to do this. OK, so that means that the smallest animal genome by number of coding genes is the Trichoplax adhaerens at just 3,500 genes, while the largest belongs to the axolotl with about 90,000 genes, is the fiction. And I'll tell you that those creatures are correct, just the numbers of genes are incorrect. I altered the numbers of genes. So what do you think now? Bob, you thought that the axolotl has more than 90,000 genes. What do the rest of you guys think?
J: I'd said the same.
C: I think they both have less.
S: What about you, Jay?
J: I'm going to always go with Cara.
C: 90,000 sounds really high for an animal.
S: You're all wrong.
J: Wow, what?
C: Don't you feel smart? Wait, how can it not have more or less?
S: Well, you said they both have less. The axolotl has less, the Trichoplax has more. I made the difference more extreme.
E: I see.
S: But animals are actually pretty consistent in the number of genes that they have, because we're all animals, we just share a lot of genes as Animalia. So you get different numbers as estimates, but the Trichoplax has about 11,500 genes and the axolotl has about 30 to 35,000 genes, whereas again, humans, at 20,000, are pretty much in the middle. So for all of animals, you're talking 11,500 to 35,000 genes as the range, which is not that much when you consider all of animals.
C: Yeah, it might be part of what makes us animals.
S: And the Trichoplax is a placozoan. It's a basal group of multicellular animals, possible relatives of the Cnidaria.
C: Oh cool.
S: It basically looks like a blob, though.
B: Guys, the minimal synthetic cell that they created a bunch of years ago, how many genes do you think that has?
S: 185.
C: 1000.
B: It's got 531,000 base pairs and 473 genes. Steve wins, Price Is Right rules. It's self-replicating, smallest genome of any self-replicating organism.
C: That's scary.
B: That's pretty cool.
C: Hopefully they put a kill switch in there.
B: Think about it, though. I mean, they probably do. Oops.
S: But the reason why I said that, Bob, is because the creature, not animal, with the fewest genes is Carsonella ruddii, which has 182 protein-coding genes. But it's a bacterium, a parasitic bacterium, so.
US#06: It's hosting.
S: It doesn't need that many genes because it's living off the host. Yeah.
US#06: Host with the most.
B: We're still not at the very minimum, though. And that's the goal, right? They want to find out what the critical ones for life are.
S: Exactly. Among free-living organisms, the fewest genes, this is natural, not artificial, is Mycoplasma genitalium at 525 genes. So 525 is the smallest number for a free-living organism, 182 for a parasite, and then 11,500 for an animal. And you're right, Cara, I didn't use plants because they're crazy. The single biggest genome in the world is in a fern that has a crazy number of base pairs. But why? It's mostly non-coding.
B: Well, some animals and plants have, like, gene duplications, like bam, that's why. Yeah, like I said, it's got octo...
C: Octoploidy. No, no, it's just that they don't just have pairs of genes, they'll have like 4 or 8 or 16. So it quadruples, and yeah, that's why.
S: So yeah, this one fern, the New Caledonian fork fern, has 160 billion base pairs, whereas humans have 3 billion.
B: What the hell?
C: But what's its ploidy, do we know?
S: I'd have to take a look.
C: The New Caledonian...
S: But the thing is, it's like 64... It's actually a disadvantage. It's slow growing. It takes a lot more energy and a lot more raw material to reproduce, because it's got to copy these massive genomes.
C: It needs more. It's octoploid, yeah, so maybe that's the highest it can be; I said 16, that's probably not real. Yeah, it's octoploid, whereas we're diploid, right? So right there, that's... yeah.
S: 8 versus 2.
C: Divide it by 4 and its genome is suddenly not nearly as impressive. It's still big, but.
E: Take that, Fern.
S: Well, it's still 40 to 3, it's not just 160 to 3, right? If you divide it by 4, it's still big.
C: Yeah. I mean, it's still really impressive, but not as impressive. It's 50 times more than the human genome, is what it says. Yeah.
E: Probably good it's doing it.
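Cara's ploidy adjustment above can be made concrete with quick arithmetic. A minimal sketch, using the approximate genome sizes and the octoploidy stated in the discussion:

```python
# Normalizing the fern's giant genome by ploidy, per the discussion above.
# Figures are the approximate values quoted in the episode.
fern_bp = 160_000_000_000   # New Caledonian fork fern, ~160 billion base pairs
human_bp = 3_000_000_000    # human, ~3 billion base pairs

fern_ploidy = 8             # octoploid (8 chromosome sets), per the discussion
human_ploidy = 2            # diploid

# Raw comparison: ~53x the human genome.
raw_ratio = fern_bp / human_bp

# Per-chromosome-set comparison: dividing out the 8-vs-2 ploidy difference
# reduces the comparison to "40 to 3", i.e. ~13x, still large but less extreme.
adjusted_ratio = (fern_bp / fern_ploidy) / (human_bp / human_ploidy)

print(f"raw: ~{raw_ratio:.0f}x, ploidy-adjusted: ~{adjusted_ratio:.1f}x")
```

This is the same "divide it by 4" shortcut Cara uses: the ratio of ploidies is 8/2 = 4, so 160/4 = 40 compared to the human 3.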
S: All right. Well, good job, guys.
E: Thank you.
C: Thanks, Steve.
Skeptical Quote of the Week (1:50:56)[edit]
“There is a kind of a spatial association between music and math ... the intersection of science and art. Medicine is an art and research is an art. You have to be creative in the way you design experiments.”
– Dexter Holland, PhD (molecular biology), lead singer of the punk rock band The Offspring, (description of author)
S: Evan, give us a quote.
E: "There is a kind of a spatial association between music and math, the intersection of science and art. Medicine is an art and research is an art. You have to be creative in the way you design experiments." And that was from an interview with Dexter Holland. Dr. Dexter Holland, by the way, PhD in molecular biology. You maybe know him as the lead singer of the punk rock band The Offspring. I had no idea that he was a doctor, that he had a degree in molecular biology. That's really cool. Never knew that about him until very recently, like today.
S: All right. Thanks, Evan.
E: Thanks.
S: Well, thank you all for joining me this week.
E: Thank you. Steve.
J: Steve, let me ask you one quick question.
S: Sure.
J: Are you, you know, happy now that you're retired? Is everything good?
S: I mean, my life is really not any different. I've been busy working the last few days. You know, it's only been three days where my schedule's been different from not working, so we'll see when things settle in in a few months.
J: You spent most of it dealing with the phone, I know, right?
S: Most of it trying to get my phone upgraded.
J: I'll ask you in a couple of weeks, yeah.
C: That's going to be your next big hurdle, right, Steve? I remember at a private show once you kind of admitted to the group that you struggle with feeling lazy and that you often feel internalized pressure to fill your time. So now you're going to have all this extra time. How well do you think you're going to be able to just sit and do nothing?
S: Not well, but I'm filling it with stuff to do.
C: Yeah.
S: But I already have way more stuff to do than I have time to do, you know? But that's what I'm saying, I think it'll take a few months to really settle in. Like, I've done all my projects, I've done all the things, the busy work that I can give myself. We have my new projects for the SGU going. You know, Bob and I are going to be writing another book. We're adding another podcast, we're doing more live streams, we're bringing back AQ6. We're going to have more time to put into just the primary show itself. And I'll have more downtime, but we'll see. It'll take a while.
C: I think you're going to settle in. I think you're going to fill all that downtime.
S: I will fill it.
S: Yeah, I'm good at filling my time. But some of that, more of that, will be video games than it is now, right? I'll be able to do more of that kind of stuff. All right, thanks again, guys.
B: Sure, man. Thank you, Steve.
S: And until next week, this is your Skeptics' Guide to the Universe.
- ↑ www.technologynetworks.com: New Quantum Material Could Make Electronics 1000x Faster
- ↑ www.nature.com: What’s it like to work with an AI team of virtual scientists?
- ↑ www.sciencenews.org: How much energy does your AI prompt use? It depends
- ↑ www.nature.com: AAV gene therapy for autosomal recessive deafness 9: a single-arm trial
- ↑ www.smdailyjournal.com: Grandma was wrong: Food myths debunked
- ↑ www.nature.com: A generalized platform for artificial intelligence-powered autonomous enzyme engineering
- ↑ No reference given
- ↑ www.pnas.org: https://www.pnas.org/doi/10.1073/pnas.2421910122
- ↑ www.sciencedirect.com: Fugu: a compact vertebrate reference genome - ScienceDirect