SGU Episode 619

This episode needs: transcription, time stamps, formatting, links, 'Today I Learned' list, categories, segment redirects.
Please help out by contributing!
How to Contribute


SGU Episode 619
May 20th, 2017
Deccan traps.jpg
The Deccan Traps, a vast volcanic province in India linked to the extinction of the dinosaurs.

← SGU 618                      SGU 620 →

Skeptical Rogues
S: Steven Novella

B: Bob Novella

C: Cara Santa Maria

J: Jay Novella

E: Evan Bernstein

Quote of the Week

People tend to hold overly favorable views of their abilities in many social and intellectual domains... This overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.

'Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments' - by Justin Kruger and David Dunning

Links
Download Podcast
Show Notes
Forum Discussion


Introduction

You're listening to the Skeptics' Guide to the Universe, your escape to reality.

What's the Word ()

  • Degenerate

News Items

Are You Being Watched? ()

Mechanism Behind Dark Energy ()

Dinosaur Extinction ()

Rational Arguments for God? ()

Who's That Noisy ()

  • Answer to last week: La La Dog

Questions and Emails

Question #1: Indigenous Science ()

The March For Science was very uplifting. I was very pleased with what I saw and heard while watching the TV coverage of the D.C. event. Congratulations to Cara for her outstanding work as an MC. Being there, Cara must have heard these words from Robin Kimmerer (identified as Distinguished Teaching Professor and Director, Center for Native Peoples and the Environment, SUNY College of Environmental Science and Forestry). Professor Kimmerer read from what she identified as the 'Indigenous Science Declaration':

'Let us remember that long before western science came to these shores there were indigenous scientists here; native astronomers, geneticists, botanists, engineers; and we are still here. Let us celebrate indigenous science that promotes the flourishing of both humans and the beings with whom we share the planet. Indigenous science provides not only a wealth of factual knowledge but a powerful paradigm to understand the world and our relation to it, embedded in cultures of respect, of reciprocity and reverence. Indigenous science couples knowledge to responsibility. Indigenous science supports society aligned with ecological principles, not against it. It is ancient, and it is urgent. Western science is a powerful approach – it's not the only one. Let's march not just for science but for scienceS.'

I'm interested to know the rogues' reaction to this declaration. Is there more than one science?

Glenn Emigh
La Mirada, California

Question #2: AI Follow Up ()

This is feedback directed toward Steve regarding some of the arguments he made during the Elon Musk and AI discussion.

1. Steve makes the argument that we need not worry about the immediate future regarding artificial general intelligence because there is no active research attempting to produce it. It is unclear what exactly he meant by this, so it is difficult to fully adhere to the principle of charity. But let us assume there exists evidence that could refute his claim. What would it look like? Steve may disagree, but given the strength of his claim and my understanding of the argument he was making, I expect any research group with a mission statement to pursue artificial general intelligence would be sufficient. So, what about Google's DeepMind Technologies? For example, Demis Hassabis (CEO) has stated that the goal of the company is 'solving intelligence, and then using that to solve everything else' (https://www.technologyreview.com/s/601139/how-google-plans-to-solve-artificial-intelligence/). Sure, this may just be popsci journalism perpetuating a general impression of AI; however, this goal is similarly stated in their research publications (this is what I find to be convincing evidence of their motivations). For example, consider the recent publication about PathNet: https://arxiv.org/abs/1701.08734. From the language used in the abstract, it sure seems like the research goal of this group is artificial general intelligence. The first two sentences are pretty explicit: 'For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction.' So, would this type of research fall into the category that Steve was claiming is not being pursued?

2. Steve also made the claim, 'Why would we make it self-aware when we don't need it to be self-aware? It could be really powerful AI and do exactly what we want it to do and we don't have to worry about it wanting anything or thinking anything.' This is very confusing to me. First, the relevance of self-awareness is unclear. I think the conversation should be grounded in a discussion of behavior rather than projected theory-of-mind analogies. Second, what does Steve mean when he says, '…do exactly what we want it to do and we don't have to worry about it wanting anything…'? I struggle to make sense of this statement. What is the mechanism by which humans will communicate what they want? How do those goals get represented? Predominantly, the state of the art involves the construction of a 'cost function' that is designed to be optimized when choosing and generating behaviors. For example, consider a scenario in which a mobile robot is traveling across a room containing a few obstacles. When deciding how to move across the room, there are multiple relevant considerations: How bad is it if it collides with an obstacle? Is there a maximum velocity for safety? How do these interact with a concern for speed and efficiency? How do the precision and accuracy of the robot's actuation and sensing impact these factors? All of these concepts could be encoded in a cost function to then be optimized during the planning phase of the robot's decision making, e.g. determining the optimal trajectory. So, I would argue that we already are producing agents that 'want' something, because the behavior of the agents is conditioned on the evaluation of this cost function.

Now, one might argue that the cost function is designed by humans and is therefore really just what humans want. Does this hold if we introduce machine learning techniques for autonomously constructing cost functions or choosing parameter values? The argument would then become that the humans designed the machine learning algorithm that optimizes the cost function, so this is still just what the humans want. I think there is an assumption that needs to hold for this to be defensible, namely that humans fully understand the space of possible outcomes produced by automatically generating a complex and high-dimensional set of cost functions that describe the behavior of the AI in the environment (where the environment could be virtual or physical). Stated more simply, it assumes humans can perfectly express the desired AI behavior in a cost function. I think this is the crux of the problem, and this is the idea that Nick Bostrom put forth with his 'paperclip maximizer' thought experiment. It is easy to dismiss this thought experiment by saying we would encode or impose bounds on the degrees of freedom of the AI in order to guarantee safe and desirable behavior, but I think it is a mistake to underestimate how difficult that is. The field of research that focuses on providing such guarantees is called 'formal methods', and I expect it will become more and more important as the complexity and degrees of control of the agents increase. But we should recognize that the state of the art is currently insufficient to justify such an assumption. Also, this does not even address the question of 'unknown unknowns', or how to provide a guarantee about how an AI will behave in situations previously not considered.

I have several other criticisms (e.g. neuromorphic chips as a prerequisite), but I think they are mostly captured by TheTentacles' comments on Steve's NeuroLogica blog post about this topic, particularly the one from April 2 at 7:52 AM: http://theness.com/neurologicablog/index.php/is-ai-going-to-save-or-destroy-us/

Thank you for taking the time to read this feedback. I really enjoy the content you all produce, so keep up the great work!

-Jake Arkin
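To make the cost-function idea in this email concrete, here is a minimal sketch, not from the episode, of the kind of trajectory cost the emailer describes for a mobile robot crossing a room. The obstacle positions, weights, and candidate paths are all illustrative assumptions; the point is only the mechanism: every consideration (collision risk, speed limit, efficiency) is folded into one number, and the planner 'chooses' whichever candidate minimizes it.

```python
# Illustrative sketch of a planning cost function (all values assumed).
import math

OBSTACLES = [(2.0, 0.3), (4.0, 0.2)]  # obstacle centers (x, y), assumed
SAFE_RADIUS = 0.5                     # desired clearance from obstacles
MAX_SPEED = 1.0                       # safety speed limit (units per step)

def trajectory_cost(waypoints, w_obstacle=10.0, w_speed=5.0, w_length=1.0):
    """Score a candidate path (list of (x, y) waypoints); lower is better."""
    cost = 0.0
    for i, (x, y) in enumerate(waypoints):
        # Penalize intruding on the safety radius around any obstacle.
        for ox, oy in OBSTACLES:
            d = math.hypot(x - ox, y - oy)
            if d < SAFE_RADIUS:
                cost += w_obstacle * (SAFE_RADIUS - d)
        if i > 0:
            px, py = waypoints[i - 1]
            step = math.hypot(x - px, y - py)
            cost += w_length * step              # prefer shorter paths
            if step > MAX_SPEED:                 # one step per unit time
                cost += w_speed * (step - MAX_SPEED)
    return cost

candidates = [
    [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)],  # cuts close to obstacles
    [(0, 0), (1, 1), (2, 3), (3, 3), (4, 2), (5, 0)],  # wide detour, fast steps
    [(0, 0), (2.5, 0), (5, 0)],                        # short, but over the limit
]
for path in candidates:
    print(round(trajectory_cost(path), 2), path)
print("planner picks:", min(candidates, key=trajectory_cost))
```

The agent's 'wants' live entirely in the weights: shift w_obstacle relative to w_speed and a different path wins, which is exactly the emailer's point about how hard it is to express desired behavior perfectly in a cost function, whether the weights are hand-set or learned.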

Science or Fiction ()

Item #1: NASA data finds that terrestrial radio communications push the Van Allen Belts and other high energy radiation away from the Earth, and may be useful as a shield against harmful space weather.

Item #2: A new study supports the hypothesis that metabolism arose prior to RNA in the origins of life on Earth.

Item #3: New evidence suggests that the first mass extinction at the end of the Ordovician was caused by climate change triggered by a massive coronal mass ejection.

Skeptical Quote of the Week ()

'People tend to hold overly favorable views of their abilities in many social and intellectual domains... This overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.'

- Justin Kruger and David Dunning, 'Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments'

S: The Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information on this and other episodes, please visit our website at theskepticsguide.org, where you will find the show notes as well as links to our blogs, videos, online forum, and other content. You can send us feedback or questions to info@theskepticsguide.org. Also, please consider supporting the SGU by visiting the store page on our website, where you will find merchandise, premium content, and subscription information. Our listeners are what make SGU possible.


References

