
Who Am I?

60 Episodes · Produced by William Blacoe · Website

Join us on a journey of self-discovery!

[Ep.02] Ghost in the Machine

In this episode Steven and William share their excitement for science-fiction as it pertains to self-awareness, free will and the difference between man and machine, if there is any. Will there ever be conscious robots, indistinguishable from humans?

Watch this episode on YouTube

Steven: Hello, welcome to the podcast "Who am I". I'm Steven.

William: And I'm William.

Steven: And this is episode 2. Welcome. Last podcast was really fun and we had a good time, just chatting away. You've been working hard on all the editing and stuff, so that's much appreciated.

William: Yeah, that was an interesting process. I got to learn about video editing, uploading, publishing. And pretty soon I will have the transcript ready to put on the webpage. And then hopefully our pages will become more searchable that way. Also, I really want us to make a trailer. I'm going to collect some material for that. It could just be cutouts from the show.

Steven: Yeah.

William: Maybe funny moments, maybe dropping my phone could be part of it. It must be good for something, right?

Steven: Organizing a podcast from scratch and what big things could go wrong.

William: Exactly, yeah. But there's no time pressure. It just takes however long it takes. But it'll become a routine and go quicker. So I look forward to that.

Steven: Yes.

William: Okay, today my suggestion for a topic of discussion is self-awareness. For example, you can look at it from the perspective of artificial intelligence or robotics, if you will. How are humans different from robots? I really love reading Isaac Asimov. He's my favorite author, and he is known for writing lots and lots about robots. He is, in my mind, similar to Tolkien. Whereas Tolkien built this whole planet, continent, you know, with lots of different creatures, languages, scripts, fauna, flora, Asimov created a galaxy in the future. And it's so far in the future that Earth has become a myth. This one planet where human life started. And in some of the stories people try to find out where it was by looking into really old texts. But anyway, that's a tangent.

Steven: You go ahead. Because, again, I haven't read as much Asimov as I would like. So hearing what it's about actually from you, it'll be educational for me and I'm sure for the readers as well.

William: Which books or movies are you familiar with?

Steven: So I have… this is going to be difficult because my memories are so bad. The most famous one, I haven't read much of that one.

William: The foundation series.

Steven: Yeah, the Foundation series. I jumped into one at first to try…

William: The Empire series, the Robot series.

Steven: It might have been the robot series.

William: Ok, yeah, that's probably the most known one. Or next to Empire, next to Foundation at least.

Steven: I heard of it being really good.

William: There are lots of short stories in one compilation, and the movies come from that series: I, Robot and Bicentennial Man. Yeah, and so in those movies and many of his books he, or the main character, deals with questions like "is this a robot?". Similar to that film with Harrison Ford.

Steven: Blade Runner.

William: Yeah, where he thinks he has the test to know whether someone is human or a robot. And you find out just how tricky that can be. And so this is a popular topic in sci-fi, one of the reasons I love sci-fi. And it's not just a computer science question, but also a philosophy question.

Steven: Yeah. It's a humanity question?

William: Humanity?

Steven: Yes, humanity question.

William: It's interesting that humans want to create machines in their own image, first of all. Why do machines have to be humanoid? Our shape is really useful for lots of things, but not all things. And so there are machines in other shapes as well, obviously. But it's interesting how much we want machines to be like ourselves. And so in some of these stories Asimov writes about how his robots become so much like humans that the question gets more and more difficult: what's the difference? In one of them a robot man falls in love with a human woman and tries to become more and more human, a bit like Data in Star Trek, except that he really succeeds. His maker is really helpful and keeps updating his biology, I want to say, his anatomy. There you go. And in the end there is no more significant difference. I won't spoil the end: what was the very last thing that he needed to change to become human? But the Bicentennial Man was granted human status: a citizen of humanity. And so that's really interesting. What does it take? And since these are fiction I don't know how close they are to reality. But we are certainly going in a similar direction of making machines more and more similar to humans.

Steven: It seems to be the peak goal: okay, we can make computers, we can make machines. But the goal of science and computing and all that seems to be to make a machine that can be a human, that can be classed as human in nature, human in being, human in looks. It would be interesting to know why exactly that is, why we so strongly desire a lot of…

William: A lot of scientific research is not just about analyzing and understanding aspects of nature, but then also recreating it and simulating it. And then you compare it with nature and see how close you can get. So I'm in natural language processing, for example. And there we try to understand language, which you could say is nature. We use it every day, but we don't know how it works. You get taught grammar in school, but that's very far from the actual thing, language. It's just a very basic approach, usually having more to do with spelling and writing, which are not the most natural part of natural language. You could argue they're artificial. For one, you can tell writing is different from how we speak because it's something that a government can change, when they introduce a writing reform, for example. And how you speak cannot be changed by institutions. Also, thousands of years ago, I don't know how long ago, there were tons of languages without any scripts. I believe there still are some. And so they just evolved without any government, just in anarchy, and that's the more natural part of language. Anyway, why was I getting into this? Oh yeah, we don't understand how language works. We just try. We make some language technology that comes close and is pretty useful. We use it every day, for example in search engines. But we still cannot reproduce exactly what goes on in our brain. And I kind of like the mystery. The thing that makes us different from robots is not knowing how we really work. But on the other side, I also enjoy discovering knowledge about how we work.

Steven: Yeah.

William: That makes us more self-aware as well.

Steven: Yeah, it's a good mirror to hold up to, again, a self-discovery of "Who am I?" and how that can be explored, again, through the robot question that you've just pretty much raised. Again, we mentioned The Matrix last time. You've mentioned Asimov. There's one recently called "Ex Machina".

William: Oh a movie, you mean?

Steven: Yeah, the film.

William: Because the book has been around. I just haven't gotten around to reading it.

Steven: I didn't realize it's based on the book. I thought it was just a film.

William: Or "deus ex machina". It's probably a different thing, right?

Steven: Yeah, deus ex machina is only a phrase. I don't know if that is a book they've used to make the film. I don't know. I just I assume the film was just… Maybe I'm wrong. I've no idea.

William: I'll have to look that up. "Deus ex machina" means "the god out of the machine".

Steven: Yeah.

William: And "ex machina" then is just "out of the machine".

Steven: Yeah.

William: So, did you see it? Was something unexpected coming out of a machine?

Steven: Well, the film itself is what we're discussing. The premise is: a young man wins a coding competition, goes to a secret place where he gets to spend a week with the boss of this big company. That's what he thinks he's won: just being able to spend a week with him. But actually he's now going to do the Turing test, which is the test: can an AI be classed as human, as having consciousness? And that's the film. The film covers all that. And, again, I won't talk any more about what the plot is and stuff. It's worth watching. It's about the dude's, the young adult's, I guess, discovery.

William: The Turing test is still around. I mean… Turing has been dead for over 50 years. But wait, Turing lived during the Second World War, so I'm not sure when exactly he died. But there was an anniversary a couple of years ago, maybe for his birth. Anyway, the Turing test, I believe, has prize money on it.

Steven: Oh, wow.

William: Yeah, if you can fool someone, well, if a machine can fool someone or, you know, act so well like a human that you cannot tell them apart. And that doesn't go into looks; it's basically about language and reasoning. But we're not there yet. Yeah, it's exciting to see progress in that area. But making progress in technology doesn't mean that we're making progress in understanding how our own faculties fully work. Philosophers have discussed this question for a long time. They call an entity that comes really close to being and behaving like a human in every way, but isn't human… They call that a zombie.

Steven: Okay.

William: That's the real philosophical term. It's not slang, at least not anymore. So a zombie would pass the Turing test. But the difference is it does not have the subjective conscious experience that humans do. And this is something really hard to pin down and talk about, even for philosophers, I believe, because it's controversial. People have very different views on consciousness, and free will for that matter. It ties in closely, I believe.

Steven: Oh, yeah.

William: Are we really alive, in the sense that we… See, I can't even put it into words.

Steven: It's hard, isn't it.

William: Oh, it reminds me of that film with Johnny Depp and Morgan Freeman where the machine… Or no, first Morgan Freeman asks the machine "can you prove that you're self-aware?" And the machine says "can you?". It's called "Transcendence". It's great. It came out about five years ago.

Steven: Okay. Don't think I've seen that one.

William: It's great. Yeah, Johnny Depp dies, and so he uploads his consciousness into a computer just before. He has a wife who's also a genius. And she helps him stay alive. Because things get really critical for humanity. Will there be destruction? And then you have to wait to find out if that really happens, you know, or if it can be thwarted, or if it really is a danger, and so on. It's great.

Steven: That's really funny because I remember seeing that come out in terms of like the trailers and stuff, and being like "I want to see that" and then being like "okay, I'll watch it when it comes out on DVD or whatever", and then I just forgot about it completely. I have no remembrance of it at all, wow, until you just mentioned it.

William: If you're not constantly in touch with cinema you tend to be…

Steven: I tend to be kind of on the ball with these things because I love… You know how much I love any media that tells a story. Media, medium. That tells a story. How long do you think it will be until conscious machines? Because, I mean, to be honest, I think it'll happen.

William: Uhm, I have a different view. I really enjoy the Australian philosopher David Chalmers on this topic. And I watched some of the interviews with him. And I can't say for certain, obviously, but I like how he says that no matter how complex machines get, no matter how many artificial neurons you put together in a network, you will never achieve the same kind of consciousness that humans experience. So he makes a hard difference, a strict difference between artificial intelligence and, I guess, natural intelligence. And so… Sorry?

Steven: The question there is… because we're machines in biology. We have electrical currents going through us. So ultimately we'll just come full circle, won't we? We will get to the point where through machine and biology we just combine the two to be able to make humans. Is that not…?

William: That's a popular view. That's right. I'm glad you raised that and didn't just go along with what I'm saying, because, yes, a lot of people believe that we are just a biological machine.

Steven: Ok. Yeah, I don't know what I personally believe, but I'm just throwing in different viewpoints.

William: No, it's important that we mention especially the questions that a lot of people have. And this is a common question. What's the name of the English physicist who died recently? Hawking, Stephen Hawking. I read…

Steven: He passed away?! Wow. Sorry.

William: Yes, he did, a few years ago.

Steven: Oh my goodness. This is hilarious. I'm going to literally look this up because it's just… this happens a lot with me.

William: I miss a lot of stuff too but…

Steven: Yeah, March. Wow, yeah. Again, okay, with my health stuff, my memory and that. It's, yeah… I would have… The time I think…

William: He was just on the Big Bang Theory a few years ago.

Steven: Okay, I must have… Obviously I would have seen it. I would have known that he had passed away. Again, because of health stuff, my memory, it's just gone.

William: Information overload? It just went away?

Steven: I don't know what it was. My stupid health, the ability to retain memory at the moment…

William: Are you saying that you knew at one point and forgot?

Steven: Yeah, well I would have had, it would have been on Facebook or whatever else I use. So I would have known, but the problem is, when you tell me he died, I'm like "I can't remember", I'm like "okay, that sounds right", but there's no grounding in my brain, no point of reference where I remember hearing it at that time. So I question the reality of it just for myself, knowing you're right, but not having any context for myself.

William: That's confusing.

Steven: Oh, it can be really difficult sometimes. Yeah, there's certain things, there's no rhyme or reason to it. Anyway, off-topic.

William: I mentioned Stephen Hawking because he was very popular. He was stubborn. But his views had a big influence on people. Anyway, every time he made a new discovery, or science in general made a new discovery, he felt confirmed, as if his view became more likely to be true: that we are just matter in action and there's nothing immaterial behind it. He liked to say at the end of an explanation "and since we now know how this works there is no more need for God". Because, as you know, one reason that religion is popular is that whenever people hit a wall with their explainability, with their knowledge on a topic, they like to say "…and everything beyond that is up to God". Since our brains are so, let's go with finite, and God's isn't, this is something that he can understand that we can't. And I think that's true, you know, in a sense. But I like to think that God can tell us everything. So therefore nothing is really inaccessible. It just might not be in our control. We cannot always force progress or knowledge.

Steven: It's a way of just saying "etc., etc." or "and so on and so forth." We have got this point of knowledge that's as far as we can go at a specific period of time. And that's to be able to understand that we can't understand it yet. We can't assume it…

William: How can we know what we cannot know?

Steven: Yeah, we kind of submit and go "okay, well, sometimes ignorance is bliss. And we will get there at some point, but right now we don't." It's like there has to be an answer. Why can't we just put "dot dot dot, we'll get there eventually. Give us time."

William: Yeah, you know, a Black Swan might come along. And it wouldn't be a Black Swan if you expected it. So there are emergent… there are emergent phenomena like that where knowledge suddenly makes a leap.

Steven: Yeah.

William: But… I forgot how we got onto the topic.

Steven: Stephen Hawking, at the end of his statements…

William: So when scientists, or people making discoveries, or even people catching up on already existing knowledge, you know, knowledge that is in human society already, when they catch up, and this happens to me too: I start noticing patterns in nature, and I say "wow, this explanation is really good and explains this and this and answers this question I always had", and you get so confident that you start believing this can explain everything, or that it'll just be another five to ten years until we can explain everything. And that is not advisable, not careful.

Steven: I get very confused by the whole "looking for the one question that answers everything". I find it very ignorant and, again, it's just me being a kind of human being with not much knowledge of anything, really. But I find it very strange because at any given point something can prove a situation, and then time goes by, something happens which contradicts that. It doesn't change the other thing, it doesn't change that at that point the proof that was a certain way… Again, it's proof. It's not just a theory. Life is contradictory. It's paradoxical. There is a Dune quote. I won't go into it. It talks about how paradox is just an entertaining thing. People that get wound up in paradox need to chill out. That's the Steven way of interpreting the quote because actually paradox is just part of existence. And so why… and contradiction… why do we need to suddenly say that was wrong and now this is right. We have an obsession with that, I think, as a species.

William: Yeah, I think the more humble scientists say that scientific discoveries are really just theories, and you can never prove that they're true. You can just show that they hold in certain contexts and certain experiments. And they are assumed to be true until they're disproven. So every theory is falsifiable but not really provable. You would need to test it in every possible situation, and we don't even know how to do that. We would need to know everything about everything to have absolute certainty that something is true. So "theory" doesn't just mean some discovery that scientists formulated, but also, like most people use the word, something not certain.

Steven: And, I guess, linking it back to the robotics side of things: that's what we're exploring as a humanity, projecting our desire to discover who we are as beings through creation, because that's pretty much how we will always do it. Creating a machine robot that reflects ourselves is a way to explore ourselves, because you're creating something that is like you, or like us as a species. I guess you'd have to, again, I don't do robotics, but you'd have to go through so much data, you'd have to understand so much philosophy, so many emotions. You have to understand… It hurts my head, trying to think about it… and then be able to create something like that, that does it by itself.

William: But, so when you do create something that behaves similarly to you, or to a human in general, then I think the next step is that you believe that what you did to make this machine the way it is is the same as what made you who you are. I remember when… This blew my mind the first time I sat in a lecture about logic in computer science, and I realized that the logical formulas we were writing down were explaining something mathematical, just not as complex as human behavior; just something in math, something abstract. I thought "okay, this describes the behavior. But at the same time people believe that this IS the behavior. That this is what generates the behavior in the first place." But I believe that there's still a step in between, and that they might just seem the same but they don't have to be the same. Does that make sense? When I say "oh, you seem to drink orange juice every second day", and I watch you. Say this is someone I live with, so that I can notice this really every day. And so I set up this rule. I can write it down mathematically if I want to: every second day this person drinks orange juice, and the other days they don't, but usually you don't have to say that. Anyway, now does the person do that because they have a rule written somewhere inside their brain saying that is what they do? Or is it that they drink orange juice every time they feel sad, and that's just how they deal with it? And for some totally different reason they feel sad; it's just a coincidence that they feel sad every second day. So the rule is really something totally different, but…

Steven: Where did the rule come from? Is it wired into our being? Is it a consequence of variables throughout the past of their existence? So not necessarily why they did it, but, again, environment-type factors that can change at any point in life. Or is it, as you say, that they've just consciously decided: "Well okay, I just wanna… I'm gonna have it every other day because that's what we're gonna stick to." To me, I mean it's more, but to me there's three different things going on there.

William: Okay. I already forgot the first two. So tell me: environmental factors?

Steven: The first one is hardwired into their system.

William: Okay.

Steven: So DNA, genetics, whatever, coded… who knows that there's something in there that was gonna make them always get to the point where they were gonna have orange juice every second day. Or they go through life and just think things happen, environment, external factors. Maybe, like you say, their mom used to do it every second day. Who knows. They decided that they… that's more of a philosophy.

William: And the third one was conscious choice.

Steven: Conscious choice: that kind of crossed over a little bit there from environmental into conscious choice.

William: Yeah, they're not totally clearly divisible.

Steven: I'm trying to divide something that isn't actually, you know, separated out, so I can understand those three different points, but actually they all intertwine. Whatever.

William: It is really useful though. The first two points are very similar to the nature versus nurture debate.

Steven: Yeah, that's the obvious one.

William: And the third one is relevant for our discussion about consciousness and free will, because certain people, I mean a lot of people, believe that conscious decision-making is just an epiphenomenon, it's just something that we tell ourselves exists, but actually it's just an illusion; and everything we do is predictable or, what's the other word, determined, so that our behavior is deterministic. And machines are deterministic, at least the way they are right now. You say one day they might not be, and that's interesting. But yeah, it's really difficult to know how I make decisions. You could say… You can reduce a lot of what I do back to something else in the past, like the way I was raised.

Steven: Yeah.

William: The things already mentioned; or just how I work. And I do a lot of things without consciously knowing that I'm doing them. Like, I heard once that you are attracted to someone if, when you kiss, your saliva chemically harmonizes. Or just the pheromones someone else gives off: you can like them and then you're attracted to that person. So there are all these factors that you don't know are working on you, and they influence your decision-making. Things like that certainly show that there is truth to this unconscious decision-making. But does it mean that everything works that way? I think that would be too far-reaching and, again, extrapolating from the knowledge you do have to "oh, this is how everything works".

Steven: Yeah. I always find it important not to jump to absolutes; again, just one thing I've learnt from Dune. Where you say THIS is how it is. Well no, this is how I believe it is, but there are a hundred different factors that I'm going to absorb, listen to, take on board that can also be right, or could be part of the equation. Right and wrong are very absolute terms; trying to express the variation of the situation is difficult.

William: Yeah, I think it's a matter of arrogance or humility. At least once you have reached a point where you become aware that you're aware. That's critical too. So if you think about children, they do a lot of things out of necessity, just automatically, basically. But something will change, and it's a process, I don't know if it can happen all of a sudden. But when they become aware how their decision-making works and that they can actually change what they want to do…

Steven: Isn't there that test, the sweet test? You put a child at a table, and you put a sweet in front of them, and you say "if you wait ten minutes you get two sweets". And the child just sits there; and most children just eat the one sweet because they feel the necessity… they're like "this is good now. I love it."

William: Is it important for the test that you leave the room?

Steven: Yes, I think so, yeah. They're on their own with the sweet. I think it does many different things. But I think, again, it shows what you just said, where you reach a point where you can actually override the impulse to eat the good thing, the sugar, and realize that you get something better if you wait; it's a better, more positive outcome if you just wait. Because I think that's what a lot of choices are. I think a lot of choices are action and reaction. Something happens to you and you automatically respond. It's putting up a wall. Again, this is my opinion: free will is where you're able to put up a wall and go "okay, just pause for five seconds and do a bit of conscious choice-making", to then make the decision, to then do what you want to do. Because again, it's not necessarily about what is right or wrong, it's what you as an individual want to do in that situation.

William: Right.

Steven: You might want to go and punch a person, that's an example: if you're in an argument or something, and your impulse is, let's say, fight, fight or flight. And in the past you've just acted out of impulse. You hit the person and then you got in trouble, or whatever, or you've not. If you can get to the point where you can go "No, I don't want to hit the person" and walk away, or "Yes, I do want to do it" and then still do it, that shows a conscious choice-making situation, again, in my view, my little naive view.

William: You know, I think that's really a good perspective for this topic. It's easy for us to look at children and say "oh, they've just taken a step to another level of consciousness", and then it helps when I remind myself that I'm not that different from children. I've made certain changes but I can still make lots and lots of changes. So I'm just on one of the many steps on this staircase towards higher and higher consciousness, you could say. I also love that you used the word "respond". I can respond to the situation impulsively, I mean, just, you know, react without knowing why or how. Or I can choose how to respond. And the word responsibility comes from "respond", because I have the "ability to respond". I am response-able. And so that is a power that you can gain. I'm not sure how exactly the process works. But you can certainly decide to be open for it. And then with experience and just trying over and over you can gain a higher level of responsibility, of awareness. And I think that's a very valuable thing to work on.

Steven: Definitely. On the flip side, just to throw a contradictory example, discussion out there, is you can spend too long thinking and worrying about things. You can over-analyze. A lot of my anxiety and depression problems come from that. They've come from just sitting there and thinking too much about a situation and not just letting my own impulses do the thing. Because sometimes I'm like "Okay, is this the right way? Is this the wrong way?", in the past. Now I just go to that core thing of what do I want, what makes me happy? And then I just do it. And if it goes wrong or seems to go wrong I'm like "meh, whatever it doesn't matter. It's what I wanted to do. I tried it." I have that integrity and that happiness that I did it. No matter what happens that went or what goes wrong. I'm like "Wow, yeah. Go me. I had the consciousness to just do what I thought would help me be happy."

William: The non-useful thoughts, do they go in circles? Is that what you're talking about?

Steven: They can do, yeah.

William: You try to analyze and be all cognizant, but it never leads anywhere.

Steven: Yes, yeah. Again, that's where children come in. I mean, your example of humility links back to kids, because you're showing you're humble by looking at them as an example, because we're not much different from them. We think as adults we've suddenly become this higher being, the universe knows how much more than children. But children are such prime examples of so many different things, and fun, fun and play. We talked about it last time: being able to play is a great way to learn, just having fun and not overthinking things.

William: CBT stands for cognitive behavioral therapy, and the cognitive therapy part is the self-analyzing: trying to figure out what's going on, thinking about your own thoughts and seeing how you can change those patterns, to help. But the behavioral part is what you were just talking about, at least I believe that's what behavioral therapy is about: how you just need to do something sometimes; just act out what you believe to be right or what you believe to be healthy in the moment, without knowing it in theory, you know, in your brain. And both are necessary for mental health, or for health in general.

Steven: A lot of it is to do with fear: fear of getting it wrong or making a fool of ourselves, or whatever the reasons are that we prefer just to think about it and go through it in our brains. And again, our brain often lies to us. It tells us things like "this social situation is gonna be scary" or whatever. And in many ways it's a self-fulfilling prophecy where, because you think it's scary, your brain releases the biological side of things, yeah, anxiety; when actually, if you just went and did it, and didn't think about it, you'd be like "okay, cool, this is fun" or whatever else. That's what I got stuck in way back. But again, I keep trying to refer this back to the original sort of questions about robotics. Do you think that they could make a machine, I guess, express what we're expressing? Because you know more about these things than I do. Expressing awareness of fear? Awareness of fun through learning?

William: Let me give you a high-level answer that doesn't go into so much detail. The first thing that jumps to my mind is Kurt Gödel. Have you heard of him? He was a mathematician in the 30s, at least that's when he made his huge contributions. He was a party-pooper. I like to call him that because people at the beginning of the 20th century were so excited about all these new discoveries about logic. So you know how we're excited about technological advances in the last few decades. Well, they were excited about the theoretical advances in understanding math and logic better, and what you can write down on paper about nature. So do you see the link, yeah? You first need to understand these things in theory before you can recreate them.

Steven: Definitely.

William: David Hilbert, at the turn of the century, came up with 23 problems that he formulated for scientists in the 20th century, and he said: if you have nothing to work on, try one of these. And these were really difficult problems. You can look on Wikipedia to see which of them have been solved and which haven't. Sorry?

Steven: I think I've heard of those.

William: Good. And I think maybe a third of them are solved, and that's about it. But one of them was: can you describe everything that happens in the universe mathematically? And people believed yes. They were very optimistic about this problem. And then Kurt Gödel came along and showed that, nope, you can't. He didn't have to go through every possible situation and find one that doesn't work. He showed that mathematics in itself is incomplete. That doesn't mean it's wrong or useless. It just means it has its limitations. For example, when you write down a formula that tries to express something about itself (and this is similar to a machine trying to think about itself; it's analogous, in theory it's the same), it is possible, but only to a certain extent. Because there are certain mathematical formulas that express something, and you can derive them to be true from basic truths. I'll give you an example of one of the logical steps you can make in mathematics. Modus ponens is a rule that applies when you have two statements: one says "if A then B", and the other says "A", meaning A is true. If both are true, then as a consequence B must be true: we know A is true, and we know that if A is true then B is true, so combining the two gives us B. That's one basic rule that most scientists can agree on; in most situations it holds true. You can come up with some quirky situations where it doesn't, but let's stay in the common-sense area. And so you have a set of these rules, maybe a dozen, and with them you can express infinitely many very complex things about nature or any mathematical system. And the logic will be consistent for that: if X is true, then its negation must be false, right?
If I say the sun is shining and it is, then the sentence "the sun is not shining" must be false. But within the mathematical logic that we're using right now, standard arithmetic, you can construct a sentence that the system cannot decide: it can prove neither the sentence nor its negation.
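The modus ponens step William describes can be sketched as a tiny forward-chaining loop. This is a minimal, purely illustrative Python sketch; the fact names and the `implications`/`facts` structures are invented for the example:

```python
# Modus ponens: from "if A then B" and "A", conclude "B".
# Illustrative knowledge base: implications map A -> B, facts are known truths.
implications = {"sun_is_shining": "it_is_bright"}  # "if A then B"
facts = {"sun_is_shining"}                          # "A" is true

def modus_ponens(implications, facts):
    """Repeatedly apply modus ponens until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications.items():
            if a in derived and b not in derived:
                # We know A, and we know "if A then B", so we may conclude B.
                derived.add(b)
                changed = True
    return derived

print(modus_ponens(implications, facts))
# The derived set contains both "sun_is_shining" and "it_is_bright".
```

Chaining works too: with implications `{"a": "b", "b": "c"}` and the single fact `"a"`, the loop derives `"b"` and then `"c"`, which is the sense in which a dozen basic rules can generate infinitely many consequences.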

Steven: Yeah.

William: So, I don't remember how exactly, I looked at the proof once. It's complicated. But Kurt Gödel came up with this in the 30s and showed that the mathematical system has been very useful, but it has its limitations, because there are true statements that the system can never derive. And so that shows that, no, the mathematical system we are using right now is not able to cover everything. We cannot express everything with it. And that to me is fundamentally important, because it shows that there are still things outside of the very system, the language, the mode of thinking that we use to make sense of nature, our environment and ourselves. So I hope that you can see the connection between that discovery and your question about self-conscious machines. As long as we build machines based on the mathematics we use right now, I believe that they will at least be limited.

Steven: Okay.

William: You might be able to create zombies.

Steven: Well, we're limited as well. So I guess our limitation creates the limited machine. Therefore we need to overcome our own limitations to be able to create.

William: The last thing we said was humans are limited in our thinking in our scientific methods. And then you said that then it's no surprise that the machines that we create are also limited.

Steven: Yeah.

William: And that makes a lot of sense, since, I believe, you need to be at a higher level than the thing you create: basically, you need to be able to create everything it does and to understand everything it does.

Steven: Interesting point, because again, creation often happens through accident, just random chaos, I guess. I don't think people realized exactly what they were doing when they split the atom. I mean, maybe they did. I don't know. But is it something they planned for?

William: There are examples of what you're talking about. Like penicillin was discovered accidentally.

Steven: Yeah, so leaps in technology can… that's where, again, all the I guess disaster films come from where it's like "do we jump forward in robotics?" It's the point where we actually create something that is beyond us. And then…

William: We should definitely discuss that another time.

Steven: I think it would be a good part two. I think this is a good ongoing discussion that could quite easily take up another session.

William: Of course, yeah.

Steven: So if you want to…

William: Some people call this topic "AI safety". Elon Musk is a big proponent of being more careful and not just going for every idea you have. People do that especially when there are economic incentives behind it. He had a fight, you could say a debate, with Mark Zuckerberg on Twitter once. They went back and forth: what is safe to do? When should you be more careful and not care so much about all the money you could make just because you're the first to try a new approach? Yeah, it's a big topic.

Steven: It is, it really is. Have you felt like you've discussed a lot of what you wanted to in this session?

William: Yeah, obviously it evolved organically, just like you predicted. And I didn't expect to go into mental health or some of the history of mathematics. But it's all related, obviously. So yeah, I'm very happy with the way it's gone. And we could spend another hour on it, which would definitely make another episode.

Steven: Yeah, I'm happy to do that.

William: Good. What do you think about the idea of sometime in the future also getting on someone else to speak with us or with one of us about a topic?

Steven: Yeah. I mean, I just talked about limitations; it's always good to get another viewpoint. It'd be nice. Wouldn't it be amazing if we could link up, you know, seven billion people and get a lot of different perspectives together? But again, limitations; so having someone else with a perspective and knowledge is always good fun, yeah.

William: It doesn't have to be an expert, just another voice that brings some diversity.

Steven: Yeah. Again, we're not exactly experts.

William: Yeah.

Steven: Well, you are in many things.

William: Well, I studied certain things, and you could tell I'm really excited to talk about them.

Steven: Yeah.

William: And I'm happy to have this outlet, this way to express them.

Steven: I like listening to them. In our previous discussions, over the last number of years, I've really enjoyed that we're very different in our own knowledge bases, I guess. I guess that's what happens as humans. But you have such an incredible understanding of, again, coding, computing language and how it relates to humans, to us. And you do a very good job of humanizing it all. Because a lot of times I separate myself from that kind of stuff, because it feels so foreign to how my own perspective perceives things that it seems inhuman. But actually the way you break it down and talk about it really helps me to open my mind more to that. I think I still only rarely understand it, but it gives me a little bit of knowledge.

William: I know what you mean, yeah. I think it's sad when people strictly try to separate the science from what it means for us in everyday life. And humanization is important, like the ethics question, for example. That should always come along with the scientific development. You shouldn't just do something because it's fun or because there's lots of money in it. Progress isn't always all good, or at least you should say there are different types of progress.

Steven: Yeah, again, it's intention, isn't it. Intention behind the progress.

William: Yeah, and the unintended consequences that you need to repair afterwards…

Steven: Which, I guess, is a good link right there, actually. That question could be our starting point for next session: the intentions behind creating, again, machines that have consciousness and potentially affect our day-to-day lives. And (1) why would you want to do that? (2) Should we do that? (3) Is it worthwhile? These things we can discuss, I guess.

William: Definitely, yeah.

Steven: To start with. Maybe you have your own ideas. But just…

William: No, I have tons of ideas, just in response to the things you were saying. But I'd better write them down, and not try to cover them all now.

Steven: There we go.
