In this episode, Byron and Ira discuss transfer learning and AI ethics.
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Ira Cohen, he is the cofounder and chief data scientist at Anodot, which has created an AI-based anomaly detection system. Before that he was chief data scientist over at HP. He has a BS in electrical engineering and computer engineering, as well as an MS and a PhD in the same disciplines from The University of Illinois. Welcome to the show, Ira.
Ira Cohen: Thank you very much for having me.
So I’d love to start with the simple question, what is artificial intelligence?
Well, there is the definition of artificial intelligence as machines being able to perform cognitive tasks that we as humans can do very easily. What I like to think about in artificial intelligence is machines taking on tasks for us that do require intelligence, but leave us time to do more thinking and more imagination in the real world. So autonomous cars: I would love to have one. That requires artificial intelligence, and I hate driving; I hate the fact that I have to drive for 30 minutes to an hour every day and waste a lot of my cognitive time thinking about the road. So when I think about AI, I think about how it improves my life by giving me more time to think about even higher-level things.
Well, let me ask the question a different way, what is intelligence?
That’s a very philosophical question, yes, so it has a lot of layers in it. So, when I think about intelligence for humans, it’s the ability to imagine something new: to have a problem and imagine a solution, think about how it will look without actually having to build it yet, and then go in and implement it. That’s what I think about [as] intelligence.
But a computer can’t do that, right?
That’s right. So when I think about artificial intelligence, personally at least, I don’t think that, at least in our lifetime, computers will be able to solve those kinds of problems. But there is a lower level of intelligence, of understanding the context of where you are and being able to take actions on it, and that’s where I think machines can do a good job. So, understanding the context of the environment and taking immediate actions based on that, actions that are not new, but that people already know how to do, and therefore we can code them into machines.
I’m only going to ask you one more question along these lines and then we’ll move on, but you keep using the word “understand.” Can a computer understand anything?
So, yeah, “understanding” is another hard word to pin down. I think it can understand, well, at least it can recognize concepts. Understanding maybe requires a higher level of thinking, but understanding context and being able to take an action on it is what I think understanding is. So if I see a kid going into the road while I’m driving, I understand that this is a kid, I understand that I need to hit the brake, and I think machines can do these types of understanding tasks.
Fair enough. So if someone asked about the state of the art, like, where are we at with this? Because it’s in the news all the time and people read about it all the time, so where are we at?
So, I think we’re at the point where machines can now recognize a lot of images and audio or various types of data, recognize with sensors, recognize that there are objects, recognize that there are words being spoken, and identify them. That’s really where we’re at today, we’re not… we’re getting to the point where they’re starting to also act on these recognition tasks, but most of the research, most of what AI is today, is the recognition tasks. That’s the first step.
And so let’s just talk about one of those. Give me something, some kind of recognition that you’ve worked on and have deep knowledge of, teaching a computer how to do…
All right, so, when I did my PhD, I worked on affective computing, so part of the PhD was to have machines recognize emotions from facial expressions. So, it’s not really recognizing emotion, it’s recognizing a facial expression and what it may express. There are six universal facial expressions that we as humans exhibit: smiling is associated with happiness, there is surprise, anger, disgust, and those are actually universal. So, the task that I worked on was to build classifiers that, given an image or a video sequence of a person’s face, would recognize whether they’re happy or sad or disgusted or surprised or afraid…
So how do you do that? Like do you start with biology and you say “well how do people do it?” Or do you start by saying “it doesn’t really matter how people are doing it, I’m just going to brute force, show enough labeled data, that it can figure it out, that it just learns without ever having a deep understanding of it?”
All right, so this was in the early 2000s, and we didn’t have deep learning yet. We had neural networks, but we weren’t able to train them with huge amounts of data. There wasn’t a huge amount of data, so the brute force approach was not the way to go. What I actually worked on was based on research by psychologists who mapped facial movements to known expressions, and therefore to known emotions. It started out in the 70s with people in the psychology field, [such as] Paul Ekman in San Francisco, who created a map of facial movements into facial expressions, and that was the basis for what types of features I needed to extract from video and then feed to a classifier. Then you go through the regular process of machine learning of collecting a lot of data, but the data is transformed: these videos were transformed into known features of facial movements, and then you can feed that into a classifier that learns in a supervised way. So I think a lot of the tasks around intelligence are that way. It’s being changed a little bit by deep learning, which supposedly takes away the need to know the features a priori and do the feature engineering for the machine learning task…
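[Editor’s note: a minimal sketch of the supervised pipeline Ira describes, with hypothetical Ekman-style facial-movement intensities standing in for features extracted from video. The data, feature count, and labels here are synthetic placeholders, not the actual system from the PhD work.]

```python
# Sketch: facial-movement features -> supervised expression classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical: 500 video frames, each described by 10 facial-movement
# intensities (brow raise, lip-corner pull, jaw drop, ...).
X = rng.random((500, 10))
# Labels: one of the universal expressions (random here, for illustration).
EXPRESSIONS = ["happy", "sad", "surprised", "angry", "disgusted", "afraid"]
y = rng.integers(0, len(EXPRESSIONS), size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)               # learns in a supervised way
print(f"accuracy on held-out frames: {clf.score(X_test, y_test):.2f}")
```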
Why do you say “supposedly”?
Because it’s not completely true. You still have to do, even in speech, even in images, some transformations of the raw data. It’s not that you just take it as is and it will work magically and do everything for you. You do have to, for example in speech, do various transformations of the speech, such as a short-time Fourier transform or other types of transformations, without which the methods afterwards will not produce results.
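[Editor’s note: a minimal illustration of the kind of transformation Ira means, a short-time Fourier transform written with NumPy alone. Real speech front-ends layer mel filter banks, log compression, and more on top of this; the frame sizes and toy signal below are arbitrary choices for illustration.]

```python
# Sketch: slice a waveform into overlapping windowed frames, FFT each frame.
import numpy as np

def stft(signal, frame_len=512, hop=128):
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return np.fft.rfft(frames, axis=1)  # (n_frames, frame_len // 2 + 1)

# Toy input: one second of a 440 Hz tone sampled at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
spectrogram = np.abs(stft(np.sin(2 * np.pi * 440 * t)))
print(spectrogram.shape)  # (122, 257): time frames x frequency bins
```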
So, if I look at a photo of a cat that somebody’s posted online, or a dog, that’s in surprise, you know, it’s kind of comical, the look of surprise, say, but a human can recognize that in something as simple as a stick figure… What are we doing there, do you think? Is that a kind of transfer learning, or how is it that you can show me an alien and I would say, “Ah, he’s happy…”? What do you think we’re doing there…?
Yeah, we’re doing transfer learning. Those are really examples of us taking one concept that we were trained on from the day we were born, with our visual cortex and then in the brain, because our brain is designed to identify emotions, just out of the need to survive. Then, when we see something else, we try to map it onto a concept that we already know, and if something happens that is different from what we expected, then we start training on that new concept. So if we see an alien smiling, and all of a sudden when he smiles, he shoots at you, you would quickly understand that smiling, for an alien, is not associated with happiness, but you will start off by thinking, “this could be happy.”
Yeah, I think I remember reading that, hours after birth, children who haven’t even been trained on it can recognize the difference between a happy and a sad face. I think they got sticks and put drawings on them and tried to see the babies’ reactions. It may be something even deeper than something we learn, something that’s encoded in our DNA.
Yeah, and that may be true because we need to survive.
So why do you think we’re so good at it and machines aren’t? Right, like, machines are terrible right now at transfer learning. We don’t really know how it works, do we? Because we can’t really code that abstraction that a human gets, so…
I think that, from what I see, first, it’s changing. I see work coming out of Google’s AI labs that is starting to show how they are able to train single models, very large models, that are able to do some transfer learning on some tasks, so it is starting to change. And machines have a very different… they don’t have to survive. They don’t have this notion of danger and surviving, and I think until we are able to somehow encode that in them, we will always have to, ourselves, code the new concepts, or understand how to code for them how to learn new concepts using transfer learning…
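[Editor’s note: for readers who want the machine-side picture, a minimal sketch of transfer learning as practitioners most often code it today: reuse a network pretrained on one task and retrain only its final layer on a new concept. This is the everyday fine-tuning flavor, not the specific Google work Ira mentions; the two-class “new concept” and the random batch are hypothetical stand-ins for real data.]

```python
# Sketch: fine-tune a pretrained ImageNet model on a new two-class concept.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained "known concepts"
for param in model.parameters():
    param.requires_grad = False                   # keep what is already learned
model.fc = nn.Linear(model.fc.in_features, 2)     # fresh head for the new concept

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for real images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()                                   # gradients flow only to the head
optimizer.step()
```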
You know, the roboticist Rodney Brooks talks about “the juice.” He talks about how, if you put an animal in a box, it feels trapped; it just tries and tries to get out, and it clearly has a deep desire to get out. But you put a robot in to do it, and the robot doesn’t have what he calls “the juice,” and he of course doesn’t think it’s anything spiritual or metaphysical or anything like that. But what do you think that is? What do you think is the juice? Because that’s what you just alluded to, machines don’t have to survive, so what do you think that is?
So I think he’s right, they don’t have the juice. Actually, in my lab, during my PhD, we had some students working on teaching robots to move around, and the way they did it was rewards and punishments. They actually coded, just like you have in reinforcement learning: if you hit a wall, you get a negative reward. If the robot moved and did something it wasn’t supposed to, the PhD student would yell at it, and that would be encoded into a negative reward, and if it did something right, there were actions that gave it positive rewards. Now, it was all kind of fun and games, but potentially, if you do this for long enough, with enough feedback, the robot would learn what to do and what not to do. The main thing that’s different is that it still lived in the small world of where it was, in the lab or in the hallways of our labs. It didn’t have the intelligence to then take that and transfer it to somewhere else…
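[Editor’s note: a minimal sketch of the reward-and-punishment scheme Ira describes, as tabular Q-learning on an invented one-dimensional corridor: bumping into a wall earns a negative reward (the “yell”), reaching the goal a positive one. This illustrates the general technique, not the lab’s actual code.]

```python
# Sketch: tabular Q-learning with wall-bump penalties and a goal reward.
import numpy as np

N_STATES, GOAL = 5, 4                 # corridor cells 0..4, goal at the end
ACTIONS = [-1, +1]                    # step left, step right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):                  # training episodes
    s = 0
    while s != GOAL:
        a = rng.integers(2) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = s + ACTIONS[a]
        if s_next < 0:                # hit the wall: negative reward, stay put
            r, s_next = -1.0, s
        elif s_next == GOAL:          # reached the goal: positive reward
            r = 1.0
        else:
            r = 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: action 1 ("right") wherever it matters
```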
But the computer can never… I mean the inherent limit in that is that the computer can never be afraid, be ashamed, be motivated, be happy…
Yes. It doesn’t have the long term reward or the urge to survive, I guess.
You may be familiar with this, but I’d like to set it up anyway. There was a robot in Japan, it was released in a mall, and it was basically being taught how to get around and if it ran into a person, if it came up to a person, it would politely ask the person to move, and if the person didn’t, it would just zoom around them. And what happened was children would just kind of mess with it, maybe jump in front of it when it tried to go around them again and again and again, but the more kids there were, the more likely they were to get brutal. They would hit it with things, they would yell at it and all of that, and the programmers ended up having to program it, that if it had a bunch of short people around it, like children, it needed to find a tall person, an adult, and zip towards it, but the distressing thing about it is when they later asked those children who had done that, they said, “Did you cause the robot distress?” 75% of them said yes, and then they asked if it behaved human-like or machine-like, and only 15% said machine-like, and so they thought that they were actually causing distress and it was behaving like a humanoid. What do you think that says? Does that concern you in any way?
Personally, it doesn’t, because I know that as long as machines don’t have real affect in them, we might be transferring what we think stress is onto a machine that doesn’t really feel that stress… it’s really just code…
I guess the concern is that if you get in the habit of treating something that you regard as being in distress, if you get into the habit of treating it callously, this is what Weizenbaum said, he thought that it would have a dampening effect on human empathy, which would not be good… Let me ask you this: what do you think about embodying artificial intelligence? Because you think about the different devices: Amazon has theirs, it’s right next to me, so I can’t say its name, but it’s a person’s name… Apple has Siri, Microsoft has Cortana… But Google just has the Google system; it doesn’t have a name. Do you think there’s anything about that… why do you think that is? Why would we want to name it or not name it, why would we decide not to name it? Do you think we’re going to want to interact with these devices as if they’re other people? Or are we always going to want them to be obviously mechanistic?
My personal feeling is that we want them to be mechanistic. They’re there not to exist of their own accord, and reproduce and create a new world. They’re there to help us; that’s the way I think AI should be, to help us in our tasks. Therefore, when you start humanizing it, you’re going to either have the danger of mistreating it, treating them basically like slaves, or you’re going to give it other attributes that are not what they are, thinking that they are human, and then going the other route. They’re there to help us, just like robots, or just like the Industrial Revolution brought machines that helped humans manufacture things better… So they’re there to help us. I mean, we’re creating them not as beings, but rather as machines that help us improve humanity, and if we start humanizing them and then mistreating them, like you mentioned with the Japanese example, it’s going to get muddled and strange things can happen…
But isn’t that really what is going to happen? Your PhD alone, which is about how you spot emotions, presumably would be used in a robot so it could spot your emotions, and then presumably it would be programmed to empathize with you, like “don’t be worried, it’s okay, don’t be worried,” and then, to the degree it has empathy with you, you have emotional attachment to it. Don’t you go down that path?
It might, but I think we can stop it. The reason to identify the emotion is because it’s going to help me do something. So, for example, our research project was around creating assistants for kids who are learning, so in order to help the kid learn better, we needed to empathize with the state of mind of the child. That was the goal of the task, and I think as long as we encapsulate it in well-defined goals that help humans, we won’t have the danger of it going the other way around. Now, of course, maybe in 20 years what I’m saying now will be completely wrong and we will have a new world of robots where we have to think about how we protect them from us. But I think we’re not there yet; I think it’s a bit science fiction, this one.
So, I’m still referring back to your earlier “supposedly” comment about neural nets. What do you think are other misconceptions that you run across about artificial intelligence? What are your own pet peeves, like “that’s not true,” or “that’s not how it works”? Does anything come to mind?
People think, because of the hype, that it does a lot more than it really does. We know that it’s really good at classification tasks; it’s not yet very good at anything that’s not classification, at unsupervised tasks. It’s not able to learn new concepts all by itself; you really have to code it, and it’s really hard. You need a lot of good people who know the art of applying neural nets to different problems. It doesn’t happen just magically, the way people think.
I mean, you’re of course aware of high-profile people, Elon Musk, Stephen Hawking, Bill Gates, and so forth, who [have been] worried about what a general intelligence would do. They use terms like “existential threat” and all that, and they also, not to put words in their mouths, believe that it will happen sooner rather than later… Whereas you get Andrew Ng, who says, “worry about overpopulation of Mars,” maybe in a couple hundred years you have to give it some thought, but you don’t really right now… So where do you think their concern comes from?
So, I’m not really sure, and I don’t want to put any words in their mouths either, but the way I see it, we’re still far off from it being an existential threat. The main concern is you might have people who will try to abuse AI to actually fool other people; that, I think, is the biggest danger. I mean, I don’t know if you saw the South Park episode last week; they had their first episode where Cartman actually bought an Alexa and started talking to his Alexa, and I hope your Alexa doesn’t start working now… It basically activated a lot of Alexas around the country, so he was adding stuff to the shopping cart, really disgusting stuff, he was setting alarm clocks, he was doing all sorts of things, and I think the danger of AI today is really it getting abused by other people for bad purposes. In this case it was just funny… But you can have cases where people will control autonomous cars, other people’s autonomous cars, by putting pictures by the side of the road and causing them to swerve or stop, or do things they’re not supposed to, or building AI that will attack other types of AI machines. So I think the danger comes from the misuse of the technology, just like any other technology that came out into the world… I think that’s where the worry comes from, and from making sure that we put some sort of ethical code in place for how to do that…
What would that look like? I mean that’s a vexing problem…
Yes, I don’t know, I don’t have the answer to that…
So there are a number of countries, maybe as many as twenty, that are working on weaponizing AI, building AI-based weapons systems that can make autonomous kill decisions. Does that worry you? Because that sounds like where you’re going with this… if they put a plastic deer on the side of the road and make the car swerve, that’s one thing, but if you literally make a killer robot that goes around killing people, that’s a whole different thing. Does that concern you, or would you call that a legitimate use of the technology…?
I mean, this kind of use will happen; I think it will happen no matter what. It’s already happening with drones that are not completely autonomous, but they will probably be autonomous in the future. I don’t know how this kind of progress can be stopped. The question, I mean, the danger, I think, is: do these robots start having their own decision-making and intelligence and decide, just like in the movies, to attack all humankind, and not just the side they’re fighting on… Because technology in [the] military is something that… I don’t know how it can be stopped, because it’s driven by humans, our need to wage war against each other… The real danger is, do they turn on us? And if there is real intelligence in the artificial intelligence, and real understanding and a need to survive as a being, that’s where it becomes really scary…
So it sounds like you don’t necessarily think we’re anywhere near close to an AGI, and I’m going to ask you how far away you think we are… I want to set the question up by saying that there are people who think we’re 5-10 years away from a general intelligence, and then there are people who think we’re 500 years [away]. Oren Etzioni was on the show, and he said he would give anyone 1000:1 odds that we wouldn’t have it in 5 years, so if you want to send him $10 he’ll put $10,000 against that. So why do you think there’s such a gap, and where are you on that continuum?
Well, because the methods we’re using are still so… as smart as they’ve gotten, they’re still doing rudimentary tasks. They’re still recognizing images; the agents that are doing automated things for us are still doing very rudimentary tasks. General intelligence requires a lot more than that; it requires a lot more understanding of context. I mean, the example of Alexa last week, that’s a perfect example of not understanding context. We as humans would never react to something on TV like that and add something to our shopping cart just because Cartman said it, whereas even the very, very smart Alexa, with amazing speech understanding and the ability to take actions based on it, still doesn’t understand the context of the world. So, I think prophecy is for fools, but I think it’s at least 20 years out…
You know, we often look at artificial intelligence and its progress based on games where it beats the best player. That goes back to [Garry] Kasparov in ’97; you have of course Jeopardy, you have AlphaGo, you had an AI beat some world-rated poker players… And those are all kind of… they create a stir. If you want to reflect on it, what do you think is the next thing like that, where one day you snap your fingers and all of a sudden an AI just did… what?
Okay, I haven’t thought about that… All these games, what makes them unique is that they are a very closed world: the world of the game is finite and the rules are very clear. Even if there’s a lot of probability going on, the rules are very clear. And if you think about the real world, and this may go back to the question of why it will take time for artificial intelligence to really be general intelligence, the real world is almost infinite in its possibilities and the ways things can go, and even for us, it’s really hard.
Now, trying to think of a game that machines would beat us at next: I wonder, if we were able to build robots that can do lots of sports, whether they could beat us easily in a lot of games. Because if you take any sports game, like football or basketball, they require intelligence, they require a lot of thinking, very fast thinking and pathfinding by the players, and if we were able to build the body of a robot that can do the motions just like humans, I think they could easily beat us at all these games.
Do you, as a practitioner… I’m intrigued, on the topic of general intelligence, by the idea that human DNA isn’t really that much code, and if you look at how much code makes us different from, say, a chimp, it’s very small, I mean it’s a few megabytes. That would be how we are programmatically different, and yet that little bit of code makes us have a general intelligence and a chimp not. Does that persuade you, or suggest to you, that general intelligence is a simple thing that we just haven’t discovered, or do you think that general intelligence is a hack of a hundred thousand different things… like it’s going to be a long slog and then we finally get it together…?
So, I think [it’s] the latter, just because of the way you see human progress; it’s not just about one person’s intelligence. I think what makes us unique is the ability to combine the intelligence of a lot of different people to solve tasks, and that’s another thing that makes us very different. So you do have some people who are geniuses, who can solve really, really hard tasks by themselves, but if you look at human progress, it’s always been about combined intelligence: getting one person’s contribution, then another person’s contribution, and thinking about how it comes together to solve things. Sometimes you have breakthroughs that come from an individual, but more often than not, it’s the combined intelligence that creates the drive forward, and that’s the part that I think is hard to put into a computer…
You know, there are people who have amazing savant-like abilities. I remember reading about a man named [George] Dantzig; he was a graduate student in statistics, and his professor put two famous unsolved problems on the blackboard, and Dantzig arrived late that day. He saw them and just assumed that they were the homework, so he copied them down and went home, and later he said he thought they were a little harder than normal, but he solved them both and turned them in… and that really happened. It’s not one of those urban legend kind of things. You have people who can read the left and right page of a book at the same exact time, you have… you just have people who are these extraordinary edge cases of human ability. Does that suggest that our intellects are actually far more robust than we assume? Does that suggest anything to you as an artificial intelligence guy?
Right, so coming from the probability space, it just means that our intelligence has a wide distribution, and there are always exceptions in the tails, right? And these kinds of people are in the tails, and often, when they are discovered, they can create monumental breakthroughs in our understanding of the world, and that’s what makes us so unique. You have a lot of people in the center of the distribution who are still contributing a lot, making advances in the world and in our understanding of it, and not just understanding, but actually creating new things. So I’m not a genius, most people are not geniuses, but we still create new things and are able to advance things, and then every once in a while you get these tails of the intelligence distribution, who can solve the really hard problems that nobody else can solve. So the combination of all that actually pushes things forward in the world, and I think that kind of combined intelligence in artificial intelligence is way, way off. It’s not anywhere near, because we don’t understand how it works; I think it would be hard for us to even code that into machines. That’s one of the reasons I think AI, the way people are afraid of it, is still way off…
But by that analysis, that sounds like, to circle back, there will be somebody who comes along who has some big breakthrough in general intelligence, and ta-da, it turns out all along it was, you know, bubble sort or…
I don’t think it’s that simple, that’s the thing. Solving a statistical problem that’s really, really tough, it’s not like… I don’t think it’s a well-defined enough problem that it will just take a genius to understand… “Oh, it’s that neuron going right to left,” and that’s it… so I don’t think it’s that simple… There might be breakthroughs in mathematics that help you understand the computation better, maybe quantum computers that will help you do faster computation, so you can train machines much, much faster so they can do the task much better, but it’s not about understanding the concept of what makes a genius. I think that’s more complicated, but maybe it’s my limited way of thinking; maybe I’m not intelligent enough for it…
So to stay on that point for a minute… it’s interesting, and I think perhaps telling, that we don’t really understand how human intelligence works. Like, we don’t know how a thought is encoded in the brain… like if I said, “Ira, what color was your first bicycle?” can you answer that question?
I don’t remember… probably blue…
Let’s assume for a minute that you did remember. It makes my example bad, but there’s no bicycle location in your brain that stored the first “bicycle”… like an icon, or a database lookup… like nobody knows how that happens… not only how it’s encoded, but how it’s retrieved… And then, you were talking earlier about synthesis and how we use it all together; we don’t know any of that… Does that suggest to you that, on the other end, maybe we can’t make a general intelligence… or at the very least, we cannot make a general intelligence until we understand how it is that people are intelligent…?
That may be, yeah. First of all, even if we made it, if we don’t understand it, then how would we know that we made it? Circling back to that… it’s just like the kids who thought they were causing stress to the robot, because they thought they understood stress and the effect of it, and they were transferring it onto the robot. So maybe when we create something very intelligent that looks to be like us, we will think we created intelligence, but we won’t know that for sure until we know what general intelligence really is…
So do you believe that general intelligence is an evolutionary invention that will come along, whether in 20 years, 50 years, 1,000 years… whatever it is, out of the techniques we use today, from today’s early AI? Like, are we building really, really, really primitive general intelligences, or do you have a feeling that a real AGI is going to be a whole different kind of approach and technology?
I think it’s going to be a whole different approach. I think what we’re building today are just machines that do tasks that we humans do, in a much, much better way, just like in the Industrial Revolution we built machines that did what people did with their hands, but did it in a much faster and better way… That’s the way I see what we’re doing today… And maybe I’m wrong, maybe I’m totally wrong, and we’re giving them a lot more general intelligence than we think, but the way I see it, it’s driven by economic forces; it’s driven by the need of companies to advance and take on tasks that cost too much money or are too slow to do with humans… And, revolutionizing things that way, I’m not sure that we’re really giving them general intelligence yet. We’re still giving them ways to solve specific tasks that we want them to solve, and not something very, very general that can just live by itself and create new things by itself.
Let’s take up this thread that you just touched on, about how we build them to do jobs we don’t want to do, and you analogize it to the Industrial Revolution… So, as you know, just to set the problem up, there are three different narratives about the effect this technology, combined with robotics, or we’ll call it automation in general, is going to have on jobs. And the three scenarios are: one is that it’s going to destroy an enormous number of, quote, low-skill jobs, and that there will, by definition, be fewer low-skill jobs, and more and more people competing for them, and you will have this permanent class of unemployable people… it’s like the Great Depression in the US, just forever. And then you have people who say, no, it’s different from that; what it really is, is that they’re going to be able to do everything we can do, they’re going to have escape… Once a machine can learn a new task faster than a person, they’ll take every job, even the creative ones, they’ll take everything. And the third one says no, for 250 years we’ve had 5-10% unemployment; it’s never really gotten out of that range other than the anomalous Depression, and in that time we had electricity, we had mechanization, we had steam power, we had the assembly line… we had all these things come along that sure looked like job eaters, but what people did is they used the new technology to increase their own productivity and drive their own wages higher, and that’s the story of progress that we have experienced… So which of those three theories, or maybe a fourth one, do you think is the correct narrative?
I think the third theory is probably the more correct narrative. It just gives us more time to use our imagination and be more productive at doing more things, improving things. So all of a sudden we’ll have time to think about going and conquering the stars, and living in the stars, or improving our lives here in various ways… The only thing that scares me is the speed of it, if it happens too quickly, too fast… We’re humans; it takes us, as a human race, some time to adapt. If the change happens so fast that people lose their jobs too quickly, before they’re able to retrain for the new economy, the new way of [work], the fact that some positions will not be available anymore, that’s the real danger, and I think if it happens too fast around the world, there could be a backlash.
I think what will happen is that the progress will stop, because some backlash will happen in the form of wars or all sorts of uprisings, because, in the end, people need to live, people need to eat, and if they don’t have that, they don’t have anything to live for; they’re going to rise up, they’re not just going to disappear and die by themselves. So that’s the real danger: if the change happens too rapidly, you can have a depression that will actually cause the progress to slow down, and I hope we don’t reach that, because I would not want us, as a world, to reach that stage where we have to slow down. With all the weapons we have today, this could actually be catastrophic too…
What do you mean by that last sentence?
So I mean we have nuclear weapons…
Oh, I see, I see, I see.
We have actual weapons that can, not just… could actually annihilate us completely…
You know, I hear you. Like… what would “too fast” be? First of all, we had that when the Industrial Revolution came along… you had the Luddite movement, when Ludd broke two spinning wheels; you had the thresher riots [or Swing riots] in England in the 1820s, when the automated thresher [arrived]; you had the… the first day the London Times was printed using steam power instead of people, they were going to go find the guy who invented that and string him up. You had a deep-rooted fear of labor-changing technology; that’s a whole current that constantly runs. But what would too fast look like? The electrification of industry just happened lightning fast; we went from generating 5% of our power from steam to 85% in just 22 years… Give me a “too fast” scenario. Are you thinking about the truck drivers, or… tell me how it could “be too fast,” because you seem to be very cautious, like, “man, these technologies are hard and they take a long time and there’s a lot of work and a lot of slog,” so what would too fast look like to you?
If it’s less than a generation, let’s say in 5 years really all taxi drivers and truck drivers lose their jobs because everything becomes automated, that seems to be too fast. If it happens in 20 years, that’s probably enough time to adjust. And I think… the transition is starting; it will start in the next 5 years, but it will still take some time for it to really take hold, because if people lose those jobs today, and you have thousands or hundreds of thousands, or even millions of people doing that, what are they going to do?
Well, presumably, I mean, classic economics says that if that happened, the cost of taking a cab goes way down, right? And if that happens, that frees up money that I no longer have to spend on an expensive cab, and therefore I spend that money elsewhere, which generates demand for more jobs. But is the 5-year scenario… it may be a technical possibility, like we may “technically” be able to do it, if we don’t have a legislative hurdle.
I read this article about India, which said they’re not going to allow self-driving cars in India because that would put people out of work. Then you have the retrofit problem; then every city’s going to want to regulate it and say, well, you can have a self-driving car, but it needs to have a person behind the wheel just in case. I mean, look, we’ve been able to fly airplanes without a pilot for decades, yet no airline in the world would touch that: “in this plane, we have no pilot”… even though that’s probably a better way to do it… So, do you really think we can have all the taxi drivers gone in 5 years?
No, and exactly for that reason, even if our technology really allows it. First of all, I don’t think it will totally allow it, because for it to really take hold, you have to have a majority of the cars on the road be autonomous. Just yesterday I was in San Francisco, and I heard a guy say he was driving behind one of those self-driving cars, and he got stuck behind it, because it wouldn’t take a left turn when it was green; it just forever wouldn’t take a left turn that a human would… The reason it wouldn’t take the left turn was that there were other, human-driven cars on the road, and it was coded to be very, very careful about that, and he was 15 minutes late to our meeting just because of that self-driving car…
Now, so I think there will be a long transition, partly because legislation will regulate it and slow it down a bit, which is a good thing. You don’t want to change too fast, too quickly, without making sure that it really works well in the world. And as long as there is a mixture of humans driving and machines driving, the machines will be a little bit “lame,” because they will be coded to be a lot more careful than us, and we’re impatient. So that will slow things down, which is a good thing; I think making a change too fast can lead to all sorts of economic problems as well…
You know, in Europe they had… I could be wrong on this, I think it was first passed in France, but I think it was being considered by the entire EU, and it’s the right to know why the AI decided what it did. If an AI made the decision to deny you a loan, or what have you, you have the right to know why it did that… I had a simple question, which was: is that possible? Could Google ever say, if I’m number four for this search and my competitor’s number three, why am I number four and they’re number three? Is Google big and complicated enough, and you don’t have to talk specifically about Google, but are systems big and complicated enough that we don’t know… there are so many thousands of factors that go into this thing, that many people never even look at; it’s just a whole lot of training…
Right, so in principle, the methods could tell you why they made that decision. I mean, even if there are thousands of factors, you can go through all of them and have not just the output of the recognition, but also highlight the attributes that caused it to decide it’s one thing or another. So from the technology point of view, it’s possible. From the practical point of view, I think for a lot of problems you won’t really care. I mean, if it recognized that there’s a cat in the image, and you know it’s right, you won’t care why it recognized that cat. I guess for some problems, where the system made a decision and you don’t necessarily know why it made the decision, or you have to take action based on that recognition, you would want to know. So if I predicted for you that your revenue is going to increase by 20% next week, you would probably want the system to tell you why it thinks that will happen, because there isn’t a clear reason for it that you would imagine yourself. But if the system told you there is a face in this image, and you just look at the image and you can see that there’s a face in that image, then you won’t have a problem with it. So I think it really depends on the problem that you’re trying to solve…
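[Editor’s note: a minimal sketch of one way a system could surface the “why”: permutation importance, which scores each input factor by how much shuffling it degrades the model’s predictions. The loan-style feature names and data below are hypothetical, not from any real lending system.]

```python
# Sketch: rank the factors behind a model's decisions via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "age", "tenure"]
X = rng.random((1000, len(features)))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # decision truly driven by two factors

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} {score:.3f}")      # income and debt_ratio dominate
```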
We talked about games earlier and you pointed out that they were closed environments, and that’s really a place with explicit rules, a place where an AI can excel, and I’ll add to that, there’s a clear-cut idea of what winning looks like, and what a point is. I think somebody on the show said, “Who’s winning this conversation right now?” There’s no way to do that. So my question to you is, if you walk around an enterprise and you say, “where can I apply artificial intelligence to my business?” would you look for things that looked like games? Like, okay, in HR you have all these successful employees who get high performance ratings, and then you have all these people you had to fire because they didn’t, and then you get all these resumes in. Which ones look more like the good people as opposed to the bad people? Are there lots of things like that in life that look like games… or is the whole game thing really a distraction from solving real-world problems, because nothing really is a game in the real world…?
Yeah, I think it’d be wrong to look at it as a game, because the rules… first, there is no real, clear notion of winning. What you want is progress; you have goals that you want to progress towards. For example, in business, you want your company to grow. That could be your goal, or you want the profits to grow, you want your revenue to grow. So you make these goals, because that’s how you want things to progress, and then you can look at all the factors that help it grow. The world of how to “make it grow” is very large; there are so many factors. So if I look at my employees, there might be a low-performing employee in one aspect of my business, but maybe that employee brings to the team, you know, a lot of humor that causes them to be productive, and I can’t measure that. Those kinds of things are really, really hard to measure, so looking at it from a very analytic point of view, as just a “game,” would probably miss a lot of important factors.
So tell me about the company you co-founded, Anodot, because you make an anomaly detection system using AI. So, first of all, explain what that is and what that looks like, but also, how did you approach that problem? If it’s not a game, instead of… you looked at it this way…
So, what are anomalies? Anomalies are anything that’s unexpected. Our approach was: you’re a business and you’re collecting lots and lots of data related to your business. In the end, you want to know what’s going on with the business; that’s the reason you collect a lot of data. Now, today people have a lot of different tools that help them slice and dice the data and ask questions about what’s happening, so they can make informed decisions about the future, or react to things that are happening right now that could affect the business.
The problem with that is basically… why isn’t it AI? It’s not AI because you’re basically asking a question and letting the computers compute something for you and give you an answer, whereas anomalies, by nature, are things that happen that are unexpected, so you don’t necessarily know to ask the question in advance, and unexpected things can happen. In business, for example, you see the revenue for a product you’re selling going down in a certain city. Why is that happening? If you don’t look at it, if you don’t ask the question in advance, you’re not even aware that it is happening… So the great thing about AI and machine learning algorithms is that they can process a lot of data, and if you can encode into a machine an algorithm that identifies what anomalies are, you can find them at very, very large scale, and that helps companies actually detect that things are going wrong, or detect opportunities that they might otherwise miss. The endgame is very simple: to help you constantly improve your business, maintain it, and avoid the risks of doing business. So it’s not a “game”; it’s actually bringing immediate value to a company, putting light on the data they really need to look at with respect to their business. And the great thing about machine learning algorithms [is] they can process all of this data much better than we could. Because what do humans do? We graph the data, we visualize it in various ways, you know, we create queries against a database about questions we think might be relevant, but we can’t really process all the data, all the time, in an economical way. You would have to hire armies of people to do that, and machines are very good at that. So that’s why we built Anodot…
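[Editor’s note: Anodot’s production algorithms are more elaborate and are not described here; as a minimal illustration of the idea, the sketch below flags points in a business metric that fall far outside a rolling baseline of “normal” behavior. The metric, window size, and threshold are invented for illustration.]

```python
# Sketch: flag values of a metric that deviate sharply from a rolling baseline.
import numpy as np

rng = np.random.default_rng(0)
revenue = 100 + rng.normal(0, 5, 200)   # a well-behaved business metric
revenue[150] = 160                      # an unexpected spike nobody asked about

WINDOW, THRESHOLD = 30, 4.0
for t in range(WINDOW, len(revenue)):
    baseline = revenue[t - WINDOW : t]  # recent history defines "normal"
    z = (revenue[t] - baseline.mean()) / baseline.std()
    if abs(z) > THRESHOLD:
        print(f"anomaly at t={t}: value={revenue[t]:.1f}, z-score={z:.1f}")
```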
Give me an example, like, tell me a use case or a real-world example of something that Anodot… well, that you were able to spot that a person might not have been able to…?
So, we have various customers in the e-commerce business, and if you’re in e-commerce and you’re selling a lot of different products, various things can go wrong, or opportunities might be missed. For example, say I’m selling coats, along with a thousand other products, and now in a certain area of the country there is an anomalous weather condition: it became cold. All of a sudden… I won’t be able to see it because it’s hiding in my data, but people in that state will start buying more coats. Now, if somebody actually looked at it, they would probably be able to spot it, but because there is so much data, so many things, so many moving parts, nobody actually notices it. Our AI system finds it: “Oh, there is an anomalous weather condition and there is an uptick in sales of that coat; you’d better do something to seize that opportunity to sell more coats.” So either you send more inventory to that region to make sure that if somebody really wants a coat, you’re not out of stock, because if you’re out of stock, you’re losing potential revenue, or you can even offer discounts for that region, because you want to bring more people to your e-commerce site rather than the competition. So that’s one example…
And I assume it’s also used in security or fraud and whatnot, or are you really focused on e-commerce use cases?
So we built a fairly generic platform that can handle a wide variety of use cases. We don’t focus on security as such, but we do have customers where, in part of their data, we’re able to detect all sorts of security-related breaches, like bot activity happening on a site, or fraud rings. Not the individual fraud of one person doing a transaction; a lot of the time, fraud is not just one credit card, but somebody actually doing it over time, and then you can identify those fraud rings.
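[Editor’s note: a minimal sketch of the fraud-ring idea: link cards that share infrastructure (a device, an address, and so on) and read rings off the connected components of that graph. The transactions and identifiers below are invented for illustration and do not reflect Anodot’s actual method.]

```python
# Sketch: find groups of cards linked by shared identifiers.
import networkx as nx

transactions = [                       # (card, shared device id)
    ("card_A", "device_1"), ("card_B", "device_1"),
    ("card_C", "device_2"), ("card_B", "device_2"),
    ("card_D", "device_3"),            # a lone, unconnected card
]

G = nx.Graph(transactions)             # bipartite card/device graph
for component in nx.connected_components(G):
    cards = sorted(n for n in component if n.startswith("card"))
    if len(cards) > 1:                 # multiple cards sharing infrastructure
        print("possible fraud ring:", cards)
```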
Most of our use cases have been around more business-related data, whether e-commerce, ad tech companies, or online services. And by online services I mean anybody that is really data-dependent and data-driven in running their business, and most businesses are transforming into that; even the old-fashioned businesses are transforming into that, because that data is a competitive advantage, and being able to process that data to find all the anomalies gives you an even larger competitive advantage.
So, last question: You made a comment earlier about freeing up people so we can focus on living in the stars. People who say that are generally science fiction fans I’ve noticed. If that is true, what view of the future, as expressed in science fiction, do you think is compelling or interesting or could happen?
That’s a great question. I think that what’s compelling to me about the future, really, is not whether we live in the stars or not, but really about freeing up our time to think about the stars, to think about the next big things that move humanity forward to the next levels, to be able to explore new dimensions and solve new problems, that…
Seek out new life and new civilizations…
Could be, and it could be in the stars, it could be on Earth; it could be just having more time. Having more time on your hands gives you more time to think about “What’s next?” When you’re busy surviving, you don’t have any time to think about art, and think about music and advancing it, or think about the stars, or think about the oceans. So that’s the way I see AI and technology helping us: really freeing up our time to do more, and to use our collective intelligence and individual intelligence to imagine places that we haven’t thought about before… or don’t have time to think about, because we’re busy doing the mundane tasks. That’s really, for me, what it’s all about…
Well, that is a great place to end it, Ira. I want to thank you for taking the time and going on that journey with me, talking about all these different topics. It’s such an exciting time we live in, and your reflections on it are fascinating, so thank you again.
Thank you very much, bye-bye.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.