
Selfishness for Self-Driving Cars: AI Greed is Good

By Dr. Lance B. Eliot, the AI Trends Insider

You might be aware of this famous movie quote: “Greed, for lack of a better word, is good. Greed is right. Greed works.” This was uttered by the fictional character Gordon Gekko in the 1987 movie “Wall Street” and became a rather popular topic of discussion. Some thought it was the perfect way to do things and that we all are, and all should be, motivated by personal greed, which presumably will in the end benefit not only the individual but everyone. Others condemned this way of thinking and argued that it is merely a crass excuse for “screwing over” the little guy by the big and powerful.

This notion of personal self-interest or personal selfishness is not new. Adam Smith famously became known for the economic theory of the invisible hand. In his 1776 book “The Wealth of Nations,” he used the phrase “invisible hand” in one small passage, and it has since become the cornerstone phrase for an entire economic philosophy and spawned much debate. The concept is that people can go ahead and be selfish and pursue their own particular self-interests, and though at first glance this might seem to work against the greater good of all, it turns out (according to those who claim to believe in the invisible hand idea) that it will produce a greater benefit for all.

What does this have to do with self-driving cars?

A lot. When humans are driving cars, how do they drive their cars? I don’t mean how they operate the controls and use the accelerator or brake, but rather what their behavior is as drivers. What motivates them as they drive? Humans are not mindless robots that obey every rule of law while driving. You’ve certainly witnessed your share of drivers that seem to be rude: they take advantage of situations, they cut off other drivers, they push into the pack, they hog the road, you name it.

Let me give you an example. Each morning, during my freeway commute, I see drivers doing the following. As I near a popular part of Los Angeles, the volume of traffic and the curve of the freeway cause traffic to start to back up. It is entirely predictable and happens each day, around the same time of day. Driving up to this scene, as a human driver you can see that the traffic ahead of you is starting to get bogged down. It’s obvious. Imagine if you were at the grocery store and you could see that lots of people were getting into the cashier lines to pay for their groceries, and that long lines were starting to pile up ahead of you, prior to your getting into line.

Now, the civil way to deal with the upcoming bogged down traffic would presumably be to stay in your respective lane and just gradually enter the back of the pack. This would be the least disruptive to everyone. You and your car would be easily predictable. If you are in the fast lane, stay there, and just slow down or even come to a stop in your lane as you near the back of the pack for that particular lane. Likewise, if you are in the slow lane, stay in it, and gently come upon the cars that are bogged down up ahead. This would be similar to the grocery store, wherein you just pick whichever cashier line you happen to be near and get into that line to then ultimately pay for your groceries.

Is this what happens? Do cars stay in their lanes and without much fuss just come upon the back of the pack? Do humans coming up to cashier lines in the grocery store just willingly and without any fuss stand in whatever line they happen to be near? I am betting you would all pretty much agree with me that this is not the way things go. Instead, it is a madcap free-for-all. It is every man, woman, and child for themselves, all savagery and survival. In the grocery store, I see people dart over to whatever line they think is the shortest. But if they get there and it slows down, they get frustrated and try to sneak over into another line instead.

It is a dog-eat-dog world out there. For the cars, here’s what I see happen every morning. A car that’s in the fast lane and going at a high speed will dart across several lanes of traffic to get into the slow lane if that lane looks shorter. Cars in the slow lane will try to jam across lanes of traffic and get into the fast lane if that lane looks shorter. Since no one really knows which lane is the “best” in terms of being able to get through the bogged down traffic, each person is making a personal judgment and deciding which way to go. Thus, you have crazy drivers crisscrossing each other, slow lane drivers nearly colliding with fast lane drivers, each diving toward the other’s lane under the assumption that it is better to be there.

I suppose it looks like frenzied ants. The cars seem to be randomly going this way and that way. Sure, there are a few drivers that opt to stay in the lane they are already in, but around them is chaos. A car to your left is trying to squeeze in front of you. A car to your right is trying to go back around you. It is a swirl of cars playing a dangerous game.

With this number of cars jockeying across the lanes, you are ultimately going to have someone slam into someone else. I witness this about once or twice a week; it is a fender bender paradise at this freeway curve. I don’t know whether the ones that get their bumpers locked are caught off-guard because this is the first time they have been in this particular bogged down part of the freeway, or whether they are seasoned crazies whose rude behavior, repeatedly playing out day after day, finally had the odds catch up with them. If you play with fire, you’ll eventually get burned, as they say.

Sadly, it could also be that there are “innocents” that aren’t playing the game, and an avid and careless game player rams into them. Thus, those that weren’t trying to be selfish can get drawn into it, and wrecked, by one of those that is. Somehow, I am guessing we’d all wish that it was two selfish drivers that smacked into each other. Their just deserts, we’d think.

I am sure that some of you are saying that this is atrocious behavior. Those human drivers should be shot. Well, it depends. If you believe in the Gordon Gekko claim that greed is good, or if you believe that Adam Smith is right about the invisible hand, you would be arguing that the behavior of these drivers is actually a good thing. They are presumably optimizing the roadway use. They are finding a means to get as many cars through the bogged down freeway section as fast as possible. If they instead were “mindless sheep” that stayed in their lanes, you might argue that this would be a less effective use of the roadway and less expeditious overall for the flow of traffic.

Some of you might be pondering this notion. Could it be the case that the selfish driver is actually ensuring that we all benefit? Does the selfishness lead to more effective use of the roadways and shorter drive times for us all? There are many traffic simulations that seem to suggest this is the case. Controversy abounds about this notion, but it is absolutely a consideration and one that many have shown to have merit. We as a society are based on selfish behavior, which manifests itself throughout our lives, and so it makes sense that it would manifest itself in our driving behavior too. We don’t become some different person simply because we are seated behind the wheel of a car. We take our biases and our approaches into that driver’s seat, and our consequent driving behavior is based on that foundation. Good or bad. Ugly or pretty.

So, should self-driving cars do the same thing? Some AI developers are appalled that I would even ask the question. Their viewpoint is that of course we should not have self-driving cars that behave this way.

In their utopian world, all self-driving cars are civil toward each other. Once we have V2V (vehicle-to-vehicle) communications, one car will politely say to another car that it wants to change lanes. The other car will politely reply, yes, I would be happy to help you change lanes. They then offer each other sufficient space to make the lane change. Thank you, says the one car to the other car. It’s a wonderful world. I call this the crock world. If this ever does happen, it is many years from now, in some far future that we can’t even yet see.
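To make that imagined exchange concrete, here is a minimal sketch, in Python, of what such a polite V2V lane-change negotiation might look like. The message names and structure are purely illustrative assumptions on my part and do not depict any actual V2V standard.

```python
# A tongue-in-cheek sketch of the "polite" V2V lane-change exchange the
# utopian view imagines. Message names and fields are illustrative
# assumptions; no real V2V protocol is depicted.

def request_lane_change(requester_id, target_lane):
    # The car that wants to change lanes broadcasts a polite request.
    return {"type": "LANE_CHANGE_REQUEST", "from": requester_id,
            "target_lane": target_lane}

def handle_request(message, willing_to_yield=True):
    # The receiving car either grants the request and opens a gap,
    # or politely declines.
    if willing_to_yield:
        return {"type": "LANE_CHANGE_GRANT", "to": message["from"],
                "action": "open_gap"}
    return {"type": "LANE_CHANGE_DECLINE", "to": message["from"]}


reply = handle_request(request_lane_change("car_42", target_lane=2))
print(reply["type"])  # LANE_CHANGE_GRANT
```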

Furthermore, self-driving cars are going to be mixing with human driven cars for the foreseeable future. The question arises whether the self-driving cars should be the “mindless sheep” and allow the human driven cars to run wild around them (see my column on defensive driving for self-driving cars). I assert that self-driving cars that are timid will actually make driving conditions worse for us all, especially while human driven cars remain in the mix.

At the Cybernetic Self-Driving Car Institute, we are exploring the infusion of selfishness into self-driving cars. We aim to allow self-driving cars to act in a safe but intentionally selfish manner, which, one could argue, will in the end be of benefit to the greater good of society. It’s the invisible hand, if you will.

Here’s the way it works. When a self-driving car has its selfishness engaged, it will tend toward making choices that seem to benefit itself the most. Suppose it comes up to a four way stop. Right now, we’ve had circumstances of a self-driving car that never gets a chance to move forward from a four way stop, due to human drivers being aggressive there. The human drivers challenge the self-driving car by creeping forward at their respective stop signs, which causes the self-driving car to back down from trying to go forward. In this game of chicken, all the human drivers currently know that the self-driving car will be the chicken and back down.

With a self-driving car that has the selfishness engaged, it will be aggressive at the four way stop. It is not going to let other cars intimidate it. It edges forward and makes clear that it intends to go ahead. If a human driver wants to test the will of the self-driving car, the self-driving car is gladly willing to oblige. Take me on, it is essentially saying to the other drivers. This is what a human driver would do, and so the self-driving car is mimicking that same behavior.
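As a rough illustration of the kind of decision logic involved, here is a minimal sketch in Python of how a selfishness setting might tip the balance at a four way stop. The function name, the thresholds, and the creep-speed heuristic are hypothetical assumptions on my part, not code from any actual self-driving car system; note that safety still overrides the assertiveness.

```python
# Hypothetical sketch: deciding whether to keep creeping forward at a
# four way stop when another car is also creeping. Names and thresholds
# are illustrative assumptions, not from a real self-driving car stack.

def should_keep_creeping(selfishness_level, other_car_creep_speed_mps,
                         gap_to_conflict_m, min_safe_gap_m=3.0):
    """Return True if the car should continue asserting its turn."""
    # Safety always wins: if the remaining gap is too small, yield.
    if gap_to_conflict_m < min_safe_gap_m:
        return False

    # A timid car (selfishness_level = 0.0) yields whenever the other car
    # moves; a selfish car (selfishness_level = 1.0) keeps creeping unless
    # the other car is clearly committed (moving faster than a threshold).
    commit_threshold_mps = 0.5 + 2.0 * selfishness_level
    return other_car_creep_speed_mps < commit_threshold_mps


# Example: with selfishness at 0.8, a rival creeping at 1 m/s with a
# 6 meter gap does not cause this car to back down.
print(should_keep_creeping(0.8, 1.0, 6.0))  # True
```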

How far does this go? Well, human drivers tend to have thresholds to their selfishness. On my freeway commute, most of the self-interested drivers that I see crisscrossing lanes exhibit what I would characterize as, say, average selfishness. You then have a few that go across the selfishness barrier into ultra-selfishness. They go onto the shoulder of the freeway and use it to get ahead of the bogged down traffic. This is blatantly against the law. Each morning, I say a small prayer that a cop will get these drivers. I really wish there were some automatic way to get them targeted and punished for their behavior, maybe even a laser system to melt their tires. I can dream, can’t I?

Anyway, the point is that when I say we are using AI to develop selfish driving behavior, I also want to emphasize that it is something that can be engaged and disengaged, so you can choose when the selfishness arises. Furthermore, it can be scaled to low, medium, or high, meaning that it can be subtle selfishness, more pronounced selfishness, or full selfishness. We aren’t, though, including ultra-selfishness, although I suppose some would say we should, and we could do so fairly easily by letting the selfishness override the anti-illegal driving routines (see my column about when self-driving cars drive illegally).
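Here is a minimal configuration sketch, in Python, of what an engage/disengage control with low, medium, and high settings might look like. The class and field names are illustrative assumptions, and the deliberate absence of an ultra level mirrors the point above about not overriding the anti-illegal driving routines.

```python
# Hypothetical configuration sketch for a selectable selfishness setting.
# All class and field names are illustrative assumptions.

from enum import Enum

class SelfishnessLevel(Enum):
    OFF = 0      # selfishness disengaged entirely
    LOW = 1      # subtle selfishness
    MEDIUM = 2   # more pronounced selfishness
    HIGH = 3     # full selfishness
    # Note: no ULTRA level; illegal maneuvers (e.g., driving on the
    # shoulder) remain blocked by the anti-illegal-driving routines.

class DrivingBehaviorConfig:
    def __init__(self, level=SelfishnessLevel.OFF):
        self.level = level

    def engage(self, level):
        """Turn selfishness on at a chosen level."""
        self.level = level

    def disengage(self):
        """Turn selfishness off."""
        self.level = SelfishnessLevel.OFF


# Example: engage a moderate amount of selfishness for the commute.
config = DrivingBehaviorConfig()
config.engage(SelfishnessLevel.MEDIUM)
```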

As I mentioned earlier, some AI developers for self-driving cars do not think that self-driving cars should behave selfishly, and that is one reason why you aren’t yet seeing this kind of behavior emerging in self-driving cars. There are several added reasons. Let’s look at them.

Reasons for not allowing selfishness for self-driving cars:

  •        Because in a utopian world of all self-driving cars they will all cooperate willingly with each other (far off in the future, maybe never).
  •        Because some AI developers haven’t thought about the selfishness aspects and so they are blissfully unaware of the practical nature of it (hopefully I am getting their attention).
  •        Because if you believe in Isaac Asimov’s “Three Laws of Robotics” you would argue that self-driving cars are robots and should abide by Asimov and therefore presumably not be selfish (though, one can argue that Asimov does not tackle this topic per se).
  •        Because of concern about a public perception backlash (will people tolerate a self-driving car that beats them in a game of chicken?).
  •        Because of the potential for car accidents involving self-driving cars (I’ve already debunked the idea that it will be zero fatalities once we have self-driving cars, see my column, and so the question arises whether there would be a rise in car accidents due to selfish self-driving cars or whether there might be a net reduction).
  •        Because of not being able to figure out how to include selfishness into self-driving cars (we’ve got that one covered; it involves using AI techniques accordingly and encompassing behavior-modifying routines).
  •        Because of a fundamental belief that selfishness is bad, no matter what anyone else says about it (this is the camp that doesn’t go for the invisible hand).

If you are a self-driving car maker, you can presumably decide whether you want to include selfishness in how your AI and your self-driving car are created. One perspective is that if you don’t want it, don’t include it. The only rub is that if consumers like having such a feature, and your self-driving car lacks it, then consumers might buy your competitor’s self-driving car instead.

Thus, another viewpoint for a self-driving car maker would be to include the selfishness capability, so you can say you have it and be on par with other self-driving car makers, and then allow for it to be used or not used. For usage, it could either be activated by the car owner or occupant (see my column on in-car commands), or it could be activated by the AI itself. In essence, you could have the AI determine when selfishness makes sense to use. On my morning commute, the AI might decide for itself that the selfishness will help at the freeway curve, and so invoke the selfishness capability, or it might decide that it isn’t worth the added effort and keep the selfishness in a box, awaiting usage on some other occasion or circumstance.
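As a sketch of that latter idea, here is a small Python example of how the AI might decide for itself when to invoke the selfishness capability. The function name, the congestion score, and the thresholds are hypothetical assumptions for illustration, not an actual implementation.

```python
# Hypothetical sketch of letting the AI itself decide when to invoke the
# selfishness capability, e.g., only when congestion ahead is heavy enough
# that assertive lane choices seem worth the effort. The function name,
# the density score, and the thresholds are all illustrative assumptions.

def choose_selfishness(traffic_density_ahead, occupant_preference=None):
    """Return a selfishness setting ("off", "low", "medium", or "high")."""
    # An explicit occupant command (via in-car commands) takes priority.
    if occupant_preference is not None:
        return occupant_preference

    # Otherwise scale with congestion: light traffic gains little from
    # assertive behavior; heavy congestion (the morning freeway curve)
    # may justify a moderate amount.
    if traffic_density_ahead > 0.8:
        return "medium"
    if traffic_density_ahead > 0.5:
        return "low"
    return "off"


# Example: approaching the bogged down freeway curve with density 0.9.
print(choose_selfishness(0.9))  # medium
```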

Some might say that we should collectively as a society decide whether to allow self-driving cars to be selfish. Maybe we should have regulations that dictate whether self-driving car makers can include a selfishness component or not. Others think that the self-driving car makers ought to reach their own collective accord on this. Perhaps they all could get together and agree that no one will use selfishness in the AI driving elements of the self-driving car. Or maybe they agree to allow it, but then also agree on how it is to be implemented and how far it can go. Maybe this will happen. More probably, the selfishness aspects will arise by happenstance as car makers realize its value. Once enough self-driving cars are on the roads, and if some have selfishness, and if the selfishness leads to accidents or deaths, we’ll likely then get a spotlight shined on what this is, how it came to be, and what we’ll do about it. Guess we’ll wait and see. Or am I being selfish in saying so?

This content is originally posted on AI Trends.

