AI Fake News about Self-Driving Cars

By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

Fake news. That’s an expression that has gained a lot of notoriety lately. One person’s fake news often seems to be another person’s real news. How can we discern one from the other? Believe it or not, some states, such as California, are even trying to mandate a core course for all high school students that would teach them the differences between fake news and real news. Though this generally seems like a good idea, questions abound about what constitutes fake news versus real news. Some also worry that such a course would become nothing more than an agenda to impose a particular political bent, whether liberal or conservative, on impressionable minds, using the guise of outing fake news as a means to brainwash our children.

For now, I’d like to concentrate on a specific genre of fake news: there is a lot of “fake news” about AI and self-driving cars. I see it every day in the headlines of major media outlets. It appears in back-page stories and front-page stories alike. It creeps into the dialogue about self-driving cars. The general public is misled by many of these stories. Regulators are being misled. Companies are both helping to mislead and being misled themselves. The bonanza of self-driving cars has produced a jackpot of fake news. I’ll explain what I mean by fake news, and you can decide whether my perspective is your perspective too, or whether you consider my fake news to be real news.

One place to start when talking about fake news is with hoaxes and scams. I think we can all pretty much agree that if a news story is based on a hoax or scam, we would likely be willing to label it fake news. Let’s suppose I reported that John Smith has turned lead into gold. This is, of course, the infamous alchemist’s dream. Mankind has sought to turn base ores into gold for as long as gold has been considered a valuable commodity. If I take John Smith’s word that he turned lead into gold, and I don’t do any background research, and if he really did not turn lead into gold, some would say that this was fake news. He didn’t actually turn lead into gold. It’s a fake.

Suppose instead that I did some background research and found out that it is actually possible to turn lead into gold, in spite of the assumption that it would seem an impossible task. Via nuclear transmutation, which converts one chemical element into another, we could presumably turn lead into gold. If John Smith is a renowned chemist at a top-notch university, and I take his word that he turned lead into gold, would my news reporting still be considered fake news? Well, if he didn’t actually turn lead into gold, and that’s what the claim was, it seems that it is still fake news, even though the possibility of it happening was reasonable. It just means I didn’t do much of a reporting job to ferret out whether the news was true or fake.

How does this apply to AI and self-driving cars? Let’s take an example that I previously wrote about: the self-driving truck that delivered a load of beer in Colorado (see my article on self-driving trucks and cars).

Here are some headlines that heralded the feat:

• “Driverless beer run: Bud makes shipment with self-driving truck” as per CNBC
• “Uber’s self-driving truck makes its first commercial delivery: Beer” as per Los Angeles Times
• “Self-driving truck makes first trip: 120-mile beer run” as per USA Today
• “Uber’s Otto debuts delivery by self-driving truck” as per Wall Street Journal

Are these headlines appropriate for what actually took place? As I pointed out in my earlier piece on the matter, the self-driving truck only handled the freeway portion of the trip and did not do the actual pick-up and delivery on side streets; it was escorted by highway patrol while it drove itself; it had run the same route several times beforehand to train on it; the route had been cleared of essentially all traffic and obstacles; and so on.

The average reader would likely assume from the headlines that self-driving trucks actually exist and are good enough to make pick-ups and deliveries all by themselves. Going beyond the headlines, the articles barely mentioned the numerous caveats and limitations of the feat. It turns out that this was a huge public relations boost for Uber and Otto, and the mainstream media played a part in aiding the PR stunt. The media liked the story because it was a feel-good piece that sold newspapers and content. Reporters didn’t need to dig very far into it because it was seen as a lighthearted story, presumably not needing the same level of scrutiny as, say, a story about a cure for cancer.

You could say that the feat did happen, and so in that sense it is not like my example of someone falsely claiming to have turned lead into gold. At the same time, I would argue that the self-driving truck story suggested that lead had been turned into gold, and the mainstream media got it wrong by assuming this to be true. A taint of fake news got past most of the mainstream media filters. Stories about AI are often relished by the mainstream media. They want to run headlines and do stories about the exciting future of AI. There is also the other side of that coin: they like to run headlines claiming that AI will bring us humans utter destruction and take over the world.

Here’s a headline that seems reasonable: “Ready or Not, Driverless Cars Are Coming” as per Time magazine.

The headline doesn’t say that driverless cars are here, and so it buys itself some latitude by simply asserting that one day we will see driverless cars. Admittedly, that’s pretty hard to debate, since yes, some day we will have truly driverless cars. Now suppose I told you that the headline for that Time magazine piece came from the year 2014. The article is still okay, since it never stated when we would have driverless cars, but the implication in the headline and the piece was that we were on the verge of seeing them. We aren’t there yet, and it is now some three years after that article was published.

This also brings up the question of what it means to use the wording of a driverless car. I’ve discussed at length the nuances of self-driving cars versus autonomous cars versus driverless cars (see my article on this topic). In any case, the Time magazine piece could be considered correct if you consider the Tesla to be a driverless car. I don’t think any reasonable person considers a Tesla to be a driverless car, and indeed even Tesla emphasizes that a human must be ready to drive the car at all times, regardless of whether the Autopilot is engaged.

As mentioned, the news does not always paint a positive picture of AI and self-driving cars, either.

Here’s a headline that caught my eye: “Does AI Make Self-Driving Cars Less Safe?” This piece causes anyone who is truly into AI to wonder what the writer is trying to assert. It turns out that the piece claims that machine learning is too limited and will make self-driving cars more dangerous than they otherwise would be. An expert is quoted to that effect.

Well, let’s unpack that logic. If we only use machine learning, and limit it to what we understand machine learning to be today, and if that is what we are going to label as AI, perhaps the story makes sense. I’ve written before that the machine learning of today can get us into some bad spots. Suppose a machine learning technique causes a self-driving car to “learn” that it is okay to go over a curb to make a right turn at a red light. This could end up being a catastrophic tactic that ultimately kills either the passengers or pedestrians. Machine learning that is unbounded and unrestrained is indeed questionable. It is for those reasons that the rest of the AI has to guide and bound the machine learning. Likewise, the human developers that put machine learning in place for self-driving cars need to do so with the appropriate kinds of restrictions and safety measures.
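To make that last point concrete, here is a minimal sketch, in Python, of the “guide and bound” idea: a hand-written safety layer that vetoes whatever the learned component proposes whenever it falls outside hard limits. The policy, the action names, and the rules are all illustrative assumptions of mine, not any car maker’s actual system.

```python
# A minimal sketch of bounding a learned driving policy with hard-coded
# safety rules. All names and rules here are illustrative assumptions.

LEGAL_ACTIONS = {"proceed", "brake", "turn_right"}

def learned_policy(observation):
    # Stand-in for a machine-learned model; imagine it has "learned"
    # that hopping the curb shaves time off a right turn on red.
    if observation.get("light") == "red" and observation.get("intent") == "turn_right":
        return "mount_curb"
    return "proceed"

def safety_filter(action, observation):
    # Hand-written bounds that override the learner, no matter what it learned.
    if action not in LEGAL_ACTIONS:
        return "brake"  # unknown or illegal maneuver: refuse it outright
    if action == "turn_right" and observation.get("pedestrian_in_crosswalk"):
        return "brake"  # never trade a pedestrian's safety for a turn
    return action

obs = {"light": "red", "intent": "turn_right", "pedestrian_in_crosswalk": False}
proposed = learned_policy(obs)
executed = safety_filter(proposed, obs)
print(f"learned policy proposed: {proposed}; safety layer executed: {executed}")
```

The design point is that the safety rules sit outside the learner: even if the training data taught the model something dangerous, the bounding layer never lets that behavior reach the wheels.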

Here’s a provocative headline that appeared in the prestigious MIT Technology Review: “Why Self-Driving Cars Must Be Programmed to Kill.” You’ve got to give the editor and writer some credit for coming up with a catchy headline. Similar to my piece about the ethics of self-driving cars, the MIT Technology Review article discusses how a self-driving car will ultimately need to make some tough decisions about whom to kill. If a self-driving car is going down a street and a child suddenly darts into the road, should the car take evasive action that might cause it to swerve off the road into a telephone pole, possibly killing the passengers but saving the child, or should it “decide” to plow into and kill the child but save the passengers? That’s the rub of the matter, and the MIT Technology Review article covers the same type of ethical dilemmas that self-driving cars and car makers must address.

Take a look at this bold and assertive headline: “Humans vs robots: Driverless cars are safer than human driven vehicles.” What do you think of the claim made in this headline? We’ve seen many such claims that self-driving cars are going to save the world and eliminate car-related deaths. I’ve debunked this notion in my piece on why the goal of zero car fatalities has a zero chance of happening. This particular headline takes a more nuanced approach and suggests that self-driving cars are safer drivers than human drivers. This is something we cannot yet prove to be the case, and so the headline is at least misleading. Our intuition tells us that if we eliminate all emotion and have an always-aware robotic driver, it must presumably be safer than a human driver who is susceptible to normal human foibles.

But this omits the fact that humans have abilities that we don’t know we will ever be able to embody in a self-driving car. For example, consider the role of common sense. When you as a human are driving a car, you are aware of the significance of the world around the car. You understand that physics plays a role in what the car can and cannot do. You know that pedestrians who look toward you might be subtly signaling that they are about to dash into the street. You might see a child playing ball up ahead in the front yard of a house and anticipate that the child might wander into the street to retrieve the ball. These are abilities we still do not expect the upcoming generation of self-driving cars to have.

Someday, maybe years and years into the future, we might have self-driving cars with this same kind of human common sense. So we might suggest that self-driving cars a hundred years from now will be safer than human drivers, but it is harder to make such a claim in the nearer term. We also need to consider what it means to say that someone or something is “safer.” Is that measured by the fewest deaths attributable to the driver? Or by the fewest injuries? Or the fewest accidents?
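To see why the measurement question matters, here is a toy calculation in Python. Every number in it is a made-up illustrative assumption, not real data; the point is only that the choice of metric (deaths, injuries, or accidents per mile driven) can flip the answer, and that a small number of self-driven miles proves little either way.

```python
# Toy comparison of "safety" under different metrics. All counts and
# mileages below are invented for illustration, not actual statistics.

human = {"miles": 3.0e12, "deaths": 37_000, "injuries": 2_400_000, "crashes": 6_300_000}
robot = {"miles": 2.0e6,  "deaths": 0,      "injuries": 1,         "crashes": 30}

def rate_per_100m_miles(count, miles):
    return count / miles * 1e8  # normalize to events per 100 million miles

for harm in ("deaths", "injuries", "crashes"):
    h = rate_per_100m_miles(human[harm], human["miles"])
    r = rate_per_100m_miles(robot[harm], robot["miles"])
    print(f"{harm:8s} per 100M miles -> human: {h:8.2f}  robot: {r:8.2f}")

# With so few robot miles, zero observed deaths proves little: at the
# human fatality rate we'd expect well under one death in that distance.
expected = human["deaths"] / human["miles"] * robot["miles"]
print(f"deaths expected over the robot's miles at the human rate: {expected:.3f}")
```

Under these made-up numbers the robot looks better on deaths and injuries but worse on crashes, which is exactly why a headline that flatly declares one side “safer” is glossing over the choice of yardstick.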

The suggestion that self-driving cars are going to eliminate car-related deaths is a popular one in the mainstream media. This brings us to another aspect of fake news, namely the echo chamber effect. Often, fake news gains traction because others pick up the story and repeat it, over and over. Furthermore, other news that might counter the fake news tends to be filtered out, creating so-called filter bubbles. There are lots of stakeholders that want and hope that self-driving cars will materially reduce car-related deaths, and I certainly wish the same. But we should not believe the assertion simply because it keeps getting repeated.

The general hubbub and excitement over self-driving cars is going to continue, and we’ll have “fake news” (or at least misleading news) that continues to support the surge toward self-driving cars. Without this push, we might not be able to make as much progress as we have. A few years ago, no one was willing to put big bucks into self-driving cars, and the field of autonomous vehicles barely survived on federal grants and other charitable kinds of support. All of the dollars now flowing into self-driving cars are exciting and are aiding progress that otherwise would have happened only in sporadic fits and starts.

Some real news that might eventually come as a shocker could be an appalling accident caused by a self-driving car. I’ve written about the Tesla incident in which an Autopilot-engaged Tesla rammed into a truck and killed the Tesla’s driver. Federal investigators cleared the automation, reasoning that the human driver was ultimately responsible for the driving of the car. That’s a perspective that luckily saved Tesla and the self-driving car industry, though it would seem of little solace to the driver who died and his family.

Remember when the oil tanker Exxon Valdez ran aground in Prince William Sound, spilling many millions of gallons of oil and creating an environmental nightmare? That took place in 1989. Or do you remember the 1985 crash of Delta Air Lines Flight 191, which was flying from Florida to Los Angeles and crashed at Dallas/Fort Worth after flying through a thunderstorm whose microburst it could not detect? Both of these tragedies led to new regulations and also to advances in technology. From the Valdez, we got advances in ship navigation systems along with improved spill detection and clean-up technology. From the Delta crash, we got new airborne wind-shear detection systems and improved alert systems that made flying safer for us all.

I am predicting that we will soon have a self-driving car that causes a dreadful accident. The self-driving car will be found at fault, or at least to have contributed to the fault. I shudder to think that this is going to happen. I don’t want it to happen. When it does, the headlines will suddenly change tone and there will be all sorts of hand-wringing about how we let self-driving cars onto the roads. Regulators will be drummed out of office for having let it occur. New regulations will be demanded. All self-driving car makers will be forced to reexamine their systems and safety approaches. A cleansing of the self-driving car marketplace will occur.

This is not a doom-and-gloom kind of prediction. It is the likely scenario of a build-up of excitement that will have its bubble burst a bit. It won’t kill off the industry. The industry will continue to exist and grow. What it will do is make the news media more questioning and skeptical. We’ll see fewer uplifting stories about AI and self-driving cars. And the self-driving car makers that have invested in sufficient safety systems and capabilities will be rewarded, while the firms that have given only lip service to that aspect of their AI will find themselves in trouble.

Be on the watch for fake news about self-driving cars. Keep your wits about you. Many old-timers lived through the AI heyday of the 1980s and 1990s, and then the AI winter arrived. There were naysayers at the time who derided a lot of the AI fake news being published (well, it wasn’t called fake news then, but it had the same sentiment). We are now in the AI spring and things are looking up. Let’s hope the sun keeps shining and the flowers bloom.

This content is original to AI Trends.
