By Dr. Lance B. Eliot, AI Trends Insider
Yesterday, I was driving on the freeway and the traffic was especially bogged down. Most mornings the traffic is slow, but on this occasion it was really slow, pure stop-and-go driving. Here in Southern California you can never predict what causes traffic patterns to change. Sometimes you guess that there must be an accident up ahead and yet you end up never seeing any indication that an accident was keeping traffic bottled up. Other times you eventually crawl up to an accident scene and think, aha, I knew that’s what was making me late to work today. This might seem callous, since our first thoughts should go toward whoever was involved in the accident and hoping that they are safe, but with the high volume of accidents we have, it is possible to become numb to them and perceive them merely as irritating stoppages when you’ve got to get someplace on time.
In this case, I did eventually crawl up to the accident scene. At first, I could just barely see it up ahead. From maybe a quarter mile away, I could see some emergency vehicles stopped on the freeway with their flashing lights on. They were tall enough to stand out over the cars in traffic. They were askew, parked at odd angles, occupying the leftmost two lanes of the freeway. The car traffic itself was like water that flows in whatever direction provides fluidity, gradually flowing to the right of the accident scene. This meant that roughly five lanes of traffic were trying to narrow into the two lanes that were the only way around the accident scene. The traffic was thus constricted by having to squeeze down into just two lanes, slowing everything down, and the drivers were gawking at the accident scene, which slowed traffic even further.
After seemingly forever, I finally managed to squeeze over into the rightmost lanes. I had been in the leftmost lane, so it was quite a chore to get to the right. Even though I had my turn indicator on, nobody in the other lanes wanted to let anyone in. It is a cruel world out there on the freeways, and there is often no civility about letting others into your lane. In a circumstance like this, each car is edging forward, wanting to get through the morass as soon as possible, so the drivers aren’t interested in letting other cars get over or ahead of them. At times, some cars tried to force themselves into the rightmost lanes an inch at a time, and there were “bumper wars” of cars that dared to get within a fraction of an inch of another car, trying to push their way into the next lane over while the other cars desperately and doggedly tried to keep them out.
When my turn came to be in those two scrawny lanes, I could finally see the accident scene clearly as I slowly drove past it. The emergency vehicles had formed a kind of cocoon around a motorcycle and a motorcyclist that were both splayed onto the freeway asphalt. There was a medic tending to the motorcyclist. Police were standing nearby and seemed to be taking notes about the accident scene. An ambulance was waiting to take the motorcyclist to the nearest hospital. A fire truck was there, serving to help block the freeway traffic and create the cocoon; a fire truck is often one of the first responding emergency vehicles. This is partly because fires sometimes erupt from a car accident and the fire department needs to put out the fire, or prevent one from starting, since there is usually fuel spilled onto the freeway that can easily ignite.
Within maybe ten seconds, I had finally driven past the accident itself. I had been in line behind the accident scene for nearly an extra twenty minutes of driving time. Now, finally past it, the traffic was thinned out because of the constriction and blockage at the accident scene. As such, cars that had squeezed past the accident scene were now gunning their engines and racing ahead at breakneck speeds. It was as though the accident scene was the starting line of an Indy car race and the cars that managed to get past it were excited to hit the gas and see how fast they could accelerate from near zero to the top allowed speeds on the freeway.
Most drivers don’t realize that this is actually one of the more dangerous aspects of an accident scene. Namely, traffic post-scene drives much too fast and can recklessly create another new accident. It is not unusual for an initial accident to spawn a second accident within about a tenth of a mile ahead of the first. This is due to cars that misuse the sudden freedom of an open passage to speed ahead and end up colliding with other cars. You can imagine how the drivers that have suffered the long wait of getting up to and past an accident scene are in a state of mind of wanting to get going, and so they throw caution to the wind and just push the accelerator pedal to the floor. They do so because they are late as a result of being stalled in the wake of the initial accident, and because they are frustrated that a car that can go 80 miles per hour has only been going 5 miles per hour throughout the accident scene area. This potent combination of stressed-out drivers and the anxiety of wanting to get going tends to lead to unsafe driving post-scene and can result in another accident.
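One way an AI driving system could temper this post-scene temptation is to lift its speed cap gradually after clearing the scene rather than allowing an immediate jump to full freeway speed. Here is a minimal sketch of that idea; the function name, the 20 mph crawl speed, and the 30-second ramp are my own illustrative assumptions, not figures from any production system.

```python
def post_scene_speed_cap(seconds_since_scene, cruise_mph=65.0, ramp_seconds=30.0):
    """Return the maximum allowed speed (mph) after passing an accident scene.

    Starts at a crawl (20 mph) and ramps linearly up to normal cruise speed
    over ramp_seconds, rather than permitting a pedal-to-the-floor launch.
    """
    crawl_mph = 20.0
    fraction = min(seconds_since_scene / ramp_seconds, 1.0)
    return crawl_mph + (cruise_mph - crawl_mph) * fraction

print(post_scene_speed_cap(0))    # 20.0 (just past the scene)
print(post_scene_speed_cap(15))   # 42.5 (halfway through the ramp)
print(post_scene_speed_cap(60))   # 65.0 (fully back to cruise)
```

The linear ramp is arbitrary; a real planner would also condition on surrounding traffic, but the point is simply that the cap should open up smoothly.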
In our Cybernetics Self-Driving Car Lab, we have been creating AI-based capabilities that allow self-driving cars to traverse these kinds of accident scenes.
Traversing an accident scene is not as easy as you might at first think. Let’s take a look at what a self-driving car needs to do when confronted with an accident scene. We humans are pretty used to dealing with accident scenes, so it is ingrained in our driving practices. If you watch a teenage first-time driver come upon an accident scene, you’ll see them unsure of what to do. Most of today’s self-driving cars are in the same boat. The self-driving car does not have any special AI algorithms for accident scene traversal. It instead relies upon overall driving practices, but those practices are not tuned and honed to the specifics of what occurs at an accident scene. Thus the need for a specialty component in the AI of the self-driving car, one that provides particular expertise about accident scenes and how to traverse them.
First, the self-driving car needs to realize that there is an accident up ahead. It can do so by detecting that traffic is slowing down and shifting into a stop-and-go kind of pattern. This is done via the cameras, LIDAR, radar, and other sensors that the AI uses to detect car traffic. Of course, traffic that slows down does not necessarily indicate there is an accident up ahead. As I mentioned earlier in this piece, I often find that there wasn’t any accident and the traffic slowed for some other reason, so traffic slowing is just one potential indicator.
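A crude version of this slowdown detector can be sketched as a heuristic over recent ego-speed samples: a low average speed combined with large swings between samples is one weak signal of a stop-and-go pattern. The function name and the thresholds below are hypothetical choices for illustration, not parameters from any actual system.

```python
from statistics import mean

def looks_like_stop_and_go(speed_samples_mph, slow_threshold=10.0, swing_threshold=15.0):
    """Heuristic: flag a stop-and-go pattern from recent ego-speed samples.

    Low average speed plus large speed swings is one weak indicator
    that something (possibly an accident) lies ahead.
    """
    if len(speed_samples_mph) < 2:
        return False
    avg = mean(speed_samples_mph)
    swing = max(speed_samples_mph) - min(speed_samples_mph)
    return avg < slow_threshold and swing > swing_threshold

# Lurching between 0 and 18 mph reads as stop-and-go traffic.
print(looks_like_stop_and_go([0, 5, 18, 2, 0, 15, 3]))   # True
# Steady 65 mph cruising does not.
print(looks_like_stop_and_go([65, 64, 66, 65]))          # False
```

As the article notes, this signal alone proves nothing; it merely raises suspicion that gets combined with other clues.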
Next, the self-driving car needs to detect that there are emergency vehicles involved. In the case of the downed motorcyclist, the flashing lights of the emergency vehicles could be seen from quite a distance away and can be detected via the cameras on the self-driving car. Also, the emergency vehicles were slightly taller than the rest of the traffic, so they could be picked out of the visual images of the traffic and matched to images of emergency vehicles by the AI fusion software (see my other column about emergency vehicles and self-driving cars). This is similar to, say, facial recognition on Facebook, except it matches images of vehicles against what the cameras on the self-driving car are sending into the AI system that’s driving the car.
The AI now has several crucial clues in hand: the traffic has slowed down, there are emergency vehicles ahead, they are parked at askew positions, and they have flashing lights on. Together these suggest a high probability that there is an accident scene there. The AI software uses probabilities to assign odds to what is taking place on the roadway. This is similar to how humans “guess” or estimate what is taking place on the roadways. You don’t necessarily know things for sure, so you need to gauge the chances that something is or is not taking place. The self-driving car uses probabilities to make these same kinds of guesses and hunches.
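To make the probability idea concrete, here is one simple way to combine such clues, under a naive independence assumption: for each clue, estimate how likely that observation would be if there were no accident, then multiply those estimates together to get the chance that all the clues fired spuriously. The clue names and the likelihood values are my own illustrative assumptions, not calibrated figures.

```python
def accident_scene_probability(clues):
    """Combine clue likelihoods into a rough probability of an accident ahead.

    `clues` maps each observed clue to the estimated probability of seeing
    that observation when there is NO accident. Multiplying those (naive
    independence) gives the chance that every clue fired without an
    accident present; the complement is our accident-scene estimate.
    """
    p_all_clues_without_accident = 1.0
    for clue, p_without_accident in clues.items():
        p_all_clues_without_accident *= p_without_accident
    return 1.0 - p_all_clues_without_accident

clues = {
    "traffic_slowed": 0.50,       # slowdowns often happen without accidents
    "emergency_vehicles": 0.10,   # rarely present absent an incident
    "vehicles_parked_askew": 0.20,
    "flashing_lights": 0.15,
}
print(accident_scene_probability(clues))  # very close to 1.0
```

Each clue alone is weak (a slowdown happens half the time with no accident at all), yet the combination pushes the estimate close to certainty, which mirrors how a human driver becomes confident only once several signs line up.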
A self-driving car that is not wise to accident scenes might just continue in its lane (let’s assume it was in the leftmost lane, as I was) and move forward until it reaches the actual accident scene, not realizing that the freeway there is effectively blocked. Today’s self-driving cars would normally then bluntly shove control of the car back to the human, since the car has gotten itself into a pickle it cannot figure out. Once the human driver took over, presumably he or she would get the car over to the right, drive past the accident, and then re-engage the self-driving capabilities. This is what level 0 to 4 self-driving cars tend to do in accident scene traversal, i.e., just hand control to the human, but a level 5 by definition cannot hand the car over to a human driver. There isn’t supposed to be a human driver needed at level 5. So a level 5 for sure needs some kind of “smarts” about driving accident scenes. It would be handy for this to also exist at levels below 5, but it must exist at level 5 (see my column about the Richter scale of self-driving cars).
I realize that some self-driving car pundits will be screaming at me that if we had exclusively self-driving cars on the roadway, this accident scene traversal problem would be “solved” and there would be no need to worry about it. In their view, the utopia of all self-driving cars also implies that the self-driving cars and their AI would be communicating via V2V (vehicle-to-vehicle) and would orchestrate their own dance for handling the accident scene. Self-driving cars would collaborate and let each other over into the rightmost lanes in an orderly and efficient manner. The ones closest to the accident would be telling the other self-driving cars what is taking place. What a wonderful world it will be. If that ever actually happens, it will be decades and decades from now. I think we’ll be living on Mars by that time, so maybe we won’t even care about day-to-day freeway traffic by then. The point is that this utopian viewpoint is not going to happen anytime soon, and so we need self-driving cars that work independently of each other and can individually traverse accident scenes. Period.
Back to the matter at hand: once the self-driving car suspects that an accident scene is up ahead, it moves into the specialty realm of how to cope with one. So, first we needed detection to ascertain that an accident scene now confronts the self-driving car. Next, the self-driving car and its AI invoke the specialty routines for traversing accident scenes. These algorithms begin the planning needed to traverse the scene. Some of these plans are based on templates created by AI developers, while other plans are the outcome of machine learning. Via machine learning, the system has identified various kinds of accident scenes and used tons of data about accident scenes and their traversal to try to identify the best ways to get past one.
There are a number of key factors that impact what a self-driving car should and should not do when confronted with an accident scene. Has the accident just happened or has it become stabilized? In the case of the downed motorcyclist, it was an accident scene that was greatly stabilized. There were already emergency vehicles there. The emergency vehicles had already made their way to the accident scene and were parked there. They had created a cocoon to protect the scene. The human responders were already walking around on the freeway, rendering aid, and otherwise handling the accident scene.
If an accident has just happened, the accident scene itself will be much more chaotic and less predictable. For example, about a month ago, I saw a motorcycle hit a car ahead of me on the freeway, and the rider flopped onto the pavement. Traffic was moving along at about 30 miles per hour when this happened. The incident occurred right before my eyes. Cars instantly started to jockey out of their lanes to keep from running over the downed motorcyclist and his downed motorcycle. Some drivers weren’t even aware that anything had happened. Some cars were stopping to block traffic and try to offer aid to the motorcyclist. The whole scene for me lasted only about 15-20 seconds, since I was then past it and the evolving aspects were occurring behind me as I continued to drive. I would have stopped if I thought I could be of assistance, but I could see that many others were already doing so.
Anyway, the point is that this was an accident that had just occurred, and there weren’t any emergency vehicles there yet. There wasn’t a protective cocoon set up. There weren’t any flashing lights. Etc. A self-driving car needs to be able to detect accident scenes as they evolve over time. There is the it-is-just-happening part of the time continuum, then the portion where the accident scene is stabilized, and so on. We classify these into: emerging accident, happened accident, stabilizing accident scene, rescue accident scene, recovery accident scene, clean-up recovery scene, and re-opened accident scene. For each of these classifications, we have trained the AI to plan and act accordingly.
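The seven classifications above lend themselves naturally to an enumeration paired with a per-phase driving posture. The sketch below uses the article’s phase names, but the policy strings mapped to each phase are my own hypothetical summaries of the behaviors discussed, not the lab’s actual planning rules.

```python
from enum import Enum

class AccidentPhase(Enum):
    """The accident-scene time continuum, per the classification above."""
    EMERGING = "emerging accident"
    HAPPENED = "happened accident"
    STABILIZING = "stabilizing accident scene"
    RESCUE = "rescue accident scene"
    RECOVERY = "recovery accident scene"
    CLEANUP = "clean-up recovery scene"
    REOPENED = "re-opened accident scene"

# Hypothetical mapping from phase to a high-level driving posture.
PHASE_POLICY = {
    AccidentPhase.EMERGING:    "evade: maximize spacing, expect erratic maneuvers",
    AccidentPhase.HAPPENED:    "slow sharply; no cocoon or responders on scene yet",
    AccidentPhase.STABILIZING: "merge early toward the open lanes",
    AccidentPhase.RESCUE:      "creep past; watch for responders on foot",
    AccidentPhase.RECOVERY:    "creep past; lanes still partially blocked",
    AccidentPhase.CLEANUP:     "moderate speed; debris still possible",
    AccidentPhase.REOPENED:    "resume normal driving; cap post-scene acceleration",
}

print(PHASE_POLICY[AccidentPhase.RESCUE])
```

Having the phases as an explicit type means the planner can be forced (e.g., by an exhaustiveness check) to define behavior for every phase, rather than falling through to a generic rule.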
Besides the phases or stages of an accident, there are other aspects for the self-driving car to be concerned about. What is the traffic situation? Is there almost no traffic or heavy traffic? This is important since it can either make options available or constrain options as to how the self-driving car should react. If there isn’t much traffic, the self-driving car can have more lanes to shift into or other evasive actions to avoid or go around the accident scene. There are also special cases such as a toxic spill, or when there is fire involved.
Here’s a harrowing experience I had one time. I was driving on the freeway, and in SoCal we sometimes have brush fires during the hot and dry summer months. It turns out that brush on the side of the freeway had caught fire and the smoke was billowing across lanes of traffic. Furthermore, flames were leaping from the side of the freeway and actually threatening to burn cars driving past the burning brush. What do you do as a human driver? Would you stop your car before you got to the flames and smoke? Or would you try to drive at high speed and scoot through and past them? Some cars were trying the high-speed escape, while others were frantically trying to get into the leftmost lane and come to a stop ahead of the flames and smoke.
This illustrates that the nature of an accident scene can be very dynamic. The self-driving car not only has to consider what it will do, but also what other drivers will do. And, as stated earlier, the self-driving car is going to have to predict what human drivers will do. Are those human drivers near the self-driving car going to do something sensible, or maybe something wild? The AI cannot rely upon some pre-scripted and canned approach to handling the accident scene. The dynamic and evolving nature of an accident scene requires not only some kind of template but also a capability of judging on the fly what to do.
Another consideration is that the self-driving car itself might become part of the accident scene. For example, when I mentioned before that I had seen a motorcyclist get hit, you could argue that I was now part of the accident scene and that I should either stop to render aid or stop to serve as a witness to what happened. Will we expect our self-driving cars to do likewise? In other words, the self-driving car “saw” an accident with its cameras, LIDAR, etc., and so it should potentially stop to provide that info for purposes of analyzing the accident and how it happened. Also, what if the occupants inside the self-driving car also saw the accident? Shouldn’t the self-driving car make them available? Or, suppose the human occupants want to stop and render aid; they somehow have to tell the self-driving car to do so (with a level 5, it is still an open question how the occupants will communicate with the self-driving car).
I have so far focused on accidents occurring on a freeway, but accidents happen in all kinds of driving contexts. One day, I came upon an accident that had happened in a residential neighborhood where two cars had run into each other. The two cars were completely blocking the road, and parked cars at the curbs further blocked it. Other cars could not get past the accident scene. As a result, cars would drive up to the scene, the human driver would realize they could not get around it, and then they would try to make a U-turn there in the middle of the road. Cars coming up behind them didn’t know what was going on. The U-turning cars then went head-to-head with other unsuspecting cars driving up to the scene. It was a mess.
This is the kind of AI specialty that we need to have in our self-driving cars. Coping with accident scenes must be handled in circumstances involving freeways, highways, city driving, suburbs, and so on. The weather also plays a big role in how to traverse an accident scene, such as whether the road has ice and snow on it. Time of day, as to whether it is nighttime or daytime, is a significant factor. Whether there is one car involved in an accident, or multiple cars, also plays into this. When I was living in Germany, I one day saw a pile-up of around 40 vehicles that had all hit each other in a crazy domino fashion.
The severity of the accident is another crucial factor. We have all come upon a simple fender bender, which usually involves two cars that then get off the road and deal with exchanging insurance info. This is quite different from an accident scene where there has been injury or deaths involved. Traversal in an area that has humans injured or dying is something that a self-driving car has to be especially cautious about.
There are also aspects of obeying whatever is taking place at the scene, such as when a police officer decides to direct traffic as part of aiding accident scene traversal. I came up to an accident recently and there was a stop sign just before it. A police officer who had stabilized the accident scene was directing traffic. He was motioning for cars to drive through the stop sign without coming to a stop. In fact, when cars were stopping (because they saw the stop sign and figured they legally had to), he got quite irritated, as he was purposely trying to keep traffic flowing, and his motions were overriding the stop sign. Getting a self-driving car to realize that it should ignore a stop sign and instead obey the motions of a police officer is a difficult task. The AI has to be able to visually recognize the officer, it must be able to override its usual rules about stop signs, etc.
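The override problem can be framed as a precedence rule among conflicting traffic-control sources: a recognized officer’s gesture outranks fixed signage, which in turn outranks the default road rules. Here is a minimal sketch of that precedence logic; the function, the source names, and the instruction strings are my own hypothetical simplification, not a production policy.

```python
def resolve_traffic_control(signals):
    """Pick the controlling instruction when traffic controls conflict.

    `signals` maps a control source (e.g. "officer_gesture", "fixed_sign")
    to its instruction. Higher-precedence sources win: an officer's
    directions override posted signage, which overrides default rules.
    """
    precedence = ["officer_gesture", "fixed_sign", "default_rules"]
    for source in precedence:
        if source in signals:
            return signals[source]
    return "proceed_with_caution"

# The officer waves traffic through even though a stop sign is posted.
scene = {"fixed_sign": "stop", "officer_gesture": "proceed"}
print(resolve_traffic_control(scene))  # proceed

# With no officer present, the stop sign controls as usual.
print(resolve_traffic_control({"fixed_sign": "stop"}))  # stop
```

Of course, the hard part in practice is the perception step this sketch assumes away: reliably recognizing that a person on the road is an officer and decoding the gesture into an instruction.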
There are an estimated 6 million accidents per year in the United States. The odds are pretty high that on any daily driving journey of any substance you will come upon an accident, whether it is an accident that is just occurring or the aftermath of one that already occurred. Self-driving cars need to know what to do when they come upon an accident scene. Most self-driving cars of today just kind of give up and hand the driving over to the human driver. This is not only dangerous, since the human driver has to suddenly become engaged in driving; it also belies the hope of someday getting to a level 5 true self-driving car that does not rely upon having a human driver available. By developing specialized AI algorithms for self-driving cars faced with accident scenes, we are improving the capabilities of self-driving cars. Even if by some miracle we begin to see fewer accidents once self-driving cars are readily available (though see my debunking of that in my piece on zero accidents due to self-driving cars), there will still be a mixture of human drivers and self-driving cars, and there will still be accidents. Self-driving cars need to know what to do. Drive safely out there.
This content is original to AI Trends.