Whether those pesky edge or corner cases are the final roadblock for AI autonomous cars

Prepare yourself for a thought exercise that loops, seemingly without end.

Imagine you’re driving your car and you come upon a dog that has run onto the roadway.

Most of us have experienced this. I hope you were able to take evasive action. Assuming everything went well, the dog is fine and no one in your car was injured.

In the manner of the movie Groundhog Day, let’s replay the script. However, we are going to make a small change.

Are you ready?

Imagine you’re driving your car and you come upon a deer that has darted onto the roadway.

Fewer of us have experienced this, though it is a fairly common occurrence for those who live in areas that are full of deer.

In any case, would you perform the same avoidance maneuvers when coming upon the deer as you did with the dog in the middle of the road?

We can debate this openly and tease out the possible differences.

Some might argue that a deer is more likely to leave the street and more likely to bolt to the side of the road. The dog might decide to stay on the street and run around in circles. That said, this kind of generalization will upset dog lovers or deer lovers, or both. It’s hard to say whether there would be a pronounced difference in behavior.

Let’s repeat this once again and make another change.

Imagine you’re driving your car and you come upon a bird that has darted onto the roadway.

What do you do?

For some drivers, a bird is an entirely different matter than a deer or a dog. If they were moving fast and there wasn’t much room to avoid the bird safely, they might make the difficult decision to continue forward and hit the bird. The logic is that many of us eat chicken or other birds as part of our regular meals, so one less bird seems acceptable, especially compared to the risk of wrecking your car or swerving into a ditch during sudden braking.

Essentially, you might be more willing to accept risk if the animal were a deer or a dog, and be willing to incur a greater danger to save the deer or dog. But when it comes to a bird, the personal risk versus the harm to the intruding creature might be weighed differently. Of course, some will vehemently say that birds, deer, and dogs are all the same, and that drivers shouldn’t split hairs by declaring one animal more valuable than another.

Let’s continue.

Let’s make another change. This one alters considerations that were not explicitly discussed above.

Without it being said, chances are you assumed that the weather in those animal-crossing scenarios was relatively neutral. It was presumably a sunny day and the conditions on the road were clear and uneventful.

Change that assumption and suppose there has been heavy rain, and add that you are in the middle of a downpour right now. The pavement is thoroughly soaked and incredibly slippery.

Are your driving options narrowing now that the weather is unfavorable?

Any driver would say so.

While you might previously have chosen to swerve sharply around the animal, such a maneuver in the rain is much riskier. The tires might not grip the pavement due to the layer of water. Your visibility is reduced, and you might not be able to judge accurately where the animal is and what else might be near the road. In general, bad weather makes the scenario even worse.

The point of this variable scenario is that the nature of the driving scene and the overall circumstances can have a significant effect on the actions you may decide to take as a driver.

We can keep tinkering with this scenario.

For example, pretend it’s nighttime rather than daytime. That certainly makes a difference. Another aspect is the surrounding traffic. Imagine the scenario involves no other traffic for miles. After thinking carefully about that situation, reimagine things and pretend there is traffic all around you, cars and trucks in abundance, along with heavy traffic on the other side of the road.

How many twists and turns of this kind can we invent?

In a sense, you could claim that there is an infinite variety. We can keep adding or adjusting the elements, again and again. Each new variant becomes its own specific consideration. You would need to mentally recalculate what to do as a driver. Some of the changes to the story may shrink your viable options, while other changes may expand the number of feasible options.

The mix of variations can quickly become dizzying.

A novice teenage driver is surprised by the variability of driving. They encounter a situation they have never faced before and go into momentary panic mode. What to do? Most of the time, they muddle through and do so without scrapes or calamities. Hopefully, they learn what to do the next time a similar situation presents itself and are less caught off guard.

Experienced drivers seem to have seen it all and are therefore able to react as needed. Still, even that vast reservoir of driving experience has its limits. You would unduly tempt fate to pretend you have seen everything.

For example, there is a report of a plane that landed on a highway due to in-flight engine trouble. I ask you, how many of us have seen a plane land on the roadway directly ahead of us? A rarity, no doubt.

These examples bring up a debate about so-called edge or corner cases that can occur while driving a car. An edge or corner case refers to an instance of something considered infrequent or unusual. These are occurrences that tend to happen once in a blue moon. They are regarded as outliers.

An aircraft landing on the road in the middle of vehicular traffic would be a candidate edge or corner case. A bird darting onto the roadway might be another. The first case is extraordinary, while the latter is relatively mundane.

Another way to delineate an edge or corner case is the somewhat obvious notion that these are instances beyond the core or nucleus of whatever we are focusing on. By analogy, suppose we have a giant game table and we’re going to try to assemble a jigsaw puzzle. We dump all the puzzle pieces onto the table, at first scattering them here and there. Our next task is to organize the scattered pieces.

The middle of the table will hold the core set of puzzle pieces. Toward the corners or edges of the table we place the pieces that seem like oddball outliers. To assemble the puzzle, we try to take care of the core first, now sitting in the middle of the table. Once we have finished that core work, we turn our attention to the pesky edges or corners.

I mentioned that there is a debate about edge cases or corner cases.

Here’s the catch.

How do we decide what belongs at the edge or in the corner, rather than classifying it as part of the core?

This can become quite muddled and the subject of heated arguments. Instances that someone considers edge or corner cases might be classified by others as core elements. Meanwhile, items tossed into the core might more legitimately belong in the category of edge or corner cases. If you’ve ever done a puzzle with someone else, you likely know what it’s like to have lively discussions about these matters.

One facet that often escapes attention is that the core does not have to be larger than the number or extent of the edges. We simply assume that would be the logical arrangement. However, it may be that we have a very small core and an incredibly wide set of edges or corners.

The whole thing turns murky if you don’t have definitive agreement on what constitutes the core versus the edges. Those debating whether something is core or edge can end up talking past each other. They don’t realize they will argue until the cows come home, because each side has hidden assumptions about which is which.

We can further stoke this fire by invoking the concept of the long tail (not the kind of tail that a dog wags).

People use the catchphrase “long tail” to refer to situations in which there is a preponderance of something as a central or established core, and then, as an adjunct, a presumed multitude of other elements that trail off. You can mentally picture a giant clustered mass on a graph, with a narrow strand extending on and on, forming a veritable tail to the clustered part.

This concept is borrowed from the field of statistics. It has a more precise meaning in a purely statistical sense, but that is not usually how people use the expression. The casual meaning is that you might have many less visible elements sitting in the tail of whatever else you are doing.

A company might have a product that is considered a blockbuster or a number-one seller. Perhaps it sells in modest volume, but at a top price for each unit, which gives it prominence in the marketplace. It turns out the company also offers many other products. These are not as well known. When you tally the total profit from the sales of all its products, it’s possible that those many small products make more money than the blockbuster does.

Based on this description, I hope you realize that the long tail can be large, even if it doesn’t get much apparent attention. The long tail can be the foundation of a business and be incredibly important. If the company only helps keep an eye on blockbusters, it may end up in ruins if it ignores or underestimates the long tail.
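To make the arithmetic concrete, here is a minimal Python sketch with entirely made-up figures (the product counts and profit numbers are invented for illustration, not drawn from any real company): a single blockbuster versus a few hundred small sellers whose combined profit quietly surpasses it.

```python
# Hypothetical numbers only: a "long tail" of small products out-earning one blockbuster.

blockbuster_profit = 500_000          # assumed profit from the one big seller

# Assumed long tail: 400 minor products, each earning a modest profit.
long_tail_profits = [1_500 for _ in range(400)]

tail_total = sum(long_tail_profits)   # 400 * 1,500 = 600,000

print(f"Blockbuster profit: {blockbuster_profit:,}")
print(f"Long-tail profit:   {tail_total:,}")
print("Tail exceeds the blockbuster:", tail_total > blockbuster_profit)
```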

That doesn’t have to be the case, though. It may be that the long tail drags the company down. Maybe it has a multitude of smaller products that just aren’t worth keeping. Those long-tail products can lose money and distract from the blockbuster.

On a lighter note, it can be said that sometimes the dog wags its tail, and other times the tail wags the dog. And since I have mentioned dogs several times in this saga, let me rephrase that last sentence. It can equally be said that sometimes the cat wags its tail, and other times the tail wags the cat. That ought to placate both dog lovers and cat lovers.

In general, then, the long tail deserves its due and warrants examination. By combining the concept of the long tail with the concept of edge or corner cases, we can suggest that edge or corner cases are what populate the long tail.

Returning to driving a car, a dog or even a deer that has run onto the street presents an incident or driving occurrence that most of us would agree sits somewhere in the core of driving. You would be hard-pressed to claim there is anything highly unlikely about such an occurrence.

As for a bird darting onto the roadway, well, unless you live near a farm, that would seem a bit more extreme. On a daily basis in a typical urban setting, you probably wouldn’t see many birds wandering onto the street (for my coverage of such a case, see the link here).

Speaking of drivers and driving, the long-term future of cars consists of autonomous cars.

Autonomous cars are driven by an AI driving system. There is no need for a human at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive coverage of autonomous vehicles (AVs) and especially autonomous cars, see the link here.

Some pundits fervently claim that we will never achieve true autonomous cars because of the long-tail problem. The argument is that there are millions of edge or corner cases that will continually arise unexpectedly, and the AI driving system will not be able to cope with those instances. This in turn means that autonomous cars will not be ready to operate safely on our public roads.

What’s more, those pundits claim that no matter how tenaciously those stubborn AI developers keep trying to program AI driving systems, they won’t reach the goal. It’s like a game of whack-a-mole, in which yet another mole will always pop up.

The thing is, it’s not just a game; it’s a matter of life and death, since anything a driver does behind the wheel of a car can mean life or death for the driver, the passengers, the occupants of nearby cars, pedestrians, and so on.

Here’s an intriguing question worth pondering: Are true AI-powered autonomous cars doomed to never be viable on our roads because of the infinite possibilities of edge or corner cases and the notorious enigma of the long tail?

Before jumping into the details, I’d like to clarify what is meant when I refer to true autonomous cars.

Understanding autonomous cars

By way of clarification, true driverless cars are those in which the AI drives completely by itself and there is no human assistance in the driving task.

These driverless cars are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to share the driving effort is considered Level 2 or Level 3. Cars that share the driving task are described as semi-autonomous and typically include a variety of automated add-ons referred to as Advanced Driver Assistance Systems (ADAS).

There is not yet a true autonomous car at Level 5, and we don’t even know whether it will be possible to achieve, nor how long it will take to get there.

Meanwhile, Level 4 efforts are gradually trying to gain some traction through very narrow and selective public roadway trials, though there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend; see my coverage at this link here).

Since semi-autonomous cars require a human driver, their adoption will not be markedly different from driving conventional vehicles, so there isn’t much new to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing trend that has emerged lately, namely that despite human drivers continuing to post videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must all avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation may be engaged at Level 2 or Level 3.

Autonomous driving and the long tail

For true Level 4 and Level 5 autonomous vehicles, there will be no human driver involved in the driving task.

All occupants will be passengers.

AI is in charge of driving.

One aspect worth discussing right away is the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI cannot reason about driving in the same manner that humans can.

Why this added emphasis on the AI not being sentient?

Because I want to underscore that in discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Keep in mind that there is an ongoing and dangerous tendency to anthropomorphize AI. In essence, people assign humanlike sentience to today’s AI, despite the undeniable and indisputable fact that no such AI exists yet.

With that clarification, you can appreciate that the AI driving system will not natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the autonomous car.

Let’s dive into the myriad facets that come into play on this topic.

First, it is almost self-evident that the number of combinations and permutations of possible driving situations is going to be enormous. We can debate whether it is an infinite number or a finite one, but in practical terms it is akin to counting the grains of sand on all the beaches of the world. All told, it’s a very, very, very large number.
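To give a feel for how quickly the combinations pile up, here is a small Python sketch. The scenario dimensions and their values are hypothetical and deliberately coarse, yet even this toy breakdown multiplies into thousands of distinct situations, and every added dimension multiplies the count again.

```python
from math import prod

# A toy enumeration of driving-scenario dimensions (categories and values are
# hypothetical and deliberately coarse; real-world conditions are far richer).
scenario_dimensions = {
    "obstacle": ["dog", "deer", "bird", "pedestrian", "debris", "stalled car"],
    "weather": ["clear", "rain", "downpour", "fog", "snow", "ice"],
    "time_of_day": ["day", "dusk", "night"],
    "traffic": ["none", "light", "heavy", "oncoming heavy"],
    "road_surface": ["dry", "wet", "slippery", "under construction"],
    "speed_zone": ["25 mph", "45 mph", "65 mph"],
}

combinations = prod(len(values) for values in scenario_dimensions.values())
print(f"Coarse scenario combinations: {combinations:,}")  # 6*6*3*4*4*3 = 5,184
```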

If you were to program an AI driving system to handle each and every case imaginable, it would be an arduous task indeed. Even if you amassed a veritable army of AI software developers, you could expect the undertaking to take years upon years, probably several decades or perhaps centuries, and you would still be confronted with the fact that there is yet another edge or corner case left unaccounted for.

The pragmatic view is that there will always be something that escapes whatever has been established in advance.

Some are quick to argue that simulations solve this dilemma.

Most automakers and self-driving technology companies use computer-based simulations to explore driving situations and to prepare their AI driving systems for whatever might happen. The belief of some is that if enough simulations are run, the totality of whatever will occur in the real world will already have been discovered and handled before driverless cars ever enter the real world.

The other side of the coin is the claim that simulations are based on what humans believe might happen. As such, the real world can be surprising relative to what humans normally imagine. Those computer simulations will therefore be inadequate and will not cover all the possibilities, the critics say.
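As a rough illustration of the critics’ point, here is a hedged Python sketch (the dimensions and simulation budget are hypothetical inventions of mine, not any vendor’s practice): randomly sampling scenarios from even a modest combinatorial space tends to leave a sizable share of situations untested.

```python
import itertools
import random

# Hypothetical, deliberately coarse scenario dimensions (not a real test matrix).
dimensions = {
    "obstacle": ["dog", "deer", "bird", "pedestrian", "debris", "stalled car"],
    "weather": ["clear", "rain", "downpour", "fog", "snow", "ice"],
    "time_of_day": ["day", "dusk", "night"],
    "traffic": ["none", "light", "heavy", "oncoming heavy"],
}

full_space = list(itertools.product(*dimensions.values()))  # 6 * 6 * 3 * 4 = 432 scenarios

random.seed(42)
simulation_budget = 300  # assumed number of simulated runs
sampled = {tuple(random.choice(values) for values in dimensions.values())
           for _ in range(simulation_budget)}

print(f"Distinct scenarios exercised: {len(sampled)} of {len(full_space)}")
print(f"Scenarios never simulated:    {len(full_space) - len(sampled)}")
```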

Amid the heated debates over the use of simulations, don’t get lost in the fray and conclude that simulations are the ultimate silver bullet, nor fall into the trap of believing that because simulations might not meet the highest bar they deserve to be dismissed entirely.

Don’t get me wrong; simulations are a must-have and a vital tool in the quest for true AI-powered autonomous cars.

There is a floating argument that true autonomous cars should not be tested on public roads until thorough and presumably exhaustive simulations have been conducted. The counterargument is that this is impractical, since it would delay roadway testing indefinitely, and that delay means more lives lost to everyday human driving. There’s a lot more to these important quandaries, which I’ve covered in my prior columns (see the link here), and I encourage interested readers to check out those analyses.

A similar theme arises with the use of closed tracks specifically designed for testing autonomous vehicles. Being off the public roadway, a proving ground ensures that the general public is not endangered by mishaps that might occur during driverless testing. The arguments surrounding the closed-track or proving-ground approach mirror the trade-offs discussed regarding the use of simulations (again, see my remarks posted in my columns).

That brings us full circle, back to the angst over a seemingly endless supply of edge or corner cases. It also brings us directly back to the dilemma of what constitutes an edge or corner case in the context of driving a car. The long tail of autonomous cars gets invoked with much hand-waving. That vagueness is stoked or enabled by the lack of definitive agreement about what truly sits in the long tail versus what sits in the core.

This fuzziness has a side effect.

Whenever an autonomous car does something wrong, it’s easy to excuse the incident by asserting that the act was merely in the long tail. That disarms anyone concerned about the misdeed. Here’s how it works: the claim is that any fear or finger-pointing is misplaced, since the edge case is only an edge case, implying a low-priority, less worrisome matter that pales in significance compared to everything in the core.

There’s also the haughtiness factor.

Those who refer knowingly to the long tail of driverless cars can come across as holding a superior position, holding court over those who don’t know what the long tail is or what it contains. With the right air of indignation and tone of inflection, the haughty speaker can make others feel inadequate or ignorant when they “naively” attempt to refute the mythical (and notorious) long tail.

Closing remarks

There are many twists and turns in this topic.

Due to space limitations, I’ll offer just a few additional tidbits to whet your appetite.

One stance is that it makes little sense to try to enumerate every imaginable edge case. Presumably, human drivers don’t know all the possibilities either, and despite that lack of knowledge they can drive a car and do so safely most of the time. You could argue that humans organize edge cases into more macroscopic groupings and treat individual edge cases as specific instances of those broader conceptualizations.

You sit behind the wheel with those macroscopic mental models and invoke them when a specific case arises, even if the particulars are somewhat unexpected or unfamiliar. If you have dealt with a dog loose on the street, you have probably formed a mental model that covers nearly any kind of animal loose on the street, including deer, chickens, turtles, and so on. You didn’t need to prepare in advance for every animal on the planet.
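Here is a minimal Python sketch of that generalization idea; the categories, species mappings, and policy wording are all hypothetical placeholders meant only to show the shape of folding specific cases into broader mental models.

```python
# Hypothetical mapping of specific animals onto a few broad categories,
# with one avoidance policy per category rather than per species.

ANIMAL_CATEGORY = {
    "dog": "small_erratic",
    "chicken": "small_erratic",
    "turtle": "small_slow",
    "deer": "large_fast",
    "moose": "large_fast",
}

CATEGORY_POLICY = {
    "small_erratic": "slow down, cover the brake, expect sudden direction changes",
    "small_slow": "slow down, steer around if the lane allows",
    "large_fast": "brake firmly, avoid hard swerving, expect it to bolt across",
}

def avoidance_policy(animal: str) -> str:
    """Map a specific (possibly never-seen) case onto the nearest broad category."""
    category = ANIMAL_CATEGORY.get(animal, "small_erratic")  # assumed fallback category
    return CATEGORY_POLICY[category]

print(avoidance_policy("deer"))     # handled via the large_fast category
print(avoidance_policy("raccoon"))  # never seen before; falls back to a default category
```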

Developers of AI driving systems can try to leverage a similar approach.

Some also claim that emerging ontologies for autonomous cars will contribute to this effort.

You see, for Level 4 autonomous cars, the developers are supposed to specify the Operational Design Domain (ODD) within which the AI driving system is able to drive the vehicle. Perhaps ontologies evolving into a more definitive form of ODDs would provide the kinds of driving-action models needed (see my analyses at this link here).
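As a rough sketch of what an ODD-as-data approach might look like, here is a hedged Python example; the fields, the values, and the within_odd check are simplified assumptions of mine for illustration, not any standard’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical, simplified representation of an Operational Design Domain (ODD)
# plus a check of whether current conditions fall inside it.

@dataclass
class OddSpec:
    allowed_weather: set
    allowed_times: set
    max_speed_mph: int
    geofenced_area: str

@dataclass
class Conditions:
    weather: str
    time_of_day: str
    posted_speed_mph: int
    area: str

def within_odd(odd: OddSpec, now: Conditions) -> bool:
    """Return True only if every current condition sits inside the declared ODD."""
    return (now.weather in odd.allowed_weather
            and now.time_of_day in odd.allowed_times
            and now.posted_speed_mph <= odd.max_speed_mph
            and now.area == odd.geofenced_area)

odd = OddSpec({"clear", "light rain"}, {"day", "dusk"}, 45, "downtown_pilot_zone")
print(within_odd(odd, Conditions("clear", "day", 35, "downtown_pilot_zone")))      # True
print(within_odd(odd, Conditions("downpour", "night", 35, "downtown_pilot_zone"))) # False: outside the ODD
```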

Another angle is common-sense reasoning.

One view is that humans fill in the gaps in what they know by exploiting their capacity for common-sense reasoning. This acts as a ready means of coping with unforeseen circumstances. AI-based common-sense reasoning has yet to materialize, and for now we cannot rely on this supposedly must-have safety net (for my coverage of AI and common-sense reasoning, see my columns).

The doomsayers would insist that autonomous cars won’t truly be ready for use on public roads until all of the edge or corner cases have been conquered. In that view, this long-sought nirvana can be interpreted as the day and time when absolutely all of the bases lurking stealthily in the imperious long tail of autonomous driving have been emptied out and covered.

It’s a tall order, and the tale can be eye-opening, or it might just be the tail wagging the dog, and we may find other ways to cope with those pesky edges and corners.
