Recommended Posts

5 minutes ago, DonMuncy said:

Can anyone tell me how a self driving car can make a turn on a non standard intersection. I understand how it can get within the limits of GPS accuracy, but what about "within" that range. I have seen a lot of intersections without very clearly defined edges. Do they have sensors that can differentiate between dirt and dirt on top of paving?

Same way that AI pathfinding in videogames has worked for decades now.  The car certainly has a GPS nav database with roadmaps, it knows the expected position of each road down to a couple feet (in 2016, actual GPS accuracy for civilian applications was observed at 2.3 feet, with 95% confidence).

 

That alone is more than enough to navigate a 6' wide car within the limits of a 10' wide lane.  After that, the car has loads of sensors: LIDAR and millimeter-wave radar chief among them, to say nothing of plain-jane optics.  That car has a clearer picture of its immediate surroundings than a human could ever hope to have, knows exactly where the road should be even without the active sensors, and has a brainbox that can process all this data a hundred times over before a human can process it once.
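For anyone curious what "AI pathfinding" actually looks like under the hood, here's a toy breadth-first search on a grid, the simplest cousin of the A* routines games use. This is purely illustrative; nothing here comes from a real autonomous-driving stack:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D grid: 0 = drivable, 1 = blocked.

    Returns a shortest list of cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}   # also serves as the "visited" set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A tiny "intersection" with one blocked cell in the middle:
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 2)))
```

The real thing searches a road graph instead of a grid and weights edges by distance and speed limits, but the principle is the same.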

 

The more interesting question, to me, would be "how can it fail to do a driving task better than a human could?"

Link to comment
Share on other sites

Paul, I grew up and learned to drive on oiled rural roads in east Texas. I have a really hard time understanding how programs can be written to take into consideration all those variables, and how a self learning system can learn those variables. Maybe I am just not astute enough to understand how smart machines can be made.

Link to comment
Share on other sites

5 minutes ago, gsxrpilot said:

I'll take that challenge. I'm 50 and I expect I'll see it in my life time.

The naysayers on this thread seem to be expecting that software will have to be programmed to handle every possible contingency. But not so: the software is self-learning and redeveloping itself. And it can do this much faster than humans can learn anything.

I do some work in this field and the curve towards this AI is not linear by any means. It's very steep and going vertical quickly. Expect a few more years of this and then it will all of a sudden be done.

Some people here have accused me of being anti-technology. I actually work as an engineer in the high tech world. 

I completely agree with you. They aren't quite there yet, but I bet they will be by 2020....

The real challenge will be reducing the sensor package to something that you don't notice.

  • Like 1
Link to comment
Share on other sites

3 minutes ago, DonMuncy said:

Paul, I grew up and learned to drive on oiled rural roads in east Texas. I have a really hard time understanding how programs can be written to take into consideration all those variables, and how a self learning system can learn those variables. Maybe I am just not astute enough to understand how smart machines can be made.

Computers can already do every individual thing better than humans can. All the senses are better, and much more varied. Computers can also process much more information much faster. The last bit is that, up until recently, computers had to be told everything to do. They don't anymore. With vastly more information to work with, and vastly faster processing of that information, all that is needed now is the ability to learn, self-teach, self-improve, or whatever you call it. That is happening now. This is all right around the corner.

Link to comment
Share on other sites

2 minutes ago, ShuRugal said:

Same way that AI pathfinding in videogames has worked for decades now.  The car certainly has a GPS nav database with roadmaps, it knows the expected position of each road down to a couple feet (in 2016, actual GPS accuracy for civilian applications was observed at 2.3 feet, with 95% confidence).

Aha, there is part of the problem. I have no idea what pathfinding in videogames is. 

I am also a little concerned about that 95% accuracy. I worry about that other 5%. If I had to watch and take over even 1% of the time, I might as well drive myself.

Yes, I'm a dinosaur. But I'm willing to learn. 

Link to comment
Share on other sites

7 minutes ago, DonMuncy said:

Paul, I grew up and learned to drive on oiled rural roads in east Texas. I have a really hard time understanding how programs can be written to take into consideration all those variables, and how a self learning system can learn those variables. Maybe I am just not astute enough to understand how smart machines can be made.

That one's easy: "traction control" computers have been solving that problem on human-driven cars for decades now:

1 - ABS rotor for each wheel

2 - torque-sensing at each wheel

3 - active differential

4 - if rate of RPM on any wheel changes abruptly, reduce torque to that wheel

 

With an auto-driving car, we add a few more steps:

 

5 - check traction every <n> milliseconds by polling for unexpected RPM changes at the ABS sensors

6 - reduce power output if a traction loss is detected

7 - reduce maximum safe speed until traction is restored

8 - add additional safety buffer to predicted stopping distance, turning forces, etc etc, until traction is restored
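Steps 5-8 above can be sketched as a simple polling loop. To be clear, every name, threshold, and back-off factor here is invented for illustration, not taken from any real vehicle system:

```python
def traction_step(wheel_rpms, prev_rpms, dt, state):
    """One polling cycle: flag any wheel whose RPM changed faster than expected.

    wheel_rpms / prev_rpms: per-wheel RPM this cycle and last cycle.
    dt: polling interval in seconds (step 5: every <n> milliseconds).
    state: dict holding the current power/speed limits and stopping margin.
    """
    MAX_RPM_RATE = 500.0  # RPM/sec considered "unexpected" (assumed value)
    slipping = any(
        abs(now - before) / dt > MAX_RPM_RATE
        for now, before in zip(wheel_rpms, prev_rpms)
    )
    if slipping:
        state["power_limit"] *= 0.5    # step 6: cut power output
        state["speed_limit"] *= 0.8    # step 7: lower the max safe speed
        state["stop_margin"] *= 1.5    # step 8: pad stopping distance, turn forces
    else:
        # Traction restored: relax the limits back toward normal.
        state["power_limit"] = min(1.0, state["power_limit"] * 1.1)
        state["speed_limit"] = min(1.0, state["speed_limit"] * 1.05)
        state["stop_margin"] = max(1.0, state["stop_margin"] * 0.95)
    return slipping

state = {"power_limit": 1.0, "speed_limit": 1.0, "stop_margin": 1.0}
# A 10 ms cycle where one wheel's RPM jumps by 50 (5000 RPM/sec): slip detected.
print(traction_step([800, 800, 850, 800], [800, 800, 800, 800], 0.010, state))
print(state)
```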

 

 

Even if we completely ignore the leaps and bounds in self-learning AI, any problem that a human driver can overcome can be scripted into a computer.  If a human can imagine a scenario, then a human can imagine a solution to it.  Steps 1-8 above are no different from the way a human handles that problem, except that we can't monitor the individual RPM of each wheel in real time, so we can only detect a loss of traction after it has manifested other symptoms (the steering feels "squirrely", the car is going sideways).

In fact, humans often completely fail to detect a loss of traction until it is too late, and frequently apply "corrections" that make the problem worse.  Whereas a computer which faithfully executes steps 1-8 above each time, and suffers no malfunction, is guaranteed to catch the problem before it can manifest in any secondary effects, and will correct for it properly.

4 minutes ago, DonMuncy said:

Aha, there is part of the problem. I have no idea what pathfinding in videogames is. 

I am also a little concerned about that 95% accuracy. I worry about that other 5%. If I had to watch and take over even 1% of the time, I might as well drive myself.

Yes, I'm a dinosaur. But I'm willing to learn. 

 

Not 95% accuracy; 95% confidence.  GPS signals contain self-diagnosis and self-correcting components, and any measurement that does not pass the confidence test is discarded.  For a properly working GPS receiver (not some Chinesium throwaway part) to produce significant errors in position, a great many successive measurements must fail.

 

If a high-end GPS receiver is sampling at 100 Hz, the 5 samples discarded for failing error-checking are completely irrelevant; the other 95 that passed provide more than enough good data points to produce a new accurate fix every second.  In fact, since GPS needs a minimum of only 1 good signal each from 4 satellites to provide a fix, this combination of sampling rate and confidence interval would allow for a minimum of 23 fixes per second.
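The arithmetic in that paragraph, written out (the 100 Hz sampling rate and 5-samples-discarded figure are the assumed numbers from above, not specs for any particular receiver):

```python
sample_rate_hz = 100     # assumed high-end receiver sampling rate
discarded_per_sec = 5    # samples failing the 95% confidence test
min_sats_per_fix = 4     # minimum satellites needed for a position fix

# Samples surviving error-checking each second, and the worst-case
# fix rate if every fix consumed 4 fresh satellite measurements.
good_samples_per_sec = sample_rate_hz - discarded_per_sec
fixes_per_sec = good_samples_per_sec // min_sats_per_fix
print(good_samples_per_sec, fixes_per_sec)
```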

Link to comment
Share on other sites

14 minutes ago, DonMuncy said:

Thanks. Am I the only one on MS that seems to constantly demonstrate my ignorance.

I wouldn't be embarrassed about that particular piece of ignorance, either; it's tolerably specialized knowledge.  Outside of actual systems engineering/support, I wouldn't expect anyone to know how a computer can be made to perform tasks like that (though I do enjoy explaining it, so thank you for giving me a chance to spread the knowledge).

The bit about the GPS signals being self-correcting is also pretty obscure: the only reason I have that piece of trivia in my head is from learning about GPS theory of operation in my MI Systems Maintenance course.

Link to comment
Share on other sites

3 hours ago, DonMuncy said:

Thanks. Am I the only one on MS that seems to constantly demonstrate my ignorance.

I have learned plenty from you over the last few years. And I'm very much looking forward to attending the Texas MooneySpace fly in to your home field. Especially since buying a turbo Mooney, I've always considered you to be one of the go to guys for knowledge on this board. 

Ignorance... this thread isn't even about flying!

  • Like 1
Link to comment
Share on other sites

Ain't buying it.  The one thing AI types always underestimate is the utter complexity of the human mind.  We're analog, old school, and we can do things machines just can't.  Right now when the processor takes a dump, maybe you miss your porn for a few minutes or your e-mail goes wonky.  Totally different thing in a rapidly moving vehicle full of kiddies.  Sure, some brave self-certain fool will declare it done and safe, and then the things will all go back to the store after the first one goes nuts and hurts someone.  Mark my words.

Link to comment
Share on other sites

On 3/21/2018 at 10:02 AM, steingar said:

and then the things will all go back to the store after the first one goes nuts and hurts someone.

I very much doubt this.  After all, haven't pilots been saying the same thing about Airbus for decades?  I don't see them folding because fully-automated aircraft didn't "take off".  Right now, we have no shortage of human drivers who "go nuts and hurt someone".  In the US alone, we manage to kill 40,000 people a year using human drivers.  The existing automated driving features have shown themselves to be an order of magnitude more capable than human drivers.  Even if one in ten suffered a catastrophic failure that killed someone, we'd still be at the break-even point on death toll relative to human drivers.

  • Like 1
Link to comment
Share on other sites

11 hours ago, ShuRugal said:

I very much doubt this.  After all, haven't pilots been saying the same thing about Airbus for decades?  I don't see them folding because fully-automated aircraft didn't "take off".  Right now, we have no shortage of human drivers who "go nuts and hurt someone".  In the US alone, we manage to kill 40,000 people a year using human drivers.  The existing automated driving features have shown themselves to be an order of magnitude more capable than human drivers.  Even if one in ten suffered a catastrophic failure that killed someone, we'd still be at the break-even point on death toll relative to human drivers.

This, and the fact that fatalities caused by drunk driving will drop to zero.

Link to comment
Share on other sites

The article linked below includes video of the crash.   I cut my engineering teeth on radar remote imaging and sensing before doing comm stuff, and still do a lot of embedded and control work.   The Uber cars have radar and a sophisticated LIDAR system that sees 360 degrees around the car.   IMHO this is a failure of the autonomous system, or a demonstration that it is 100% not ready for deployment.   A human driver could make this mistake; an autonomous vehicle with adequate sensors absolutely should not.

http://www.fox10phoenix.com/news/arizona-news/attorney-explains-who-could-be-held-liable-in-the-fatal-self-driving-uber-crash#/

Link to comment
Share on other sites

I looked at the Uber video carefully.

The first visual appearance of the woman was her shoes when the car was about 100 feet away, confirmed because that's about the pavement range of typical low beams. Stopping distance from 40 mph should be about 80 feet plus reaction time.  A human driver, unless they swerved, would have hit her. But most experienced drivers automatically swerve to avoid a collision and would have done so here.  Attention and reflexes would be the coin toss.
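For the curious, that ~80 foot figure checks out against the standard constant-deceleration formula d = v^2 / (2a). The 0.7 g braking figure and 1.5 s reaction time below are my assumed textbook-typical values, not measurements from this crash:

```python
MPH_TO_FPS = 5280 / 3600   # mph -> feet per second
G_FT = 32.2                # gravity, ft/s^2
MU = 0.7                   # assumed braking deceleration, in g's

def braking_distance_ft(speed_mph, mu=MU):
    """Distance to stop from speed_mph at constant deceleration mu * g."""
    v = speed_mph * MPH_TO_FPS
    return v * v / (2 * mu * G_FT)

def stopping_distance_ft(speed_mph, reaction_s=1.5, mu=MU):
    """Braking distance plus the distance covered during reaction time."""
    v = speed_mph * MPH_TO_FPS
    return v * reaction_s + braking_distance_ft(speed_mph, mu)

print(round(braking_distance_ft(40)))   # in the neighborhood of the ~80 ft above
print(round(stopping_distance_ft(40)))  # with human reaction time included
```

With reaction time included, a human at 40 mph needs well over the 100 feet of low-beam range, which is the point of the post above.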

However, LIDAR should have seen her much earlier and should have stopped the car in plenty of time. Even some existing production automatic collision avoidance systems would have hit the brakes. We'll see what the telemetry says.  Did the car even slow down?

  • Like 1
Link to comment
Share on other sites

2 hours ago, Cyril Gibb said:

Did the car even slow down? 

Didn't look like it to me.   I didn't see any attempt to slow or avoid.   And I agree completely that the additional sensors should have sorted this out way ahead of time.

Several of the locals that drive that street frequently at night say that it is very well lit, and it is very common for pedestrians to cross the street there due to the routing of the sidewalks.    The camera doesn't pick up light nearly as well as a human eye, and I suspect an alert driver would probably have been able to avoid the collision (but that's just me speculating...if I get some time I may pop down there some night and see for myself).

 

Link to comment
Share on other sites

Where are you going to park one of these contraptions? To be competitive with the automobile, it needs to fit in my garage or any underground parking area. You cannot even park it on the side of the road. And most rooftops are inclined. Maybe you can drop passengers on a rope. And when you get to your destination you have to wait for a recharge, while in a car you just fill up (if needed) and go. And if one of those props fails, you go down quickly. While in a car a flat tire is a minor event. I think I will keep my M20J and make use of the crew car at the FBO.

José

Link to comment
Share on other sites
