
Are pilots going to be replaced by AI?


Recommended Posts

1 minute ago, N201MKTurbo said:

https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html

Moore's law isn't about the speed of semiconductors, it is about the transistor count.

I stand corrected about performance, but even transistor count per unit volume is getting close to the limits allowed by nature. A couple more iterations and we will be dealing with quantum effects that preclude deterministic operation of the switching element known as the "transistor," since the switching mechanism itself will be comparable in size to an atom... Perhaps some new principles will be discovered... who knows... state might be stored in quarks... or we will start using non-binary switches (aka "neurons") :) 


2 minutes ago, alexz said:

I stand corrected about performance, but even transistor count per unit volume is getting close to the limits allowed by nature. A couple more iterations and we will be dealing with quantum effects that preclude deterministic operation of the switching element known as the "transistor," since the switching mechanism itself will be comparable in size to an atom... Perhaps some new principles will be discovered... who knows... state might be stored in quarks... or we will start using non-binary switches (aka "neurons") :) 

People keep saying that (including me), but they keep pulling it off. The people at Intel may not be the smartest people on earth, but they are the hardest-working, most tenacious, and best-funded group I've ever worked with. They certainly are not idiots, well, maybe a few... No, Todd, I'm not talking about you...


3 minutes ago, N201MKTurbo said:

People keep saying that (including me), but they keep pulling it off. The people at Intel may not be the smartest people on earth, but they are the hardest-working, most tenacious, and best-funded group I've ever worked with. They certainly are not idiots, well, maybe a few... No, Todd, I'm not talking about you...

I'm all for more performance, but the sad truth is that tasks that require it, like FPGA place-and-route algorithms, have gotten only marginally faster over the last 5-7 years. From 2016 to 2023, the performance improvement for computation that can't be parallelized was much smaller than in the preceding 5-7 years (2009-2015).
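
A rough back-of-the-envelope sketch of why the serial part of a workload caps the gains from more cores, using Amdahl's law with an assumed (made-up) parallel fraction:

# Amdahl's law: overall speedup when only part of a task parallelizes.
# The 30% parallel fraction below is an illustrative assumption, not a
# measured number for any real place-and-route tool.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

for cores in (4, 16, 64):
    print(cores, "cores ->", round(amdahl_speedup(0.3, cores), 2), "x")
# 4 cores -> 1.29x, 16 cores -> 1.39x, 64 cores -> 1.42x; the ceiling is 1/0.7, about 1.43x.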


As best I can tell from skimming the thread, no one has mentioned hacking. I won't live to see the day I fly in a plane without a human pilot. I understand it can be done, but I know more about computers than the average person, and I'd never trust one to that extent. Computers and code are all built by other humans, and quality and workmanship are all but gone these days.


M5 and Dr. Daystrom. I often say how I hate getting old, but thank God I'm not any younger. I think that those of you who just love all this technology are most likely correct about its application, but I just don't understand how blind you are to the potential for total loss of control of the human experience. And when an AI starts a new thread and a bunch of AIs are making all the replies, well, what the f... is that going to be about? I remember a quote from the Will Smith robot movie: "So robots making robots, well that's just stupid." Anyway, enjoy your virtual world; I'm going to go out and split some firewood for the coming storm.


1 hour ago, JayMatt said:

As best I can tell from skimming the thread, no one has mentioned hacking. I won't live to see the day I fly in a plane without a human pilot. I understand it can be done, but I know more about computers than the average person, and I'd never trust one to that extent. Computers and code are all built by other humans, and quality and workmanship are all but gone these days.

Very true. Many more suicidal AIs than suicidal pilots.

AI doesn’t have to be perfect, just better than humans. I think some of you are still failing to understand that.


4 hours ago, Pinecone said:

 

Oh, and yes, the Airbus does that in Normal Law, and this caused the crash at the Paris Airshow, as the AI would not allow the pilot to try to eke out an extra 1/10 of a degree of AOA to avoid hitting the ground.

That’s not what caused the airshow crash; 1/10 of a degree wouldn’t have prevented it.

The test pilot was trying to demonstrate Alpha Floor, where the computers automatically command TOGA thrust.  Close to the ground, as measured by the radar altimeter, the computers assume you want to land (which is smart, otherwise it wouldn’t let you land).  In the accident, by the time the test pilot realized that the computers weren’t going to command TOGA thrust, he shoved the thrust levers forward (you can hear it in the video).  But it was too late for the engines to spool up and develop enough thrust to miss the trees.
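
For illustration only, here is a deliberately simplified sketch of the inhibit behavior described above; the angle-of-attack threshold and radio-altimeter height are assumed placeholders, not Airbus's actual flight-control logic or certified values.

# Simplified sketch of an "Alpha Floor inhibited near the ground" rule.
# Thresholds are invented for illustration; this is NOT Airbus logic.
ALPHA_FLOOR_AOA_DEG = 15.0      # assumed AOA threshold (placeholder)
INHIBIT_BELOW_RA_FT = 100.0     # assumed radio-altimeter inhibit height (placeholder)

def alpha_floor_commands_toga(aoa_deg: float, radio_alt_ft: float) -> bool:
    """TOGA is commanded automatically only while the protection is active;
    near the ground it is inhibited so the airplane can actually be landed."""
    if radio_alt_ft < INHIBIT_BELOW_RA_FT:
        return False  # protection inhibited: the system assumes a landing
    return aoa_deg >= ALPHA_FLOOR_AOA_DEG

print(alpha_floor_commands_toga(16.0, 30.0))   # False: no automatic TOGA near the ground
print(alpha_floor_commands_toga(16.0, 500.0))  # True: automatic TOGA at height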


3 hours ago, 1980Mooney said:

Perhaps a better way to say it is that forming opinions on what AI will do to/for aviation and pilots over the coming decade is like forming opinions on the capabilities of ForeFlight or Garmin Pilot based on an iPhone 3G or a BlackBerry 4. I suspect you were not impressed back then either.

But today there are testimonials of pilots using an iPad with ForeFlight to safely and successfully land IFR when their primary navigation or panel failed.

Moore’s Law tells us processing power will double every 2 years. We have seen it in smartphones and tablets. Avionics are no different. I suspect in 10 years AutoLand will be ho-hum and every avionics company will have a version. Maybe they will be throwing it in for free to motivate you to buy their panel.

You may not like where AI is going, but you need to get out of the way because it is coming regardless.

You are delusional if you think the FAA would permit actual progress in a reasonable time frame. 


2 hours ago, ilovecornfields said:

AI doesn’t have to be perfect, just better than humans. I think some of you are still failing to understand that.

My Tesla Model 3 Full Self Drive isn’t perfect.  But it is better than me on the interstate at night.    


17 minutes ago, Jerry 5TJ said:

My Tesla Model 3 Full Self Drive isn’t perfect.  But it is better than me on the interstate at night.    

Would you put your sleeping children in it, alone, for the self drive home? 


2 hours ago, Hank said:

Would you put your sleeping children in it, alone, for the self drive home? 

1 hour ago, Jerry 5TJ said:

No, but I would doze off briefly while it drives me home.  

Then you aren't ready to ride in a no-pilot, AI-flown airliner. Once the throttles on that thing go forward, there's no getting out until the door opens or the fuselage blows apart . . . .

There probably won't even be a seat for a human up in the pointy end, just a few server racks, with each engine throttle way out on the engine and the aileron servos out in the wing wired to the correct server(s). No need for all the controls to come all the way to the front of the plane.

Because deep down, you know you can wake up and take control of the Tesla by grabbing the wheel and canceling AutoPilot. Try that from Seat 32B in an airliner.


14 hours ago, ilovecornfields said:

My son wanted to go to AI camp this summer so I sent him a scholarship application. He used ChatGPT to fill out the application then added a couple of lines at the end about how he’d used ChatGPT to fill out the application. He got the scholarship.

Oh no..! If only I could get my college admissions essays back.

THIS MORNING, in the faculty senate (a good-citizenship but inane activity that is both necessary and maddening in the self-governance associated with academia), we got a corporate-looking, almost meaningless two-page THING from one of our administrators, full of business gobbledygook, and one of the faculty said it was clearly written by ChatGPT, in jest, because it had so many big words but practically no content.

So it was an insult made in jest, the new insult of the age: tell someone they sound like ChatGPT.


14 hours ago, 1980Mooney said:

"One more time, initially they went nose high, once the plane was fully stalled the nose was about on the horizon."  On the horizon?.... It was the middle of the night in IMC towering cumulonimbus.  There was no "horizon".  Not sure what point you are trying to make.

"But that assumes the people who programmed it can think of everything."  - That is why AI is taking over programming and testing in the future.  It can run through all possibilities more completely and quickly than humans.  

Uuuh, they did have the artificial horizon.

So the point is, if you are IFR, applying full aft stick and the AI shows the nose on the horizon, what are you going to do?

The real key was the altitude winding down, but unless you have stalled a swept-wing airplane, would you be thinking stall?


13 hours ago, ilovecornfields said:

Very true. Many more suicidal AIs than suicidal pilots.

AI doesn’t have to be perfect, just better than humans. I think some of you are still failing to understand that.

Would the AI opt to land on the river like Sully?

Artificial intelligence isn't some alien brain. It's just programmed code. There are plenty of situations in airplanes where you have to weigh risk X against risk Y, and it will have been impossible to pre-program that unknown scenario into the AI.

If "Russians" can hack our elections and companies and whatever else, they could certainly hack an airplane and pile-drive it right into a crowd of politicians. On second thought... yeah, let's get AI airplanes on board ASAP, please.


4 hours ago, JayMatt said:

Would the AI opt to land on the river like Sully?

Artificial intelligence isn't some alien brain. It's just programmed code. There are plenty of situations in airplanes where you have to weigh risk X against risk Y, and it will have been impossible to pre-program that unknown scenario into the AI.

If "Russians" can hack our elections and companies and whatever else, they could certainly hack an airplane and pile-drive it right into a crowd of politicians. On second thought... yeah, let's get AI airplanes on board ASAP, please.

It might have landed in the river. Or LGA. Probably wouldn’t have been programmed to head for the tallest building or the most populated area and as far as I can tell those who claim the Hudson River as their primary residence tend to not fill out their census forms (but I do hear concrete shoes are a fashion trend there). 

You all really think it’s easier to hack a computer in a moving airplane than to hijack one with human pilots? No one is saying AI is perfect or it’s ready to take over today. Why don’t we talk about something less controversial like LOP? Should I see what the chatbot has to say about that?


6 minutes ago, ilovecornfields said:

It might have landed in the river. Or LGA. Probably wouldn’t have been programmed to head for the tallest building or the most populated area and as far as I can tell those who claim the Hudson River as their primary residence tend to not fill out their census forms (but I do hear concrete shoes are a fashion trend there). 

You all really think it’s easier to hack a computer in a moving airplane than to hijack one with human pilots? No one is saying AI is perfect or it’s ready to take over today. Why don’t we talk about something less controversial like LOP? Should I see what the chatbot has to say about that?

Ha! This should give you guys plenty to talk about:

Running an aircraft engine lean of peak is generally considered to be better than running it rich of peak. It can reduce fuel consumption by up to 20%, and it can also extend the life of the engine. However, it is important to note that lean of peak operation can also be more difficult to manage, and it can also increase the risk of engine damage if it is not done properly.

The ideal mixture setting for an aircraft engine is just lean of peak. This means that the mixture is slightly richer than the air-fuel mixture that produces the maximum exhaust gas temperature (EGT). Lean of peak operation is more efficient because it allows the engine to burn more of the fuel-air mixture before it reaches the spark plug. This results in a lower EGT and less wear on the engine.

However, lean of peak operation can also be more difficult to manage. The mixture must be adjusted carefully to avoid running too lean, which can cause the engine to overheat and damage the cylinders. It is also important to monitor the EGT and CHT (cylinder head temperature) closely to make sure that they do not exceed the manufacturer's limits.

Overall, lean of peak operation is a good way to save fuel and extend the life of your aircraft engine. However, it is important to do it properly to avoid engine damage.


On 3/27/2023 at 10:18 AM, dzeleski said:

Everyone should really read this. ChatGPT and Bard are not really "AI", they are really NLPs and are VERY limited in what they can do and will happily answer incorrectly with full confidence.

Lots of software uses the term "AI" for making a decision when that's not really what AI is. Code is always making decisions; that's quite literally how code works.

We do not need AI to fly airplanes autonomously. In fact, it could be argued that a non-deterministic piece of software is not the correct tool for flight. Well-defined, deterministic, tested code paths are ideal; it's essentially impossible to test AI because you can get a different result every time it encounters a given problem. With deterministic code we can test inputs against expected outputs. Rockets have been flying completely autonomously to space using deterministic code since basically the inception of space flight. If something happens, do this; from this list of airports, find me the nearest one with a runway of X length and a GPS approach; if hydraulic system A fails, follow these steps to isolate it; etc. Controlled inputs == controlled outputs. This requires significant redundancy, with an odd number of systems to vote out out-of-family values, but it's all possible with current, existing tech.
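
A minimal sketch of that kind of deterministic rule, with invented airport data and thresholds (nothing here comes from any certified system): picking the nearest suitable diversion field is just a filter and a sort, so the same inputs always produce the same output.

# Deterministic "nearest suitable airport" rule, in the spirit of the post above.
# All data and limits are made up for illustration.
from dataclasses import dataclass

@dataclass
class Airport:
    ident: str
    distance_nm: float
    runway_length_ft: int
    has_gps_approach: bool

def nearest_suitable(airports: list[Airport], min_runway_ft: int = 5000) -> Airport | None:
    """Filter on runway length and approach availability, then take the closest."""
    candidates = [a for a in airports
                  if a.runway_length_ft >= min_runway_ft and a.has_gps_approach]
    return min(candidates, key=lambda a: a.distance_nm, default=None)

fields = [Airport("KAAA", 12.0, 4000, True),
          Airport("KBBB", 20.0, 6500, True),
          Airport("KCCC", 15.0, 7000, False)]
print(nearest_suitable(fields).ident)  # always "KBBB" for these inputs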

https://theaircurrent.com/technology/emergency-autoland-puts-garmin-on-the-bleeding-edge-of-autonomous-flying/

Not Artificial Intelligence

While on its surface, EA looks a lot like the work of artificial intelligence. “We could’ve done artificial intelligence, we could’ve done computer vision,” said Kilgore of the patented system.

It’s not. In 2019, certification of an AI-driven system isn’t possible. Neither the FAA nor any other regulator has yet created a framework for certifying so-called non-deterministic systems. In technical terms, non-deterministic refers to the inability to objectively predict an outcome. “I think [AI] has its use,” said Kilgore. “But as far as certifying it, I don’t know that you’re going to find a good use case where you can certify until you can explain what it’s doing.”


EA is, however, entirely deterministic. At every point, Garmin’s algorithm knows why it is making the decisions it does, given the inputs. “In this case, we can go back and we can understand exactly what it’s going to do and it’s going to be repeatable,” said Kilgore. “We tried not to overcomplicate the problem for what it’s intended to do.”

And what it’s intended to do is to get a healthy airplane with an ailing pilot out of harm’s way. It’s not ready to be a fully automated system in normal operations, and those challenges — particularly related to the interaction of a healthy pilot and their ailing airplane — have yet to be solved, but it is one big step in that direction.

Please explain, in as much technical detail as possible, what is non-deterministic about machine learning systems. In particular, where does the non-determinism originate?


Can an AI system be validated? It seems it can change its mind. One aspect of software validation is that you get the expected output every time. How could you prove that an AI system is reliable when it can change its mind?
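
For contrast, here is a rough sketch of what "expected output every time" looks like for deterministic code, reusing the hypothetical nearest_suitable() rule from the earlier sketch: run it repeatedly with the same inputs and assert the answer never changes, something you cannot promise for a model that samples or retrains.

# Validation sketch for the hypothetical deterministic rule sketched earlier.
def test_repeatable_diversion_choice():
    fields = [Airport("KAAA", 12.0, 4000, True),
              Airport("KBBB", 20.0, 6500, True)]
    results = {nearest_suitable(fields).ident for _ in range(1000)}
    assert results == {"KBBB"}  # same inputs, same answer, every run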

It seems there is a lot of confusion between a traditional software system, machine learning, and AI. I'm not an expert on AI or machine learning, even though I just installed a machine learning system on my work computer. I try to avoid them because the licensing fees are quite high for what they do.


I used to think pilotless aircraft would be 100 years out, but after seeing our son not even want to go get his driver's license on the day of his birthday, it made me pause and realize this generation of children would much rather keep their heads buried in a phone if they could have a car that drove them around without their input. They were already raised with parents driving them around, so they don't see much difference between the two, and it's not a far jump to see that, especially if an airline ticket were two to three hundred dollars cheaper, the new generation would gladly pay that for the cost savings alone. Once our generation, which grew up watching computer systems struggle to be coded and suffer low reliability, dies out, the next generation will not have those concerns.

Even right now, the Airbus A320 has a speed window of +/- 20 knots, which wouldn't even meet PTS standards. It will commonly cross an altitude restriction at +200 feet, again a failure if you were hand flying this on your checkride. The really sad part is that sometimes the computer will try to stay on the calculated vertical path to the point that it flies even faster than its +20 kt bracket. When guys see this in real life, not in sim world, and get frustrated, I say relax, this is job security right here. I asked my instructor when I was going through captain upgrade: "If you were coming in on an arrival with a fed jumpseater on board and saw the airplane was going to be 200 ft high at the next waypoint's hard altitude, would you disengage the autopilot to get within 100 ft of the altitude, or would you keep the automation on?" He replied he would keep the automation on, as the FAA certified the airplane with these issues. I think the next generation of software and a faster processor would eliminate these issues, but the concern has not overcome the cost of making the change. Not to mention the times a reboot is required to get a system running again. The reliability just isn't there yet for me.

Reminds me of the guy who was a huge fan of Tesla's Autopilot. So much so that he would watch movies while the car drove. Well, one day a semi tractor-trailer pulled out into the intersection, and the Tesla saw it was clear to keep going, but unfortunately the car's cabin was too tall to pass under the trailer, which ripped the top off the car and decapitated the driver. Computer systems are great until there is a situation that a programmer has not thought of and the code doesn't have a contingency for. I'm sure they have since added code or changed the camera angle to account for the height of the car's roof.
 

https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk

 

Edited by Will.iam
