Are pilots going to be replaced by AI?


Recommended Posts

4 hours ago, 1980Mooney said:

AF447 crash was caused by the pilots.  Yes, the pitot froze for 1 minute, causing the autopilot to disconnect.  But it was the pilots who, rather than maintaining attitude and power while the airspeed issue cleared and the autopilot re-engaged, put the plane into an 8,000 fpm climb and then into a stall at 38,000 ft.  The pilot and co-pilot were at odds and making simultaneous and contradictory inputs.

Sort of

They were holding attitude and power.  But swept-wing aircraft do not stall like straight-wing aircraft: the nose does not drop, it stays about on the horizon.  So the attitude indicator (AI) showed level, but the VVI was showing a massive descent.  Yet level attitude with climb power should have meant increasing airspeed or a climb, and they did not notice that.

I still recall doing a full aft stick stall in the T-38.

The two pilots were making contradictory inputs, but in the Airbus only one of them has control at a time.  And the sticks are not interconnected, so one pilot thought he had control when he did not.  And he did not know that the other pilot, who did have control, was making the opposite input.


1 hour ago, Jerry 5TJ said:

AF447 wasn’t handled at all well by the three human pilots.  Current AI technology would have done better.  

Sully’s landing in the Hudson was assisted by the Airbus flight computers in Normal Law, arguably a form of AI.  

United 232, well, that was a crash with a miraculous outcome.  I doubt AI could have helped.  

The captain, once he got to the cockpit, DID know what to do, but it was too late by that time.

How was he assisted over what would have happened in a Boeing?


4 minutes ago, Pinecone said:

Sort of

They were holding attitude and power.  But swept-wing aircraft do not stall like straight-wing aircraft: the nose does not drop, it stays about on the horizon.  So the attitude indicator (AI) showed level, but the VVI was showing a massive descent.  Yet level attitude with climb power should have meant increasing airspeed or a climb, and they did not notice that.

I still recall doing a full aft stick stall in the T-38.

The two pilots were making contradictory inputs, but in the Airbus only one of them has control at a time.  And the sticks are not interconnected, so one pilot thought he had control when he did not.  And he did not know that the other pilot, who did have control, was making the opposite input.

The Airbus was actually much worse: it was averaging the control inputs from the left and right sticks. I think having full control on either side would not have resulted in a catastrophic outcome. Averaging was a bad decision made by non-pilot software developers, and somehow it made it through all the testing.


6 hours ago, Pinecone said:

Hmm, so why do systems have bugs????

And, you are assuming that the AI system doing the testing does not have any issues. :D

Lots of optimism from some, but after 40 years in IT I can say with confidence that all software systems with any degree of complexity have bugs.  Or, as we liked to say, "undocumented features".


1 hour ago, alexz said:

The Airbus was actually much worse: it was averaging the control inputs from the left and right sticks. I think having full control on either side would not have resulted in a catastrophic outcome. Averaging was a bad decision made by non-pilot software developers, and somehow it made it through all the testing.

Not according to the reports and analysis I read.  Supposedly there is a button that, when pushed, gives control priority to that stick.

If it averaged, they would not have crashed.  One pilot was full stick aft (in control) and one was full stick forward; the average would have been a neutral stick.
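The arithmetic behind that point can be sketched in a few lines of Python. The sign convention here is purely illustrative, not the actual Airbus control law:

```python
def average_inputs(left: float, right: float) -> float:
    """Naive average of two sidestick pitch commands.

    Sign convention (hypothetical): +1.0 = full aft (nose up),
    -1.0 = full forward (nose down), 0.0 = neutral.
    """
    return (left + right) / 2.0

# One pilot holding full aft, the other full forward:
print(average_inputs(+1.0, -1.0))  # 0.0 -- a neutral stick, not full aft
```

If the system truly averaged, two opposed full deflections would cancel to neutral, which is the poster's argument against the averaging claim.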


14 minutes ago, 1980Mooney said:

Absolutely false.  From the Final Report:

"Following the autopilot disconnection, the PF very quickly applied nose-up sidestick inputs. The PF’s inputs may be classified as abrupt and excessive. The excessive amplitude of these inputs made them unsuitable and incompatible with the recommended aeroplane handling practices for high altitude flight. "

He thought he was in an overspeed condition for some reason - he did not scan all his inputs and put it together.

That is how it started.  But it reached a steady state with full aft stick control.

At least from the reports and analysis I have seen.


Google Bard seemed like it would do just fine handling a dual engine failure:

If I were flying an A320 over the Hudson River and both engines failed out of gliding distance to any airport, I would follow the same steps that Captain Chesley Sullenberger followed on January 15, 2009.

  1. I would try to restart the engines.
  2. If the engines could not be restarted, I would calmly call for help and declare an emergency.
  3. I would then try to glide the plane to the nearest available landing spot.
  4. If I could not make it to a landing spot, I would ditch the plane in the water and instruct the passengers to evacuate.

I would also try to stay calm and focused in order to make the best decisions possible in a difficult situation.

Although if you believe this article, it might have done even better and glided back to LGA instead of having to ditch:

https://www.cbsnews.com/news/ntsb-sully-could-have-made-it-back-to-laguardia/


7 hours ago, alexz said:

For those that think we have an age of AI here is the article from one of the smartest gentleman  involved in computer technology: https://rodneybrooks.com/what-will-transformers-transform/

Everyone should really read this. ChatGPT and Bard are not really "AI"; they are natural-language processors (NLPs), are VERY limited in what they can do, and will happily answer incorrectly with full confidence.

Lots of software uses the term AI for making a decision when that's not really what AI is. Code is always making decisions; that's quite literally how code works.

We do not need AI to fly airplanes autonomously. In fact, it could be argued that a non-deterministic piece of software is not the correct application for flight. Well-defined, deterministic, tested code paths are ideal. It is essentially impossible to test AI because you can get a different result every time it comes up against a given problem; with deterministic code we can test inputs for expected outputs. Rockets have been flying completely autonomously to space using deterministic code since basically the inception of space flight. If something happens, do this; from this list of airports, find the nearest one with a runway of X length and a GPS approach; if hydraulic system A fails, follow these steps to isolate it; and so on. Controlled inputs == controlled outputs. This requires significant redundancy, with an odd number of systems to vote out out-of-family values, but it is all possible with current, existing tech.
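The kind of deterministic decision path described above can be sketched in Python. The airport fields, names, and thresholds here are hypothetical, but the point stands: the same inputs always produce the same output, which is what makes it testable, and a median vote over three redundant sensors discards one out-of-family value:

```python
from dataclasses import dataclass

@dataclass
class Airport:
    ident: str
    distance_nm: float
    runway_ft: int
    has_gps_approach: bool

def nearest_suitable(airports, min_runway_ft):
    """Deterministic divert logic: nearest airport with a long-enough
    runway and a GPS approach. Same inputs, same answer, every time."""
    candidates = [a for a in airports
                  if a.runway_ft >= min_runway_ft and a.has_gps_approach]
    return min(candidates, key=lambda a: a.distance_nm, default=None)

def vote(a, b, c):
    """Triple-redundant sensor vote: the median ignores a single
    out-of-family value without needing to know which unit failed."""
    return sorted([a, b, c])[1]

fleet = [
    Airport("KABC", 12.0, 3500, True),   # too short
    Airport("KDEF", 25.0, 6000, True),   # suitable, nearest of the suitable
    Airport("KGHI", 18.0, 7000, False),  # no GPS approach
]
print(nearest_suitable(fleet, 5000).ident)  # KDEF
print(vote(250.0, 251.0, 40.0))             # 250.0 -- bad airspeed ignored
```

Both functions can be exhaustively tested against expected outputs, which is exactly the certification argument the post is making.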

https://theaircurrent.com/technology/emergency-autoland-puts-garmin-on-the-bleeding-edge-of-autonomous-flying/

Not Artificial Intelligence

While on its surface, EA looks a lot like the work of artificial intelligence. “We could’ve done artificial intelligence, we could’ve done computer vision,” said Kilgore of the patented system.

It’s not. In 2019, certification of an AI-driven system isn’t possible. Neither the FAA nor any other regulator has yet created a framework for certifying so-called non-deterministic systems. In technical terms, non-deterministic refers to the inability to objectively predict an outcome. “I think [AI] has its use,” said Kilgore. “But as far as certifying it, I don’t know that you’re going to find a good use case where you can certify until you can explain what it’s doing.”


EA is, however, entirely deterministic. At every point, Garmin’s algorithm knows why it is making the decisions it is, given the inputs. “In this case, we can go back and we can understand exactly what it’s going to do and it’s going to be repeatable,” said Kilgore. “We tried not to overcomplicate the problem for what it’s intended to do.”

And what it’s intended to do is to get a healthy airplane with an ailing pilot out of harm’s way. It’s not ready to be a fully automated system in normal operations, and those challenges — particularly related to the interaction of a healthy pilot and their ailing airplane — have yet to be solved, but it is one big step in that direction.


14 minutes ago, dzeleski said:

Everyone should really read this. ChatGPT and Bard are not really "AI"; they are natural-language processors (NLPs), are VERY limited in what they can do, and will happily answer incorrectly with full confidence.

Well said - you said what I and a lot of us at my workplace have been saying and noticing: ChatGPT sounds good, but if you look closely it sometimes makes spectacular errors with full confidence.

Interestingly, at my university I have been recruited to be on our AI steering committee - what to do about ChatGPT and the like, with students thinking they can hand in a ChatGPT essay and no one will notice.  But also how to usefully teach students to use this newly coming-of-age technology, just as we might use a Google search, but critically.

Sometimes reading ChatGPT responses - often smoothly written in terms of English yet nondescript in substance - feels like listening to a well-trained, slick politician schooled in the art of yapping convincingly while saying nothing.

Can't we please just replace the politicians with AI instead of replacing the pilots and surgeons with AI?

 


24 minutes ago, aviatoreb said:

Sometimes reading ChatGPT responses - often smoothly written in terms of English yet nondescript in substance - feels like listening to a well-trained, slick politician schooled in the art of yapping convincingly while saying nothing.

My son wanted to go to AI camp this summer so I sent him a scholarship application. He used ChatGPT to fill out the application then added a couple of lines at the end about how he’d used ChatGPT to fill out the application. He got the scholarship.


1 hour ago, ilovecornfields said:

Google Bard seemed like it would do just fine handling a dual engine failure:

If I were flying an A320 over the Hudson River and both engines failed out of gliding distance to any airport, I would follow the same steps that Captain Chesley Sullenberger followed on January 15, 2009.

  1. I would try to restart the engines.
  2. If the engines could not be restarted, I would calmly call for help and declare an emergency.
  3. I would then try to glide the plane to the nearest available landing spot.
  4. If I could not make it to a landing spot, I would ditch the plane in the water and instruct the passengers to evacuate.

I would also try to stay calm and focused in order to make the best decisions possible in a difficult situation.

Although if you believe this article, it might have done even better and glided back to LGA instead of having to ditch:

https://www.cbsnews.com/news/ntsb-sully-could-have-made-it-back-to-laguardia/

Wow, AI decided that the way it was done was the right way.  Which is not news.

But, if this had not occurred before, would the AI be able to handle it?


1 hour ago, 1980Mooney said:

"As for AI "solving" the issue, all it said was to adjust the pitch attitude to avoid stall (nose high)"

Just that simple recognition by AI was all it would have taken.  The pilot never put the nose down on AF447 during the entire 5 minutes as it slammed into the ocean.

Let's not forget that AI in actual application will be monitoring all the aircraft inputs as well as external data such as GPS.  It would have recognized that the trajectory and performance of the plane had not changed - only that the pitot tube had frozen over. It would have recognized far faster than a human that nothing was wrong with the plane except the melting ice blocking the pitot tube.

AI would have put a message on the flight deck screens "Don't change attitude or throttles."

And if the stupid human behind the side stick started pulling back it would say "Dave - I can't let you do that".....  ;)

One more time: initially they went nose high; once the plane was fully stalled, the nose was about on the horizon.  Swept-wing aircraft do not behave in a stall like straight-wing aircraft.

And you are assuming that the AI would know what to do in every circumstance.  But that assumes the people who programmed it could think of everything.

Oh, and yes, the Airbus does that in Normal Law, and this contributed to the A320 crash at the Habsheim airshow, where the flight control system would not allow the pilot to eke out an extra 1/10 of a degree of AOA to avoid hitting the ground.   Sorry Dave, I cannot let you pull the nose any higher, even if doing so means we will hit the ground.


I think y'all are giving AI more credit than it deserves. I look at what these chatbots produce, and all I see is a Google search formatted into conversational English.

Of course it has to have an algorithm to deal with conflicting data. Does it somehow just go by the amount of data on one side of the argument versus the other? There has been a lot written recently about political bias in chatbot responses. Is this because of an imbalance in the source data, or because of its programmed bias?

They would like to convince you that it is pure intelligence, but I have a hard time believing that. 


2 hours ago, dzeleski said:

Everyone should really read this. ChatGPT and Bard are not really "AI"; they are natural-language processors (NLPs), are VERY limited in what they can do, and will happily answer incorrectly with full confidence.

Lots of software uses the term AI for making a decision when that's not really what AI is. Code is always making decisions; that's quite literally how code works.

We do not need AI to fly airplanes autonomously. In fact, it could be argued that a non-deterministic piece of software is not the correct application for flight. Well-defined, deterministic, tested code paths are ideal. It is essentially impossible to test AI because you can get a different result every time it comes up against a given problem; with deterministic code we can test inputs for expected outputs. Rockets have been flying completely autonomously to space using deterministic code since basically the inception of space flight. If something happens, do this; from this list of airports, find the nearest one with a runway of X length and a GPS approach; if hydraulic system A fails, follow these steps to isolate it; and so on. Controlled inputs == controlled outputs. This requires significant redundancy, with an odd number of systems to vote out out-of-family values, but it is all possible with current, existing tech.

https://theaircurrent.com/technology/emergency-autoland-puts-garmin-on-the-bleeding-edge-of-autonomous-flying/

Not Artificial Intelligence

While on its surface, EA looks a lot like the work of artificial intelligence. “We could’ve done artificial intelligence, we could’ve done computer vision,” said Kilgore of the patented system.

It’s not. In 2019, certification of an AI-driven system isn’t possible. Neither the FAA nor any other regulator has yet created a framework for certifying so-called non-deterministic systems. In technical terms, non-deterministic refers to the inability to objectively predict an outcome. “I think [AI] has its use,” said Kilgore. “But as far as certifying it, I don’t know that you’re going to find a good use case where you can certify until you can explain what it’s doing.”


EA is, however, entirely deterministic. At every point, Garmin’s algorithm knows why it is making the decisions it is, given the inputs. “In this case, we can go back and we can understand exactly what it’s going to do and it’s going to be repeatable,” said Kilgore. “We tried not to overcomplicate the problem for what it’s intended to do.”

And what it’s intended to do is to get a healthy airplane with an ailing pilot out of harm’s way. It’s not ready to be a fully automated system in normal operations, and those challenges — particularly related to the interaction of a healthy pilot and their ailing airplane — have yet to be solved, but it is one big step in that direction.

That's a double standard; no one complains when I make stupid, inexplicable decisions :)

On a more serious note: I agree with that. I think well defined decision paths will be in use.


21 minutes ago, 1980Mooney said:

Perhaps a better way to say it is that forming opinions on what AI will do to/for aviation and pilots over the coming decade is like forming opinions on the capabilities of ForeFlight or Garmin Pilot based upon an iPhone 3G or a Blackberry 4.  I suspect you were not impressed back then either.

But today there are testimonials of pilots using an iPad with ForeFlight to safely and successfully land IFR when their primary navigation or panel failed.

Moore's Law tells us processing power will double every 2 years. We have seen it in smartphones and tablets. Avionics are no different. I suspect in 10 years AutoLand will be ho-hum and every avionics company will have a version - maybe they will be throwing it in for free to motivate you to buy their panel.

You may not like where AI is going, but you need to get out of the way because it is coming regardless.

FYI Moore died yesterday.


1 hour ago, 1980Mooney said:

"One more time, initially they went nose high, once the plane was fully stalled the nose was about on the horizon."  On the horizon?.... It was the middle of the night in IMC towering cumulonimbus.  There was no "horizon".  Not sure what point you are trying to make.

"But that assumes the people who programmed it can think of everything."  - That is why AI is taking over programming and testing in the future.  It can run through all possibilities more completely and quickly than humans.  

That is not a correct statement. None of the currently developed "AI" systems can run through all possibilities. All they currently represent is a bunch of nonlinear equations with coefficients and step functions adjusted by training. The chance that this covers all possibilities is close to nil. AI can help in some tasks where a human expert is still involved, but it won't be running unsupervised for a long time to come. The Moore "hypothesis" stopped being true in high-performance computing well over 10 years ago. Chips are not becoming faster; they are becoming larger, but there is a whole bunch of tasks that would benefit immensely from much faster chips.


By the way, here is an exchange where "AI" fails badly and then doubles down on the mistake:

"describe Boston class B airspace"

Chat GPT:

Boston Class B airspace is an area of controlled airspace that surrounds the Boston Logan International Airport (KBOS) and extends from the surface up to 10,000 feet mean sea level (MSL) in a radius of approximately 30 nautical miles from the airport. The airspace is designed to provide a safe and efficient flow of air traffic into and out of the airport, as well as for transit through the airspace.

Within the Boston Class B airspace, all aircraft must obtain clearance from air traffic control (ATC) before entering. Pilots must also have a two-way radio communication with ATC and must operate with a transponder that is capable of automatic altitude reporting.

The airspace is divided into several layers, each with its own altitude limits and communication requirements. The innermost layer, called the "Core," extends from the surface to 3,000 feet MSL and has the strictest communication requirements. The next layer, called the "Bravo," extends from 3,000 feet MSL to 10,000 feet MSL and has less strict communication requirements.

To operate within the Boston Class B airspace, pilots must receive proper training and have a thorough understanding of the airspace requirements and procedures. It is important for pilots to closely monitor their altitude and location to ensure compliance with the airspace restrictions and to maintain safety for all aircraft operating within the airspace.

"where did you get 10000 feet MSL value ?"

I apologize for any confusion. Upon double-checking, the top altitude of Boston Class B airspace is actually 12,000 feet MSL, not 10,000 feet MSL as I mentioned earlier. The airspace extends from the surface up to 12,000 feet MSL within a 30 nautical mile radius of Boston Logan International Airport.

I obtained this information from the FAA's Aeronautical Information Manual (AIM) section 3-2-3, which provides details on the characteristics and requirements of Class B airspace. Thank you for bringing this to my attention, and please let me know if you have any further questions.

 


10 minutes ago, alexz said:

Curious, can you provide more details on it? Thank you.

https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html

Moore's law isn't about the speed of semiconductors, it is about the transistor count.
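As a back-of-envelope illustration of that distinction, the doubling in Moore's law is in transistor count, and a 2-year doubling period compounds to roughly 32x over a decade. The function and starting count below are purely illustrative; the historical doubling period has varied between roughly 18 and 24 months:

```python
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    """Transistor count projected under Moore's-law style doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# A decade at a 2-year doubling period is five doublings, i.e. 32x:
print(projected_transistors(1.0, 10.0))  # 32.0
```

Note this says nothing about clock speed, which, as pointed out above, has largely plateaued.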

Couldn't find any interior shots, but the mural I'm talking about will be right in front of you after you enter from the walking bridge from the parking structures.

Manufacturing Day: reinforcing our commitment to sustainability (English Only) | Intel Latinoamérica Newsroom

