
Backup, disaster recovery, Level of redundancy


Yetti


As people upgrade things in their planes, we should probably have this discussion. I have been doing backup plans, business continuity plans, redundancy plans, and system design for 30 years, since the days when you could stretch a tape. Living on the coast where hurricanes come to visit, and working in critical infrastructure where you have to bug out control centers, has taught me a few things. Living through a couple of storms where things went really wrong has taught me some more. My best rule for the moment of crisis was told to me by an old pipeliner: "If you hear the explosion, you are OK." Think about that for a moment.

The best way to approach your redundancy plan is to create scenarios: GPS signal lost, GPS system down, one screen goes down, alternator fails.

When we talk about layers of redundancy, it means one layer is not dependent on another layer to survive.

So let's say the primary nav in the plane fails. I have a tablet with GPS, a Stratux with GPS, and a phone with GPS. I have an old handheld GPS with all the airports loaded into it. I have pilotage, old charts, a compass, a handheld with nav, and ATC with vectors. And I carry one or two battery packs to power the devices. Those are layers, since they are independent of the plane's power source.
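To put rough numbers on why independence matters, here is a back-of-the-envelope sketch. The per-flight failure probabilities are invented purely for illustration; the point is that the layers only multiply down like this if they don't share a dependency such as ship's power.

```python
# Rough arithmetic on independent layers. The probabilities below are made-up
# illustrative numbers, not measured failure rates.
layers = {
    "panel GPS":       0.01,
    "tablet GPS":      0.02,
    "phone GPS":       0.02,
    "handheld GPS":    0.02,
    "pilotage/charts": 0.05,
}

p_all_fail = 1.0
for p in layers.values():
    p_all_fail *= p          # independence is what makes this multiply down
print(f"P(every independent layer fails) ~ {p_all_fail:.1e}")

# If every device actually runs off ship's power, they are one layer, not five:
p_ships_power = 0.01
print(f"P(one shared power failure takes out everything) = {p_ships_power:.1e}")
```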

If the comms fail, I have a handheld with the headset adapter lying on the back seat. I have a transponder, and I kind of remember the light gun signals.

Next is working through these scenarios in your head so that, once you know you are OK, you can recall them and not freak out.

The best design theory I ever read came from Netflix. When they moved to AWS they created Chaos Monkey, which would randomly shut their services down, and the show had to stay on. The philosophy was that since things are going to fail anyway, why not design for failure?

https://netflixtechblog.com/lessons-netflix-learned-from-the-aws-outage-deefe5fd0c04
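For anyone curious what a chaos-monkey-style test looks like in miniature, here is a toy sketch (the names and the restart-every-round behavior are my own simplifications, not Netflix's implementation): a "monkey" kills one worker per round and a health check confirms the remaining workers still serve requests.

```python
import random

workers = {f"worker-{i}": True for i in range(3)}   # True = healthy

def handle_request():
    alive = [name for name, ok in workers.items() if ok]
    if not alive:
        raise RuntimeError("total outage: no healthy workers left")
    return random.choice(alive)                      # any healthy worker can serve

random.seed(1)
for round_no in range(5):
    victim = random.choice(list(workers))            # the monkey strikes at random
    workers[victim] = False
    try:
        print(f"round {round_no}: killed {victim}, request served by {handle_request()}")
    except RuntimeError as err:
        print(f"round {round_no}: {err}")
    workers[victim] = True                           # assume the instance restarts before next round
```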

During the second AWS outage we had a system doing $1.2 billion a year in revenue running in AWS; because we had designed it properly, we had no interruption of service through that outage.

Most systems are designed to succeed when that is not their natural state. Hard drives fail, memory fails. When was the last time you backed up the key files on your computer and phone?
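On the "when did you last back up your key files" point, even something this simple beats nothing. The paths are hypothetical placeholders; adjust to taste, and remember that a backup you never test restoring is just a hope.

```python
# Minimal dated-copy backup sketch. Paths are hypothetical placeholders.
import shutil, datetime, pathlib

key_files = [
    pathlib.Path("~/Documents/logbook.xlsx").expanduser(),
    pathlib.Path("~/Documents/aircraft_records").expanduser(),
]
dest = pathlib.Path("~/Backups").expanduser() / datetime.date.today().isoformat()
dest.mkdir(parents=True, exist_ok=True)

for src in key_files:
    if src.is_dir():
        shutil.copytree(src, dest / src.name, dirs_exist_ok=True)
    elif src.is_file():
        shutil.copy2(src, dest / src.name)
    else:
        print(f"missing: {src}")
```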

 

 

 


I have also done some of that thinking.

If my GTN650 dies I have: tablet, KNS80, radar vectors using my #2 comm.

If GPS dies I have my KNS80 and radar vectors.

If I have an electrical failure I have dual G5s and a tablet on battery.

If a G5 fails, I have the other G5 and a TC.

I'd like to put in an AV20S to act as my clock normally and as another AI when IMC. That would give me three attitude sources (G5, AV20S, TC), making it easier to pick the odd man out and decide which one I believe.
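The "odd man out" cross-check is easy to sketch in code. This is just an illustration of the voting logic with made-up bank readings and an arbitrary 10-degree agreement threshold, not anything the instruments themselves do:

```python
def odd_man_out(readings, tolerance_deg=10.0):
    """Return the source that disagrees while the other two agree, else None."""
    names = list(readings)
    for suspect in names:
        others = [readings[n] for n in names if n != suspect]
        # the other two agree with each other but not with the suspect -> vote it out
        if abs(others[0] - others[1]) <= tolerance_deg and \
           all(abs(readings[suspect] - v) > tolerance_deg for v in others):
            return suspect
    return None   # no clear single outlier -> keep cross-checking

bank_angles = {"G5": 1.0, "AV20S": 2.0, "TC": 30.0}   # degrees, invented values
print(odd_man_out(bank_angles))                        # -> "TC"
```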


I'm an SRE for a major tech company; the nature of my job is to keep things running and reliable even when things break. I caught a lot of flack for carrying two iPads, my cell phone, a Sentry, a dual Bluetooth GPS adapter, a portable radio, backup batteries, and flashlights with me in my flight bag. My thought was that if my instructor had a stroke or something, I had to get this thing back on the ground somehow. To this day I still carry all that stuff.

Anyone that isn't planning for everything to fail is unprepared. I've tried to think about every single item that can break and what I would use to replace it to get on the ground. I'm slowly removing the old avionics in my plane and replacing them with solid-state tech that has redundancy built in. I don't trust vacuum instruments and want them out yesterday, especially after I slowly watched my AI fail on a trip back home two weeks ago. I had my avionics shop yank it out and replace it with a G5, no questions asked.

 


I figure that in VFR conditions, day or night, I don't need anything but my eyeballs out the window to get safely on the ground. If we're talking about completing the trip, getting to my destination, getting to my home field, etc., that's a different requirement. But if the worst-case scenario happens, I just need to get on the ground safely, and VFR conditions are my best option. To that end, the best backup/redundancy/disaster-recovery kit is the speed and range of my Mooney. This is one of the advantages of living in the West: even on the worst weather days, I typically have the speed and range to go find VFR conditions if everything in my cockpit goes to shit.

But just to make things even easier...

  • iPad w/GPS/SV
  • iPhone w/GPS
  • Battery pack for both above
  • Redundant AIs, both electric, one with extra battery
  • 2 Nav/Coms
  • Hot Prop
  • LED lights all round (they run forever on battery power)
  • Portable O2

Notably absent is a handheld radio... I should probably get one.


4 hours ago, Yetti said:

The best design theory I ever read came from Netflix. When they moved to AWS they created Chaos Monkey, which would randomly shut their services down, and the show had to stay on. The philosophy was that since things are going to fail anyway, why not design for failure?

In aviation we call the 'chaos monkey' a 'Flight Instructor'.

There's nothing less reliable than a plane with a flight instructor in the right seat.


5 hours ago, Yetti said:

The best design theory I ever read came from Netflix. When they moved to AWS they created Chaos Monkey, which would randomly shut their services down, and the show had to stay on. The philosophy was that since things are going to fail anyway, why not design for failure?

https://netflixtechblog.com/lessons-netflix-learned-from-the-aws-outage-deefe5fd0c04

During the second AWS outage we had a system doing $1.2 billion a year in revenue running in AWS; because we had designed it properly, we had no interruption of service through that outage.

Most systems are designed to succeed when that is not their natural state. Hard drives fail, memory fails. When was the last time you backed up the key files on your computer and phone?

Most professional server farms do a pretty good job with this sort of thing.   Years ago I was a principal at a company that licensed IP to a manufacturer and we set up a server to provide license codes based on the options that the ultimate customer selected, and could serve new codes if the customer wanted to upgrade features later.   For all the usual reasons we needed a reliable server, and I toured a local data center to check them out as a candidate.

They had two backup generators, one on each end of the building, and either could power the entire building. They had two broadband fiber bundles from two different providers, coming into the building from two different directions. They did a good job with server redundancy, storage redundancy, power supply redundancy, etc., etc. They'd really done a good job of putting together a high-reliability system.

And then I saw the rack where the two fiber systems came together.   They came together in a single open cabinet with a single power supply, and the cabinet was directly under a sprinkler.

There's always a Swiss Cheese failure model that can be applied, but it does help if you try to minimize the number and size of the holes as well as manage the number of layers (slices).   I'm a big fan of redundancy, but as you say it needs to be planned and managed.
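A little arithmetic shows why both the number of layers and the size of the holes matter. The probabilities here are invented: an accident needs a hole in every slice to line up, so the layers multiply, and a layer with a guaranteed hole (the open cabinet under the sprinkler) adds nothing.

```python
# Toy Swiss-cheese arithmetic with invented per-layer "hole" probabilities.
def p_all_holes_align(hole_probs):
    p = 1.0
    for hp in hole_probs:
        p *= hp
    return p

print(p_all_holes_align([0.1, 0.1, 0.1]))        # three mediocre layers  -> 1e-3
print(p_all_holes_align([0.01, 0.01]))           # two good layers         -> 1e-4
print(p_all_holes_align([0.1, 0.1, 0.1, 1.0]))   # plus a guaranteed hole  -> still 1e-3
```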


18 minutes ago, EricJ said:

And then I saw the rack where the two fiber systems came together.   They came together in a single open cabinet with a single power supply, and the cabinet was directly under a sprinkler.

I had a datacenter with a dual redundant fiber system, dual power, and dual routers/switches, but the final fibers ran together for a couple of floor tiles. One day a local tech dropped a floor tile and severed both feeds. You can build all the redundancy you want; you still have to design for losing the entire datacenter. It happens, and details matter.

 

Now for an aviation example:  

We were long taught to look for vertical speed on two instruments before raising the gear, or on IFR climb out.   

With glass panels, the altimeter and vertical speed are not separate instruments as they were in the steam gauge era. The cross-check needs to be between the glass panel ADC and the backup altimeter. The system details matter.


Avionics used to be simple. Every radio or instrument performed essentially one function and there was minimal interaction between devices. Failures were pretty easy to detect, and their effects reasonably obvious. But, as my recent GTX 345 AHRS failure has shown, that's no longer the case. The AHRS acted squirrelly in both pitch and bank without any warning indication on the iPad running ForeFlight. Garmin Pilot did put up a DEGRADED warning long after I would have lost control of the airplane. Garmin tells me the AHRS system in the GTX 345 is the same used in many of their primary flight displays. 

I started working on a simplified Failure Modes and Effects Analysis (FMEA) for my setup. This requires several steps:

1. Identify all possible failure modes

2. Identify how each failure mode can be detected

3. Identify the effects of each failure mode

4. Design a mitigation plan for each failure mode

The problem I had with doing this was identifying the failure modes for complex equipment, and knowing how to detect that a failure has occurred. Consider something complex like a PFD which displays primary instruments and navigation information. It's easy to know you have a problem if you get a big red X. But will all failures cause this indication? Who knows? Without understanding the internals, it's hard to tell what failures there might be or how you would come to find out that something has failed.
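Even with limited visibility into the boxes, the worksheet itself can be kept simple. Here is a sketch of an FMEA table as plain data, with hypothetical rows for a generic PFD; the failure modes, detection methods, and mitigations are examples I made up, not any vendor's documented behavior.

```python
# Stripped-down FMEA worksheet as data. All entries are hypothetical examples.
fmea = [
    {"item": "PFD display", "failure": "screen goes blank",
     "detect": "obvious (dark screen)",
     "effect": "lose attitude/airspeed/altitude display",
     "mitigation": "backup AI plus pitot-static gauges"},
    {"item": "AHRS",        "failure": "slow attitude drift, no red X",
     "detect": "cross-check against backup AI",
     "effect": "misleading attitude in IMC",
     "mitigation": "compare sources, pick the odd man out"},
    {"item": "ADC",         "failure": "blocked static source",
     "detect": "altimeter frozen in climb",
     "effect": "wrong altitude and VSI",
     "mitigation": "alternate static, backup altimeter"},
]

for row in fmea:
    print(f'{row["item"]:12} | {row["failure"]:30} | {row["mitigation"]}')
```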

The manufacturers aren't a lot of help. I called Garmin and asked how loss of the external GPS signal would affect the accuracy of the internal AHRS in the GTX 345. The tech support person didn't know and put me on hold to ask an engineer. The answer was, "It shouldn't have an effect." That's not entirely comforting.

I think what saves us from falling victim to a probable host of subtle and undocumented failure modes is that modern avionics are pretty reliable. I have reasonable backups and don't worry too much about it. But, I'm retired, try not to have to be anywhere on a strict schedule, fly a NA M20J and avoid bad weather minimizing (but not completely eliminating) my IMC exposure. If I had a turbocharged airplane with FIKI and flew in the weather a lot, I would think about this more carefully -- a lot more carefully.

Skip

 


18 minutes ago, PT20J said:

Avionics used to be simple. Every radio or instrument performed essentially one function and there was minimal interaction between devices. Failures were pretty easy to detect, and their effects reasonably obvious. But, as my recent GTX 345 AHRS failure has shown, that's no longer the case. The AHRS acted squirrelly in both pitch and bank without any warning indication on the iPad running ForeFlight. Garmin Pilot did put up a DEGRADED warning long after I would have lost control of the airplane. Garmin tells me the AHRS system in the GTX 345 is the same used in many of their primary flight displays. 

I started working on a simplified Failure Modes and Effects Analysis (FMEA) for my setup. This requires several steps:

1. Identify all possible failure modes

2. Identify how each failure mode can be detected

3. Identify the effects of each failure mode

4. Design a mitigation plan for each failure mode

The problem I had with doing this was identifying the failure modes for complex equipment, and knowing how to detect that a failure has occurred. Consider something complex like a PFD which displays primary instruments and navigation information. It's easy to know you have a problem if you get a big red X. But will all failures cause this indication? Who knows? Without understanding the internals, it's hard to tell what failures there might be or how you would come to find out that something has failed.

The manufacturers aren't a lot of help. I called Garmin and asked how loss of the external GPS signal would affect the accuracy of the internal AHRS in the GTX 345. The tech support person didn't know and put me on hold to ask an engineer. The answer was, "It shouldn't have an effect." That's not entirely comforting.

I think what saves us from falling victim to a probable host of subtle and undocumented failure modes is that modern avionics are pretty reliable. I have reasonable backups and don't worry too much about it. But, I'm retired, try not to have to be anywhere on a strict schedule, fly a NA M20J and avoid bad weather minimizing (but not completely eliminating) my IMC exposure. If I had a turbocharged airplane with FIKI and flew in the weather a lot, I would think about this more carefully -- a lot more carefully.

Skip

 

FMEA pretty much has to be done at the engineering level, as the user doesn't always have access to a lot of the test/measurement/failure points, as you describe.   It can certainly be applied to what you can see and control, but the effectiveness is limited by the reduced visibility into the system.

When I was an engineer working at Honeywell on 777 system avionics in the early 90s, FMEA was gospel and we did it at every level, all the way down to individual circuit traces and signal wires. The formal testing practices for software were similar, in that every line of code and decision branch had to be demonstrated to be tested by formal verification. I can only guess that if that were still done, sensor failure effects like what happened with MCAS would have been quickly identified well before the system was deployed to the field. Or maybe it was done and excused somehow, which seems more likely. I'm puzzled by the whole thing.

Doing as much analysis and critical thinking about system operation as is possible or practical, as you describe, will definitely make one more prepared for potential failures.

 


3 minutes ago, EricJ said:

FMEA pretty much has to be done at the engineering level, as the user doesn't always have access to a lot of the test/measurement/failure points, as you describe.   It can certainly be applied to what you can see and control, but the effectiveness is limited by the reduced visibility into the system.

Agreed :) I was really just trying to see how far I could get at the system component level, but the exercise quickly pointed out to me how little I know -- or could find out -- about the internals of these boxes. 

Skip


Just now, PT20J said:

Agreed :) I was really just trying to see how far I could get at the system component level, but the exercise quickly pointed out to me how little I know -- or could find out -- about the internals of these boxes. 

Skip

It's definitely one downside of integrated systems.   Trusting the engineering and integration is not confidence-inspiring when Boeing et al can't get it right for transport category, and the bar is a lot lower for GA. 


15 hours ago, PT20J said:

The problem I had with doing this was identifying the failure modes for complex equipment, and knowing how to detect that a failure has occurred. Consider something complex like a PFD which displays primary instruments and navigation information. It's easy to know you have a problem if you get a big red X. But will all failures cause this indication? Who knows? Without understanding the internals, it's hard to tell what failures there might be or how you would come to find out that something has failed.

 

Dynon actually has a debug page that tells you what the sensors are doing or not. But I don't think I would flip over there in flight. It is better to just go to a different layer of redundancy and use it. It changes the CRM, and like everything, you have to be proficient in the new system you are applying to the task at hand. With SCADA systems and server rooms, you try to write good failure detection. The better way is to have the redundancy in the primary, so you only lose some speed or accuracy when one point of the triangle fails; then the trick is to know that something has failed before things cascade. See ARPANET and RAID.
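The RAID idea, losing a component and keeping the data at reduced performance, fits in a few lines. This is a toy XOR-parity sketch (payloads are made up, and it is not how any avionics box works): with one parity block, any single lost data block can be rebuilt from the survivors.

```python
# Toy RAID-style XOR parity: lose any one data block, rebuild it from the rest.
from functools import reduce

data_blocks = [b"nav", b"log", b"map"]                       # made-up payloads
block_len = max(len(b) for b in data_blocks)
padded = [b.ljust(block_len, b"\0") for b in data_blocks]

def xor_blocks(blocks):
    return reduce(lambda acc, b: bytes(x ^ y for x, y in zip(acc, b)), blocks)

parity = xor_blocks(padded)

# Block 1 "fails"; reconstruct it from the surviving blocks plus parity.
rebuilt = xor_blocks([padded[0], padded[2], parity])
assert rebuilt == padded[1]
print("rebuilt block:", rebuilt.rstrip(b"\0"))
```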


37 minutes ago, Yetti said:

Dynon actually has a debug page that tells you what the sensors are doing or not.   But I don't think I would flip over there in flight

I don't think it is a good idea to see a lot of raw sensor data in flight ;) but with new avionics we get the chance to see more than we should. Even engine data is getting complicated: a high CHT reading (probably unreliable) on cylinder 6, will you land?

I'm not a big fan of debugging engine/avionics data in the air; I just need a simple assessment of the situation and a yes/no decision. Having too many backups makes things worse in that regard. But I fly for fun with reduced mission profiles, so I don't need very robust backups; @PT20J's comment applies to my flying:

16 hours ago, PT20J said:

I think what saves us from falling victim to a probable host of subtle and undocumented failure modes is that modern avionics are pretty reliable. I have reasonable backups and don't worry too much about it. But, I'm retired, try not to have to be anywhere on a strict schedule, fly a NA M20J and avoid bad weather minimizing (but not completely eliminating) my IMC exposure. If I had a turbocharged airplane with FIKI and flew in the weather a lot, I would think about this more carefully -- a lot more carefully.

It would be great if engine/avionics engineers were able to build high-level integrity tests so that we pilots could check combinations of various avionics and components. This is easier to do when one party is merging the sensor inputs and software from the various pieces, but tough when you are fitting a panel and thinking about backups and failures yourself; going after low-level raw data in flight will only make it worse...

The ECU test for electronic engines is a good example: the formula combining the various instruments and sensors is really complex and suffers from false positives, but at least in flight you get a single OK/NO. If NO, you have 30 minutes to land; if I can run it for two hours on the ground with no issue, I may disregard it and fly ;)

A lot of people monitor their raw ECG data using an Apple Watch or phone to figure out for themselves whether they have heart conditions; some even use six apps on three devices to do analysis and cross-check. Personally, I still think the conclusion needs a single machine with reliable sensors that was tested on many people, plus the opinion of someone with ten years of medical study and practice (or the new generation of automated software :lol:). Looking at those various random graphs and backups feels like astrology...


20 hours ago, EricJ said:

Most professional server farms do a pretty good job with this sort of thing.   Years ago I was a principal at a company that licensed IP to a manufacturer and we set up a server to provide license codes based on the options that the ultimate customer selected, and could serve new codes if the customer wanted to upgrade features later.   For all the usual reasons we needed a reliable server, and I toured a local data center to check them out as a candidate.

They had two backup generators, one on each end of the building, and either could power the entire building. They had two broadband fiber bundles from two different providers, coming into the building from two different directions. They did a good job with server redundancy, storage redundancy, power supply redundancy, etc., etc. They'd really done a good job of putting together a high-reliability system.

And then I saw the rack where the two fiber systems came together.   They came together in a single open cabinet with a single power supply, and the cabinet was directly under a sprinkler.

There's always a Swiss Cheese failure model that can be applied, but it does help if you try to minimize the number and size of the holes as well as manage the number of layers (slices).   I'm a big fan of redundancy, but as you say it needs to be planned and managed.

I have too many stories of data centers. The second AWS failure took out Reddit; AWS lost a row of servers, and geographic diversity was not part of Reddit's planning: "oh, just throw it in the cloud." We lost one of our three servers. It was funny, because when we were planning we laughed at "oh, we will never need three servers."


I have been thinking about this a lot lately because of my upgrade. I recall when iPads and charts were just coming out, my CFI chided me for relying on the iPad and insisted I keep paper charts in the plane. I figured that if my primary nav went out and my iPad went out, and coms went out, it was a really bad day. But I carried them anyway, until one day my door popped open in cruise and sucked all my charts from the pax seat out the window. Never bought another set. 

I avoided the G1000 planes because I didn't like the lack of redundancy and also the lack of an upgrade path. But here I go replacing all that with the new Garmin system. There may be a little less redundancy, but I think there will be fewer failures. Twenty-year-old gyros and crap vacuum pumps that can go out at any time for any reason worry me more than total electrical or GPS failure. Plus the integration brings a lot more situational awareness on stressful days.

No longer do I need to worry about old gyros and vacuum pumps, but I do need to worry about complete electrical failure. I don't have backup alternators, but I do have dual ship batteries and backup batteries in the avionics. And if everything goes, I have my iPad and handheld radio. If the electrical and GPS go out at the same time.....well? If it gets that bad, I will have my legacy compass, ASI, and altimeter installed on the co-pilot side. I was going to install the turn coordinator as well, but it is in need of overhaul. God help me if I am left flying in IMC, at night, over water, with only a whiskey compass, ASI, and altimeter, and a handheld to navigate and communicate. But it would have to be a really bad day for that to happen, and I do plan on practicing with it.


Has anyone run a G5 all the way down on its internal battery? Did the "time remaining" displayed on the screen match reality? How long did it actually last?

I've never had a laptop that accurately predicted battery life and so I'm wondering if the aircraft instruments are any better.

Skip


I have a 231, as everyone probably knows by now. One of the weak spots is the electrical system: 14V, one battery, and the engine can only mount one alternator. The alternator is a direct drive via a "coupler" which has an internal clutch mechanism so that if the alternator seizes it won't take the engine out. I had one coupler fail, and then when it was replaced, we got a succession of bad ones from an aftermarket supplier; they wouldn't last more than a few flights. I finally have one now that we got direct from Mooney, and that has lasted three or four years, which is nice. Once, when a coupler was installed, the mechanic left out a bushing, with the result that the coupler wobbled on the alternator shaft, beat up the tiny cotter pin that held it in place, and everything fell into the running engine. So I have had the pleasure of flying, more than once, with the Master off to preserve what battery was left to drop the gear, fly an approach, whatever was needed to land safely.

In addition to that total electrical failure, the first GPS I had in the plane was an Apollo GX65. One day on a short fun flight it fried. It happened that the second radio was out for service, so I was NORDO, and of course that happened on a day when there was a big air show at my home field. So I got to fly into the Mode C veil, enter Class D, and land on light gun signals. It occurred to me that I had a cell phone, and I had the presence of mind to call the tower and pre-arrange all this, so the main stress was landing NORDO to light gun signals with every taxiway at KFCM filled with warbirds (including a Liberator) and aerobatic planes watching to see if I would screw up the landing.

I tell you all this because total electrical failure is never impossible. Instrument failure is also possible: I have lost the vacuum pump and with it the gyros, and have also lost all the radios, as mentioned. I have done quite a bit of upgrading of my panel, but I always make sure I have a backup in the event of one type of system failure, whether electrical or vacuum. I am about to put in a GI275 and a GTN750, but I will still have a full steam six-pack. I am getting rid of the ADF, which is not useful anymore, and my KNS80, but I am putting in a DME, so all the radio nav capability of the KNS80 will still exist on my panel even though my main nav will be the GTN and a backup 430AW. I am a little concerned about having two Garmin radios; my ancient King is better than the high-power Garmin 430AW I have (16-watt transmitter), but it won't last much longer so it goes also.

The 275 will replace the current turn coordinator, but it concerns me that I will then have only one slip/skid indicator, and a solely electrical one at that (the 275), so I am having my avionics shop mount a plain old ball just so I have one that is not dependent on the electrical system. Having just done a bunch of uncoordinated stalls (I wrote another thread about that), I have no interest in "practicing" one in my Mooney. They are fun, but not that much fun.

Then I have two iPads and a cell phone, plus a backup handheld aviation radio. The panel is wired to plug that radio into one of the antennas. I have never had to use it, but that is the point: have it and you will never need it.

If I had an aircraft with 28V, two batteries, and two alternators, I might feel confident in abandoning the vacuum system and going all electric, but that is not possible in my aircraft, so all the nav instruments are either electrical backed up by vacuum or the other way around. And of course, even with two alternators there is always the risk of a prolonged engine-out, which would mean no electrical power at all.

One lesson I have learned from flying with the Master off to preserve the battery is that backups never deliver their theoretical life expectancy. Usually it is about half that, and you may be lucky to get even half.

Stuff happens.  Always have a backup to the backup, and a plan if that does not work.


On 3/12/2020 at 8:03 AM, PMcClure said:

God help me if I am left flying in IMC, at night, over water, with only a whiskey compass, ASI, and altimeter, and a handheld to navigate and communicate. But it would have to be a really bad day for that to happen, and I do plan on practicing with it.

I think someone went all the way across the Atlantic almost 100 years ago with less than that. He went NORDO  :-)


On 3/12/2020 at 1:31 PM, PT20J said:

Has anyone run a G5 all the way down on its internal battery? Did the "time remaining" displayed on the screen match reality? How long did it actually last?

I've never had a laptop that accurately predicted battery life and so I'm wondering if the aircraft instruments are any better.

Skip

Dynon has a battery test that has to be run on a periodic basis.   You tell the Dynon to stay on after master is cut so it is on the internal battery.   You walk away.  The timer starts.   When you come back the time the battery lasted is recorded.   Must last 45 minutes or you get warnings.


6 hours ago, Yetti said:

Dynon has a battery test that has to be run on a periodic basis.   You tell the Dynon to stay on after master is cut so it is on the internal battery.   You walk away.  The timer starts.   When you come back the time the battery lasted is recorded.   Must last 45 minutes or you get warnings.

My Aspen PFD is similar. The specification is >30 minutes and I've tested it twice in two years and it runs about 42-45 minutes. The best way to test it is to put it on battery and time it during a flight. On the ground (actually if the airspeed is <60) it will shut down anytime ship's power is not supplied.

Having made my living as an engineer, I like to test everything. So whenever I get a new backup program for my computer, the first thing I do is make sure I can restore from the backup. I just had my Concord battery capacity tested. Cost me an hour of labor ($120) at the only shop nearby that had the tester for a 28V battery.

I'm wondering how many of us that have avionics with battery backups ever test them? If you have a bunch of instruments, each with its own battery, and the electrical system fails, at some point all the little batteries are going to start timing out -- I'd just like an idea of when and in what order. I live in the mountainous west and may not be in a position to land in half an hour when it happens. My question about the G5 battery indicator is driven by my experience that battery life indicators on cellphones, cameras, laptops and the like seem to be pretty imprecise. I've had them go from "full charge" to "darn near dead" pretty quickly.

Skip
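One way to get "when and in what order" is simply to list the rated endurance numbers and derate them. The minutes below are illustrative guesses rather than published specs, and the 0.5 factor follows the rule of thumb earlier in the thread that backups tend to deliver about half their theoretical life.

```python
# Back-of-envelope ordering of backup batteries after an alternator/battery failure.
# Rated minutes are illustrative assumptions; derate per the "about half" rule of thumb.
rated_minutes = {
    "standby AI": 60,
    "PFD backup battery": 30,
    "iPad": 300,
    "handheld radio": 240,
}
derate = 0.5

for name, mins in sorted(rated_minutes.items(), key=lambda kv: kv[1]):
    print(f"{name:20} ~{mins * derate:.0f} min expected")   # first line fails first
```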

 


11 minutes ago, PT20J said:

My Aspen PFD is similar. The specification is >30 minutes and I've tested it twice in two years and it runs about 42-45 minutes. The best way to test it is to put it on battery and time it during a flight. On the ground (actually if the airspeed is <60) it will shut down anytime ship's power is not supplied.

Having made my living as an engineer, I like to test everything. So whenever I get a new backup program for my computer, the first thing I do is make sure I can restore from the backup. I just had my Concord battery capacity tested. Cost me an hour of labor ($120) at the only shop nearby that had the tester for a 28V battery.

I'm wondering how many of us that have avionics with battery backups ever test them? If you have a bunch of instruments, each with its own battery, and the electrical system fails, at some point all the little batteries are going to start timing out -- I'd just like an idea of when and in what order. I live in the mountainous west and may not be in a position to land in half an hour when it happens. My question about the G5 battery indicator is driven by my experience that battery life indicators on cellphones, cameras, laptops and the like seem to be pretty imprecise. I've had them go from "full charge" to "darn near dead" pretty quickly.

Skip

 

For those of us who have only one battery and one alternator, and do not want a 20-minute-or-so limitation on backup power, there is this:

http://www.basicaircraft.com/turbo-alternator-bae-14-28.asp

http://www.basicaircraft.com/gallery/turbo-alternator-bae-14-28.asp

I have two full avionics masters: #1 is an emergency bus and #2 is everything else.

The turbo alternator will power the emergency bus and whatever else you may wish to add as needed. As long as you have airspeed (and it is not iced up; if it is, you have bigger problems), it will supply power.

A full set of traditional backup instruments, including three artificial horizons (G-600, 2" Midcontinent, and 3" SigmaTek vacuum), adds redundancy.

John Breda



For me, I've always had a portable radio with VOR capability, three batteries and a battery pack with four AAAs, a remote antenna for the portable radio, and a cellphone.

Now I have a 12" iPad Pro, an iPad mini, a Stratux with two battery packs, and two battery packs for the cellphone and iPads.

The plane now has two comms and an EGSi transponder with ADS-B in/out.

An AV30c will be next, and an engine monitor after that. I'll want to do an IFD 440 later.

-Don

 


As far as multi tasking goes...

you are flying a plane in non-ideal situations when the charging system goes TU...

Stress level increases... and scan rate goes along with that...

Now....can you pay attention to the battery levels at the same time?

 

My iPad constantly runs from 100% down to zero every day...

it gives unmistakable warnings at 10% and 5% power left...

Too many times have I forgotten to get up after clicking the ignore button....

 

Being focused on the job at hand... flying the plane... it will be difficult to pay attention to the battery levels with any known accuracy...

Get to VFR conditions as soon as practical... before the electrons run out...

PP thoughts only, not a brain scientist...

Best regards,

-a- 

