Friday, April 17, 2015

Automated Passenger Aircraft

In light of the Germanwings crash, I have just seen proposals for remote control of passenger aircraft.  I almost choked with laughter (at the solution... not at the horrible problem).

Lol.... Called it!

See my random rant from 2013... http://stratasphere.blogspot.com.au/2013/04/plane-hacking-or-not.html

Ok, work with me here. 

1) Plane with a single pilot who goes bad can crash/hijack/fly into towers, etc.

2) Plane has two pilots (captain and co)... one goes bad.  The other is supposed to take control (physically... a 50/50 chance, I guess, unless the bad one did some forward planning)... same result in some cases.

3)  Plane has two pilots who collude and go bad together.....

4) Plane has one remote pilot.... who goes bad.  Result... whatever the pilot wants.

5) Plane has multiple remote pilots who collude and go bad together....

6) Plane has one pilot and one remote pilot.... they disagree.

7).... other permutations


These are all examples of the two clocks problem... which is essentially a trust problem.  I have had to explain this to so many students I can do it in my sleep now. Not sure where I first heard it, but it was a while ago.  I have repeated it and mangled it so much that it's probably not even recognisable.  However, it is very useful for instrument design and trust in black boxes.

My version of the two clocks problem

So, on old-school sailing ships they used to calculate their map position using clocks as a reference.  The captain would wind the clocks every day and, by plotting angle and speed against the clock's time, he could calculate his position on the ocean fairly well.  However, clock-making technology was not always wonderful... which resulted in clocks running slow, fast or stopping.

What do you do if you are out on the ocean and the clock fails?  You're lost!
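To put a rough number on "lost", here is a quick sketch of the classic longitude-from-chronometer arithmetic (my illustrative figures, not something from this post).  The point is just how directly clock error becomes position error.

```python
# Rough longitude-from-chronometer arithmetic (illustrative only).
# The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour, so comparing
# local solar noon against the ship's reference clock gives your longitude.
DEGREES_PER_HOUR = 360 / 24  # 15 degrees of longitude per hour of time difference

def longitude_west(clock_at_local_noon_hours):
    """Degrees west of the reference meridian, given what the reference-time
    clock reads (in hours) when the sun is highest overhead locally."""
    return (clock_at_local_noon_hours - 12.0) * DEGREES_PER_HOUR

print(longitude_west(14.0))  # clock says 14:00 at local noon -> 30.0 degrees west

# Why a dodgy clock means "you're lost": one minute of clock error is a quarter
# of a degree of longitude, roughly 15 nautical miles at the equator.
print((1.0 / 60) * DEGREES_PER_HOUR * 60)  # -> 15.0 (nautical miles, near the equator)
```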

So one solution was to have two clocks on board.  Wind them both at the same time, keep a log of any time differences, keep them in the same place so the conditions were the same, etc. (Control your variables.)

What if you wake up and the two clocks disagree? Which one is right?  (Which do you trust?)

So the solution is to have three clocks.  If at any point the clocks disagree... the odds are that only one will fail at a time, so the other two should agree and you can reset the bad one (and make a note in the log not to trust the shoddy clock).
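For the programmer-minded, here is a minimal sketch of that three-clock vote (my illustration, not from the original story): with three readings, a lone bad clock gets outvoted, but if there is no majority you are back to square one.

```python
# Minimal 2-out-of-3 vote over three clock readings (illustrative sketch).

def majority_reading(readings, tolerance):
    """Return (trusted_value, index_of_suspect_clock) from three readings.

    If two readings agree within `tolerance`, trust their average and flag the
    odd one out for the log.  If all three disagree, there is no quorum and we
    are back to the two-clock problem: we genuinely don't know which to trust."""
    a, b, c = readings
    if abs(a - b) <= tolerance:
        return (a + b) / 2, 2
    if abs(a - c) <= tolerance:
        return (a + c) / 2, 1
    if abs(b - c) <= tolerance:
        return (b + c) / 2, 0
    return None, None  # no quorum: log it, don't silently pick a winner

# Clock 1 has drifted badly; clocks 0 and 2 still agree.
print(majority_reading((12.02, 11.40, 12.03), tolerance=0.05))
# -> roughly (12.025, 1): trust the agreeing pair, note the shoddy clock in the log
```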

The point being that in a trust game... you cannot differentiate between two conflicting but equally trusted positions.  How do you know which pilot is bad?

So is the solution to have three pilots on each plane?  Remote pilots? Robot Pilots?  Which do you trust?

The problem is not that you cannot design a robust system... the problem is that a robust system will appear to be inefficient while it's operating.  The pressure to cut costs will always be a factor in free-market economics... so any system with two redundant parts will eventually be simplified down to one.  Simply because nothing bad ever happened...

Keep in mind that trust is a dynamic between components... it's not a property of any single component.  This is the opposite of the profit principle... which says cost is a property of each component, and reducing cost is so easy...


Duncan's first law of system design

An economist will always fuck up any system they are given control over.

Why?  Because their minds do not work right.  They suffer the human frailty of trying to simplify and generalise based on perceiving repeating patterns in their perception of the system.  This lets our mere mortal brains make sense of overwhelming complexity.  It gives them the idea that they can "optimise" or get efficiencies... but remember that their perceived patterns are based only on the observations/data they have access to, rather than a complete mapping of all possible cases.


The secret of any trust system is not preventing it from getting into a conflict situation, but designing for the inevitable undesirable cases and having an elegant way to get out of the conflict.  (This is different to risk mitigation, which is an economist's way of trying to cope with edge cases.)
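As a hedged sketch of what "an elegant way out of conflict" might look like in code (entirely my illustration; the post doesn't propose a specific mechanism): instead of trying to work out which of two command sources has gone bad, disagreement itself becomes an expected state with a predefined, conservative exit.

```python
# Illustrative sketch only: disagreement between two command sources drops the
# system into a predefined safe envelope, rather than guessing who to trust.
# The names and values here are made up for the example.

SAFE_ENVELOPE = {"autopilot": "hold_heading_and_altitude", "min_altitude_ft": 10_000}

def resolve(command_a, command_b):
    """Pass a command through only when both sources agree; on conflict,
    fall back to the safe envelope instead of picking a 'winner'."""
    if command_a == command_b:
        return command_a
    return SAFE_ENVELOPE  # the conflict is designed for, not 'won'

print(resolve({"autopilot": "descend_to_5000ft"}, {"autopilot": "descend_to_5000ft"}))
print(resolve({"autopilot": "descend_to_5000ft"}, {"autopilot": "climb_to_38000ft"}))
# the second call returns the safe envelope
```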

If the air safety groups were not economists, they would design a flight system that could be flown safely by a suicidal pilot.  But once they start thinking around that corner... they cannot be economists any more.

The economist mindset will always try to eliminate/replace/fix the "bad" component in the system and assume that everything else will remain the same.  This is such Newtonian thinking.  The universe is not a giant clock.

Bad is very very very relative....

Imagine an aircraft flight system that could be flown by a healthy, happy pilot, a suicidal pilot, a hijacker or a child... all in total safety for the passengers.  Once we crack that, we are probably ready to call aircraft "safe". Is this the same as driverless cars?  Are there still situations where the skill of a "human" is our last hope?  (Probably, given the state of hardware system design....)

The point being that the passengers should be safe even when the system is in a failure state.  Why are there no "eject" seats?  Why no ability to separate the passenger compartments/pods and fly/land them with chutes or whatever?  If you were designing another system with a massive single point of failure (pilots and cockpit), you might think about some sort of backup... but aircraft designers have some sort of catastrophic inability to learn from any other industry...

Moving on....
