Monday, April 27, 2015

Pointless philosophical debate - Disengagement, Debugging and Futility


I listened to an Honours presentation a few years ago called "Knowing When To Quit: Self-Focused Attention Enables Disengagement, Positive Affect Predicts Reengagement Tendencies" (Tristan Hazelwood).

It was about an aspect of psychology that I had never considered before: "task disengagement"... or the ability to quit doing something. I found this quite personally intriguing because I had always seen "quitting" as a failure state, and even though I have quit a lot of things in my life... it was always with this sense of personal failure.

Tristan's experiment looked at how long people will continue to attend to a "futile" task before stopping. There were some manipulations as well... which are not relevant here.

I had never really asked myself this question, or the questions that extend from it:

How long do I keep doing futile tasks? 
How do I know a task is futile?
What if a task seems futile but is ultimately not?
What if the effort required to complete the task is disproportionate in comparison to the reward?

Anyway, this gets back to today's issue: fixing bugs in other people's code.

I have just spent way too long trying to debug why the wrong favicon was being displayed in a particular browser (Firefox).  This required me to:
  • Verify my assumptions about the local copy of the site files (Still broken)
  • Verify my assumptions about the server copy of the site files (Still broken)
  • Reload everything in Firefox
  • Test in multiple other browsers
  • Read up on how Firefox handles FavIcons
  • Break into the SQLite database used by Firefox to cache the favicons (see the sketch after this list)
  • Clear the cache, reload, test, clear again, clear a different way... repeat the test-fiddle cycle until I got the combination correct and finally expunged the old favicon from the chain of server-browser-cache rubbish.
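
For the curious, the cache-poking step looked roughly like this. A minimal sketch in Python, assuming a made-up profile path, and assuming the favicons live in a table called moz_favicons inside places.sqlite (where Firefox of this era kept them; newer versions have moved things around, so treat the table and file names as assumptions):

```python
import sqlite3

# Hypothetical profile path -- substitute your own Firefox profile directory.
DB_PATH = "/home/me/.mozilla/firefox/xxxxxxxx.default/places.sqlite"

# Firefox must be closed first, otherwise the database will be locked.
conn = sqlite3.connect(DB_PATH)
cur = conn.cursor()

# List the cached favicon URLs to spot the stale placeholder.
for (url,) in cur.execute("SELECT url FROM moz_favicons").fetchall():
    print(url)

# Expunge the offending entry so Firefox is forced to re-fetch it.
# "mysite.example" stands in for the real site's domain.
cur.execute("DELETE FROM moz_favicons WHERE url LIKE ?", ("%mysite.example%",))
conn.commit()
conn.close()
```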

In hindsight, this was a worthless task. It was a combination of a stupid caching policy/bug in Firefox and my uploading "placeholder" images before I had the final design for the favicons. The problem would not even have been visible to the client, because it was in my local browser cache. But at the time all I could see was an error!

When is a debugging task "worthwhile"?

I have a long history of tenacity with debugging... and in hindsight, many of the more intractable bugs now seem to be examples of time wasted and needless distraction. Others were important and meaningful battles that I needed to fight to get projects over the line. All were bugs... so are there any clues to differentiate the valuable battles from the needless distractions?

When the bug is in someone else's code...


I think the first possible clue is where the bug lies. Debugging is often a losing proposition, even if you do pin the bug down. I have spent way too long proving, documenting and reporting bugs in products that I use, in the hope that they would be fixed. Some have been... but the majority resulted in a discussion with the product programmer/support minion that not only wasted more time, but did not get the bug resolved in any way that delivered me a fix when I needed it (i.e. very soon after identifying it).
I would suggest that while these activities were probably noble, they were essentially of no value to me or the work I was trying to get done (except where they resulted in me clearly identifying that the bug was not in my code).

My point? Stop debugging other people's code where possible. There is just not enough time in my life to fix all the broken software (and get a usable result). Once I see that the bug is not in my code... give up. Find a work-around, hack it, abandon the tool/framework/etc... but move on.

When there is more than one way to achieve a result...


The difference between "the right way to code" and "the expeditious way to code" can sometimes be clear... however, I find there are usually lots of ways to get to the same result. Unfortunately, I often find that the "recommended" way in the docs/tutorials/book/forum post/back-of-the-toilet-door is either out-of-date, incomplete, partially thought out or just plain wrong. The time I have spent trying to get stuff working that I found on the internet... that eventually turned out to be rubbish, is painful to recount. (This is mostly my own fault for trying to work in too many languages/toolkits/frameworks etc. without taking the time to become an expert first.)
I think I need to be willing to abandon broken code a bit faster than I currently am. This often comes back to time pressure: I'm trying to get something working quickly rather than taking a bit more time to learn the docs first. False economy.

That being said, more than 50% of the time when I have gone to the docs for whatever I'm working on, the docs have been wrong in part or in whole. So I'm not sure there is any actual win in reading the docs first... but it might help in some cases. To paraphrase... go back to the source, Neo...

My advice? Don't hold onto any ideas of the right/clean/ideologically sound/best practice way of doing something. Get it working as best you can and clean it up later if it proves to be a problem. Future-proofing is a never-ending piece-of-string argument... you cannot win, so don't fight the battle.

Usually I find that the next time I try to solve the same problem, I inevitably have a different approach anyway... so I tend to end up rewriting rather than fixing... but that's me. The key point is to let go of something that is not working, no matter how ideologically "right" it may be.

The curiosity trap...


Debugging to figure out how something works is a really painful way to learn. There are probably lots of scenarios where it's the only way (security work, reverse engineering malware, deep bugs in critical systems, etc.), but for most of the desktop stuff I do, there is just no reason to be trying to learn that far down the levels of abstraction. It's just too slow.

My point? I think when it's curiosity driving the activity and you find yourself trawling through code... "you're doing it wrong". Go feed your brain on Wikipedia, or get a pet, or play Sudoku... the urge to solve puzzles can lead programmers into some really pointless time wasting.

Neat and tidy.... 


The urge to "finish" something or make it "tidy" is a seductive beasty!  It can lead you to make beautiful things... or take you to crazy places.

I find that neatness in coding is important so you can stay on top of the project... but there is a slippery slope beyond that point where neatness for the sake of it becomes a procrastination exercise (he types on a blog....laughing ironically as the keystrokes land...)

Basically, there is no limit to how far you can take neatness in coding.... I think the best advice is to go the other way... encourage messiness to the point you cannot function... then take a tiny step back. Minimize how much extra time you need to spend on the housekeeping...

This ties in with our human pattern recognition systems, so it can be a bit of a double-edged sword (triple-edged swords just mess up this metaphor... so just don't...) in that it can be very valuable but can also lead to the dark side.

Neatness in the code allows us to scan repeating structures in ways that don't involve fully conscious "thought". This can be a really valuable behaviour; it has personally spotted issues for me more times than I could guess. However... the other side of this is that it only works with repetition. To get maximum value, you need repetitive patterns. Once these patterns get larger, you start to run into the bad 'code smell' that DRY (Don't Repeat Yourself) warns against. So I think it's probably best used for the "small stuff": how I order operators, how I use whitespace in a statement, the ways that brackets are laid out (this may be why code formatting is such a universal issue for code jockeys)... but once I start to see repeating blocks of statements... it's time to refactor or shop for a red lightsaber.
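
To make that "repeating blocks" threshold concrete, here is a contrived little sketch (all the names are invented for illustration): three near-identical blocks that the eye can happily pattern-match, and the refactor they are begging for.

```python
from collections import namedtuple

Widget = namedtuple("Widget", ["value", "valid"])

# Invented sample data.
widgets_a = [Widget(3, True), Widget(5, False)]
widgets_b = [Widget(7, True)]
widgets_c = [Widget(2, True), Widget(4, True)]

# Before: the same block of statements, three times over. At this size the
# repetition is scannable... but it is exactly the smell DRY warns about.
total_a = sum(w.value for w in widgets_a if w.valid)
print("widgets_a:", total_a)
total_b = sum(w.value for w in widgets_b if w.valid)
print("widgets_b:", total_b)
total_c = sum(w.value for w in widgets_c if w.valid)
print("widgets_c:", total_c)

# After: one function, called once per group.
def report_total(name, widgets):
    total = sum(w.value for w in widgets if w.valid)
    print(name + ":", total)

for name, group in [("widgets_a", widgets_a),
                    ("widgets_b", widgets_b),
                    ("widgets_c", widgets_c)]:
    report_total(name, group)
```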

I think there is a bit more to think about in this area of pattern recognition in coding, but that's another post for another day.

In summation, your honour...


Estimating is always easier in hindsight. Knowing when to let go of a task is an NP-hard problem. Having written all these ideas down, it's still not clear that there is any generalisable wisdom in any of it. But I feel a bit clearer having articulated some of the issues, which is really the point of writing it down...








Friday, April 17, 2015

Automated Passenger Aircraft

In light of the Germanwings crash, I have just seen proposals for remote control of passenger aircraft. I almost choked with laughter (at the solution... not the horrible problem).

Lol.... Called it!

See my random rant from 2013... http://stratasphere.blogspot.com.au/2013/04/plane-hacking-or-not.html

Ok, work with me here. 

1) Plane with a single pilot who goes bad can crash/hijack/fly into towers etc.

2) Plane has two pilots (captain and co)... one goes bad. The other is supposed to take control (physically... 50/50 chance, I guess... unless the bad one did some forward planning)... same result in some cases.

3)  Plane has two pilots who collude and go bad together.....

4) Plane has one remote pilot.... who goes bad.  Result... whatever the pilot wants.

5) Plane has multiple remote pilots who collude and go bad together....

6) Plane has one pilot and one remote pilot.... they disagree.

7).... other permutations


These are all examples of the two clocks problem... which is essentially a trust system. I have had to explain this to so many students that I can do it in my sleep now. I'm not sure where I first heard it, but it was a while ago. I have repeated it and mangled it so much that it's probably not even recognisable. However, it is very useful for instrument design and trust in black boxes.

My version of the two clocks problem

So, on old-school sailing ships they used to calculate their map position using clocks as a reference. The captain would wind the clocks every day and, by plotting angle and speed against the clock's time, he could fairly well calculate his position on the ocean. However, clock-making technology was not always wonderful... which resulted in clocks running slow, running fast or stopping.

What do you do if you are out on the ocean and the clock fails?  You're lost!

So one solution was to have two clocks on board. Wind them both at the same time, keep a log of any time differences, keep them in the same place so the conditions were the same, etc. (Control your variables.)

What if you wake up and the two clocks disagree? Which one is right? (Which do you trust?)

So the solution is to have three clocks. If at any point the clocks disagree... the odds are that only one will fail at a time, so the other two should agree and you can reset the bad one (and make a note in the log not to trust the shoddy clock).
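
In code terms, the three-clock trick is just a majority vote. A minimal sketch (the tolerance and the readings are made up for illustration):

```python
def trusted_time(a, b, c, tolerance=5.0):
    """Majority-vote between three clock readings (in seconds).
    Returns a trusted value, flagging the outlier if one clock disagrees."""
    for x, y, suspect, label in [(a, b, c, "clock 3"),
                                 (a, c, b, "clock 2"),
                                 (b, c, a, "clock 1")]:
        if abs(x - y) <= tolerance:          # two clocks agree...
            agreed = (x + y) / 2
            if abs(suspect - agreed) > tolerance:
                print(label, "disagrees; reset it to", agreed)
            return agreed
    # With only two (disagreeing) clocks you would be stuck right here.
    raise ValueError("no two clocks agree -- which one do you trust?")

# Two clocks agree, the third has drifted badly.
print(trusted_time(36000.0, 36002.0, 35100.0))
```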

The point being that in a trust game... you cannot differentiate between two conflicting but trusted positions.  How do you know which pilot is bad? 

So is the solution to have three pilots on each plane?  Remote pilots? Robot Pilots?  Which do you trust?

The problem is not that you cannot design a robust system... the problem is that a robust system will appear to be inefficient while it's operating. The pressure to cut costs will always be a factor in free-market economics... so any system with two redundant parts will eventually be simplified down to one. Simply because nothing bad ever happened...

Keep in mind that trust is a dynamic between components... it's not a property of any single component. This is the opposite of the profit principle... which says cost is a factor of each component, and reducing cost is so easy...


Duncan's first law of system design

An economist will always fuck up any system they are given control over.

Why? Because their minds do not work right. They suffer the human frailty of trying to simplify and generalise based on perceiving repeating patterns in their perception of the system. This is what lets our mere mortal brains make sense of overwhelming complexity. It also gives them the idea that they can "optimise" or get efficiencies... but remember that their perceived patterns are based only on the amount of observation/data they have access to, rather than a complete mapping of all possible cases.


The secret of any trust system is not preventing it from getting into a conflict situation, but designing for the inevitable undesirable cases and having an elegant way to get out of conflict. (Different to risk mitigation, which is an economist's way of trying to cope with edge cases.)

If the air safety groups were not economists, they would design a flight system that could be flown safely by a suicidal pilot. But once they start thinking around that corner... they cannot be economists any more.

The economist mindset will always try to eliminate/replace/fix the "bad" component in the system and assume that everything else will remain the same. This is such Newtonian thinking. The universe is not a giant clock.

Bad is very very very relative....

Imagine an aircraft flight system that could be flown by a healthy happy pilot, a suicidal pilot, a hijacker or a child... all in total safety for the passengers. Once we crack that, we are probably ready to call aircraft "safe". Is this the same as driverless cars? Are there still situations where the skill of a "human" is our last hope? (Probably, given the state of hardware system design...)

The point being that the passengers should be safe even when the system is in a failure state. Why are there no "eject" seats? Why no ability to separate the passenger compartments/pods and fly/land them with chutes or whatever? If you were designing any other system with a massive single point of failure (pilots and cockpit), you might think about some sort of backup... but aircraft designers seem to have some sort of catastrophic inability to learn from any other industry...

Moving on....

Thursday, April 16, 2015

Thought Viruses

I have been doing some reading about "Narcissistic personality disorder" and "Borderline personality disorder" for various reasons.    While there are all sorts of aspects to these conditions, they are essentially a set of "thought patterns" which are expressed as a grab bag of symptoms of varying intensity by the victim. 

The key point being that these "thought patterns" are "trained in" by an abusive parent (usually). They do not stem from a physical injury or anything else. Essentially the child is fine beforehand, and afterward is broken by exposure to the parent's condition. Similar to PTSD.

The complex issue that I have repeatedly thought about is that these particular conditions are repeated down the generations unless interrupted during transmission. The condition is self-replicating and self-maintaining. I.e. a parent with a mild case of NPD can damage a child who then manifests a strong case of NPD... so the condition does not "weaken" over generations.

To me, this is a perfect example of a "Thought Virus". The same sort of pattern seems to happen with bullying (although I have not read as much about it), where the victim of a bully may go on to become a bully themselves, thus replicating the condition.


I'm sure there are a bunch of these kinds of "thought patterns" that are transmitted from generation to generation. Some we call "wisdom", "habits", "myths", "family culture" etc. But like all symbiotic relationships, it's the negative ones that get called names.


The interesting aspects of these constructs are that:

a) If they can be given to someone, they can be taken away. (More or less cured... in theory. This ignores any damage done while the victim was carrying it, which could be substantial.)
b) Can we develop an immunisation for this crap that will remove it from society?
c) Is there a liability for society in allowing parents to infect their children with this kind of negative thought pattern? If so, should society identify and treat before the infection can jump to the next generation?
d) These things can only work in a particular context. NPD depends on isolating the victim(s) and creating and reinforcing an alternative "reality bubble" around them. Can this be defeated simply by not leaving infected parents alone with un-infected children? (This is probably the cure for all sorts of shitty parenting...)
e) These kinds of thought patterns could potentially be transmitted to artificial intelligence and back again. Something to think about...


Once you start looking at thought patterns as a transmittable "thing", you start to see all sorts of passive and active mechanisms that may be playing large or small roles in this process. I have seen a few articles along the lines of "playing computer games is re-wiring my brain". I know logically this is true... I had just never considered the full extent of this kind of massed, repetitive rewiring. It's funny to see social conventions emerge and propagate on game message boards... and then to see them make a jump into social memes... but in reality this is how society has evolved: a collective set of thought patterns that are self-sustaining, transmissible and self-reinforcing. Keep in mind that all thought patterns are emergent (random, chaotic and useless until reinforced by utility), so it's easy to think of "intelligence" as one big virus...

Once you conceive of the human brain as simply a virus laden organ which can be infected by other viruses.... it gets weird. 

But as a mechanism to explain intelligence, thought, society etc.... it's pretty neat and tidy.