Friday, April 19, 2013

Plane hacking or not?

Article about the security presentation

The denial from the Aviation Authorities

Well meaning clarification from pilot

This is both a fascinating research paper and a social comedy.

The researcher has demonstrated an interesting exploit against aircraft.  OK, yet another insecure system: interesting, but only comment-worthy for the novelty of the target.

The clearly bullshit-ridden denial from the Aviation Authorities is a nice attempt to damp down the expected alarmist crap from the usual suspects in the media.  However, the denial smells exactly like a denial.  There is really nothing that the FAA etc. could say that would be believed by the ignorant masses, the ignorant media or technical experts.  But it's nice that they cared enough to make a statement.

Finally, the well-meaning clarification by the pilot.  Hmmm, what can we say here?  Apart from pointing out how well meaning he is.  To put it simply, the guy is an egotistical fucking imbecile.  The whole point of hacking is to take command of a system.  This guy has not understood that pilots are part of a system.  He is arrogant enough to think that pilots are somehow "above" it all.  That they are incorruptible, omnipotent and benevolent.  And like all generalisations... this one too is crap.

The evidence is easy to find.  Start watching "Air Crash Investigators" or any similar program.  Pilots are human just like everyone else.  Air crews make catastrophic mistakes every day.  Sometimes they survive and sometimes they don't.  The point is that they respond with training and skill, under pressure, time limits and equipment limits.  They trust all sorts of automated systems, instruments and processes.  What this research has shown is that some of these automated systems may no longer be as trustworthy as the suppliers and the FAA would want the travelling public to think.

The fact that any system is hackable (read: corruptible) is always simply a matter of time and resources.  Every system has weaknesses.  Most people do not have the resources to corrupt the systems around them.  Most of the time the users of those systems "trust" them.  However, the more these systems are revealed as being potentially untrustworthy, the more people are forced to question what the systems tell them, to pay attention to inconsistencies and to be aware of the big picture.  In the case of pilots, it will hopefully make them a little more aware of where the manual controls and old-school instruments are located.  It may make them a little more diligent in keeping their hand-flying skills polished.

Trust in automated systems should have limits.  Robots are only going to take over the world when we let them.  Once humans are totally redundant and a completely automated system that everyone trusts is available, the human will be fired.  Look at the manufacturing industry.  Look at any industry where data processing has replaced people.  Once we take humans out of the loop and trust the machines, the unemployment line will be much longer.

It's disturbing to see just how many people are no longer needed to keep many large organisations operating.  Look at all the industries that are downsizing across the US.  White-collar workers are the ones currently being made redundant, simply because knowledge can be managed by software robots.

Showing this kind of exploit will sharpen a couple of pilots up and perhaps make them a little more paranoid for a while.  It will force the FAA etc. to consider the possibility that there are exploitable systems on planes... but the other side of the arms race, the component manufacturers, will work hard to rebuild that trust and show that their systems are bulletproof.  Eventually everyone will trust them enough to take the humans out of the loop and planes will be completely flown by automated systems.  I am surprised that commercial drones have not been trialled yet.  Certainly for freight, but I expect eventually for passengers.  The final step will be fully automatic (with a remote manual override for a while).  But human trust will take the humans out of the loop eventually.

Thursday, April 11, 2013

Installation of Psychtoolbox on Matlab

I had a need to set up a Matlab environment to work on some legacy code.  The code relied on the Psychtoolbox library, so I tried to install it.  The installation runs as a script from within Matlab.  Easy enough, but it requires some prerequisites first:
  • GStreamerSDK 64bit 
  • Subversion Command Line Client
 GStreamer went in smoothly. Installed to the default location.

Already had TortoiseSVN in and working, so ignored this item. 

Stuff that went wrong:

TortoiseSVN does not install the command line tools by default, which caused the script to exit.
- Solved this with a hand checkout from the repo.

Once "DownloadPsychtoolbox.m" has failed... you are on your own.

The script cannot be restarted without deleting the existing checkout... doh.  Started hand-reading the script to try to figure out how to manually skip ahead.  Found some hints and ran the "SetupPsychtoolbox.m" file from the folder where I checked the files out (in my case "e:\psychtoolbox").  This manually sets up the Matlab environment path so you can actually use it.

After that, it's testing time and lots of debugging.

Ugly Error Message from Exchange

I remember when C++ Template Compiler Errors were ugly.  Have a look at the following that just spewed from an MS mail server.

Is this really the kind of thing you want to spit to the client side??? Seriously?

Microsoft.Exchange.Clients.Security.LiveTransientHRESULTException: LiveId authentication code has returned error 0x80049234 PP_E_RPS_REASON_POST_TICKET_TIMEWINDOW_EXPIRED, indicating that there's a temporary problem with the remote server. Please check the Application event log for detailed information and retry the operation later.
---> System.Runtime.InteropServices.COMException (0x80049234): Post ticket time window expired. Ticket could be reposted.
   at Microsoft.Passport.RPS.Native.IRPSHttpAuth.AuthenticateRawHttp(String siteName, String httpVerb, String path, String QS, String httpVersion, Boolean bHTTPs, String httpHeaders, String httpBody, Object pAuthResultsBag)
   at Microsoft.Passport.RPS.RPSHttpAuth.Authenticate(String siteName, HttpRequest request, RPSPropBag propBag)
   at Microsoft.Exchange.Clients.Security.LiveIdAuthentication.Authenticate(HttpContext httpContext, String siteName, String& puid, String& orgIdPuid, String& cid, String& membername, UInt32& issueTime, String& responseHeaders, RPSTicket& rpsTicket, Boolean& hasAcceptedAccrual, UInt32& rpsAuthState)
--- End of inner exception stack trace ---
   at Microsoft.Exchange.Clients.Security.LiveIdErrorHandler.ThrowRPSException(COMException e)
   at Microsoft.Exchange.Clients.Security.LiveIdAuthentication.Authenticate(HttpContext httpContext, String siteName, String& puid, String& orgIdPuid, String& cid, String& membername, UInt32& issueTime, String& responseHeaders, RPSTicket& rpsTicket, Boolean& hasAcceptedAccrual, UInt32& rpsAuthState)
   at Microsoft.Exchange.Clients.Security.LiveIdAuthenticationModule.OnAuthenticateRequest(Object source, EventArgs e)

Wednesday, April 10, 2013

Synchronisation of nodes with cloud services

I have read a couple of articles recently about the complexity of synchronisation and reconciliation across cloud services (Apple) and the horror that ensues.

The problems are:

  • Multi-point synch - the so-called triangle synch or three-node synch.
  • Inconsistent connectivity - partial synch and partial updates.
  • Clashes - multiple nodes over-writing or under-writing each other.
  • Rollback semantics - how does "undo" function in this kind of environment?

This problem exists in all synchronisation exercises, be it database sharding, file-based backups, database synch across cloud services, multi-point sharing and synch, etc.

I have been thinking about various strategies for either getting this to work or mitigating some of the issues.

Topology Issues

Peer to Slave - This is where there is essentially one master "node" and a backup slave device.  In this case the master always wins.  This is basically a backup service.  The master is the author of everything, while the backup device has reader rights only.

Peer to Peer - This is the classic "synchronisation" where two peer nodes are trying to stay in synch.  In this case both nodes can be authors and both nodes are readers of each other's product.

Peer to Slave to Peer - This is the dreaded three-way.  Each Peer has author rights, while all three have reader rights.  So for any change, the system needs to propagate it to the slave and then to the other Peer.  Easy with assured connectivity.  Hard with asynchronous and intermittent connectivity.  Almost a guarantee of a reconciliation event at some point.

In most cases a higher-order topology can be reduced to one of these models; however, with asynchronous connectivity, it becomes exponentially complex.

Data Issues

File-based synch - There are lots of these systems and it's pretty much a solved problem.  In the higher-complexity topologies there is simply an increased probability of a clash.  It's easy to have a deterministic resolve rule for clashes (add a digit to the end of the filename, say) to prevent data loss, but merging the two versions of the same file is still logically impossible without more information or a preconceived rule set for reconciliation and merging.  This can only be done by the actual application and cannot be done at the transport layer.
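The deterministic rename rule might look something like this rough Python sketch (function and file names are mine, purely illustrative):

```python
import os

def resolve_clash(name, existing):
    """Deterministic clash rule: append an incrementing digit before the
    extension until the name is unique.  Prevents data loss, but leaves
    the actual merge to the application layer."""
    if name not in existing:
        return name
    base, ext = os.path.splitext(name)
    n = 1
    while f"{base}{n}{ext}" in existing:
        n += 1
    return f"{base}{n}{ext}"

# Both peers wrote "notes.txt"; the second writer's copy is kept alongside.
print(resolve_clash("notes.txt", {"notes.txt"}))  # notes1.txt
```

The point is that the rule is purely syntactic: both nodes can apply it independently and arrive at the same result, with no semantic understanding of the file contents.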

Fragment-based synch - Such as databases, large binaries, complex data sets, real-time collaborative virtual environments, etc.  These get physically ugly and logically ugly really quickly, any time the node count goes over two.

So is there a solution? 

Possible strategies

Authoritative Model
In this model, one node is made the "Authoritative" node and logically "creates" the data, which is then synched to the second (and other) nodes.  This gives a kind of master-slave model.  When synching back to the master, there needs to be some kind of idiot check so that when the nodes are out of synch, the master's copy always wins.
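A minimal Python sketch of the master-wins idiot check (hypothetical, not from any real system): the replica simply becomes a copy of the master, and anything the replica authored out of turn is logged as discarded.

```python
def master_wins(master, replica):
    """Master-wins sync: the replica becomes a copy of the master.
    Returns the new replica state plus whatever replica-side edits
    were silently discarded (the 'idiot check' log)."""
    discarded = {k: v for k, v in replica.items() if master.get(k) != v}
    return dict(master), discarded

state, lost = master_wins({"a": 1, "b": 2}, {"a": 1, "b": 9, "c": 3})
print(state)  # {'a': 1, 'b': 2}
print(lost)   # {'b': 9, 'c': 3}
```

That `lost` dictionary is exactly the multiple-author trouble described above: any fine-grained edit made on the slave is simply thrown away.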

This works for fairly simple use cases with low complexity.  However, when you have multiple authors for fine-grained parts of a system... trouble.

Logical Merge Model
Take the time to create a robust set of merge rules, so that any time there is a conflict, there is a prescribed solution.  This may be useful for a set of shared settings, where the most conservative options overwrite less conservative options (I am thinking of security settings), but in the case of a database with no obvious natural precedence rules, we need some other strategy.
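As a rough sketch (in Python, with made-up setting names and an invented strictness ordering), the most-conservative-wins rule could be as simple as:

```python
# Hypothetical security settings, ordered least to most conservative.
STRICTNESS = {"off": 0, "prompt": 1, "always": 2}

def merge_setting(a, b):
    """Precedence rule: the more conservative option wins a conflict."""
    return a if STRICTNESS[a] >= STRICTNESS[b] else b

def merge_settings(node_a, node_b):
    # Merge every shared setting deterministically, key by key.
    return {k: merge_setting(node_a[k], node_b[k]) for k in node_a}

print(merge_settings({"firewall": "prompt", "updates": "always"},
                     {"firewall": "always", "updates": "off"}))
# {'firewall': 'always', 'updates': 'always'}
```

The whole strategy hinges on that STRICTNESS table existing; for arbitrary database rows there is no such natural ordering, which is the problem.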

Time based Transaction Log
With a database it would be possible to create a set of activity logs, merge them by timestamp and replay them to generate the current data set.  This works for applications that are read- or write-only, but falls to bits when you have read, write, modify and delete.  In an asynchronous environment the different node copies can quickly move so far out of synch that they are completely different.
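A Python toy shows the log-replay idea (all entry formats and names are hypothetical); it behaves well here because the timestamps happen to interleave cleanly, which is exactly what asynchronous nodes stop guaranteeing.

```python
def replay(logs):
    """Merge per-node activity logs by timestamp and replay them to
    rebuild the current data set.  Fine for write-only workloads;
    read-modify-write and delete are where nodes drift apart."""
    state = {}
    for ts, key, op, value in sorted(entry for log in logs for entry in log):
        if op == "set":
            state[key] = value
        elif op == "del":
            state.pop(key, None)
    return state

node_a = [(1, "x", "set", 10), (4, "x", "set", 12)]
node_b = [(2, "y", "set", 5), (3, "x", "del", None)]
print(replay([node_a, node_b]))  # {'y': 5, 'x': 12}
```

Note the hidden assumption: a global, comparable timestamp across nodes.  With clock skew or offline edits, the sort order no longer reflects what users actually did.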

Point by Point Merger
How about avoiding the problem and making the user choose what to overwrite?  Show them an option for each conflict, tell them the source of each of the values and let them choose.  (Having seen users who do not want to reconcile automatic backups in Word, I can imagine how this would easily fail.)

Merge at the time of synch
Theoretically, if the software is in communication with another node, then now is the perfect time to reconcile any conflicts.  Give the users the choice to overwrite or keep the data (assuming it's not too complex); this determines which way the resolution goes.  The problem is that it's a one-shot deal that will always result in some data being irrevocably lost.

Fail loudly
Simply inform the user that there is a conflict, throw an exception and make it sound like it's their fault.

Keep two copies
Keep the previous copy on the node and create a "difference" file with only the conflicts from the synch.  This prevents data loss and gives the user or software time to gracefully merge the values.  There are still logical issues with how the data is merged, and large sets of conflicting transactions may still create some sort of failure state that cannot be automatically fixed.

Thinking, thinking...

Bug - Microsoft Sudoku app lost all progress

This morning I synced my phone to the computer to charge it up and lost all the progress in Sudoku.  Totally reset the lot.

This looks like yet another interesting case of synchronisation reconciliation failure.  So much for connecting to Xbox Live leaderboards.

Monday, April 8, 2013

How much game automation is too much?

I was just formatting some Matlab code and waiting for a virtual machine to bring my machine to its knees (again...), so I played a little Sudoku on the phone.

One of the nice bits about MS Sudoku is the helpful way that the game automatically eliminates some possibilities from your pencil work when you enter a number in pen.  This removes one of the tedious bits of the game, but also "helps" you by removing one of the areas where human error increases the "challenge".
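For the curious, that auto-elimination is trivial to sketch in Python (my own rough version, not MS's actual code): when a number goes in "in pen", strip it from the pencil marks of every peer cell.

```python
def eliminate(pencil, row, col, value):
    """When a number is entered 'in pen' at (row, col), remove it from
    the pencil marks of every peer cell: same row, same column, same
    3x3 box.  This is the tedious bookkeeping the game automates."""
    for r in range(9):
        for c in range(9):
            same_row, same_col = r == row, c == col
            same_box = r // 3 == row // 3 and c // 3 == col // 3
            if (same_row or same_col or same_box) and not (same_row and same_col):
                pencil[r][c].discard(value)

# A fresh grid where every cell still pencils in all nine candidates.
pencil = [[set(range(1, 10)) for _ in range(9)] for _ in range(9)]
eliminate(pencil, 0, 0, 5)
print(5 in pencil[0][8], 5 in pencil[8][8])  # False True
```

Note what the automation removes: the player never gets to forget an elimination, which is precisely the human error the rest of this post is about.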

This got me thinking: how much "automation" can help or wreck a game?

Scenario 1 - Aim-bot in an FPS

In an FPS, the human error that the player is trying to manage is mostly in two areas: shooting and moving.  If the game automated either of these elements, the difficulty of the game would be seriously reduced.  (This assumes that the game does not involve any resource constraints that require decision making... i.e. it has a single gun type with unlimited ammo.)
This would reduce the game to an exercise in moving around.  Kind of like being the driver in a tank, where the gunner takes care of all the shooting and you just drive.

This is no bad thing if the movement is interesting and challenging.  However, if the movement aspect is not "fun" then the automation has wrecked the game.  It has reduced the complexity, the chance of human error and the consequence of that error (learning).

What if the opposite were done and the movement aspect was automated?  The game now becomes more like being the gunner in the tank.  The tank moves and you do your best to fire with no control over or coordination with the driver.  (Like playing a team shooter when you jump in a jeep and take the mounted gun while a teenager with no headset takes the driver position.  Enjoy!  Might work out, might not.)

Does this help the game, break the game or turn it into a new and different game of different but similar complexity?

One application of this tech would be making games like FPSs accessible to differently abled people.  Someone who has issues with the fine motor control required to use a keyboard may still be able to play by taking the gunner's position and managing aim and fire with a headset and switch.

This kind of automation is both interesting and complex.  Getting it to work effectively means having some means of coordination with the human player: cooperating and, in some instances, "undoing" something the bot may have done but the human player wants to back out of.  In real-time games this may mean that the bot needs to be quite a quick and clever planner.

The second aspect I find interesting is what this thought exercise tells me about the essence of a "game".  What can you take away and still have a "game"?  I think the essential list is something like the following:

  • Control
  • Decision making
  • Human Error
  • Learning
  • Complexity

Automating Control 

You can automate control simply by using macros and other "helpers".  More complex automation would be cause-and-effect "triggers".  In some respects these would "improve" the game because they quickly become part of the player's sense of self.  However, the frustration would come with complexity.  If the player needs to deal with a very complex environment, then they may end up with many fine-grained triggers to deal with the subtle combinations, or only a few "blunt-instrument" triggers that deal with the environment in a very simple fashion.  Both of these would generate very different gaming experiences.
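A toy version of such a cause-and-effect trigger system in Python (all the state names and actions are hypothetical, just to show fine- vs blunt-grained rules):

```python
# Each trigger pairs a condition over the game state with an automated
# response.  A player's trigger list is their "outsourced reflexes".
triggers = [
    (lambda s: s["health"] < 25 and s["potions"] > 0, "drink_potion"),
    (lambda s: s["ammo"] == 0,                        "reload"),
]

def fire_triggers(state):
    """Return every automated action whose condition matches the state."""
    return [action for cond, action in triggers if cond(state)]

print(fire_triggers({"health": 10, "potions": 2, "ammo": 0}))
# ['drink_potion', 'reload']
```

The complexity trade-off shows up directly in the condition functions: subtle play needs many narrow conditions, while a few broad ones handle the environment crudely.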

Extreme automation of the player's control would turn the game into a movie.  Unless the "game" is a meta-game in which the object is to construct a bot or AI that plays a mini-game... but that's a bit different.

Automating Decision Making

The automation of decision making could be done by trying to eliminate "bad" choices and "tedious" or "repetitious" activities - those with programmed decision making and little novelty.  By removing these from the game, or allowing the player to set "default" responses, the game gets faster and may get more "concentrated".  This is probably a good design choice when player "eyeball time" is an issue.  Frustrating your players is important at some points, but not when the payoff is minimal.

At its extreme, removing decision making from the game turns it into a "spectator" experience.  Kind of like playing Myst without being able to play with the switches.  It's just an environment where stuff happens.  There is no investment by the player in their own experience.

Automating Human Error

Automation to eliminate human error is probably the most desired, but also the fastest route to failure.  Human error is the thing that people are trying to "beat".  It's the thing that players bring to the game that makes it different for everyone.  Automation that tries to mitigate errors or "help" users not make them is probably essential in some industries and in things like self-driving cars, but would be disastrous in a game.  The game would then become a kind of "min-max" monster, where the player can rely on the "helper" bot to always make good choices.  This kind of "training wheels" design is essential for "training" and "practice" areas in complex games, because it helps the players get high enough up the learning curve that they don't give up and quit.  But like all training wheels, they need to come off ASAP.

Games with high levels of complexity like SimCity need some sort of "advisor" or helper function simply to make some of the complexity manageable for players of different skill and ability.  The question is how "helpful" this advice is and how it's delivered.  Also, how accurate it is.  The thing with advice from humans is that it always reflects their "point of view" and intrinsically comes from the limited amount of information that the advice giver has.  In an RTS or simulator, the "game" potentially has accurate and timely knowledge of every aspect of the game, so there is much less "uncertainty"... I say "potentially", as the game designer can obviously engineer this uncertainty in.
The other issue would be "comprehensiveness": if you have a group of advisors who all give you advice on different aspects of the city, you as the player still have to bring all that advice together and make sense of it.  But it could certainly help with managing high levels of complexity or a complex interface.

But is this automating to reduce human error?  I think that if the automation is "looking over your shoulder", prompting you, or "second-guessing" the player's choices and providing hints before those choices are realised, then it is - and it's the wrong kind of automation.  It becomes a crutch that the player depends on and expects.  This is nice in the training-wheels phase but destructive in the "real" game.

Automation that provides "timely" information and simplifies complexity is still very valuable, but this is really only good interface design.  It still allows the player to form their own opinion and make their own choice.  This could be manipulated simply by having the interface distort the information to influence the player's choice... but that's just paranoid.

Automating Learning

There are all sorts of aspects within a game that foster learning.  The first is simply the experience of the game environment: learning the rules, learning how big the play space is, learning about the content of the play space.  Any attempt to "speed up" or "smooth out" this learning can have some strange consequences.  Consider the difference between being told what to expect and being allowed to find out by experimentation.  Learning happens through experience, much less via observation.

The opposite is interesting: automating the "evaluation" of how much the player has learned.  I build testing and measuring systems in my day job, so collecting data on performance is my bread and butter.  Building a system to evaluate the player's performance, or not allowing the player to progress until they have mastered something, are interesting plot devices.  (They can also be sources of frustration if the threshold is set too high...)  But used with sensitivity, they could provide a very useful mechanic for some game styles.

This is not really automated learning.  It's more a way for the game designer to manage the player's experience a little.  I am more concerned with automating in such a way that learning by the player is reduced.  There are probably any number of aspects of many games that players wish they did not have to learn - either they are tedious, not "core gameplay", not interesting for some players, etc.  It would be nice to allow some people to "focus" on the bits they are interested in while being able to "ignore" the rest.  But would they be playing the same game as everyone else?  Is this simply dissecting the experience?

One example of this kind of thing is the "fast travel" mechanism in some RPGs, where once you have "discovered" an area, you can then "hop" around the map automatically.  You do not have to trudge through the area each time you want to move around.  This gives some balance between having to learn something that is novel and having to repeat something that is familiar.

In real life, repeatedly traversing the same area reinforces our memories and familiarity with a location.  Repeatedly experiencing the same area may allow us to "see new things" each time we experience it.  However, with virtual environments not yet having the depth of detail of reality, this may be a blessing.  Having to re-experience a low-fidelity environment may in fact reinforce the fact that it's fake!  It may be that "novelty" is one of the essential tools in the virtual world designer's corner at the moment.  I know every time I have to go back in a virtual world and search an area thoroughly for something, it becomes very obvious that it's artificial.  This erodes my sense of immersion and reduces the pleasure in the experience of the world.  Contrast this with entering a new area, where potential enemies lurk under every bush and resources lie around every corner... I think this is probably the point where I am most immersed in the experience.  Every sense is heightened and the fact that the bark on the trees is not tiling properly completely escapes my notice.

I think using automation to reduce exposure to the stimuli in the game that facilitate learning simply means that the player does not get much of a lasting impression from the experience.  This may be an area for some research.  My feeling is that a certain amount of repetition at different levels of excitation is essential for a rounded learning experience.  There is certainly a balance to be found between repetition and boredom... in some game environments, I think the shallowness of the environment and the lack of stimulus would result in a boring experience with very few repetitions.

Automating Complexity

Helping your player to manage complexity is a complex issue.  I touched on it above in the decision-making item.  Obviously there is a fixed level of complexity that a player can cope with.  Depending on their mood and energy they may be able to move it up a notch, but there is still a ceiling effect somewhere.  For games that involve a high level of complexity simply by their nature, this can create an insurmountable learning curve for some players.  There is an argument that this becomes a self-selecting exercise, but if a player has paid their money, it would be a poor game designer who did not reasonably wish them to have a good experience.

So, is there a case for automating some of the complexity, so that the player has a good experience no matter what their level of capacity?  In team-based games and competitive games, this may violate some of the social contract that players self-create.  The concepts of "cheating" and "fairness" are complex beasts, usually very dependent on how closely matched players feel themselves to be.  The question is: is a game fair when one of the players has limited hand function or is vision impaired?  (Just examples, there are lots more out there.)  What about when one player is a teenager and the other is much more experienced but may have less snappy reflexes?  (Age will get us all... if the zombies don't.)

I am not trying to make a case for automation, simply pointing out that there are many cases where the "fairness" argument fails.  It's predicated on the idea that everyone is equal and any differences are "self imposed".  We know that is patently untrue.

Hmm... I feel like I may be wandering away from the thread a little.

So, I think the take-home is that automation is a complex subject.  There is a case for automating all sorts of aspects of games; some to make the game better for all, some to make the game better for players of different capacity.  Certainly, less automation can increase the challenge for some players, while for others it may make the game unplayable.  In some cases, the addition or removal of an item of automation may make the experience "a different game".  I know that for me the automation in Sudoku makes it a faster and easier game, which I think subtly sucks the fun out of beating a hard puzzle.  It feels a bit too easy.

Friday, April 5, 2013

The Go Language makes me want to cry... a good cry!

There are sections in this article that very nearly brought tears to my eyes.  I so want to be able to use Go simply for its design. 

Looking at this article after reading one on the proposals for C++14 is enough to make me quit C++ forever.

Now I just have to figure out how to use Go for all the legacy rot in my life that I have to drag forward.

Response to the flaws in the SOLID principles

Because I hate the comment system on his blog... here is my response to the above post.

The massive flaw in all this is the semantics.  The "Horizontal" and "Vertical" separation is not only a simplistic pair of terms from the business world, but useless semantics that do not map to anything except your own weird two-dimensional internal mental model.  There are no intrinsic "dimensions" to a codebase that would support using these as axiomatic semantics that other folk would "grok" automatically.  We deal in abstractions; don't fall into the habit of trying to map high-order abstractions back to simple and limiting lesser systems that impose non-existent constraints.  It's quite possible to flatten everything onto a whiteboard, but that's not the reality.  Any non-trivial codebase has a multi-dimensional graph of relationships, dependencies and states that cannot be simply represented in a 2D image.

This is not to say that I disagree with your intent.  If you observe your naming conventions, you will see that your "horizontal" model could equally be called a "functional" model, as it uses very "functional" descriptors for the names of the entities it is describing, while your "vertical" model shows a naming convention very similar to a "logical" description of the entities it's describing.  The fun part comes when you realise that a single class can have a single functional description but multiple logical descriptions, and vice versa.

The problem, once we have established my proposed naming convention, is that every class and interface has a dual role within the architecture.  Take, for example, the "controller" class, which functionally performs a "control" function over some resource or process, while logically it may be part of a business rule that manages a transaction.  How do we reconcile these dual identities in both the naming conventions and the "architecture" decisions?
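To illustrate with a throwaway Python sketch (class and role names invented for the example): one class, one functional description, but several logical descriptions at the same time.

```python
class PaymentController:
    """Functionally: a 'controller' of the payment resource/process.
    Logically: a participant in more than one business rule at once."""
    # The same class shows up in multiple logical descriptions.
    logical_roles = {"settle_invoice", "refund_order"}

    def execute(self, action):
        # The single functional role: control an action on the resource.
        return f"controlled: {action}"

c = PaymentController()
print(c.execute("pay"))             # controlled: pay
print(sorted(c.logical_roles))      # ['refund_order', 'settle_invoice']
```

Name the class for its functional role and the logical roles disappear from view; name it for one logical role and the other logical role plus the functional one get buried.  That is the one-eyed problem in miniature.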

Once you identify this duality, it's easy to start to see how concepts like SOLID are one-eyed and only allow you to see one role at a time - which ends up with both roles being mixed through the codebase's names, comments and relationships.

We need a better system to conceptualise the roles and functions of the design entities that can capture the actual complexity rather than ignoring half of it and hoping it goes away.