Wednesday, December 11, 2013

Make your own game or make your own reality bubble - Games for stress disorder therapy

I have been kicking around the idea of an article on how players create their own games within games or game environments.  This is a bit of a ramble to get some of the thoughts out on paper and explore the topic a bit.


Abstract

A game is, at its simplest, an activity with an evaluable objective performed within a play-space defined by the "Rules".  The "Objective" is imposed by the designer of the game.

What happens when the player self-selects another objective but co-opts the rest of the infrastructure of the game?  Is this a distinct, personal experience worthy of investigation, or is it such a common behaviour that we (the players) do it as a matter of our basic function?


A parallel topic that I have been exploring is the construction of what I term a local reality bubble.  This happens all the time around me.  Different people and groups of people define reality and share it.  This is normal human nature.  Reality is a construct that we define with our perceptions of the environment, imperfect memories, rationalisations about causality and communication with other agents within the same environment.  This is just neural nets playing together.

But every so often someone defines a reality that conflicts markedly with the reality agreed upon by others who are perceiving much the same stimuli.  We will call the majority the "Normals" and the minority the "Crazies".  This is basic social psychology.  However, what happens when there are very few people in the two reality fields?  What happens when the populations are 1 & 1?  Which is the normal and which is the crazy?

The key word is "Conflict".  No one cares when two reality bubbles differ but do not conflict.  It's just different and quirky.  The problem only arises when the members of the reality bubbles cannot accept that other people have a difference of perception/reality/belief, because it fundamentally undermines their faith in their own construct.  Note this may be bi-directional or only uni-directional, i.e. only one of the reality bubbles sees the conflict while the other thinks they are happily co-existing.  To borrow from Highlander... "there can be only one!"



Anyway, back to the "Make your own game" thread.  I think it's time for examples.

For instance, take "Hitman".  The objective that is supported by the game infrastructure, the scoring system, the game environment and the narrative is that the player is on a mission to "hit" one of the fictional characters in the scenario environment.  The player then needs to escape the environment, the scenario ends and the score is presented.  Designer's objective complete.  No problem here.  This is entertainment, if your taste runs to fantasy assassination.

Now we move into the realm of the player-as-objective-setter.

Within the scenario there are quite a few "self-selected" objectives that the player could nominate to pursue.  They might be variations on the original scenario objective.

"Do the hit but without killing anyone else"
"Do the hit, but use only a knife!"
"Do the hit in the fastest time possible!" 

The third one is often called a "Speed Run" and has been a popular exercise for a small sub-culture of gamers for many years.  They create and post movies, tips, maps etc. of various games, with the objective of completing the game in the minimum time possible.

Another group of players are those interested in maximising their experience of the designer-objectives.  These are the players who write and contribute to walk-throughs and exhaustive explorations of every facet of the game, to achieve a level of mastery of the designer-objective in all its subtlety.

But what about the players who elect to just wander around the game environment and spot butterflies?  For a game like "Hitman", which has quite a limited environment beyond that needed to support the assassination scenario, this will be a reasonably limited experience, but it is still a valid objective to choose to ignore the scenario and simply explore the little pocket universe of the scenario.  Look for the quirks and bugs that the designers have left, intentionally or unintentionally.

So why are these self-selected objectives interesting?

What do they say about the player themselves?
What do they say about the game environments, designer-objectives and scenarios?
What are the implications for the business of game creation and sales?
How can these kinds of activities be encouraged, discouraged or manipulated?
What are the implications for more free-form play spaces that lack strong objective systems?

An interesting aside that I have been poking at for a little while is using games as therapy for people with stress disorders, such as post traumatic stress disorder.  It applies equally to people who have stress responses to long-term experiences: victims of domestic abuse, school and workplace bullying, stalking, ideological suppression, discrimination, persecution and other environmental stresses that create an ongoing stress reaction, along with coping mechanisms that are difficult to adjust once the person is out of the stress-producing environment.  (Ties into the reality bubble thread quite a bit.)



The designer-objective

My observation (as a player and reader) is that having a designer-objective in a scenario provides both a guided experience and, in the old days of limited development resources, a way to focus the player on the good bits of the game and try to hide all the compromises.  These rationales have faded a little as game hardware and software have improved, with large open-world games with sprawling environments now being more common.  These often have a tangle of small and large missions and objectives, side quests, ethical systems and multiple "endings" that dilute the effect of having a designer-objective and may allow more players to self-select their own course through the possible play experience.

However, contrast this with a play environment where there is a much lighter touch by the designer.  An example that springs to mind is "Minecraft", where there is simply a play space and a tool set.  Players self-select their objectives and there is little in the way of any scoring or feedback system to impose meaning on their behaviour.  There are a few environmental stimuli, such as monsters that will interact with the player if they remain static too long, but otherwise there is no impetus to do any particular thing in the environment.

There is a whole slew of derivative games and game editors that allow the player to construct their own environment and then experience it as they see fit.  What is there to learn from this that could be generalised?  Apart from the fact that there is a sizable population of people who enjoy this as entertainment and find it a satisfying use of their resources... there is probably little to conclude about the actual choices they specifically make.  There is a huge range of possible research into learning rates, curiosity and objective setting, but that's another rant.

Getting back on thread.  What is the deal when a player enters a game environment with a strong designer-objective but chooses to ignore that imperative and do their own thing?
I have experienced this in games when I get bored with the designer-objective, or the game system is flawed and the objective seems "broken".  I do not lose engagement with the game, but tend to get creative and start looking for other things to do.
How would this play out for people who want to re-write their reality bubble?  Would it provide a bridge that they could cross if they became familiar enough with the "scripted scenario" to lose interest and try something new?  Or would they continue to have such strong emotional reactions to the cues that they would remain trapped in the coping mechanisms that they are wanting to change?  In which case, simply remove the cues until they reach a level that the person can suppress and change, then re-introduce them and allow the person to adapt at their own rate.  My hypothesis is that, given some control over the cues that they are reacting to, most people would be able to adapt with exposure to a different self-selected narrative.  They would then be able to get on with self-selecting their objectives rather than being trapped in the imposed narrative.  I have a sneaking suspicion that the number of cues required to trigger a stress response would be quite small in many people with strong stress reactions, so figuring out how to "turn them down" may take some creativity but should be do-able.

This should be quite easy to evaluate with something like eye tracking and galvanic skin response.  Stress is usually not hard to measure, so inducing stress and measuring interventions to see if they are more or less stressful would provide some objective insight into how stressful a scenario or environment is.

The other side of this is the feedback system.  Designer-objective games often have an abstract feedback system using quantitative "score" mechanics.  These serve to reinforce the objective and the behaviour required to complete the objectives as defined.  How do you create an abstract feedback mechanism for a very personal experience?

I think a blunt-instrument approach is probably a starting point.  Simply get the player to self-select a simple positive objective and count the time taken to achieve it, plus some sort of progress clock.  This keeps the feedback system always positive: a progress meter rather than anything with negative reinforcement.

For a complex environment with many stress cues, it may take a more complex scoring system, with some boolean scoring items that could be represented as "medals" or "achievements" which add some specifics to the general "progress" score, to address very specific issues in the person's scenario.  These medals can then be further refined into a small progression to give a finer scale of feedback where the therapist sees value in getting the person to attend specifically to an issue in a complex scenario, i.e. the medal can be changed to a progression of medals (bronze, silver, gold, etc.) to represent improvements on that issue.
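To make that concrete, here is a minimal sketch of what such a positive-only feedback record might look like in Matlab.  All the field names, medal names and example values are invented for illustration; this is not taken from any existing system.

function feedbackSketch()
% FEEDBACKSKETCH  Toy sketch of a positive-only feedback record for one scenario.
% Field names, medal names and values are invented for illustration only.
    session = struct( ...
        'objective',      'self-selected objective text', ...
        'progress',       0, ...                               % 0..1, only ever increases
        'elapsedSeconds', 0, ...
        'medals',         struct('calmEntry', 0, 'noAvoidance', 0));   % 0=none .. 3=gold

    session = updateProgress(session, 0.4, 95);    % player reaches 40% after 95 seconds
    session = upgradeMedal(session, 'calmEntry');  % therapist signs off -> bronze
    disp(session);
end

function session = updateProgress(session, fractionComplete, elapsedSeconds)
% The progress meter never goes backwards, so the feedback stays positive.
    session.progress       = max(session.progress, min(fractionComplete, 1));
    session.elapsedSeconds = elapsedSeconds;
end

function session = upgradeMedal(session, medalName)
% Bump a named medal one tier: 0 none, 1 bronze, 2 silver, 3 gold.
    session.medals.(medalName) = min(session.medals.(medalName) + 1, 3);
end

The only point of the sketch is that nothing in it can punish: progress is monotonic and medals only ever go up.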

























Thursday, December 5, 2013

Gravity Powered Light GravityLight

http://deciwatt.org/

GravityLight.  Simple and elegant. 

This is similar to an idea I had a while back... only much more elegantly developed.  Must dust off my plans and take another look. 


Wednesday, December 4, 2013

Fixing Adobe Master Collection CS4 on Windows 7 64bit...again

I am cleaning up and have decided to try again to get some more of CS4 working.  As a background, I purchased CS4 some time ago, then updated to Windows 7 64bit Enterprise and re-installed CS4.  At that point the pain began.  Most of the Adobe tools crashed on startup... the ones that reported an error often pointed at one of the plug-ins.  My conclusion is that many of the plug-ins do not like the 64bit architecture and were probably compiled for 32bit.  Rocket Science....  

Generally, I have been using this process:

1) Start the application from the Master Collection... crash. (Sometimes with an error message.)
2) Cut the Plug-in directory from the application directory and paste it somewhere else.
3) Re-start the application (often now works)...
4) Re-introduce the plug-ins bit by bit to eliminate the ones that cause a crash (often a number of them).


The result being that I have most of the basic functionality of the tools but without some of the plug-ins.  This is not ideal, as I paid a lot of cash for the whole toolkit... but it's what I have to work with until some mythical upgrade in the future.

I had considered doing a clean uninstall, running the Adobe Cleanup tool and trying a fresh install, but the chance of it bouncing and having to go through the mad drama I had last time has too high a cost to bear.

For instance, below is my debug of Photoshop.

Adobe Photoshop CS4 Extended 64bit

Plug-ins
- Panels (works)
- Measurements (works)
- Import-Export (works)
- Image Stacks (works)
- Filters (Works)
- File Formats (Works)
- Extensions (Broken)
----- FastCore.8BX (Works)
----- MMXCore.8BX (Works)
----- MultiProcessor Support.8BX (Works)
----- ScriptingSuport.8li ( BROKEN )
- Effects (Works)
- Automate (Works)
- ADM (Works)
- 3D Engines (Works)

 The scripting plug-in has been the cause of crashes on a number of other products and tends to be my first suspect in everything else.

A funny thing just happened.

On the Help Menu for Photoshop 64bit there was no "Update" option.  Then after opening the "System Info" item  and closing that dialog... suddenly the Update option appeared.

Photoshop has been weird on the 11.0.2 update.  It constantly wants to download and install the update but the update will not stick.  Photoshop always shows version 11.0.1.  When I manually downloaded and installed the update (11.0.2), it claims that it's already installed.  Running Update from the Help menu just does it all again and seems to silently fail the update.

But when I run the Update option from the now-working Photoshop CS4 64bit, there is a new patch waiting in addition to the usual 11.0.2 patches for the 32bit and 64bit versions of Photoshop.  This patch seems to have installed and worked... however 11.0.2 is still pending.

There are some comments on the Adobe forum about fixing this problem with the Cleaning tool, but there is such a huge risk that it may trash my whole installation that I am just not ready to do it.  I have tried updating Adobe Updater and that did nothing.  I think the patch is just silently failing.

Adobe After Effects CS4

Same problem as above:
- Plug-ins
---- (I forget which sub-folder)
--- Scripting.AEX (Broken)

Thursday, June 13, 2013

More Visual Studio shenanigans

I just received an email with some light detail about Visual Studio 2012 Update 3 RC.  After having a look at the lean list of fixes included, and the news that MS is prepping for VS2013... well, I was pretty disappointed.

Suffice to say that effectively I have not been able to use VS2012 for much of anything.  It's either been feature-incomplete or buggy as shit to the point of unusability.  So apart from not progressing a couple of projects... I will essentially have to skip a whole version before (hopefully...) I can make some progress.  Talk about stealing momentum.

While I understand that VS is an "everything and the kitchen sink" type toolbox and getting it all to work is a herculean task... it seems like this version has just sucked constantly.  It's been such a political release, with MS trying to strong-arm their agenda for Win8 down my throat, that I am just disgusted by the whole experience.

The sad thing is that Win8 would have been picked up and used quite happily as part of the natural cycle and no one would have mentioned it.  We would have migrated everything in our own time and at a comfortable pace.  But the whole "forced adoption" policy just broke too many things all at once and really did not leave me with a path forward.  And that left me holding the bag and trying to explain to a lot of clients why I could not deliver what seemed to be a simple update for a new Microsoft platform.  You just cannot do that and not expect to cop some flak for it.

Diversification is the only way to manage risk across a broad portfolio.  So now I have to diversify and spread my position away from Microsoft.  I have to maintain a number of possible solutions on a range of platforms and middleware rather than relying on MS to be both platform and middleware provider.  The trust is gone.  I have spent more time reviewing toolsets and APIs from other platforms over the past year than at any time previously.  I have had to develop on a dozen different middleware packages and push more projects into the cloud that otherwise would have been on the MS desktop.  All because they moved my cheese and pissed all over it.

And for what?  Has the MS position in the marketplace been strengthened?  Hardly.  I still can't find a supplier in the city who cares about selling Surface tablets.  There are now just as many Mac products among my clients as there are PCs.  Chromebooks and Android tabs are breeding, and marketplace apps are spawning like crazy salmon.  The monopoly is gone; panicking is just making it worse.






Friday, June 7, 2013

Visual Studio Update fixed flickering bug

Today I happened to start Visual Studio 2012 for an indirect reason and it told me there was an update for NuGet available.  I let the update install... and then restarted and... the horrible flickering and endless instability were gone.

This behaviour has been paralysing my use of Visual Studio for months.  I guess I can finally get back to the job I have had on hold... once I finish all the others that have filled the vacuum.

Whoot!

(So, in hindsight... I guess it's been the best part of 18 months since I have been able to make meaningful progress on that job due to all the f@#% ups with Visual Studio, Win7/8 and the features that were missing and finally showed up in service pack 1.  While my disgust knows no end... I have been using a bunch of other development tools recently and know just how polished VS really is... despite some of its crippling flaws.)  What a frustrating life we get to lead....

MATLAB Refactoring

Today I hate Matlab. 

My problem is that I have inherited a pile of scripts that have been hacked on by a range of researchers in multiple countries over multiple years to do multiple experiments... (I hope you get just how ugly that is).  Each of these researchers is a gifted amateur.  That is to say, they are very good at hacking in a fairly old-school procedural style of coding, which is precisely where everyone starts... no problem there.  The problem is the scaling issue, lack of discipline, quite varied use of idiomatic styles and all the usual code-rot problems of a codebase that has been recycled multiple times without any particular clean-out or clean-up.  It's got gunge!

Where do I start?

Firstly, get my tools lined up: get it under source control, create a working copy...

Tools
Matlab and Notepad++

Now start reading to see what I have to deal with....

As per Matlab convention... they are basically one massive function per file... some very rudimentary decomposition, but still about 5kloc over 4 files.  Shit....

Scads of cryptic variables.  Lots of them seem to be allocated within complex calculations and then used here and there... then turn into arrays and cell arrays, or just get dumped to files where they die.

Lots of magic numbers.....

Lots of one line calculations doing multiple things with multiple cryptic variables and a scatter of magic numbers.

Did I mention the abysmal lack of whitespace?
Did I mention the old-school style of keeping all names in lower case and minimising the number of characters (looks like they were trying to keep it under 5 characters for some insane reason)?
Did I mention the lack of meaningful documentation?
Did I mention the lack of any verifiable test for anything?

The only documentation of intent ("intentional" descriptions) is a bunch of research papers that have been written off the back of this snarl, documenting what the researchers think they were doing... which scares me when it's very hard to tell if the code actually did that... reliably.

Now to start the refactoring.

Step 0.  Format the Code for readability.

In Matlab, use Ctrl+A, Ctrl+I for the auto-indent function.  This works nicely and I have not found any problems with it.

Make sure your functions have the Matlab "optional" "end" keyword, as in the trivial sketch below.
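(An invented example; the function body is just filler to show the terminating end.)

function columnMeans = meanOfColumns(data)
% MEANOFCOLUMNS  Mean of each column of a numeric matrix.
    columnMeans = sum(data, 1) ./ size(data, 1);
end   % the otherwise-optional terminating "end"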

Now where are the styling tools????  WTF?  There are no whitespace tools for Matlab... you must be F@#@$% kidding.

Here is the .m file I wrote to fix that.  http://duncanstools.blogspot.com.au/2013/06/matlab-fix-style-tool-to-fix-formating.html

Step 1.  Name stuff. 

Name the functions as meaningfully as you can. 

This will change as you work through the codebase, but name them to describe their "function" as much as you understand it.

Start naming the variables to describe what they are doing.  

Matlab has some fairly handy tools to rename variables... they are a little flaky and sometimes fail, which means manually undoing what you have done.  When this happens I use the find tool to work backward up the file until I find the first instance of that variable and try to rename it at that point.  Usually this works.  If all else fails, use find and replace....  A typical rename pass looks something like the sketch below.
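The variable names and numbers here are invented to illustrate the before/after, not taken from the actual codebase.

% Before: cryptic names and a magic number buried in the calculation.
t1 = 0.512;  t2 = 0.844;
rt2 = (t2 - t1) * 1000;

% After: the same calculation with the intent spelled out.
MILLISECONDS_PER_SECOND = 1000;
stimulusOnsetSec = 0.512;
responseTimeSec  = 0.844;
reactionTimeMs   = (responseTimeSec - stimulusOnsetSec) * MILLISECONDS_PER_SECOND;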

Step 2. Group Stuff. 

Use the Matlab code cells (start a comment with two percent characters and a following space, "%% ", and it will create a different "region" in the code).  This is an easy way to "soft" group a region of code that you think is a candidate for extracting into a function.

Start looking for any variables that are "recycled" and split them up.

Start looking at where variables are allocated, initialised and used. Group them together and see if you can package them up into a nice little unit of functionality.

Break out all the repetitious code and sketch out a reusable block that might work to replace it.  Make a class if it will need to hold state data, or a function if it just needs some scope.  Beware of Matlab's scope rules... they are a bit weird if you have come from other languages.

Matlab allows you to have multiple functions in the same file.  Just start the file with the "big" function and then put the "helper" functions below (see the sketch after this paragraph).  There are no refactoring tools to help you with this, but the syntax highlighting and variable highlighting are pretty good and help a lot when extracting sections of code manually.
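A minimal sketch of that layout, with invented function and field names (this is not the inherited code, just the shape to aim for): one "big" function at the top, code cells marking candidate regions, and extracted helpers below in the same file.

function results = analyseSession(dataFile)
% ANALYSESESSION  Top-level "big" function; the helpers live below in the same file.

    %% Load raw data
    raw = load(dataFile);    % assumes a .mat file containing a 'trials' struct array

    %% Clean and summarise (each cell is a candidate for extraction)
    trials  = removeBadTrials(raw.trials);
    results = summariseTrials(trials);
end

function trials = removeBadTrials(trials)
% Drop trials flagged as invalid; keeps the top-level function readable.
    trials = trials(~[trials.invalid]);
end

function summary = summariseTrials(trials)
% Collapse the trial list down to the handful of numbers that get reported.
    summary.meanReactionTimeMs = mean([trials.reactionTimeMs]);
    summary.trialCount         = numel(trials);
end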

Package up related material and then clean up the packages.  The exercise of packaging is really an exercise in breaking the total "scope" of the code into small manageable bits that you can hold in your head all at once.

By driving a structural "fence" through a large piece of code (using functional decomposition), and managing what crosses the fence (what passes into and out of functions), you are immediately simplifying your mental schema.  Once you have the system decomposed into chunks of a size that you can hold in your head comfortably, you are done.  Now just look at each chunk and clean it up.

Step 3. Write Stuff. 

Write comments to document what you think a variable is for.
Write comments to document what a function or group is doing.

The more you write, the clearer the code comes out and the less you need to hold in your mental schema at any point in your coding.  This is the objective of all refactoring: unload your mental schema of the code so you can get a handle on the whole game.
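In Matlab the comments pay double if the first block doubles as the function's help text.  A sketch with invented content (the function and its maths are made up purely to show the layout):

function coherence = motionCoherence(nSignalDots, nTotalDots)
% MOTIONCOHERENCE  Fraction of dots moving in the signal direction.
%   This first comment block is what "help motionCoherence" prints, so notes
%   about intent written here are cheap to find again later.
%
%   coherence = motionCoherence(nSignalDots, nTotalDots)

    % Guard against divide-by-zero rather than silently propagating Inf/NaN.
    if nTotalDots == 0
        coherence = NaN;
        return;
    end
    coherence = nSignalDots / nTotalDots;
end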

Step 4.  Delete Stuff.

This is the most pleasant step.  Delete all the crap comments, bad formatting, irrelevant and duplicate code, old-style idioms, bad names, unwanted functionality and anything else you can find that is not being used NOW.  Kill it all.  Make your code shiny and clean.  If you need something later, add it back in... don't try to carry it forward based on a "maybe useful later" kind of logic.  This does not pay off.  Go back to the code in the repository and have a look if you want to refer to something that was in an old version.  Be ruthless.  Get rid of everything that is not "shiny" and "clean smelling" (to borrow from Fowler et al.).


Does Matlab help or hinder refactoring?

Don't get me wrong, Matlab is a powerful toy with a shiny shell and a big price tag, but it's not a productive toolset for larger projects.  Then again, I doubt it's intended to be.  So maybe this is just me thinking about it in a way that's inappropriate.

There are some good features built in that provide some basic tools for the refactoring process, but there is little beyond that.

Find and replace Tool.  It's good, but not great.
Block Comment/Uncomment.  It's there and works without mangling the code
Auto-indent.  It's good.
Auto-Style.  NONE.  Check out AStyle, Profactor StyleManager or PolyStyle if you don't know what I mean.  Also the article on CodeGuru - MakeCodeNicer by Alvaro Mendez.  (Took me a lot of effort to find this reference.  I haven't used this code in 5 years but it sticks in my memory as part of my essential toolkit back then...)

Higher level refactoring.  NONE.  Is this a problem, or am I just applying Matlab to bigger things than it's ready for?

Later all...


Friday, April 19, 2013

Plane hacking or not?


Article about the security presentation 
http://www.net-security.org/secworld.php?id=14733

The denial from the Aviation Authorities
http://www.net-security.org/secworld.php?id=14749

Well meaning clarification from pilot
http://www.askthepilot.com/hijacking-via-android/


This is both a fascinating research paper and a social comedy.

The researcher has demonstrated an interesting exploit against aircraft.  Ok, yet another insecure system. Interesting but only comment worthy for the novelty of the target.

The clearly bullshit-ridden denial from the Aviation Authorities is a nice attempt to damp down the expected alarmist crap from the usual suspects in the media.  However, the denial smells exactly like a denial.  There is really nothing that the FAA etc. could say that would be believed, either by the ignorant masses, the ignorant media or technical experts.  But it's nice that they cared enough to make a statement.

Finally, the well-meaning clarification by the pilot.  Hmmm, what can we say here?  Apart from pointing out how well meaning he is.  To put it simply, the guy is an egotistical fucking imbecile.  The whole point of hacking is to take command of a system.  This guy has not understood that pilots are part of a system.  He is arrogant enough to think that pilots are somehow "above" it all.  That they are incorruptible, omnipotent and benevolent.  And like all generalisations... this one too is crap.

The evidence is easy to find.  Start watching "Air Crash Investigators" or any similar program.  Pilots are human just like everyone else.  Air crews make catastrophic mistakes every day.  Sometimes they survive and sometimes they don't.  The point is that they respond with training and skill, under pressure, time limits and equipment limits.  They trust all sorts of automated systems, instruments and processes.  The point that this research has shown is that some of these automated systems may no longer be as trustworthy as the suppliers and the FAA would want the travelling public to think.

The fact that any system is hackable (read: corruptible) is always simply a matter of time and resources.  Every system has weaknesses.  Most people do not have the resources to corrupt the systems around them.  Most of the time the users of those systems "trust" them.  However, the more these systems are revealed as potentially untrustworthy, the more people are forced to consider what the systems tell them, to pay attention to inconsistencies and to be aware of the big picture.  In the case of pilots, it will hopefully make them a little more aware of where the manual controls and old-school instruments are located.  It may make them a little more diligent in keeping their hand-flying skills polished.

Trust in automated systems should have limits.  Robots are only going to take over the world when we let them.  Once humans are totally redundant and a completely automated system that everyone trusts is available, the human will be fired.  Look at the manufacturing industry.  Look at any industry where data processing has replaced people.  Once we take humans out of the loop and trust the machines, there will be a much longer unemployment line.

It's disturbing to see just how many people are no longer needed to keep many large organisations operating.  Look at all the industries that are down-sizing across the US.  White collar workers are the current ones being made redundant, simply because knowledge can be managed by software robots.

Showing this kind of exploit will sharpen a couple of pilots up and perhaps make them a little more paranoid for a while.  It will force the FAA etc. to consider the possibility that there are exploitable systems on planes... but the other side of the arms race, the component manufacturers, will work hard to rebuild that trust and show that their systems are bullet proof.  Eventually everyone will trust them enough to take the humans out of the loop and planes will be completely flown by automated systems.  I am surprised that commercial drones have not been trialled yet.  Certainly for freight, but I expect eventually for passengers.  The final step will be fully automatic (with a remote manual override for a while).  But human trust will take the humans out of the loop eventually.

Thursday, April 11, 2013

Installation of Psychtoolbox on Matlab

I had a need to set up a Matlab environment to work on some legacy code.  The code relied on the Psychtoolbox library, so I tried to install it.  The installation runs as a script from within Matlab.  Easy enough, but it requires some prerequisites first.
  • GStreamerSDK 64bit 
  • Subversion Command Line Client
 GStreamer went in smoothly. Installed to the default location.

Already had TortoiseSVN in and working, so ignored this item. 

Stuff that went wrong:

TortoiseSVN alone was not enough: the script needs the SVN command line tools, and without them it exits.
- Solved this with a hand checkout from the repo.

Once the "DownloadPsychtoolbox.m" has failed... you are on your own. 

The script is not able to be re-started without deleting the existing checkout... doh.  Started hand-reading the script to try to figure out how to manually skip ahead.  Found some hints and ran the "SetupPsychtoolbox.m" file from the folder where I checked the files out to (in my case "e:\psychtoolbox").  This manually sets up the Matlab environment path so you can actually use it.
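For anyone hitting the same wall, the manual recovery boiled down to something like the following.  The repository URL is whatever DownloadPsychtoolbox.m currently points at, so check the script rather than trusting this sketch, and the paths are from my setup.

% From a shell: hand-checkout with the command line svn client, using the
% repository URL found inside DownloadPsychtoolbox.m (placeholder below):
%   svn checkout <URL-from-DownloadPsychtoolbox.m> e:\psychtoolbox

% Then from the Matlab prompt: run the setup script from the checkout folder
% so it repairs the Matlab path.
cd('e:\psychtoolbox');
SetupPsychtoolbox;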

After that, it's testing time and lots of debugging.

Ugly Error Message from Outlook.com

I remember when C++ Template Compiler Errors were ugly.  Have a look at the following that just spewed from an MS mail server.

Is this really the kind of thing you want to spit to the client side??? Seriously?


Microsoft.Exchange.Clients.Security.LiveTransientHRESULTException: LiveId authentication code has returned error 0x80049234 PP_E_RPS_REASON_POST_TICKET_TIMEWINDOW_EXPIRED, indicating that there's a temporary problem with the remote server. Please check the Application event log for detailed information and retry the operation later. ---> System.Runtime.InteropServices.COMException (0x80049234): Post ticket time window expired. Ticket could be reposted. at Microsoft.Passport.RPS.Native.IRPSHttpAuth.AuthenticateRawHttp(String siteName, String httpVerb, String path, String QS, String httpVersion, Boolean bHTTPs, String httpHeaders, String httpBody, Object pAuthResultsBag) at Microsoft.Passport.RPS.RPSHttpAuth.Authenticate(String siteName, HttpRequest request, RPSPropBag propBag) at Microsoft.Exchange.Clients.Security.LiveIdAuthentication.Authenticate(HttpContext httpContext, String siteName, String& puid, String& orgIdPuid, String& cid, String& membername, UInt32& issueTime, String& responseHeaders, RPSTicket& rpsTicket, Boolean& hasAcceptedAccrual, UInt32& rpsAuthState) --- End of inner exception stack trace --- at Microsoft.Exchange.Clients.Security.LiveIdErrorHandler.ThrowRPSException(COMException e) at Microsoft.Exchange.Clients.Security.LiveIdAuthentication.Authenticate(HttpContext httpContext, String siteName, String& puid, String& orgIdPuid, String& cid, String& membername, UInt32& issueTime, String& responseHeaders, RPSTicket& rpsTicket, Boolean& hasAcceptedAccrual, UInt32& rpsAuthState) at Microsoft.Exchange.Clients.Security.LiveIdAuthenticationModule.OnAuthenticateRequest(Object source, EventArgs e)

Wednesday, April 10, 2013

Synchronisation of nodes with cloud services

I have read a couple of articles recently about the complexity of synchronisation and reconciliation across cloud services (Apple) and the horror that ensues.

The problems are:

Multi-point synch - The so-called triangle synch or three node synch.
Inconsistent connectivity - Partial synch and partial updates.
Clashes - Multiple nodes over-writing or under-writing each other.
Rollback semantics - How does "undo" function in this kind of environment?

This problem exists in all synchronisation exercises. Be it database sharding, file based backups, database synch across cloud services, multipoint sharing and synch etc.

I have been thinking about various strategies for either getting this to work or mitigating some of the issues.

Topology Issues


Peer to Slave - This is where there is essentially one master "node" and a backup slave device.  In this case the master always wins.  This is basically a backup service.  The Master is the author of everything while the backup device has reader rights only.

Peer to Peer - This is the classic "synchronisation" where two peer nodes are trying to stay in synch.  In this case both nodes can be authors and both nodes are readers of both nodes' product.

Peer to Slave to Peer - This is the dreaded three way.  Each Peer has author rights, while all three have reader rights.  So for any change, the system needs to propagate it to the slave and then to the other Peer.  Easy with assured connectivity.  Hard with asynchronous and intermittent connectivity.  Almost a guarantee of a reconciliation event at some point.

In most cases higher order topology can be reduced to one of these models, however with asynchronous connectivity, it becomes exponentially complex. 

Data Issues


File based synch - There are lots of these systems and it's pretty much a solved problem.  In the higher complexity topologies there is simply an increased probability of a clash.  It's easy to have a deterministic resolution rule for clashes (like adding a digit to the end of the file name) to prevent data loss, but merging the two versions of the same file is still logically impossible without more information or a pre-conceived rule set for reconciliation and merging.  This can only be done by the actual application and cannot be done at the transport layer.

Fragment based synch - Such as databases, large binaries, complex data sets, real time collaborative virtual environments etc etc. These get physically ugly and logically ugly really quick, anytime the topology count goes over two.

So is there a solution? 

Possible strategies

Authoritative Model
In this model, one node is made the "authoritative" node and logically "creates" the data, which is then synched to the second (and other) nodes.  This gives a kind of master-slave model.  When synching back to the master, there needs to be some kind of idiot check so that when the nodes are out of synch, the master's copy always wins.

This works for fairly simple use cases with low complexity. However when you have multiple authors for fine grained parts of a system... trouble.

Logical Merge Model
Take the time to create a robust set of merge rules, so that any time there is a conflict there is a prescribed solution.  This may be useful for a set of shared settings, where the most conservative options overwrite less conservative options (I am thinking about security settings), but in the case of a database of data that has no obvious natural precedence rules we need some other strategy.

Time based Transaction Log
With a database it would be possible to create a set of activity logs and merge them, by combining the logs and then generating the current data set.  This works for applications that are read-only or write-only, but falls to bits when you have read, write, modify and delete.  In an asynchronous environment the different node copies can quickly move so far out of synch that they are completely different.  (A toy sketch of the log-merge idea is below.)
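As a toy illustration of the log-merge idea (the record format and values are invented; a real system needs proper identifiers and trustworthy clocks, which is its own can of worms):

% Each node keeps an append-only log of operations: {timestamp, key, value}.
% Merging = concatenate the logs, order by time, replay.  Last writer wins,
% which is exactly where the modify/delete cases start to hurt.
logA = { 1, 'title', 'Draft';    3, 'title', 'Final' };
logB = { 2, 'title', 'Draft v2'; 4, 'owner', 'duncan' };

merged = [logA; logB];
[~, order] = sort(cell2mat(merged(:, 1)));   % order the combined log by timestamp
merged = merged(order, :);

state = containers.Map();                    % replay to rebuild the current data set
for i = 1:size(merged, 1)
    state(merged{i, 2}) = merged{i, 3};
end

disp(state('title'))   % 'Final' -- node B's edit at t=2 is overwritten without anyone being asked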

Point by Point Merger
How about avoiding the problem and making the user choose what to over-write.  Show them an option for each conflict, tell them the source of each of the values and let them choose.  (Having seen users who do not want to reconcile automatic backups in Word, I can imagine how this would easily fail.)

Merge at the time of synch
Theoretically, if the software is in communication with another node, then now is the perfect time to reconcile any conflicts.  Give the users the choice to over-write or keep the data (assuming it's not too complex); this will determine which way the resolution goes.  The problem is that it's a one-shot deal that will always result in some data being irrevocably lost.

Fail loudly
Simply inform the user that there is a conflict, throw an exception and make it sound like it's their fault.

Keep two copies
Keep the previous copy on the node and create a "difference" file with only the conflicts from the synch.  This prevents data loss and gives the user or software time to gracefully merge the values.  There are still logical issues with how the data is merged, and large sets of conflicting transactions may still create some sort of failure state that cannot be automatically fixed.

Thinking thinking,....

Bug - Microsoft Sudoku app lost all progress

This morning I synced my phone to the computer to charge it up and lost all the progress in Sudoku.  Totally reset the lot.


This looks like yet another interesting case of synchronisation reconciliation failure.  So much for connecting to Xbox Live leaderboards.




Monday, April 8, 2013

How much game automation is too much?

I was just formatting some Matlab code and waiting for a virtual machine to bring my machine to its knees (again...), and played a little Sudoku on the phone.

One of the nice bits about MS Sudoku is the helpful way that the game automatically eliminates some possibilities from your pencil work when you enter a number in pen.  This removes one of the tedious bits of the game, but also "helps" you by removing one of the areas where human error increases the "challenge".

This got me thinking: how much "automation" can help or wreck a game?

Scenario 1  Aim-bot in an FPS.

In an FPS, the human error that the player is trying to manage is mostly in two areas: shooting and moving.  If the game automated either of these elements, the difficulty of the game would be seriously reduced.  (This assumes that the game does not involve any resource constraints that require decision making, i.e. it has a single gun type with unlimited ammo.)
This would reduce the game to an exercise of moving around.  Kind of like being the driver in a tank, where the gunner takes care of all the shooting and you just drive.

This is no bad thing if the movement is interesting and challenging.  However, if the movement aspect is not "fun" then the automation has wrecked the game.  It has reduced the complexity, the chance of human error and the consequence of that error (learning).

What if the opposite were done and the movement aspect was automated?  The game now becomes more like being the gunner in the tank.  The tank moves and you do your best to fire, with no control over or coordination with the driver.  (Like playing a team shooter when you jump in a jeep and take the mounted gun while a teenager with no headset takes the driver position.  Enjoy!  Might work out, might not.)

Does this help the game, break the game or turn it into a new and different game of different but similar complexity?

One application of this tech would be making games like FPSs accessible to differently abled people.  Someone who has issues with the fine motor control required to use a keyboard may still be able to play by taking the gunner's position and managing aim and fire with a headset and switch.

This kind of automation is both interesting and complex.  Getting it to work effectively means having some means of coordination with the human player: cooperating, and in some instances "undoing" something the bot may have done but the human player wants to back out of.  In real-time games this may mean that the bot needs to be quite a quick and clever planner.

The second aspect I find interesting is what this thought exercise tells me about the essence of a "game".  What can you take away and still have a "game"?  I think the essential list is something like the following:

  • Control
  • Decision making
  • Human Error
  • Learning
  • Complexity

Automating Control 

You can automate control simply by using macros and other "helpers".  More complex automation would be cause-and-effect "triggers".  In some respects these would "improve" the game because they quickly become part of the player's sense of self.  However, the frustration would come with complexity.  If the player needs to deal with a very complex environment, then they may end up with many fine-grained triggers to deal with the subtle combinations, or only a few "blunt-instrument" triggers that deal with the environment in a very simple fashion.  Both of these would generate a very different gaming experience.

Extreme automation of the player's control would turn the game into a movie.  Unless the "game" is a meta-game in which the object is to construct a bot or AI that plays a mini-game... but that's a bit different.


Automating Decision Making

The automation of decision making could be done by trying to eliminate "bad" choices and "tedious" or "repetitious" activities.  Those with programmed decision making and little novelty.  By removing these from the game, or allowing the player to set "default" responses, the game gets faster and may get more "concentrated".  This is probably a good design choice when player "eyeball time" is an issue.  Frustrating your players is important at some points, but not when the payoff is minimal.

At its extreme, removing decision making from the game turns it into a "spectator" experience.  Kind of like playing Myst without being able to play with the switches.  It's just an environment where stuff happens.  There is no investment by the player in their own experience.

Automating Human Error

Automation to eliminate human error is probably the most desired, but also the one with the fastest route to failure.  Human error is the thing that people are trying to "beat".  It's the thing that players bring to the game that makes it different for everyone.  Automation that tries to mitigate or "help" users not make errors is probably essential in some industries and in things like "self-drive" cars, but would be disastrous in a game.  The game would then become a kind of "min-max" monster, where the player can rely on the "helper" bot to always make good choices.  This kind of "training wheels" design is essential for "training" and "practice" areas in complex games, because it helps the players to get high enough up the learning curve that they don't give up and quit.  But like all training wheels, they need to come off asap.

Games with high levels of complexity like SimCity need some sort of "advisor" or helper function simply to make some of the complexity manageable for players of different skill and ability.  The question is how "helpful" this advice is and how it's delivered.  Also, how accurate it is.  The thing with advice is that with humans it always reflects their "point of view" and intrinsically comes from the limited amount of information that the advice giver has.  In an RTS or simulator, the "game" potentially has accurate and timely knowledge of every aspect of the game, so there is much less "uncertainty"... I say potentially, as the game designer can obviously engineer this uncertainty in.
The other issue would be "comprehensiveness": if you have a group of advisors who all give you advice on different aspects of the city, you as the player still have to bring all that advice together and make sense of it.  But it could certainly help with managing high levels of complexity or a complex interface.

But is this automating to reduce human error?  I think that if the automation is "looking over your shoulder", prompting you or "second-guessing" the player's choices and providing hints before those choices are realised, then it is.  And then it's the wrong kind of automation.  It becomes a crutch that the player depends on and expects.  This is nice in the training-wheels phase but destructive in the "real" game.

Automation that provides "timely" information and simplifies complexity is still very valuable, but this is really only good interface design.  It still allows the player to form their own opinion and make their own choice.  This could be manipulated simply by having the interface distort the information to influence the player's choice... but that's just paranoid.

Automating Learning

There are all sorts of aspects within a game that foster learning.  The first is simply the experience of the game environment.  Learning the rules.  Learning how big the play space is.  Learning about the content of the play space.  Any attempt to "speed up" or "smooth out" this learning can have some strange consequences.  Consider the difference between being told what to expect and being allowed to find out by experimentation.  Learning happens through experience, much less via observation.

The opposite is interesting: automating the "evaluation" of how much the player has learned.  I build testing and measuring systems in my day job, so collecting data on performance is my bread and butter.  Building a system to evaluate the player's performance, or not allowing the player to progress until they have mastered something, are interesting plot devices.  (They can also be sources of frustration if the threshold is set too high...)  But used with sensitivity they could provide a very useful mechanic for some game styles.

This is not really automated learning.  It's more a way for the game designer to manage the player's experience a little.  I am more concerned with automating in such a way that learning by the player is reduced.  There are probably any number of aspects of many games that players wish they did not have to learn.  Either they are tedious, not "core gameplay", not interesting for some players, etc.  It would be nice to allow some people to "focus" on the bits they are interested in while being able to "ignore" the rest.  But would they be playing the same game as everyone else?  Is this simply dissecting the experience?

One example of this kind of thing is the "fast travel" mechanism in some RPGs, where once you have "discovered" an area you can then "hop" around the map automatically.  You do not have to trudge through the area each time you want to move around.  This gives some balance between having to learn something that is novel and having to repeat something that is familiar.

In real life, repeatedly traversing the same area reinforces our memories and familiarity with a location.  Repeatedly experiencing the same area may allow us to "see new things" each time we experience it.  However, with virtual environments not yet having the depth of detail of reality, this may be a blessing.  Having to re-experience a low fidelity environment may in fact reinforce the fact that it's fake!  It may be that "novelty" is one of the essential tools in the virtual world designer's corner at the moment.  I know every time I have to go back in a virtual world and search an area thoroughly for something, it becomes very obvious that it's artificial.  This erodes my sense of immersion and reduces the pleasure in the experience of the world.  Contrast this with entering a new area, where potential enemies lurk under every bush and resources lie around every corner... I think this is probably the point where I am most immersed in the experience.  Every sense is heightened and the fact that the bark on the trees is not tiling properly completely escapes my notice.

I think using automation to reduce the exposure to stimuli in the game that facilitate learning simply means that the player does not get much of a lasting impression from the experience.  This may be an area for some research.  My feel is that a certain amount of repetition at different levels of excitation is essential for a rounded learning experience.  There is certainly a balance to be found between repetition and boredom... in some game environments, I think the shallowness of the environment and the lack of stimulus would result in a boring experience after very few repetitions.


Automating Complexity

Helping your player to manage complexity is a complex issue.  I touched on it above in the decision making item.  Obviously there is a fixed level of complexity that a player can cope with.  Depending on their mood and energy they may be able to move it up a notch, but there is still a ceiling effect somewhere.  For games that involve a high level of complexity simply by their nature, this can create an insurmountable learning curve for some players.  There is an argument that this becomes a self selecting exercise, but if a player has paid their money, it would be a poor game designer who did not reasonably wish them to have a good experience.

So, is there a case for automating some of the complexity, so that the player has a good experience no matter what their level of capacity?  In team-based games and competitive games, this may violate some of the social contract that players self-create.  The concepts of "cheating" and "fairness" are complex beasts, usually very dependent on how closely matched players feel themselves to be.  The question is, is a game fair when one of the players has limited hand function or is vision impaired?  (Just examples, there are lots more out there.)  What about when one player is a teenager and the other is much more experienced but may have less snappy reflexes?  (Age will get us all... if the zombies don't.)

I am not trying to make a case for automation, simply pointing out that there are many cases where the "fairness" argument fails.  It's predicated on the idea that everyone is equal and any differences are "self imposed".  We know that is patently untrue.

Hmm... I feel like I may be wandering away from the thread a little.

So, I think the take-home is that automation is a complex subject. There is a case for automating all sorts of aspects of games; some to make the game better for all, some to make the game better for players of different capacity.  Certainly, less automation can increase the challenge for some players, while for others it may make the game unplayable.  In some cases, the addition or removal of an item of automation may make the experience "a different game". I know that the automation in sudoku for me makes it a faster and easier game that I think subtly sucks the fun out of beating a hard game.  It feels a bit too easy. 








Friday, April 5, 2013

The Go Language makes me want to cry... good cry!

http://talks.golang.org/2012/splash.article

There are sections in this article that very nearly brought tears to my eyes.  I so want to be able to use Go simply for its design. 

Looking at this article after reading one on the proposals for C++14 is enough to make me quit C++ forever.

Now I just have to figure out how to use Go for all the legacy rot in my life that I have to drag forward.

Response to the flaws in the SOLID principles

http://codeofrob.com/entries/my-relationship-with-solid---starting-with-s.html

Because I hate the comment system on his blog... here is my response to the above post.

The massive flaw in all this is the semantics.  The "Horizontal" and "Vertical" separation is not only a simplistic set of terms from the business world, but useless semantics that do not map to anything except your own weird two-dimensional internal mental model.  There are no intrinsic "dimensions" to a codebase that would support you using these as axiomatic semantics that other folk would "grok" automatically.  We deal in abstractions; don't fall into the habit of trying to map high order abstractions back to simple and limiting lesser systems that impose non-existent constraints.  It's quite possible to flatten everything onto a whiteboard, but that's not the reality.  Any non-trivial codebase has a multi-dimensional graph of relationships, dependencies and states that cannot be simply represented in a 2D image.

This is not to say that I disagree with your intent.  If you observe your naming conventions, you will see that your "horizontal" model could equally be called a "functional" model, as it uses very "functional" descriptors for the names of the entities it is describing, while your "vertical" model shows a naming convention very similar to a "logical" description of the entities it is describing.  The fun part comes when you realise that a single class can have a single functional description but may have multiple logical descriptions, and vice versa.

The problem, once we have established my proposed naming convention, is that every class and interface has a dual role within the architecture.  Let's take, for example, the "controller" class, which functionally performs a "control" function over some resource or process, while logically it may be part of a business rule that manages a transaction.  How do we reconcile these dual identities in both the naming conventions and the "architecture" decisions?

Once you identify this duality, it's easy to start to see how concepts like SOLID are one-eyed and only allow you to see one or the other role at a time.  Which ends up with both roles being mixed through the codebase names, comments and relationships.

We need a better system to conceptualise the roles and functions of the design entities that can capture the actual complexity rather than ignoring half of it and hoping it goes away.

Tuesday, March 26, 2013

Acrobat 9 Pro bug


I have been processing a bunch of scanned PDFs and running the OCR tool on them.  Since I had a pile to do, I created an "Advanced > Document Processing > Batch Processing > Batch Sequence" script in Acrobat to automate the process.  Problem is, its results are bizarre.

If I run the commands manually, I get a nice result (OCR text plus a smallish ~2MB file size).
If I run the commands by batch, I get a weird result (OCR text plus the original file size), with the same options and the save options to reduce the file size turned on.

Also, if I run the OCR command manually and then run the batch command on the same file, it detects the existing OCR text.  However, if I run the batch command first and then run the OCR command manually on the same file, it does not detect the existing OCR text and seems to completely re-do the whole file (and do a much better job).

WTF?

This is with Acrobat 9 Pro.



 

Monday, March 25, 2013

My ...adequate Development System Wish List

Currently I am wrestling with my system configuration to try to get it to do all the different dev tasks that I need.

Mostly I build desktop apps in various toolsets for various platforms. I target Win XP, Vista, Win 7 and MacOSX with a tiny bit on Debian.  I've been asked to look at Windows Phone 7.5 and 8 for a project. I build some Office apps for MS Office and maintain a bunch of add-ins.  I also support a scatter of fairly simple websites and web "glue" scripts to keep various projects running.

So I'm using VS2010 for the phone dev, VS2012 for the desktop dev, VBA in Office for Excel and Access dev, DreamWeaver for web dev, and Matlab, Eprime and Python for various experimental platforms.  I have a native OSX codebase pending and something in PowerBasic that I need to convert.  Each of these has its own versions of libraries and SDKs that support whatever horror platform or library dependencies it's currently carrying.

Just trying to keep all this stuff straight in my head, let alone up to date, robust and vaguely tested, is a total joke.  If I counted strictly, I currently have four different development boxes in various states of use.  Chances are I'm about to inherit a 5th (native MacOSX) very soon.

This brings me to my current pain point.  I can't touch Windows Phone 8 without Windows 8 due to the restriction on installing the Windows Phone 8 SDK on anything except Windows 8.

My main base system is an Optiplex 990 with (soon) 12GB of RAM and 4 monitors.

Base install is Windows 7 (not able to be changed)

On top of that is Visual Studio 2010 Ultimate and Visual Studio 2012 Premium with a bunch of SDKs, but specifically the Windows Phone SDK 7.8.  Which gives me fair coverage over the various toolsets and platforms.

I want to keep all my normal dev tools and working tools in the base image, simply because that's where I go to get most of the "other" work done (file munging, multi-media work, administration, databases etc.).

The only physical requirement is the ability to go into the labs and physically plug into some of the systems that are not mobile (treadmill via serial cable, motion capture system via LAN).  Everything else I can get to via sneakernet.

I have recently started to look at multiple virtual box images for development simply because putting all the tools into the same image is starting to creak at the seams.

Virtual Box Development Images

Windows 7 with Visual Studio 2012 + Desktop SDK's and other toolchains.
This will allow me to develop a single image for most of the desktop work. 

Windows 7 with Visual Studio 2012 + Web frameworks and testing add-ins.
Keep all the web stuff separate from the Desktop tools, as the add-ins tend to drag each other down.

Windows 8 with Visual Studio 2012 + Windows Phone 8 SDK.
This will be my "Windows Phone" image. And allow me to target WP7.5, 7.8 and 8.

Hackintosh + XCode and Matlab.
This will allow me to develop for OSX and specifically the Matlab codebases that I need to maintain.

Ubuntu for some Cross Platform work.  Needs a GCC compiler chain and an IDE (YTBD)


Virtual Box Testing Images

Windows XP SP3 with current patches. This is my "Lab Image" machine.  Mainly Eprime and Desktop Experiments testing.

Windows 7 with Office 2010 + Current Patches for Excel, Database and Deployment Testing.

Windows 8 for future Win 8 desktop app, and Deployment Testing.

Hackintosh + current patches? With Matlab?

Ubuntu and Debian for testing the MoCap Interface software.



This does not solve my need to move to some of the equipment, so I will continue to need a physical laptop that I can run a debugger on.  Currently this is WinXP with Visual Studio 2010, which is showing its age and needs an update.

Two of the other dev machines are various configurations of WinXP with VS2008 or VS2010.

My Mac solution has been to borrow a lab machine when I need it, but that's not tenable going forward.


Thinking thinking... I need more coffee.


EDIT:

Ok,
* Adding 8GB more RAM to my main box to deal with VS choking and help the VMs run a little easier.
* Turning the other physical machines into VMs, simply to deal with the variability in hardware across my fleet.
* Trying to avoid the whole "hackintosh" approach, simply because it introduces more unknowns into the process.
* Scrounged for Mac hardware but only came up with PowerPC based systems stuck on OSX 10.5 or older, so that's a fail.  (I do have a PowerPC specific codebase that has to be dragged into this century, so that machine will help with that project....)
* Still need a worthwhile laptop with a serial port that I can walk to various systems.  This will need to run VS2010 more than likely.  I have some recent Dells with serial ports, so that's probably solved. Just have to build one and get it all tuned up.

EDIT:

AHHHHHHHHHH!  Totally F$%$@G snookered again.  Got everything going in my carefully planned Virtualisation Solution and then got this chunk of golden S@#t on my monitor.

http://stackoverflow.com/questions/13166573/windows-phone-8-sdk-this-computer-does-not-support-hardware-virtualization-wh

So, no matter what, I need Win8 Pro running on bare metal somewhere to run the phone dev tools.  Do I seriously want to rebuild my main box to win8?  I do not think F@$##(ing so.  What are my options?  Sweet F@#$# all.

Dual booting is looking like a painful and clunky solution. I do not want to be dropping out of my main workspace, rebooting to another boot image, running some tools and then finding that I need to get back to my other image to perform a step or generate a resource... talk about interrupting the flow.  Especially as I have just finished bedding everything down and getting my head and heart set on doing it all with Win7 as my base OS.  F#$## F#@#$ F#$@#@#$@#$@#$@#$@#!

*More cathartic ranting here*
WTF! After the hours of building VM images, running infinite updates and service packs. Figuring out how to do a base VHD + difference disks that will not self destruct. Getting all the right SDKs onto the various images. Deriving the testing images and planning how to do their difference disks. And the endless fucking licensing roundabout.... and logging into accounts with stupid secure passwords, and the spew of packs, tools and extra shit that the SDK suites install and spray all over the place.

So my option, it would seem, is:

BareMetal + Optiplex 990 12GB 1.5T i7 etc.
+ Win8 Pro 64bit Dual boot with
+ Win7 Pro 64bit.
++ Virtual Box running on both pointed at a shared partition containing the VM's with the dev and testing images.

Use Win7 for the general work, multimedia and animation projects.  Boot a VM for all the unusual dev work and deployment testing.  Dual boot Win8 for Win8 apps and Windows Phone 7, 7.5 and 8.

Still don't have a good solution for the Mac dev and testing, as I have quit chasing the Hackintosh route. But I have a line on a MacBook Pro that is about to be replaced that might finally solve that problem. SciLab is just not cutting it for my MatLab work.

Fuck me.  What a horrible snarl of shit. It's enough to make me go and hack together a supercomputer from old Gameboys and gen 1 iPods just for relaxation.

I had someone ask me if I could write custom apps for a Samsung TV yesterday... If anyone so much as mentions iPad or Android development I am going to fucking snap...


EDIT:

Just ran across Vagrant.

http://www.vagrantup.com/

Looks like an interesting solution for some of the problems I have (especially the testing boxes).  I was solving this problem using VHDs and discarding the changes after the test was complete.

Friday, March 22, 2013

Strategy for dealing with learning while Coding

http://bradmilne.tumblr.com/post/45829792502/the-one-tip-that-will-help-you-learn-to-code-10x-faster

This is blindingly obvious once you read it.  Something to think about.

Article on RPG design using detailed characters

http://gamasutra.com/blogs/CraigStern/20130319/188793/Using_Details_to_Craft_a_Coherent_Game_World.php

This is a good article on some aspects of using characters to drive an RPG world.  Seems to be quite single-player centric but the idea could be mapped to a multi-player environment with enough horsepower in the social layer.


Internet Census via Carna botnet

http://internetcensus2012.github.com/InternetCensus2012/paper.html

There is so much to enjoy in the research that was published in this paper.  Not least of which is the audacity to publish it at all.

The findings from the survey are mildly interesting.  Probably similar to what we could have guessed, but it's nice to have some independent confirmation.

The methodology is technically fascinating and demonstrates a high level of skill.  Some of the anecdotes are fun to read but are similar to the war stories every researcher has of their struggles and triumphs.

The quality of the design, write-up and presentation of the research is world class.  This was one of the most enjoyable reads of a technical paper I have had in a long time. This should have been published in an A* journal.  I would be proud to do something a quarter as good as this.

The underlying psychology of the researcher is quite interesting. 

The legal implications are complex: both of so publicly demonstrating the massive base of exploitable machines, and of actually exploiting and identifying them.  There is a case for pointing the finger at the manufacturers, the users, and the local and national regulators.  The fact that so many trivially vulnerable devices exist on the network bothers people enough to talk about it, but not enough to do anything about it.  It's much easier to shoot the messenger.

Good luck to the author staying anonymous. 

Monday, March 18, 2013

Why is Microsoft a bad shepherd?

http://winsupersite.com/windows-8/fixing-windows-8-advice-users-and-microsoft

The above article is yet another mild "how to fix Windows 8" piece.  Quite nice, and it makes some useful points about customising Win 8 to suit different users. But that's not the idea that I found interesting.

The seed came from the first comment.  "Microsoft should buy startdock. Or steal some of their employees".  This is both blindingly obvious and subtly interesting.

But why would that be a bad thing?

Consider this: Microsoft purchases Stardock... or any other company that is building interesting products.  Let's not argue about how much or what the financials are... let's be real and say that if the decision was made to purchase, it could be done.  This is not the point. Let's focus on what happens.

Microsoft and the managers of the inhaled company integrate the staff and processes with MS and merge them into one of the MS business groups. They let go anyone that is not right and add some resources where needed.  Imagine it all goes well. (Not making any innuendos... just skipping past distractions while I get to my point.) So what's the problem? Everyone is happy and productive.  There is only one thing that has been lost in the process.

Independence.

The new additions to the Microsoft family... are now part of the Microsoft family.  They are governed by the same internal politics that have generated the Microsoft platform.  They no longer have the choice to "have a different vision".  They cannot "fix things" that Microsoft does not see as broken. They cannot be a dissenting voice. (They can internally... but it carries less weight in the marketplace than offering an actual product that provides a solution users want.)

The erosion of dissent and the aggregation of control are the things I see as being the death of all great organisations.  As more and more central control takes over a platform, there is less flexibility in thinking, less ability to adapt and address different users' needs.  There is more movement towards a shared vision... the so-called "reality distortion field".

What Microsoft and all the other lumbering giants of the tech industry need is an ecosystem of "loose" collaborators.  Companies, developers and users who all work on the same platform... but with different visions and objectives.  They fill the ecosystem and flesh out all the tiny little niche opportunities.

Bringing the successful ones under the same vision and management is just foolish.  Imposing control is the last thing that Microsoft should do to the ecosystem. Their role is simply to foster it... to increase opportunity for the benign population and limit the opportunity for the predator population.  They are curators for the platform and the herds that browse upon its bountiful slopes.  Trying to domesticate the herds and put them into factory farms is just totally missing the point.

But, luckily, MS have not bought Stardock.  They have not crippled the voices of dissent. Either intentionally or accidentally, there is still a healthy ecosystem around the platform(s).  This in no way forgives the many many many missteps that MS has taken and the many ways that it regularly alienates the ecosystem... but change is always painful.  Some will win, while others will sit around and whinge in the dust... such is life.

Thursday, March 14, 2013

Floppy Disk attack on Zombie BIOS...

You know those surreal moments when something from the past comes back to life... wanders around and tries to eat your brain?

I have had two of those in the past couple of days.  One was people from the past contacting me and having mid-life crises... the other was a computer needing a floppy disk to fix a corrupt BIOS.

Talk about a blast from the past.  Even finding a functioning USB floppy drive is hard enough... then I had to scrounge for a floppy disk in working order.

But so far to no avail.  The machine is still in a loop of death (it was working... ok-ish before I tried to flash the BIOS... but had obvious problems), so I am considering either cannibalising it for parts or just dropping it down the stairs a couple of times....

I am still blown away that a floppy drive is the manufacturer's fallback position... even for a machine that was never supplied with one.  It does make sense, as the basic BIOS features were carved in stone a few decades ago....

Whoa!....

Later.

Anyway, this machine looks like its bricked.

It powers up, with a normal HP boot screen... but the F9,F10 & F12 options do not work.  Then it quickly flicks to a black screen with the message:

"Your BIOS failed to complete update..... blah blah"  again with the F10 option mentioned at the bottom.  Again F10 does not work.

Then after a couple of seconds it reboots. Rinse, repeat.

I have read my way across the net and tried all the recommended BIOS recovery procedures involving USB sticks, old BIOS images and Win+B button combinations, and have not found anything that has worked... (obviously)

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=110&prodSeriesId=3356620&prodTypeId=321957&prodSeriesId=3356620&objectID=c02693833

It's interesting that when I run a BIOS recovery using the USB floppy, you can hear it seek and start to read... but it still reboots on the same schedule. This suggests that the USB drivers, FAT driver and the disk driver are getting loaded... but something chokes or is corrupt and it reboots.

To me it sounds like the boot region of the BIOS has been corrupted and it's just not able to set up enough of a rudimentary system to reload the new BIOS image from the disk.  Keep in mind that the BIOS image that is supplied for flashing is not the "whole" of the code in the BIOS EPROM.  There are some other regions that are not always replaced when you "flash" the BIOS.  If these get corrupted... well, the ability to repair them gets increasingly low-level.

From reading between the lines and looking at the discussion on the MyDigitalLife boards about BIOS mods... I think there is no reasonable way to replace this code without desoldering the BIOS chip and reloading it with a working image dumped from another machine (ignoring the serial number issue... which can be managed).

As this sounds like fun.... I have thought about it... but realistically... I just don't have the time.  I have about 8-10 laptops of the same model and some are in worse shape than this one... but boot happy... so I think it's cannibal time.

Monday, March 11, 2013

The Whack-A-Mole Strategy case study

The Whack-A-Mole strategy is pretty much exactly as it sounds.  It's a reactive strategy where you wait for a problem or issue to appear before attempting to deal with it. This is opposed to a "proactive" type strategy, where you attempt to predict the problem beforehand and deal with it in a way that prevents it occurring.

The ideas of "Proactive" and "Reactive" seem to often be cast in terms of "Proactive is good" and "Reactive is bad".  However there are lots of scenarios where this is neither true nor efficient.

My current problem is to move a group of students from using one document-based data entry form to an updated version, to reflect both changes in the course and changes in the data retention requirements.  Simply by their nature, and given some pre-existing use of the old form version, I reasonably expect some to not immediately comply with the request, no matter how it's delivered.

The proactive solutions that have been floated are:

1) Deliver the upgrade message in such strong terms that no-one will even consider not complying.
2) Build the system to cope with multiple versions of the data entry form and accept that they will not comply. 

As you can see, the cost of proactively solving the problem is unpleasant in both cases... because mostly the problem is "people factors" rather than a technical issue. I could implement both solutions, but they lead to bad places both technically and personally.  So, the best solution is to politely ask all the students to update their forms, give them a short cut-over period and then use the whack-a-mole strategy to individually handle any students who have problems with the upgrade.

Another benefit of this solution is that we also learn exactly what sort of problems people are having with the new system (if any) and that can inform us about either bugs or unexpected "people factors" without the students feeling like they are at fault.

And everyone sleeps well...

Trusting Robots Article

http://www.bbc.co.uk/news/magazine-21623892

This is an excellent article that covers a lot of subtle ground about the state of integration of robots into our social space. It's worth a couple of reads and some followup.

http://www.bbc.co.uk/news/technology-17387854

This related article moves in the opposite direction and relates some anecdotes about just how far we have to go before this kind of social interaction will be comfortable.

iPad Consumption vs Creation Article

http://speirs.org/blog/2013/3/4/beyond-consumption-vs-creation.html

This is a very good analysis of the Creation vs Consumption arguments that I have been involved in (and perpetuated in some cases...). I personally feel that this is a very good summary and makes some useful points about where iPads (and their ilk) will be positioned in the near future in education environments (at least).  I would also contend that virtually everything said in this article that relates to the education environment probably maps to most of the enterprise space.  The heavy duty creation work in enterprise will still be dominated by workstations and laptops... but it's easy to envisage a large amount of the "other activity" being supplanted by tablets.

Change is already here...

Emergent Social Movement and Bullying

http://www.technologyreview.com/view/511846/an-autopsy-of-a-dead-social-network/

The above is an interesting post-mortem of Friendster which illustrates some of the emergent nature of social movements.  It's interesting to see the conclusions that the researchers reached about the resilience of the network.  This echoes some of the resilience models I have seen recently surrounding the issues of bullying, where children and teens who have small peer networks and mentor networks are much less resilient to bullying.

It raises the question of whether a student could be identified as "vulnerable" to bullying simply by examining their social network, and an intervention designed to mitigate the risk or to better understand the underlying issues that have contributed to that student's social isolation.  (I would suggest that the systematic removal of all possible mentors from school environments means that students' social networks are composed of other students, who are by their very nature not as mature or rich in experience as adult mentors could be, which results in the student having access to lower quality mentor networks.)




ViolatorWare Software

http://www.ghacks.net/2013/03/05/beware-hoverzoom-extension-for-chrome-turns-evil/


This article talks about a Chrome extension that has "turned evil".  This is a strategy that I have been thinking about for some time.  I think this one being found out is probably only the tip of the iceberg.

This highlights the weakness in reputation-based systems with incomplete review mechanisms. This extension, like so many other products that have evolved from good to bad, started out as a useful tool, then either ceased to be useful or outright implemented "features" that the user neither expected nor finds beneficial.

The big problem is always that the economic model for "free" software puts pressure on the developer to pour in their time and energy while indirectly seeking some return (commonly called "monetisation"... or "selling out"). In the grand scheme of things this is the tragedy of all great "free" software... eventually it becomes too expensive to remain "free".

Even the great FOSS systems have all evolved mechanisms to fund their existence.  Donations, "support", sponsors, selling swag, advertising, crowdfunding... etc.  None are truly "free".

So what's my point today?

The point is that there will be pressure from the dark side of monetisation to take advantage of market position and trust to modify the software to do "other" things.  This is kind of a trojan horse strategy... but it's really more like a "betrayal of trust" strategy.  I like the term "ViolatorWare". lol.

The point I made earlier about the tip of the iceberg needs to be expanded.  Think about a popular extension for a browser with an installed base of some 500,000 and a reasonable upgrade cycle.  In the event that it was possible to insert a backdoor into the package and have it go undetected for some period of time (assume a competent designer with a high level of sophistication), it should be possible to deploy that exploit to a large number of the users before the flag went up.

This makes these kinds of extensions a really attractive mechanism to deploy all manner of malware, crimeware and spyware.  With the ubiquity of browsers... there are virtually no places on the networked planet that are not vulnerable to that kind of attack.  It would be a really effective way to generate a massive botnet in the wrong hands. However, it would only work for a little while.  Whoever abused this kind of system would probably need to use it simply to bootstrap a more effective system, such as we have seen with some of the very high level espionage systems recently.  Use the ViolatorWare to open a tiny, one-time backdoor that would probably not be noticed.  Use that to insert a tiny custom backdoor which then piggybacks on some other communication channel to "phone home" to a command and control system (the use of Twitter is still a bit novel... but you get the idea); basically, hide the handshake in some other traffic.  This then allows the exploit to upgrade itself if needed.

Anyway, this kind of sophisticated attack is probably still out of the hands of most of the crimeware and malware writers.  I would expect to see it become very popular for espionage type attacks, as the diversity of extensions and the frequency of updates to them makes for a very "noisy" system that is hard to police, hard to review and hard to notify users about when something goes bad.

The perfect targets, of course, are extensions with the "highest" trust and the most complexity.  Things like security tools.  I have been expecting some of these to publicly go bad for a few years.  Either through it being revealed that one of the crime gangs has been producing them right from the start, or through the whole project being purchased/hijacked/forked and becoming just a front for malware delivery.  This is also going to be a problem for "abandonware" extensions, where someone can "take over" the project and update it using the existing trust model.

The example that comes to mind is the hijack of Shareaza, the filesharing client. This is tangled up in the media industry funded attacks on the P2P file sharing networks, so the politics are quite nasty.  The point being that the hijack of both the web domain and the name certainly occurred, with a different software product being delivered via the channel which masqueraded as the old client and relied on the trust relationship to fool users into installing it.  While that campaign was a straightforward attempt to disrupt and sabotage the file sharing activities using a popular client, rather than a determined effort to deliver a malware/crimeware package, I feel that it's a forerunner of the ViolatorWare strategy just applied for a different end.  In that case it was much more explicitly about violating the trust of the user base to drive them away from the product, rather than depending on that trust to exploit an on-going relationship.

Anyway, my prediction is that we will see more low level ViolatorWare show up, with clumsy attempts to add a little monetisation to otherwise popular extensions.  The form this monetisation takes will be all the usual suspects: advertising in all its forms, data harvesting, criminal penetration via backdoors, botnetting etc.  The extent of the abuse of this vector for espionage work will probably not be known for some time, but if I was an anti-virus company, I would start building libraries of all the versions of these extensions that appear, so that later on we can re-construct how this kind of incremental violation occurred.

Let's just take a moment to look at the platform implications.  Since extensions (at least to browsers) are supposed to run in a sandbox model of some type... how can ViolatorWare do much damage?  Firstly, breaking out of a sandbox is a proven hobby for malware writers.  So the potential will always be there.  Second, even within the sandbox, the extension can do quite a lot.  It's a programming model; it's not hard to build a whole email server or web server in a small amount of code and embed it into a script.  It doesn't need to be powerful or general purpose, it just needs to achieve the programmer's ends.  Assume that espionage systems would be able to break out of the sandbox, and there is not a whole lot that is not possible once the code is on the target computer.  The point is simply that this type of attack is a different way to "socially engineer" the user into installing, and more importantly updating, the package by abusing a trust relationship.

Friday, February 15, 2013

Access RecordSource & Dynaset not showing records

Just had a weird moment in Access land.

I have two forms that have been in use for a couple of years; suddenly they will not display records as I expect them to.

I was setting the RecordSource property using an SQL string, which was working a couple of hours ago.  But now they are not showing any of the records.  However, if I set the Recordset Type to Snapshot... tada... all the records show up as expected.

After reading some forums, I found the solution here

http://bytes.com/topic/access/answers/763599-form-does-not-show-records-dynaset-mode-only-snapshot


The thing that is bothering me the most is how this came to be a bug I am only just finding.  Have the properties been reset somehow?  Has a patch fixed something?  I swear it used to work as expected... now... weird.

Anyway, I have updated my codebase to explicitly set the DataEntry property to False when I am "reviewing" records, and it seems to work fine.  I just have to figure out if my users have different ideas about "fine".
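For reference, this is roughly the shape of the fix.  A minimal sketch (Access VBA in a form's code module); the procedure name and the SQL are placeholders, not lifted from my actual forms:

Private Sub ShowRecordsForReview(ByVal strSQL As String)
    ' Keep the form updatable (Dynaset) but make sure it is not stuck in
    ' data-entry mode, which opens to a blank record and hides existing ones.
    Me.RecordsetType = 0        ' 0 = Dynaset, 2 = Snapshot
    Me.DataEntry = False        ' show existing records, not just new ones
    Me.RecordSource = strSQL    ' e.g. "SELECT * FROM tblSessions WHERE ..."
    Me.Requery
End Sub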


Friday, February 8, 2013

Minesweeper Solver AI with a flawed algorithm

http://luckytoilet.wordpress.com/2012/12/23/2125/


The above is an interesting article on how to build a minesweeper solver.  There is a good discussion of various algorithmic solutions that can solve everything except the "must guess" states in the game.

The result of the AI in the article is a solver that achieves a ~50% win/loss ratio, which is effectively chance.

It takes about 5 seconds to figure out a couple of ways to beat the AI's win ratio using "indirect" strategies.

1) Only "complete" games that do not result in a "guess" state.
2) Generate a "result" screen without actually playing the games.
3) Throw a tantrum - blue-screen the computer, throw a critical error... whatever the AI equivilent is. This allows the AI to avoid registering a less than optimal win ratio
4) Use the AI to fiddle the number of mines and the board size variables to allow it to increase its win ratio.  (Set the number of mines to 1 and the board size to expert and rack up a huge number of wins...tada)
5) Take a state snapshot of the game, clone the snapshot, play each position until a successful solution is found then move forward with that winning solution. This is a brute force mechanism to acheive a perfect 100% win ratio. 
6) Use the "Replay" function to increse the proportion of games that are won using partial knowledge rather than starting with a fresh board every time.



This in itself is strange.  If we assume that there must be a proportion of games that are solvable without reaching a "must guess" state, then these should be 100% solvable using the AI's methods.  The rest of the games must involve one or more "must guess" situations.  Obviously, to win, every guess must come off, which means that a game involving more than a very small number of guesses becomes improbable for the AI to "win".  If we assume that the proportion of games that do not involve a guess event is fairly fixed (say 5%) and the games involving guesses are all essentially chance, then we should still get a win ratio a little above chance (0.05 x 1 + 0.95 x 0.5, so roughly 52.5%).  But like any coin walk, this will only appear with a sufficiently large sample of games (approaching infinity?).

So are the games being presented to the AI really random or is there another factor that is hiding this effect?

We can assume that as the number of mines increases relative to the size of the board, the number of "must guess" events required to "win" this kind of game will increase.  So is there a sweet spot with these variables that allows for games with minimal guessing events but still results in a "credible" seeming AI? Probably the easy levels.

If memory serves, there are a fixed number of mines in each of the three pre-set "hardness" levels. (Someone has uninstalled the Windows games from my machine image... dammit.  Ok, fixed that.)

Beginner has 10 Mines  on a 9x9 grid.
Intermediate has 40 mines on a 16x16 grid
Advanced has 99 mines on a 16x30 grid

As there is no clear way to calculate the probability of a particular number of "must guess" configurations occurring for a particular board + mine count ratio (well, none that springs to mind anyway), I guess we could just do some sampling and develop a bell curve.  (This would require that I could complete a reasonable number of games and see the board layouts, so I could in fact count the number of guess events in the process of manually solving each game.  Either that or just get the AI to solve a set of games and get it to count the number of guess events... mmm, automation.)

Anyway, assume we came up with the probability of guess events for each board size.  This would only give us a sense of what the true win ratio should be over a large enough set of games. 

However the probability of solving the boards will be:

No Guess Event (100%) 1
1 Guess Event  (50%) 0.5
2 Guess Events (50% * 50% )  0.25
3 Guess Events ( 50% * 50% * 50%) 0.125
etc.

Do you notice a pattern?  My point is that if we assume that each of these types of games has an equal probability of being presented to the AI, then we should get the probability of solving any game simply by averaging them together. The average of the above 4 game types is 0.46875... which is below chance. The further we go with the table of possible game types, the lower the probable outcome.  However, the fact that the AI using the published strategies was still reported to get a win ratio of about 50% suggests that the game types are not distributed evenly. With some simple spreadsheeting (a sketch of it is below), the distribution turns out to be a simple falloff curve.
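For what it's worth, the "spreadsheeting" boils down to a weighted average of 0.5^k over an assumed distribution of k guess events per game.  A throwaway sketch (VBA, with made-up weights purely to illustrate the shape, not measured from real boards):

Public Function ExpectedWinRatio(weights As Variant) As Double
    ' weights(k) = assumed proportion of games containing k guess events.
    ' P(win | k guess events) = 0.5 ^ k, as in the table above.
    Dim k As Long, total As Double, acc As Double
    For k = LBound(weights) To UBound(weights)
        acc = acc + weights(k) * (0.5 ^ k)
        total = total + weights(k)
    Next k
    ExpectedWinRatio = acc / total
End Function

Public Sub WinRatioDemo()
    ' Uniform weights over 0-3 guess events reproduce the 0.46875 above.
    Debug.Print ExpectedWinRatio(Array(1#, 1#, 1#, 1#))
    ' A gentle falloff (illustrative numbers only) lands near the ~50%
    ' reported for the solver in the article.
    Debug.Print ExpectedWinRatio(Array(0.28, 0.26, 0.24, 0.22))
End Sub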


Based on the reported win ratio of about 50% I suggest that the games that are being presented to the AI probably involve only a small number of guess events.

However, we are only dealing with the ratio of games that were won.  We cannot really draw a conclusion about the games that the AI lost.  They could in fact have contained a much larger number of guess events.  The above curve really only shows the ratio of the games that the AI can win, even when it's guessing.  This is simply the law of chance starting to bite hard. It doesn't actually tell us what the distribution of guess events is like in the games being presented to the AI.  Inference go suck....

Does this give us any useful ways to optimise the strategy?  Is there an angle that we are not trying simply because we want a general solution? Are we simply chasing a white rabbit down the wrong hole?  Can we be the first to beat the law of chance? Bzzzzt. Wrong answer.  There is no way to remove the effect of chance in these games.  The AI must make a guess and the guess will always have a 50/50 chance of being correct. The number of guess events will generally be small (as shown in the graph above) but they are inviolate.  So how do we beat it?  Simply examine your assumptions.

As I proposed above, there are lots of ways to get a better win ratio through out-of-the-box thinking.  Many of these strategies and approaches would be considered "cheating" by humans who are crippled by social ideals of "fair play".  Keep in mind that the only rule given to the AI was to maximise the number of wins, so these are all successful strategies that the programmer has simply failed to implement.  The AI has not been given these tools, and certainly was not given a sense of "fair play" through which it might have decided not to employ some of those strategies.  So, in conclusion, the only reason that the AI has not got a 100% win ratio is that the programmer failed to implement any successful strategies to achieve this ratio.  Essentially crippling the AI.

A better AI would have a result that looked like this:


So what's the point?

The point is that the AI is handicapped by the "humanity" of the designer. 







Tuesday, February 5, 2013

Access 2010 corrupt database "File not found"

I had the unpleasant experience of a corrupt database yesterday.  After doing some work, suddenly everything started generating a "File not found" error.  Clicking buttons, using the ribbon bar, trying to run any macros.  My database was fucked.

I tracked it back to a module that I had added... put a little bit of code in and then deleted earlier.  When I closed and opened the database, the module still showed up in the tree in the VBA editor, but seemed to be "empty", as every time I tried to open it... it flicked up the "File not found" error.  All the other modules worked as normal.


Anyway, after trying to delete it, over-write it, modify its name etc... I gave up, created a new database, imported everything from the old one and got moving. The steps are:

NOTE: Make a list of the "References" in the VBA editor for your database at this point, to avoid getting screwed when you close the corrupt database (see below for more details). Just write them on a piece of paper or something low tech. They cannot be exported.

1) Create a new database
2) In the new database go to "External Data" > Access
3) Select "Import tables, Queries, forms, reports, macros, and modules into the current database.  Select the old database file using the browse button and then click the OK button.
4) You will get a mega select dialog where you can select all the objects you want to import.  Generally you will want to use the "Select All" button on each of the tabs to make sure you get "everything" from your old database.  Make sure you do not have the corrupt item selected... if you are not sure what is corrupt... take it in a couple of bites and test the new database between bites.

5) Once the import has finished and you have saved the new database, you may still need to "wire up" a couple of things (go to File > Options > Current Database). The things I had to set up were:

Application Options Section
* Application Title
* Application Icon
* Display Form

Ribbon and Toolbar Options
* Ribbon Name

These are just the options for my "Development" database. I turn off the navigation pane and the Default ribbon before I deploy the database to the users.

Hope this helps someone.


In the Visual Basic Editor, I had to recreate all the References. I had to open the old database, make a list of everything and then manually add the references to the new database.
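In hindsight, a few lines of VBA in the Immediate window would have captured that list before things got worse.  A minimal sketch, assuming the VBA project in the old database can still be opened at all:

Public Sub DumpReferences()
    ' Print every VBA reference so it can be recreated in the new database.
    Dim ref As Access.Reference
    For Each ref In Application.References
        If ref.IsBroken Then
            Debug.Print "BROKEN: " & ref.Guid
        Else
            Debug.Print ref.Name & "  " & ref.FullPath
        End If
    Next ref
End Sub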

Unfortunately, when I came to re-open the old database, it was now throwing a much larger error:

"The database cannot be opened because the VBA project contained in it cannot be read. The database can be opened only if the VBA project is first deleted. Deleting the VBA project removes all code from modules, forms and reports. You should back up your database before attempting to open the database and delete the VBA project. To create a backup copy, click Cancel and then make a backup copy of your database. To open the database and delete the VBA project without creating a backup copy, click OK."  mmmm shit!

Well, after making a backup... I opened the database, deleted the VBA project and found that the references had gone with it..... fuckcicle!

I tried going back to one of the production copies, but they have been compiled and the VBA project will not show the references.... fucked again.

Don't bother trying to decompile a previous version either:
http://stackoverflow.com/questions/3266542/ms-access-how-to-decompile-and-recompile

When I try this it gives me the error message "The VBA project in the database is corrupt".  Basically, when it was compiled it was stripped of all the symbols etc. and they cannot be re-created.

Fucked again.

So my option is to "discover" the references that I need by trial and error.  I remember I had about 8-10 references, so it should not be tooooo hard.  I think the calendar control was the worst to find.

Where you have used early binding in your VBA, you can discover missing references simply by running Debug > Compile.  This will highlight any types that you are using that have not been defined.  You can then search the net for which object library contains that type and reference it correctly.
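For example (these particular libraries are just illustrations, not necessarily ones my database uses), early-bound declarations like the following will refuse to compile until the matching reference is ticked, which tells you exactly what to go hunting for:

Public Sub EarlyBindingCheck()
    ' Each of these compiles only when its library is referenced:
    Dim fso As Scripting.FileSystemObject   ' Microsoft Scripting Runtime
    Dim xlApp As Excel.Application          ' Microsoft Excel Object Library
End Sub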




Bugger... just have to test everything in the damn database again.