Tuesday, January 31, 2012

Roundup of HTML5 games technology


How to bake the perfect Likert Scale

I've been building a survey and needed to argue a point about some "Likert Scale" questions in the design... so I started reading a little, and all the bits I thought I knew turn out to be crap... dammit.

What's a Likert scale? (First error...) A tool (second error) to measure attitude strength and direction. (And maybe also "conviction", or the firmness with which the attitude is held.) (Third error)

* Error 1 - "Scale" can mean a number of things. The intended meaning here would more correctly be a "Likert response format item", probably as part of a scale composed of many such response items... which can then be called a "Likert Scale".

* Error 2 - It's not a tool; it's simply a mechanism for filtering the possible responses of the participant into a very limited subset that is acceptable to the researcher.

* Error 3 - One answer to one question does not measure anything. Statistical measurement requires multiple tests and is based on a summative result. It's pattern recognition. One data point does not make a trend line.
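To make Error 3 concrete, here is a quick sketch (hypothetical numbers) of the difference between one response item and a summated scale score:

```python
# Hypothetical data: seven 5-point items measuring the same construct,
# all answered by one participant.
responses = [4, 5, 3, 4, 5, 2, 4]

# Any single item is just one data point; the "measurement" is the
# summative result across all the items in the scale.
scale_score = sum(responses)
print(scale_score)  # 27, out of a possible 7 * 5 = 35
```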

This post simply looks at the variety of flavours of Likert response items in an effort to make sense of it all.

Likert item Stylin'

How many ways can you present a Likert response format? Seems simple enough... but... maybe not. Some of the options I have been asked to use are...

1. Text anchors vs text labels on all points
2. Numeric scale vs "dots" vs line vs line with dots
3. Midpoint or no midpoint (the "forced choice" model)
4. NA column, "don't know" and "no opinion" choices
5. Reverse scale or not
6. Number of points
7. One stage or two (split the direction and the strength into separate question items)
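A rough sketch of how options 4 and 5 above interact at scoring time (item names and values are made up):

```python
POINTS = 5                      # a 5-point response format
REVERSE_CODED = {"q2"}          # negatively worded items get flipped

# None marks an "N/A" / "don't know" response.
responses = {"q1": 4, "q2": 2, "q3": None, "q4": 5}

def score(item, value):
    if value is None:
        return None             # exclude N/A rather than guess a number
    if item in REVERSE_CODED:
        return POINTS + 1 - value   # 1<->5, 2<->4, 3 stays 3
    return value

scored = {k: score(k, v) for k, v in responses.items()}
usable = [v for v in scored.values() if v is not None]
print(sum(usable))  # 4 + 4 + 5 = 13
```

The design choice worth noticing: an N/A response is dropped, not coerced onto the scale, so it can't silently drag the summated score around.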

Seriously, can all these things actually be used in the same way? Are they real options? Probably not. What you may be calling a Likert scale/item may in fact be a...

Discrete visual analog scale (DVAS)
Ordered-category item
Summated rating scale
Random crap

This article breaks down the above similar constructs and explains the differences (except the random crap). Essentially, if your items do not have an agree-to-disagree response format... you're doing something else.

So are you even talking about anything Likert-"like"? Here's an article on constructing a Likert-style scale with some rigor. Chances are, if yours is not being constructed and used in a very similar way... it's something else. So go find another name.


Questions of Analysis

There are various arguments about what can and cannot be done with data collected using a Likert-style scale. Parametric or not? F-test or not? Ordinal, interval or nominal? Most of these questions reveal a more fundamental flaw in understanding the context and the instrument that collected the data than any flaw in the choice of analysis. Go read a good text, define the terms clearly and re-examine the research questions... it should solve itself.
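For instance, the ordinal-vs-interval decision already changes which summary statistic you reach for (illustrative numbers only):

```python
from statistics import mean, median

# Seven responses on a 5-point item.
responses = [1, 2, 2, 3, 5, 5, 5]

# Treat as ordinal: rank-based summaries (and non-parametric tests).
print(median(responses))          # 3

# Treat as interval: the mean (and parametric tests) assume the
# distance between adjacent scale points is equal.
print(round(mean(responses), 2))  # 3.29
```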

And in conclusion... 

I vow never to misuse the term "Likert scale" again.... bad me.

Further Reading

Stopping the abuse... with the one true article
(The writing style in this article is worth a look... talk about venom and breathless, hyperbolic writing and run-on sentences and... you get the idea.)
(Carifio, J. & Perla, R. (2007). Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and their Antidotes. Journal of Social Sciences, 2, 106-116.)

This is the Jamieson article that seems to have pissed off the writer of the above article.
(Jamieson, S. (2004). Likert scales: how to (ab)use them. Medical Education, 38, 1212-1218.)

To midpoint or not to midpoint...

Breaking it up into bits and re mixing it

Monday, January 30, 2012



Causality vs Correlation and why it's a sucky "idea"


This is a very relevant article for all researchers. Especially those psych students planning on doing correlation studies in the labs in a few weeks....

Friday, January 27, 2012

Never saw this coming....


Well, who could have predicted it? I guess all the low-hanging branches have been picked clean... time for a little encroachment to see what pisses off the herd least.

Time to move my web presence again... dammit.

biohacking info


When I have all that free time....

Death to the evil filename ending in a dot

I have finally found a way to kill a file with a name ending in a dot. This particular file has been jamming up all sorts of activity. Specifically, today it was stopping me creating a WIM from that drive, even though I specified the directory containing the poisoned file in the imagex exclusion list.

The answer is here:

It finally worked from the command line under Win 7, on a poison file inside a VHD that had been sent in a mangled zip file by a student on a Mac, and that had proven undeletable by all sorts of other means.

The fix links back to the following kb article http://support.microsoft.com/?kbid=320081
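For the record, the trick in that KB article is the `\\?\` path prefix, which bypasses the Win32 path normalisation that silently strips a trailing dot (paths here are made up):

```shell
rem Delete a file whose name ends in a dot (hypothetical path).
del "\\?\C:\data\badfile.txt."

rem The same trick works for a directory with a trailing dot.
rd /s /q "\\?\C:\data\baddir."
```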

Now I just have to eradicate the same file from the clones of the VHD, the original drive, backups and any other places it has migrated... arg!

Printing shoes article


I think this is a point that needs to be made. Theoretically anything can and will be printed... probably the first thing to be printed will be a better printer... the question will be... is it good enough to get stuff done?

I think the technology available for 3D print heads is at the primeval stage. Until there is some better way to deposit materials with higher precision, and until the printer can self-check and correct errors, it will always be a crappy "blind-write" exercise. One mistake and the end product is crap... no restart... no interruptions. Kind of like a modern washing robot that doesn't allow the wash cycle to be interrupted because it can't detect its own internal state...

A robot without senses is a dumb beast.

Thursday, January 26, 2012

Game Design Patterns


There is a massive pile of stuff in this wiki. Fossick around and see what turns up.

Article on Macros in C++


Nice explanation...

(Bad) magical game scripting

I have been picking through the quest scripting interface for Oblivion and honestly I'm shocked that it works at all. I get that it's implemented as a fun little dynamic language that anyone can pick up and use... but it's also a giant stick to beat yourself with. Horrible unstructured spaghetti code sprayed with pretend objects, magic numbers, state fragments and every other crappy artifact of bad programming you can imagine. There is only nominal encapsulation, no compile-time checking... magic numbers are the order of the day... basically all the bad things that every paid programming tool set has been trying to help prevent for the past 50 years...

It's frustrating that each game platform re-invents the wheel with their scripting languages and re-creates the same set of bugs and flaws. 

That said, I sympathise with the system designers who are trying to balance accessibility and flexibility with function and some measure of protection for their runtime.

If only level designers were highly skilled programmers... lol.

This train of thought leads down the slippery slope of... "If only..." and "We could make it better if...". However, everything has a price. Raise the features and we raise the complexity for the users. Feature creep in the scripting languages will only increase the debugging costs and the chance of exploits.

Look at MATLAB, LabVIEW, E-Prime, MAXScript (those are just the ones I have used in the past week), etc etc etc... every one of them started with the idea that they could make a simple runtime with a scripting interface and a few tools that would allow novice users to build... stuff.

They all have varying learning curves... varying degrees of depth, various mechanisms to help with bug management, and usually user scripts with bug counts up the wazoo. (If the ones I have seen are anything to go by...)

I think the problem has been well described in many other places: the users are ignorant, the tools are unfriendly and the language is full of traps. (Oh, and the runtime is probably as buggy as a termite nest.)

So... we have a problem... what's the solution? Better-educated users? Tools that check and identify common classes of errors (static script analysis, anyone?) and a less fragile language. All the things every language community has been arguing about for years... Nice to know I'm not cutting any new ground today...

Think laterally... make a graphical programming environment... drag and drop... boilerplate script... templates... generics... maybe let the users do their own memory allocation... add raw pointers to the scripting language... lol. ...hmmmm, no. Is this a hopeless... fluffy problem that has too many dimensions? No.

The thing is that the scripting languages are not trying to solve low-level programming problems. Usually they're pushing around high-level constructs with very clearly defined abstractions. The game world is very rigidly defined and the need for magic numbers is pretty limited. Incrementing and decrementing a skill number can be done through a function call to increment() or decrement() rather than passing a raw int to a setter method. This forces the script writer to play by some clearly defined rules that the interface designer has laid down, and to write their script at a level of abstraction in terms of the "game" world... not at the level of raw numbers.

A scripting language for a defined interface should not support lower level abstractions than are defined in the interface. 
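A toy sketch of that principle (the names are mine, not Oblivion's): the script sees increment()/decrement() and named ranks, never the raw integer.

```python
class SkillHandle:
    """Hypothetical script-facing wrapper. The runtime owns the raw
    number and the rules; scripts only get game-level operations."""

    def __init__(self, value=0, cap=100):
        self._value = value          # never exposed to scripts
        self._cap = cap

    def increment(self):
        # Scripts can nudge a skill, but the runtime enforces the cap.
        self._value = min(self._value + 1, self._cap)

    def decrement(self):
        self._value = max(self._value - 1, 0)

    @property
    def rank(self):
        # Scripts branch on named bands, not magic numbers.
        if self._value < 25:
            return "novice"
        return "adept" if self._value < 75 else "master"

# What a quest script would look like at this level of abstraction:
sneak = SkillHandle(value=24)
sneak.increment()
print(sneak.rank)  # adept
```

Because the script never touches the number, the runtime can change how skills are stored or capped without breaking a single script.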

This also means that patching the runtime does not break scripts.  As long as the abstractions in the interface do not change... everyone is fairly happy.  You can even write tests in the scripting language.

If a scripter wants to do something fancy... let them write it up in a doc, generalise some cases and request a change to the interface... but for Oblivion's sake, don't hand the tools and the responsibility to the scripters... it's not their job to make the runtime safe.

Tuesday, January 24, 2012

Why robot cars are going to suck

My thought for the day...

Robot-driven car technology is coming along nicely: improving the route finding, obstacle avoidance and generally not-crashing-into-shit abilities. The one problem I foresee is this...

Creatively solving a dynamic problem at speed.

Given that situations on the road often involve poor weather, multiple moving objects, unpredictable drivers and variable surfaces, stationary obstacles of more or less value, dogs, kids and multiple physics models... what's a poor computer to do when things get complicated and it becomes a matter of picking the least bad outcome?

It's not that I think humans are so much better in the same situation... it's just that we forgive them for their fuckups... or not, as the case may be. My point is... will we forgive a robot car if it makes a set of decisions that leads to fatalities? People accept that other people are flawed. They are generally very unforgiving of flawed machinery. It's a trust thing...

This is related to the setup in "I, Robot", where the lead character has developed an unshakable distrust of robots simply because he has experienced an event where the decision-making ability of the robot apparently did not parallel human values. The robot calculated that it should save the man over the child, while the human (having survivor's guilt) had a lot more trouble with that choice and chose to illogically blame some factor in the robot's 'nature'. He then extrapolated that belief to all robots having that same flawed 'nature'.

The point is not that people don't get into the same bad situations and have to make the same horrible choices... it's just that I think it will be easier for the survivors to blame the choice on some perceived "difference" than to accept that they could not have done any better. Survivors are like that. Blaming someone and trying to find a 'reason' or a pattern is just what we do.

Synapse at IBM


This is pretty....

A quick debugger for .NET


Very interesting....

Data Mining article


Nice little intro to item comparison across multiple dimensions.

Common Crawl


This looks interesting for doing research on the internet and text analysis. Need to have a better look at some stage.

Monday, January 23, 2012

Article on social friction


There is lots to think about with this article... todo.

Magic roses....


This article is embarrassingly poorly worded. I would hate to be the researcher or the author, as it makes them look like idiots. Who knows who may actually be responsible for it in this fun age of poor editing...

This is akin to suggesting that putting fertiliser on your garden increases your risk of getting roses, which would suggest a poor grasp of biology for a professor of Molecular Bioscience. (Ignoring the issue of wind-blown seeds, neighbours throwing cuttings over the fence, etc.) Roses do not just... appear.

Rose seeds cause roses. Just as cancer is caused by runaway growth in a mutant cell line, originating with a mutant cell produced by a mutating agent/enzyme/accident/whatever acting on a healthy cell during division. The fact that growth factors are available in the body simply affects the rate at which the mutant cell can further divide and carry that mutation forward after the mutation event.

If you already have roses in your garden and you add fertiliser, then there's a reasonable expectation that you will increase the growth of the existing roses, but suggesting that adding fertiliser to a garden without roses will increase your risk of getting roses is just silly.

My guess is that there has been some correlation study done that somehow controlled for all the other lifestyle factors, generational differences etc. and came up with the conclusion that providing a rich growing environment somehow caused something to have a greater chance of appearing. This misses the point that for that thing to appear you still require exactly the same causality as if the environment were not rich. The growing environment logically only has an effect after the seed has been planted; or, more worryingly, the growing environment has an effect on the chance of a seed being implanted, which would suggest that the environment increases the risk of cell mutation. However, this theory would need to fight it out with all the other "lifestyle-related condition" theories, such as smoking, drinking, dieting, breathing, cellphones, watching cartoons, not getting enough exercise, reading fashion magazines, owning a hat, driving a car... basically everything that people either get beaten for doing too much of, not enough of, or have some guilt complex about.

This in no way suggests that a relationship between breast cancer and the availability of glucose and insulin is not real (if that is what the researcher found), just that the simplistic (and ass-backward) conclusion that the article presented is a bit dumb.

A more rational conclusion may be that a rich growing environment allows greater growth in the cancer, resulting in increased detection levels. That leads to a whole range of other, more interesting questions: are the same number of occurrences of breast cancer happening in people with lower levels of glucose/insulin and either not being detected or not developing, or is there some flaw in the methodology that has produced this result?

It's interesting to speculate that a richer growing medium may result in a greater number of mutations actually successfully establishing... which would suggest that mutations may die off in some cases...

This again reminds me not to read or watch dumbed-down media that employs functionally illiterate people to report on research... it's frustrating and causes unreasonable amounts of ranting.

Friday, January 20, 2012

Economics of the App store model


Humor among the bit rot


Humor makes it easier... there is some true horror awaiting me in some of the SVN repositories on my drives... I think it's much simpler to retire them rather than try to revive them. Fingers crossed no one ever wants to use them again...

Article on Stallman being right all along


This article is interesting and indirectly reflects on the AI ethics issues of control. Once a device has some sort of motivation and objective that is not controlled by its "owner"... then where does that leave it in the scheme of things? The above article argues against devices that are under the control of another person rather than the "owner"... this will go to another level when the devices are under the control of another device... see the movie "I, Robot" for one possible scenario drawn from Asimov.

The Self-Repair Manifesto



Social Context

Just reading something that made me think around a new corner. 

The social environment in a town or region is probably fairly stable (or moving toward a status quo) as long as the physical environment, employment etc. are fairly constant. It's only when there are sudden changes that the social environment changes radically... think of various towns or cities in decline, or undergoing a major addition or loss of significant employers. In these cases the social context of the population will undergo some transformation.

So what are the levers for social renewal? Really, only some significant change in major percentages of the population. Which suggests that people will keep doing the same thing and not change their individual social habits; it's the aggregate change of population that changes the social scene. This can have more granular effects on the social habits of the individual, simply by removing some individuals they previously had relationships with and adding new people with different habits they now need to have relationships with.

Immigration, migration, economic decay or renewal, drought, environmental changes, legislative changes etc. What else?

This ties into the community resilience research area... also something I was mulling over while driving, about renewal of staff in an organisation... again, feels like stating the obvious...

On the Oracle SCN flaw


This is an interesting problem with some complicated solutions. Since the problem really is unpredictable growth in a very large number, with cross-infection between critical systems... there seem to be some fairly straightforward solutions.

The first is what Oracle have done, which is to patch and inoculate recent versions. Another would be to raise the ceiling of the soft limit... again, they have done that... another would be to expand the hard limit by adding a second number as a multiplier, which would allow the hard-limit clock to roll over and turn the SCN into a much larger number... this reduces the possibility of hitting the hard limit while remediation is made.

The last solution would be to remove the hard synchronisation requirement between databases in large interconnected data centers and instead simply have a synchronisation table for each connected database. This way there is a stupidity buffer between the instance's SCN (which it only increments) and the SCN of any other instance. If two DBs need to keep in step, they keep a step difference in a table and do some math as needed. This way the only thing that increments a DB's SCN is the actual transactions happening in that DB.
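A toy model of that stupidity buffer (all names hypothetical, not Oracle's architecture): each instance advances only its own SCN, and the interconnect is just an editable offset table.

```python
class Instance:
    """Hypothetical DB instance: its SCN moves only on local commits."""

    def __init__(self, name, scn=0):
        self.name = name
        self.scn = scn
        self.sync_table = {}   # peer name -> (peer_scn - local_scn)

    def commit(self):
        self.scn += 1          # the ONLY thing that advances our clock

    def link(self, peer):
        # Record the step difference instead of jumping our own clock.
        self.sync_table[peer.name] = peer.scn - self.scn

    def view_of(self, peer_name):
        # Do the math on demand; a poisoned peer never inflates self.scn.
        return self.scn + self.sync_table[peer_name]

a = Instance("A", scn=10)
b = Instance("B", scn=5_000_000)   # "poisoned" with a runaway SCN
a.link(b)
a.commit()
print(a.scn)  # 11 - B's huge SCN never propagated into A's clock
```

The offset lives in plain table data, so a DBA can inspect and correct it rather than fighting a clock that only goes up.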

So even if one DB is poisoned with a large SCN, or during the patching older systems get in a tangle with low soft ceilings, there is no propagation of SCNs through the interconnects. It's just table data that a DBA can get in and edit to correct. Then reconnect the sane DBs and get on with the business.

These interconnect SCN offsets will probably have to be tracked in the logs and reconciled where needed; the point is to stop synchronisation being a hard requirement and allow the DBAs to set and correct as they need.

The biggest problem is just how fundamental this part of the architecture is. Any changes will take a massive amount of testing and care on Oracle's part.

Thursday, January 19, 2012

Article on 3d Printing in the consumer space


This is an interesting article examining some of the context of 3d printing in the near future.  Includes a couple of wish-list items and some good observations.

Another static analysis experience


This post by John Carmack speaks with a lot more authority than any of mine on the subject of static analysis. It's good. Do it.

Windows XP to VHD pain

I have been converting to a new workstation and, to avoid some of the trauma, wanted to convert my old system to a VHD so I could run it a bit longer in a VM. Seemed possible, and sounded like a good idea...

Firstly, my experience with VHDs and VMs was fairly basic... I had created a couple and played with various OSs using VirtualBox and VirtualPC 2007, but never really needed to use them (as the old PC was struggling anyway...)

So I started by pulling the hard drive from my old case and plugging it into the new one as a second drive. No problem. Then I used Disk2vhd to capture a VHD image from the system partition (98GB later...), created a virtual machine in VPC2007 and tried to boot it. ...tada... doh! Lockup.

Go into safe mode... locks at Mup.sys. (I have been here before with various boxes, so I thought it probably had something to do with the HAL and the change in hardware, or a bad driver...) Easy enough.

Break out the WinXP installation disk to try to get the Recovery Console up and running... (unfortunately the disk was created using nLite, so the Recovery Console had been stripped out... dammit) ...ok, scrounged through the box of CDs and found a WinXP SP2 disk and was back on track. Boot from CD in the VM, go to the Recovery Console... read a couple of pages online... try a couple of the suggestions (bad boot sector, so run fixmbr and fixboot; run chkdsk and get a weird response... run diskpart (the XP generation of diskpart) and see it choke to death). After these changes the whole partition became unbootable, and I learned more fully about undo disks on VMs. "This volume does not contain a recognised file system" etc etc, lots of general badness. Ok, maybe this is more than I thought.

Create a whole fresh VHD (and a copy of it this time...) and start again.

Try to install a fresh XP system over the top of the old one... "Current installation is too damaged or full to install... blah blah". That's no help. It just wants to format the disk and seems to think the file system is unknown.

Boot using WinPE and check with a more current version of diskpart; it finds the disk and the volume but cannot figure out the file system. Thinks it's a RAW FS. That's not right... or good.

More reading online. This post on why XP can't be booted as a VHD gave me some hints about the why, and some hints about a maybe-solution. Somehow the HAL and the boot loader in WinXP were not happy... which kind of confirmed what I had been seeing. However, the bootloader in my VHDs seemed fine. It was just when Windows tried to play with the partition and mount the file system that it all went to hell.

There were hints that the Disk2vhd tool or VPC2007 could somehow fool the system and patch it. So I rebuilt the old machine and ran Disk2vhd in the active session, and there was a new little check box that said something like "Prepare VHD for use in VM"... I checked it and let it build the VHD overnight.

So after all that... move it all back to the new machine and run up a VM using the new VHD (number 4 so far)...

The bootloader has obviously been changed, and now has a new option, "Disk2vhd Microsoft Windows XP Professional", alongside the normal "Microsoft Windows XP Professional" option.

Booting to the "Microsoft Windows XP Professional" option results in the same failure at Mup.sys, while booting to the new shiny option seems to get a bit further but now bluescreens with "UNMOUNTABLE_BOOT_VOLUME". Dammit.

So where to now...

Honestly, I think I'm stuffed. The only options I can see are either to copy files into a freshly built XP VM, or conversely to copy files from a freshly built XP VM to try to patch this one, by mounting both VHDs in the one VM... Or????

I have spent about as long as I can trying to get this sorted... I may have to move on to other things.

One other detail I have not explored is fixed-size vs dynamically expanding VHDs. It may be possible to convert the VHD to something that will mount correctly.

Interestingly, when I try to add the VHD I created using Disk2vhd without the little checkbox ticked (I'm sure the option was not available at the time) as a second hard drive, it's still mounting as unformatted and unreadable, so perhaps there is something even more fundamentally broken. It's still 98GB in size, so that may be a factor...

Also, I have just checked the one created using the magic check box, and it's coming up as unreadable and XP wants to format it. WTF?

Monday, January 16, 2012

Weird win7 system reserved partition bug

Had a new workstation delivered today. Started familiarising myself with it and poking around, as I am inclined to do. I found that the system partition was about half the hard disk and the rest was missing. So I opened up Disk Manager and found that the partition layout was:

39MB OEM partition
233GB "System Reserved" partition (no drive letter)
232GB Active Partition mapped to C: drive.

Being curious and wanting the space back, I mapped a drive letter onto the system reserved partition and had a poke around... turns out there was nothing in there except an Altiris cfg file.

I had a google about, and the System Reserved partition for Win7 Enterprise should only be 100MB by default... (unless hackery was involved) so I'm a bit baffled by how this happened. The IT staff who built the machine did not create this layout intentionally, so something weird seems to have happened during the partitioning step in the installer.

My next job is to either fix it or live with it... I don't really want to have the machine re-imaged... and I like separation of system and data, so it's actually quite fine for me. The only element that's making trouble is the stupid hidden boot files in the partition root. I don't want them floating around in a partition I am actually using for data.


Ok, the solution was to shrink the 233GB partition down to a reasonable size (300MB, rather than the 100MB recommended in some article I read somewhere), then use GParted to move the C partition into the empty space, and then extend the C partition to use the new space. (Seems that someone thinks one mega-partition is preferable to multiple partitions on the new workstation image.)

Sunday, January 15, 2012

Beautiful Simplicity

RenPhoric is a simple little file-renaming tool that I have been using for the past few days in an effort to get on top of a mess of media files. After using a number of different file renamers over the years... RenPhoric is just bliss.

It's just such an elegantly simple concept. Treat the file names as a block of text and use text processing tools on them... then apply the changes back to the files.
If you're familiar with text processing, pattern search-and-replace and regex, then it's easy to apply these same skills to get lots done. Fair enough, there are other tools that work on similar principles... but this one just works so simply, showing the result of the changes while you work. Maybe that's what I find so attractive... the immediate feedback... anyway... I'm a convert.
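The core idea is small enough to sketch (function names are mine, not RenPhoric's): regex over the names, preview the plan, then apply.

```python
import re
from pathlib import Path

def preview_renames(folder, pattern, replacement):
    """Treat the file names as a block of text: run a regex
    search-and-replace over them and return the plan for review."""
    plan = []
    for path in sorted(Path(folder).iterdir()):
        new_name = re.sub(pattern, replacement, path.name)
        if new_name != path.name:
            plan.append((path.name, new_name))
    return plan

def apply_renames(folder, plan):
    # Only after you've eyeballed the preview do the files change.
    for old, new in plan:
        (Path(folder) / old).rename(Path(folder) / new)

# Example: "Track 01 - song.mp3" -> "01_song.mp3"
# plan = preview_renames("media", r"^Track (\d+) - ", r"\1_")
# apply_renames("media", plan)
```

The preview/apply split is the bit that makes the real tool feel safe: you see the immediate feedback before anything touches the disk.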