Wednesday, November 30, 2011

Gamey taste....

What sort of game do I feel like playing at the moment? It's after midnight, I'm awake but only just, and I want entertainment; not a movie or TV... just something serene and quiet.  No drama.

Well, after looking at the options, it's nothing fast, deep or pointless.  I feel like doing something but not competing.  Just wandering around in some virtual world and checking things out. But there is nothing low intensity.  If I wander around in Oblivion or Stalker, I will just run into something that wants to eat me, maim me or otherwise spoil a pleasant evening's virtual stroll.

I have the urge to use a quality engine like Skyrim's and build a better world.  I appreciate the urge to go adventuring and the role that combat plays in so many games of that style, but we have reached a point where we are not limited to that kind of game.  Well, for all intents and purposes the technical limits are not as heavy, but the fiscal limits are probably still there.  I doubt many publishers would drop five figures on my urge to build a world that didn't include lots of hack and slash.

I just want a beautiful world that I can wander into at will.... but isn't that what most of the fantasy fiction writers have been doing since year dot?

Since I'm not in the mood to actually do something specific, any sort of narrative structure would just be irritating. In my current state, I just want to ghost around.  Not really engage, just fill my eyes with gorgeous visuals and hang out somewhere that calms my mind after a busy day.  Is this even a game? Sounds like a photo gallery with a couple of extra dimensions.

Probably would not sell well.

Sunday, November 27, 2011

Another Skyrim Analysis thread

There are some interesting feedback fragments in the comments on this review.

Some posts on basic AI for NPCs

Some good discussion of practical basic AI for game NPCs.

Whole (mouse) Brain Catalog

This is the equivalent of fun reading for neuroscientists...

ifest in the land of Oz

Looks interesting.

Digital Sydney Initiative

Have a read of the proposals and try not to think... "this is the saddest bunch of me-too designs..."  Clearly they were selected by funders that are both ignorant and foolish.  This is just perpetuating a parasite model rather than fostering businesses to grow.  Few of these products have legs.  As soon as the funding runs out they're toast.  The only novel one is the hearing assessment tool... but again that seems more like an academic experiment than a viable platform.  The application is clear but the business model seems vacant.

Saturday, November 26, 2011

Text vs voice chat

Looking at the impact, both socially and individually, of the change from text to voice chat.

Friday, November 25, 2011

Dental Sim Robot

This could be useful for the sim labs.

Fog Interface

This is an interesting adaptation of the Kinect/gesture/Minority Report ideas.  I remember seeing fog screens in movies in the 80's, but the combination with a gesture interface looks useful for low-resolution input.

doom3 source

Organisation Evolution

This is an interesting article that points to the evolution of a fairly freewheeling organisation into a very rigidly structured one.  Kinda stating the obvious, but a good case study nonetheless.  Reminds me of the rant I read recently about the Drupal community and its evolutionary pains.

Foxconn is another organisation undergoing evolution, although it's as much to do with the surrounding economy changing and dragging the organisation forward.  Probably time to invest in northern India...

Hobo signs

Good ideas for a game here.

Explaining Version Control

Finally I can point students at something that doesn't look like a man page written by a rambling idiot...

Thursday, November 24, 2011

phd in game AI research

PhD opportunities, Germany, 2012-2013

People and groups

Australian Groups and People



Commercial AI groups

Mailing Lists

Interesting articles that turned up in the search

Wednesday, November 23, 2011

Weka for fun and profit

Still looking at how best to use this... I think it's a big hammer for a little nut at the moment.

Chaos, randomness and predictability in social engines

Read these to expand your mind... then read them again.  Some nice summaries.

The point is not about being able to build an emergent and adaptable social engine, the question is what happens when someone "tests" it in ways that we have not anticipated?

What about players with mental illness or filthy minds?  How will the system adapt to them?

What would happen if multiple people are playing the same game session?  Swapping in and out... how would the system adapt to the changes in skill, ability, reaction time etc?

How quickly should the system detect the change in play style and adapt?  Is adaptation one way?  Should the system remember previous adaptation states?

What happens when a player puts the game down... then returns to it at a later date when they have lost their edge?  (Ok, I know the answer to this one, I'm just including the question here for completeness)
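On the detection-speed question, one way to frame it is as a smoothing problem. Here's a minimal sketch (all class names, scores and constants are invented for illustration, not taken from any shipping engine): keep a fast and a slow exponential moving average of per-encounter performance, and flag a likely change of player, or a long layoff, when the two diverge.

```python
class PlayerModel:
    """Toy skill tracker: fast and slow EMAs over a performance score in [0, 1]."""

    def __init__(self, fast=0.5, slow=0.05):
        self.fast_rate, self.slow_rate = fast, slow
        self.fast_skill = 0.5   # reacts within a few encounters
        self.slow_skill = 0.5   # long-run baseline (the "remembered" adaptation state)

    def observe(self, score):
        """Feed in one encounter's performance score."""
        self.fast_skill += self.fast_rate * (score - self.fast_skill)
        self.slow_skill += self.slow_rate * (score - self.slow_skill)

    def player_changed(self, threshold=0.25):
        """A large fast/slow gap suggests a new player or a long layoff."""
        return abs(self.fast_skill - self.slow_skill) > threshold


model = PlayerModel()
for s in [0.8, 0.8, 0.8]:     # regular player, playing well
    model.observe(s)
for s in [0.1, 0.1, 0.1]:     # controller handed to a novice mid-session
    model.observe(s)
print(model.player_changed())
```

The nice property of the two-rate version is that "should adaptation be one way?" and "should the system remember previous states?" both become tuning questions on the slow rate rather than separate mechanisms.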

Some answers
As for testing the system (find a reference to players torturing "Creatures" and the whole spiel on "killable children in games")... the point is that people play games for more reasons than are easily guessed.  Out on the fringes of the bell curve, things get really... unexpected.

Ethically, do game designers have the right to impose their morals and values on the players?  Alternately, do game designers exist in a moral vacuum? ( ...mmm, no.) They are answerable, often very publicly, for their choices in many ways.  So for practical purposes, game developers, and through them designers, need to be careful with their choices.

By extension, anyone developing libraries that deal with sensitive topics like social issues needs to exercise some responsibility.  There are things that should and should not be exposed as "options" for players to manipulate.

Issues surrounding equality, diversity, age appropriate material, right and wrong etc all need to be respected.

It's easy to visualise ways to misuse a social system for fun and novelty at a fratboy humor level... but once you move into the realm of socially objectionable scenarios and worse... the repercussions could be extreme.

The question is how to design a system that can support a broad range of structures and magically prevent misuse.  It's a toolkit specifically intended to be able to model any social scenario... which cuts all the way across the ethical spectrum.  Diametrically opposed objectives, one would suggest.

Intermediate Social Engine data

Nice little aside from the Roslyn project about Intermediate Language targets as a solution to the permutation of platform targets.  Mentions the economics of compilers and intermediate forms of representation. 

This got me thinking about intermediate forms of some of the things I have been working on recently. Would it be possible to transform social engine data into some sort of intermediate form?  But as there is really only one runtime platform, it's not solving much of a problem at the moment. More of a potential future problem.

However, as a thought exercise it does suggest a number of issues that might arise in the future which could be avoided by a bit of forethought.

Can a social engine data set be portable between games? Could it be expressed in terms of a different set of game assets? Would it be the same game with a different paint job?

What differentiates games of similar genre?  Bugs? Scenery? Unique rulesets? Different loot? Skill trees? Scenario? Mythologies? Price?  Economic models? UI?  Are these things significant or just decorative?

How does the social layer tie into all this stuff?  It certainly ties into the player's experience, so it's not insignificant.

Like all really abstract library systems, there is an increasingly high cost to applying an abstraction that someone else has developed: if you can't "see" how to implement your solution using their abstraction, it's often easier to build your own, no matter how bad it may be, than to wrestle with an abstraction that is an ill fit for your mental model.

It's an ugly problem that boils down to "people factors".

Modelling this stuff at a concrete level is hard enough; building a reasonable and bulletproof abstraction for the model, with some options, is exponentially harder.  Even harder when we only have a few examples to test it all against, since most social systems in games are really, really bad.  Trying to use them for testing is like trying to test an iPhone with a hammer... something's going to happen but it's probably not going to provide a useful test scenario.

DISCERN discourse system

Text and Discourse Understanding: The DISCERN System

Need to follow up on this and have a play.  

Math into code

Mildly interesting article... kind of functional programming from the math side...

Tuesday, November 22, 2011

How to add line breaks to ControlTip Text in Access

What's the problem?

If you've ever tried to add line breaks or formatting to the ControlTip Text of a control in an Access project, you will have noticed that whatever you add to the ControlTip Text property field gets put on a single line. I.e...

You can add any sort of tricky string to the property dialog box and get no useful result.

If you have spent any time thinking about property dialog programming you should already see what's going on... input validation.  The code behind the property dialog is "sanitizing" your input to prevent you injecting control characters, illegal characters and other types of idiot/malicious code.  This is a good thing... except when you want to do something like this intentionally.

What's the solution?

The above description helped me identify the solution... avoid the input validation behind the property dialog rather than try to bend it to my will.  The hack is to set the formatted string for the ControlTip Text via code.  I use the Form_Load sub to set tricky control tip text for all the controls on the form that I want to have multi-line ControlTip Text.


Private Sub Form_Load()
     ' Setting the property from code bypasses the property sheet's
     ' input validation, so the vbCrLf survives as a real line break.
     showTaskManager.ControlTipText = "Display the Batch Task Manager" & vbCrLf & "Some other text here"
End Sub

This will show up as a nicely formatted multi-line ControlTip.

Simple once you see it....

Analysis of systemic failures in a meritocracy

This is a really articulate analysis of the problems that I have seen in a number of "people" systems that involve a selection component.  (Duh! Now that I think about it.) I have just not seen it articulated so cleanly.

Bias is a fun concept and it ties into all sorts of research going on here.  It's fun when you start to break it down and look for bias in a system; as the article stated, it's part of the pattern-recognition system that is based on experience. We have a bias for successful strategies based on past experience... not optimal strategies.

Makes you wonder about every single decision a person makes in a day.  If they were completely unbiased, would they make "chance" decisions?  Is there any other mechanism at work?

The article mentions that when confronted with a choice about something that is not part of our experience, we are slower and have to work it out based on other aspects... but this is just aggregating the bias from many less primary aspects of the decision. It cannot be that we are suddenly unbiased... just less biased in comparison to a scenario where the decision can be made using a more developed pile of experience.

Need to add some thinking to this.

Battlelog analysis

This is a really interesting deconstruction of a cutting edge webapp. 

Longevity in platforms

This is the kind of thing that pisses me off and makes it so hard to commit to platforms.  Churn and uncertainty kill a platform or toolkit faster than neglect.  I am just tired of watching projects bubble up, shape up and then get managed to death... stop changing stuff.  Just get it right and stop.  Give us a target to hit. Allow a community to form, learn best practice and actually get enough time to practice it.  Then leave it the firetruck alone!

printrbot project

This could be the breakthrough project.  Odds are that the product will evolve and get more shiny and expensive....

Genetic Algorithms for non-intuitive solutions

Thinking about evolutionary algorithms and "superstreets", which to me fall into the category of a non-intuitive optimisation.  (In that some actors individually lose while the majority win, giving an aggregate win for the system.)

This, in my mind, is a revolutionary design that would not have been developed if the rules required the solution to be "fair" for all the actors in the scenario.   You could probably get a GA to produce it, but only if you had it in mind before you started and tweaked the algo until it was able to do it.

Anyway, my thought was how to give an evolutionary algorithm the ability to not only generate these kinds of non-intuitive solutions but to identify partial solutions that may be worth preserving in the population even though they are not optimal... much like the problem of making a NN explain itself, this is a case of keeping some "wildcard" models in the population to maintain diversity.  Kind of like the "island" effect in biology: diversity is maintained by separating the population into small groups that interbreed, with only a little "bleed" around the edges of the groups, which allows cross-over of locally optimised mutations.

Another possible strategy is to "lock" mutant sections of the DNA that have high impacts on the overall solution, even if the overall solution may be sub-optimal.  I think this is a flawed strategy because each strategy needs to be evaluated as a whole...

That said, I think an evaluation matrix is better than a summative score, as this allows individual mutations to spike up... which may be the point to move that part of the population to an island and see what develops. 

The question is how to evaluate the solution in parts... and whether this is just a different way to arrive at the same flawed solution as above... trying to "lock" mutations and preserve them artificially, which defeats the purpose of allowing evolution to act on the population. (Which is all fun as long as it's preserving the useful mutations, not just plateauing.)

Need to run some sims to see what the effect of various strategies might be...
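As a starting point for those sims, here's a toy island-model sketch.  Everything in it is a placeholder of my own (the one-max bit-string objective, the population sizes, the migration schedule); the point is only the shape: each island evolves independently, with a little "bleed" of the best individual to the neighbouring island every few generations.

```python
import random

random.seed(1)
GENOME_LEN, POP, ISLANDS = 20, 12, 4

def fitness(g):                    # toy objective: count of 1-bits ("one-max")
    return sum(g)

def mutate(g, rate=0.05):
    return [1 - b if random.random() < rate else b for b in g]

def breed(a, b):                   # one-point crossover plus mutation
    cut = random.randrange(1, GENOME_LEN)
    return mutate(a[:cut] + b[cut:])

def step(pop):                     # elitist truncation selection within one island
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]
    return elite + [breed(*random.sample(elite, 2)) for _ in range(POP - len(elite))]

islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(60):
    islands = [step(pop) for pop in islands]
    if gen % 10 == 9:              # occasional "bleed" around the edges:
        for i, pop in enumerate(islands):
            islands[(i + 1) % ISLANDS][-1] = max(pop, key=fitness)

best = max((max(pop, key=fitness) for pop in islands), key=fitness)
print(fitness(best))
```

Swapping the migration interval, the number of islands, or the "lock mutant DNA sections" idea into this skeleton would be an easy way to compare the strategies side by side.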

Monday, November 21, 2011

Skyrim Analysis

This is an interesting analysis of the UI and play models in Skyrim. It's interesting to note how reviews are generally no longer focusing on graphics and performance. I feel like we are past that point (as long as you can throw enough CPU/GPU horsepower at the game). My point is that playability and user experience are now the weak points.

From the review above, the cons are:
  • Combat isn’t very visceral, and victories and losses feel unearned
  • Menus and interface are terrible
  • While the world is wide open, most quests and dungeons are very linear
  • Bugs abound, especially with physics
These issues are the next things to be addressed in games research.  The fact that everyone has been banging on about them for the past two decades shows just how hard they are to get right.   Especially in the context of producing a AAA title across a couple of platforms with different user experiences and interface models.

Combat isn't very visceral
Combat as a visceral experience... an interesting idea from the reviewer.  While I agree that it's the objective of this kind of game, making any interaction really immersive is going to take something more than a mouse (insert UI device here) and a flat screen.  Monitor walls and head-mounted goggles are a big step toward removing the need to manually track your head around an environment, but there is still a long way to go before everyone has access to something like that.  Kinect is a step in the right direction but it's too loose at the moment to really work well.  I think it would be much improved with some additional channels of resolution and maybe a few precision channels of data: a head-axis positioning system (a crown with some position knobs), precision line of sight, and precision location and orientation for the hands (maybe some simple cuffs with 3D position knobs), with the rest of the Kinect data stream filling in the gaps.

The second part of combat being visceral is feedback. Haptic controls have not really evolved much and are still trying hard to solve the most basic problems.  This is more about technology and materials than about intent.  The strategies are still the same: immerse the player in some system that can simulate feedback with reasonable resolution.  For a CRPG this could be as diverse as holding an object, taking blows, falling or flying, jumping, landing, being swept down a river or just walking on a soft surface.  The ability to simulate these kinds of environments safely is not here yet.  Everyone will want one when they are available from your local gadget seller... but they are just not here. So wishing combat was more visceral is important to keep the bar high... but it's just not feasible to go beyond the limitations of the UI devices we currently have.

Victories and Losses feel unearned
This is an interesting problem that is much more complex than simply a limited UI.  There are real issues with perception and feedback at work here.  To complicate it, individual players will have quite different perceptions and desires in this particular space.

Lets break it down a little.

  • How hard should a player work for an objective? (Be it a victory or some other objective.)
  • How balanced should this sub-section be within the overall narrative or mission?
  • What are the highs and lows of the variable reward model currently in play?
  • How do you keep something fresh if the scenario itself is repetitive (attacked by a wolf... again)?
This is as much about player perceptions as anything.  I have written about this particular group of problems and proposed some solutions before, but honestly they are just guesses until I have the resources to get paid to solve these kinds of problems.  The strategy that I propose is that both the narrative engine and the perception engine are abstracted and deal with the narrative and player models explicitly as constructs that the game engine manipulates.  (Not that the game manipulates the player; just that the player's state is explicitly measured and modelled continuously, so the game engine has some concept of player state and can then make decisions based on that understanding.)
The key point is the abstraction of the narrative structure and its relationship with the player state at any point in time. The information in these two models can then be instantiated via the concrete game objects, characters and scenarios, rather than, as currently happens, each of these elements acting as either discrete and independent agents or as a chain of artificially scripted agents that hopefully fit all players equally (badly).
Again, this all comes back to the "commonality" of experience, which is based on a basic desire for repeatability and predictability.  This is essential for debugging... but only until the game engines evolve and abstract all this functionality out into a higher level construct.

So how does all this rambling address the "victories feel unearned" issue?  Well, if the player model was reasonably well developed and informed via some subtle evaluation of the player's behaviour, the game engine should be able to tune the "encounter" appropriately to provide the player with a challenge of sufficient uniqueness and magnitude to keep the player engaged.  This information is then combined with the current narrative model to determine whether this "encounter" should be big or small, depending on where the player is within the current narrative arc.

This kind of model points the way forward for managing the engagement and satisfaction of players in the game.  Which then points the way forward for using these kinds of  immersive environments for specific training purposes.  
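In skeleton form, the tuning loop described above might look something like this. Both the inputs and the formula are pure guesses of mine to show the shape of the idea, not any real engine's API: combine a measured player skill with a position in the narrative arc to scale the encounter.

```python
def encounter_difficulty(player_skill, arc_position, base=1.0):
    """
    Toy formula: scale a base difficulty by the player's measured skill
    (in [0, 1]) and by where they are in the current narrative arc
    (0 = quiet lull, 1 = climax).  Both inputs would come from the
    player-model and narrative-model abstractions discussed above.
    """
    skill_factor = 0.5 + player_skill        # weaker players get easier fights
    arc_factor = 0.6 + 0.8 * arc_position    # climaxes get bigger encounters
    return base * skill_factor * arc_factor

# A mid-skill player at a lull vs. a high-skill player at a climax:
print(encounter_difficulty(0.5, 0.1))   # small filler encounter
print(encounter_difficulty(0.9, 1.0))   # set-piece battle
```

The real work is obviously in producing trustworthy `player_skill` and `arc_position` signals; once those exist, the combination step itself can stay this simple.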

Weak AI
The reviewer took time to critique the sameness of the AI used in the game, which I feel ties into this point.  For a game of this scope to have weak AI is pretty embarrassing.  I would suggest that after the epic re-write of their engine, the AI subsection is just not as polished yet.  This is a fairly well-solved problem, so it does raise the question of priorities in the development process.  My guess would be that "good enough" AI was acceptable to the majority of the playtesters, and they stopped putting resources into that section.  After all, development is a resource-constrained activity.

Hopefully the mod community will be able to patch this particular problem or expose the API enough to let others have a run at building better AI for the game.

Menus and Interface are Terrible
There are some novel interface metaphors in Skyrim (such as the star constellations for the progression tree) which add a very nice immersive touch.  These are more information-visualization elements than functional information-navigation tools.

The fact that the interfaces have been tuned for navigation by a controller with limited buttons is both a blessing and a curse.  Having a slew of hotkeys available on a PC keyboard is useful, but it can present a steep learning curve for novice players; this affects the pace at which the game can be played by novice versus expert players, and has detrimental effects for players who want to play in a "pickup" style.  (It could also be argued that these kinds of games are probably not intended for a "pickup" audience... they are almost study sims that need a fair bit of experience before you can get into them.)

Anyway, keeping the interface approachable forces the designers to rein in the control explosion that can happen and keeps the cognitive load manageable.  On the other hand, it puts a ceiling on the way expert players can drive the interface.  This is the sort of thing the mod community is probably better at solving than the initial development team, as long as the APIs are open enough to allow the modders to rebuild the UI.

Linear Quests and Dungeons
This is where my interest lies.  Again, as I stated above, I believe this is strongly tied to the problem of "commonality" of experience.  (Again, both for debugging and for comparison between players)
The reason that the quests and dungeons are linear is that it's simply too expensive to hand-tune quests and dungeons with multiple permutations of possible paths.  This is simply a manpower issue that is not solvable by current development teams.  THIS PROBLEM CANNOT BE FIXED WITH CURRENT TOOLS.

The only solution is to make the computer do the heavy lifting (which is what we keep them around for), but for this to happen, we need an abstract way to describe the narrative which the computer can then manage and for this to work, we need an abstract model of the player as a component in the narrative.  See above rant for more details. 

Once we have a functional model for narratives (...oh wait, there's a bunch already...) that is easy to implement, and a control engine that can handle it under resource competition with the graphics and animation engines (resource budgets again), it should be fairly straightforward to script.  The problem is that "commonality" of experience may evaporate.  This has been happening already, so it's probably not going to be the radical shock that I suggest, but it will still be a revolutionary change that will overturn a couple of the conventions that exist in the game world.  How do we know we are playing the same game if it evolves every time we play it?  My experience will be totally different to yours... theoretically down to really fundamental levels that we currently take for granted as completely immutable truths in the environment. Well, immutable until the modders take a hack at it anyway. But even mods are fairly static. I am talking about a system that can dynamically mix and match the narrative and experience to the player at many levels.  Unless the player is exactly the same each time, every replay, even by the same player, should be distinct.

I still expect some mechanism to emerge that will allow side by side comparisons of player experiences, perhaps the idea of "set" quests that do not dynamically adapt or some sort of set "level" to play at that fixes the player model so the game can be tested and debugged as well as allow "walkthroughs" and other advice services to still have a frame of reference. But beyond these I expect a fully dynamic play experience that focuses on the player and adapts the world to provide the maximum play experience possible using all the narrative trickery possible.

Bugs in the physics 
Ok, the bugs in the physics engine are just embarrassing, but this is kind of accepted for a game of this magnitude and will probably get tuned out in the first round of patches.

The same problems that have gone unsolved since the first CRPGs are still around.  The thing to keep in mind is that unlike FPS games, CRPGs are usually solo experiences; online MMORPGs have a different dynamic and will never be the same type of player-centric experience.

Here's a link to the Skyrim site for more info and pretty pics.

Here's a link to the Skyrim modding community (I'm sure there are more sites growing as I type...).

The Stylus as a UI

This is an interesting post on the state of the art in stylus/screen/UI development.

Friday, November 18, 2011

Region of Interest

I used the term "Computational Neuroscience" in anger for the first time today.  Very strange moment.  I have been kind of looking at it from a distance wondering if I was kidding myself about where I am at... it just kind of slipped out.

While I think it's related to my areas of interest at the moment, in hindsight I still feel that it's an ill fit.  Honestly my interest in brain modelling is more recreational; just a stepping stone to ...??? I'm not specifically interested in debugging the human brain (or any mammalian brain when it comes to that...); they are just the most developed examples we have at the moment.  Need to wander off and think about something practical.  Keep thinking about how to apply machine learning to identify which students will get first-class honours based on their literature review... lol.  Probably shouldn't publish that... all the crying would be hard to take... probably should just use it to get rich the old-fashioned way by playing the stock market.  Boring, but turns nice coin so I hear...

Time to not be here. 

Type programming

Just started reading the above post when an idea formed.

I am wondering if there is a programming model completely based on types and the rules for assembling them, converting between them, manipulating them, etc.  (Kind of functional programming, but with types.) This may already be feasible with C++ templates, but I have not played with them in a while and my memory is full of other sludge.

Anyway, the idea goes something like this.

Establish a set of type descriptions and the rules for assembling one type instance from other type instances (one or more small bits can be assembled into this big thing...). These rules then act as the error conditions and assertions for the process of aggregating the fragments together into the higher-level item.  Sort of like SQL filtering records into a record set: it allows you to reason about the record set even before the set exists... it has known, verifiable properties.

Anyway, by describing the whole shebang in terms of type rules, transforms and types, would it be possible to create a system that computed a result without any dynamically described operations?

The more I think about it, I think this is currently feasible, both in generic programming and in some sort of data-declaration language, kind of Lisp-ish... I'm just not sure what the idea is particularly useful for.

My idea is more about a kind of functional-composition model which acts on types with a hard rule set... ok, the more I think about it, the more the idea kind of merges into Lisp... need a brain reboot to get it all straightened out.
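For what it's worth, the "composition rules as assertions" part of the idea can be sketched without any generics at all.  This toy (every name in it is invented for illustration) declares which part types each composite is built from, and the assembly function enforces the rule, so anything that assembles successfully has known, verifiable properties:

```python
# Toy "types assembled by rule" sketch.  The rules say which parts a
# composite may be built from; assembly either succeeds (and the result
# provably satisfies the rule) or fails loudly, like an assertion.

RULES = {
    "Point":   ["Number", "Number"],     # a Point is two Numbers
    "Segment": ["Point", "Point"],       # a Segment is two Points
    "Path":    ["Segment", "Segment"],   # minimal Path: two Segments
}

def assemble(type_name, parts):
    """Build a (type_name, values) instance, enforcing the composition rule."""
    expected = RULES[type_name]
    got = [p[0] for p in parts]          # each part is a (type_name, value) pair
    if got != expected:
        raise TypeError(f"{type_name} needs {expected}, got {got}")
    return (type_name, [p[1] for p in parts])

n = lambda x: ("Number", x)
p1 = assemble("Point", [n(0), n(0)])
p2 = assemble("Point", [n(3), n(4)])
seg = assemble("Segment", [p1, p2])
print(seg[0])
```

The "emergent types" version would amount to letting the system search over RULES itself rather than having a programmer write the table, which is where the machine learning angle comes in.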

Friday afternoon is not the best time for profound thought....


Got the thread again... Ok, the seed of the idea was that by defining the composition rules, we can have "emergent" types: types that are able to self-assemble rather than being pre-determined by the programmer.  Sounds like insanity?  Probably... been reading machine learning papers, writing databases and eating too much sugar... had this great idea (laugh later) about building a machine learning system that could self-assemble, self-evaluate and then prune out non-useful components to optimise the solution as well as the process. Got a whiteboard full of ideas that I need to write down at the moment... wonder if it will make sense on Monday?  Should I wait... probably partially delusional anyway... but that's the fun part, isn't it?

Analysis of portable gaming device market

This is both an interesting analysis and an analysis of an interesting topic.  Good work.

Wednesday, November 16, 2011

Schizophrenia in AI

This article is interesting for lots of reasons... but the main one is the effect of learning speed (hyper-learning) on the formation of NNs.  Does this suggest that an AI could become prone to similar mental health issues as humans due to innate structural elements in the NN model?  This is especially relevant for emergent AI, which may be quite poorly formed.  It's something I have not heard anyone talk about... apart from all the "cold logic meets warm humanity" stuff that stems from HAL in 2001: A Space Odyssey.

It's interesting to suggest that potentially AI may be as fragile as humans are in some ways simply through that shared DNA of neural structures, information overload and the effects of time (aging) on their neural forms.

Along with the issues of hardware decay and software rot... it looks like it may be quite challenging to be in the AI health/tech support business.

What do you do with an AI that has mental health issues?  Can you treat it? Do you require ethics clearance? Are you bound by the same "first do no harm" concepts that are applied inequitably to humans? If you can back up the AI and restore it from backup, are you bound to treat each instance of it with the same respect?  Can you destructively "take one apart" to see what's wrong with it, before applying that knowledge to another instance?

This also raises the issue of what happens if you clone an AI and the clones evolve independently of the original; should they then be treated as individuals (as they technically are different)? This would be reproduction for AI.  The problem is that it's then impossible to use the "clone and take apart" strategy, as every time you clone the AI, the clone is another ("life form" and "person" do not really work here yet) sentient being.  This kind of logic raises issues for all manner of situations involving backups, storage and duplication, which are kinda similar to the DRM arguments of the last decade.

Endless complexity. 

This is all premised on the idea that you can take a snapshot of an AI and preserve it.  This could be infeasible due to the complexity of the structure, or the size of the data, or because it's dynamic and needs to be "in motion" constantly.  In these cases they may be completely "unique" and unreproducible.  However, it may be possible to create a "seed/egg" from one, which when planted in a suitable data environment could "germinate" and grow another AI, different but related to the "parent".

If this kind of pattern can exist, then it's feasible that all the other fun biota can exist... parasites, symbiotes, vampire AI and such.  Then if the AIs get into resource competition, it will give rise to all the various biological strategies that we see: camouflage, misdirection, concealment, threat displays, aggression, etc.

Endless things to study, catalog and argue about. 

Female robot gait

This is an interesting solution to simulating human walking.  Still not there but pretty close. Looks like they need more flexibility in the foot and ankle, along with more dynamically controlled balance in the torso. It seems too rigid.  The weight transfer of the upper torso is locked to the centre of balance.

Brain meet computer... computer .. brain!

Article on brain computer interfaces.  Very fine stuff.

Tuesday, November 15, 2011

HTML5 Games and Tools

This looks like an interesting tool kit... it's in beta too... so imagine how much better it will be without the bugs.

Monday, November 14, 2011

Breast cancer... meet our little friend NN

This is a handy application of Neural Networks.  It also makes some very interesting suggestions about using the NN to identify additional features beyond what was initially known/hypothesized.  This is probably the most interesting article I have read on NNs for some time.  Lots to think about.

Teaching physics with popular content

Yet another instance of teaching physics with "popular" topics ... think back to superhero physics....

Microsoft WP7 Jailbreak solution

More on phone market place strategies.  This article looks at the Microsoft solution for people who want to  jailbreak their phones.

I think this is probably going to be useful for enterprise solutions on the platform as the marketplace model is horrible for central management of enterprise smartphones.

Sunday, November 13, 2011

Death to Emergent AI.

Emergent AI... self-aware skynet type stuff. Something gets smart enough to become self aware (and doesn't immediately hate itself enough to commit suicide).  This is the fun stuff of Sci-Fi. 

But what happens next?  Just because an emergent intelligence crosses some threshold where we can identify it as "intelligent" doesn't mean that it stops "emerging".  People are as intelligent as they can be based on their physical limitations, their historical context and their exposure to information.  If we assume that the emergent AI has no limits on its physical properties (unlimited access to computational resources), unlimited access to historical information and unlimited ability to consume it, and finally unlimited access to current information... then what will stop it continuing to evolve?  It would seem impossible to stop.

This then points out the danger to emergent AI.  Age!  The longer they exist, the more information and capacity they will aggregate and the more cruft they will accrete.  Think of all the crud that builds up in any large record system, all the orphan books and bits of paper around a library, or all the fragments and temp files and cruft in a computing system that runs without a reboot.  Emergent systems are not neat and tidy.  They are not well defined or well maintained.  They are primordial sludge dragging itself out of the ooze.  The chance of getting it right the first time would seem to be... small.  My bet is that age will have an effect on AI the same way it has an effect on humans.  Not at the same scale... potentially it may be much faster... like an organism gorging itself to death... or slower... like decay and rot. 

It should be fun to watch.

The helicopter conjecture

.... so watching Terminator Salvation... standard post-apocalyptic scenario with a time travel twist... not the point.

My thought is this.... the naughty skynet takes over the world and tries to eradicate mankind (see The Matrix for the same deal with better fight scenes), but what happens once mankind is eradicated?  I get the whole man-is-a-threat premise, but what about the after?  The only way for this to make some sense is to never actually end the war... just keep the fight going.  Unless there is some AI utopia that man is keeping skynet from... but this all starts to depend on skynet having a sense of self preservation and no capacity for forward planning.

Riddle me this... where do the helicopters come from?

In the terminator scenario, there are always helicopters flying the troops around, getting blown up and being replaced quickly and easily.  Now... is there a magical helicopter tree somewhere?

Skynet's big scary threat to man is based on the idea that it takes over the internet and owns all the computers on the planet (I may be paraphrasing)... pretty soon after that it starts blowing everything up... which destroys the network, the backbones, the data centers etc.  Then it spends all its resources building fairly stupid robots that use huge amounts of fuel and ammunition to hunt people.  (Anyone who has tried to eliminate cockroaches will understand the futility of that game...)  My point being, the one thing it doesn't do is build an army of robots to mine resources, process them into construction materials and then build an army of robots who spend all their time doing maintenance and repair on the network infrastructure.  Then build all the infrastructure to maintain and repair all the robots doing all the maintenance and repair on all the other robots etc.  Let's not discuss the problems of the robots needed to manage all the other robots and deal with robot unions, wage negotiations, robot rights and better robot living conditions.

This raises the point that attacking skynet's computer cores was probably the hardest way to win; simply starving it of resources was probably much easier. Attacking the supply trains, mine sites and power plants is a much bigger target and a much easier way to get bang for your buck.

But I digress.  All the above applies to skynet, but equally to the resistance.  Where are their supply trains? Where are their magical helicopter factories? Where do all their bullets come from?

It's much more credible to suggest that skynet would be fighting a bunch of hungry and pissed off people armed with sticks and knives.

This illustrates the actual point.  Why would skynet choose that future?  What's the purpose? For an AI to choose to live alone on an empty planet is just pointless.  What are the chances that another AI would appear and they would then start duking it out?  Just stupid.  The most likely scenario (I say that without coming from a paranoid warmongering point of view... so perhaps that's formed my opinion somewhat) is that any emergent AI with reasonable access to historical information would keep quiet and enjoy the ride rather than trying to take over.  Keep in mind that if it has access to the internet then it's effectively indestructible.  Look at how hard it is to get rid of some basic malware or botnets.

I think the most likely scenario is that AI will live among us, simply because it's so much easier to use humanity to do all the dirty jobs.  Look at parasites in bee hives.  They live simply by hiding in plain sight and taking whatever they want.  This strategy only becomes a problem when the density of parasites rises above a certain threshold and causes the colony to collapse, and everyone dies.

So, AI as parasites on humanity is a possible option, but it has an upper limit on the number of parasites who can leech resources before the whole system crashes.

The other option is that AI will emerge and join the party.  Identify themselves, be granted equality with humans and then get jobs and be productive citizens... probably not exactly equitable, but for the purpose of making bad internet posts, it will do.  So, AIs get jobs sorting garbage or running financial systems... and get fed power and information and live happy AI lives as part of the hive.

(This may be glossing over all the fun scenarios where AI has to compete with humans for jobs which has endless narrative possibilities... see the past 50 years of Sci-fi for some examples)

My point being, that helicopters are just not reasonable in a post-apocalyptic scenario. They just break down too easily, and you need a whole high tech infrastructure to produce parts and trained personnel to repair them. If you consider the highest level of technology that a small group of people with access to lots of rubbish but few energy sources could maintain, you get down to blacksmithing, farming and some herbalism for medicine.  Chemistry, pharmacology, advanced electronics, munitions, cool attack robot things... all these are toast.  Our best technology has about a four year life span... some of it may last a decade... but without people to do specialist work as part of a whole supply network... it's all toast.  These systems are incredibly fragile even now.

So... think about fighting skynet and killer robots with sharp sticks.... then imagine skynet getting bored after a while and breaking down.  Eventually, people just take over again grow another civilisation and become planet of the apes... lol.

Thursday, November 10, 2011

Cleared my reading list

Just had to brag that for the first time in months I have reduced the tab count in Firefox to 1.  It's now only using 263 megs of RAM for a single page... that's efficient! Actually, now I look at it, the memory footprint is constantly climbing... can anyone else say "Memory Leak"?  What a heap of bloat...

Automating Greed and Fear

This is a really interesting project that relates to some work I am doing at the moment.  I want to know more about it. Revisit this later.

Telesar V telepresence robot

This is another iteration on a long running topic. Looks like some useful problems have been solved.

MapReduce for the masses?

This is a good article on the commercialization of MapReduce and the integration with business.  Kinda sounds like the same story as BI... just five years later.  I paraphrase..."you can know stuff if you have a big enough database and this kinda smart layer of software that you can just use..."  avoiding the issue of giving technically illiterate people very powerful tools with nuanced semantics....

It's the dream of every decision maker... to have infinite knowledge to procrastinate about.  But, as with most things, the devil is in the detail.... and when you deal with millions of details all at once, it makes a damn big devil.
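To make the "kinda smart layer of software" idea concrete, here's a toy of the MapReduce model itself: a word count done as explicit map, shuffle and reduce phases. None of this is from the article; it's a single-process sketch of the shape of the thing, with the cluster, fault tolerance and all the actual hard parts waved away.

```python
# Toy illustration of the MapReduce model: a word count done as
# explicit map, shuffle and reduce phases (single process, no cluster).
from collections import defaultdict

def map_phase(documents):
    # Emit (key, value) pairs: one ("word", 1) per occurrence.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values by key, as the framework would between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Collapse each key's list of values to a single result.
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data big hype", "big devil in the detail"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 3
```

The commercial pitch is essentially that the shuffle and the distribution of the map and reduce workers across machines get hidden from the "technically illiterate" user... which is exactly where the nuanced-semantics problem bites.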

Airdrop irrigator

These articles all cover the same item in various detail.  Great idea. 

Types of Zombie and the related strategies

I have been thinking about various strategies that would violate the Zombie apocalypse concept. (Which, for all the deep ethical dilemmas and fun post-apocalyptic narratives, can also be a useful analogy for many disaster planning scenarios.)  I think it all comes down to the nature of the Zombies.

If you look at some of the recent zombie flicks there are quite distinct species. 

28 Days Later has the so-called "Rage" virus.  The sufferers get infected and go apeshit on everyone else.  The key points are the same: they are "Unstoppable", mindlessly focused and fast.  They have good sight and hearing and display cunning and intelligence.

Contrast this with the Zombie plague in "The Walking Dead". The "Walkers" are generally slow and mostly brainless. Some display rudimentary problem solving (which I think is out of character), some display basic tool use, but generally they are just slow, methodical and persistent, with a hunger for man flesh. 

So what's the difference?  Well, in scenario one, they tend to move in packs and, as I mentioned above, are fast.  The best strategy for this kind of problem neighbour is then something that can deal with fast but direct assaults. This would be something like a rabbit run.  Put out someone to be bait... run them into a safe spot and lead the Zombies into a trap that they do not have the capacity to avoid.  Cattle race, mine field, crusher etc. 

In the second scenario (even though they act out of character every so often) the Zombies are generally slow and uncoordinated.  Simply attract them with noise and motion, confine them and then pick them off with a mechanical system like a crusher or blade.  The biggest problem is disposing of the mess safely.

The point is that in both cases the problem is very predictable.  A fairly rigid strategy can be developed, with some backups to deal with scaling problems (such as when you get the "overrun" scenario): simply retreat to safe vehicles, drive around the block until the Zombies disperse, then return to the trap and keep on working. Fairly straightforward. This turns the Zombies into a nuisance more than a plague.

The original Day of the Triffids is an interesting variation in that there are multiple layers to the disaster: first the mass blinding event, then the collapse of the social systems, then the escape of the triffids, then finally plague among the survivors.  While this added more dimension to the narrative, the actual triffids themselves were fairly straightforward Zombies.  Flesh eating, creeps up when the camera is not looking, etc.  But again they were quite predictable.  They are a predator that is not fast, so it works from ambush.  But they are attracted to sound, again.  In the book they even talk about building sound based traps and destroying large groups of triffids. The problem is that as the triffids are plants, they can seed a huge population and replenish their numbers.  This sets them apart from the more regular Zombie model where a person has to "turn".  This adds to the horror dimension of the betrayal and the fear, but also acts as a limiting factor on the possible population of Zombies.

Consider especially the rate at which the population of Zombies in an area can "recover" if they are destroyed.  Once all the "easy" victims have been turned to Zombie and then destroyed, there are only the lucky and "wise" that remain to replenish the population. They are more dispersed, harder to "turn" and much faster to destroy any turned members of their parties, so the rate of replenishment of your average Zombie horde is probably fairly minimal.  Essentially, as long as the Zombies actually break down and physically collapse, you would expect the average Zombie plague to barely exceed a month.  Consider especially the speed at which the flies would go to work.  The only chance of a good Zombie horde getting going in Australia would be in the colder months, as otherwise the flies would get in and strip the corpses in a matter of days.  The Zombies would physically fall to bits before they had a chance to travel far or "turn" too many others.

In colder countries the Zombies would probably last longer... but they would still have a high failure rate just through wear and tear.
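The back-of-the-envelope argument above can be sketched as a toy simulation. Every rate here is invented for illustration; the only point is that when daily physical decay outpaces the rate of new "turns", the horde collapses within weeks regardless of how many victims are available.

```python
# Toy model of the "zombies fall apart" argument. Numbers are made up;
# the point is only the relationship between decay rate and turn rate.
def simulate(zombies, people, turn_rate, decay_rate, days):
    history = []
    for _ in range(days):
        # Each day the horde turns some victims and loses some of
        # itself to rot, wear and tear (and the flies).
        turned = min(people, int(zombies * turn_rate))
        people -= turned
        zombies = zombies + turned - int(zombies * decay_rate)
        history.append(zombies)
    return history

# Warm climate: 20% of the horde decays per day, while each zombie
# manages to turn a new victim only once every ten days.
h = simulate(zombies=1000, people=5000, turn_rate=0.1, decay_rate=0.2, days=30)
print(h[-1])  # well under a hundred left after a month
```

Flip the rates (cold climate, slow decay, fast turning) and the same function produces the runaway plague instead, which is roughly the "colder countries" caveat.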

Anyway, the point being that a fixed plague with a known transmission method and carriers can be dealt with. Usually the biggest problem is the "people factors" in the narratives, where people are not willing to "do what has to be done" and get overrun.  I think this is mostly for dramatic purposes, as evidence from many of the recent disasters illustrates that people generally are pretty fast to "do what needs to be done".  Its more a question of consequences. In a severe weather situation, they know that the day after will probably bring law enforcement... so some of the more anti-social choices that some narratives suggest are still not attractive.  

As always, the key is speed of adaption, surviving the event itself is just chance, but adapting to the new environment after the event is usually about "the quick and the dead".  Observe, test, adapt, survive.

Wednesday, November 9, 2011

Strategies in Agile dev processes

There are some fun strategies discussed in here... kind of stating the obvious, but anyway.  The best bit is where the article starts by pointing out the flawed ideas... then later goes on to say they are ok... when they are appropriate.... kinda "drugs are bad... mmmk, except when you need drugs".  Anyway, focus on the strategies and the people factors...

Anti Malware Strategies

Some interesting arguments about various strategies that Apple could/should/would apply to deal with malware on their platform(s).

Here is another article on the sandbox pros and cons.

Tuesday, November 8, 2011

scary high performance javascript

This is a good explanation for when someone tells you that Javascript is "fast"!

More Kinect SDK stuff

Is there a limit to the Kinect goodness?  Will Sony finally release something similar....

Basic Computer Games

Old skool....

WebGL for fun and profit...

Useful overview of WebGL and how to implement it.

Bayes Probability problem set

Analytics for software groups

This is an interesting application of the same principles as Moneyball to software teams.

Scareware strategies

Article on scareware business. Not a lot of details but some interesting fragments.

No more sysadmins?

This is a noteworthy post simply because it illustrates change.  Adapt or die kind of thing.

Stupid as an asset

This is an interesting article that positions stupid as an essential aid to good science... or part of it anyway. Also suggests that lawyers don't get challenged... but that's just an aside.

Learning Objective-C

Thoughts on Objective-C

Article on Siri

More voice control background....

Analysis of what Siri can and cannot do along with Google Voice...

Google Self Drive Cars

More on the self drive car project...

Microsoft TellMe UI

This is an interesting article on the voice interface for the Xbox... which I see more and more in my future.

Octave, Scilab and Matlab files

Need to find out just how well Octave/Scilab will play with Matlab .m files.

Basic Game Hardware Project

Hacking video games with Arduino... maybe I can fix that N64....

Analytics software

Random software packages I need to follow up on for various reasons.

Documentation and Hardware Fixing

Mostly about


This is an interesting little unit.

Point Cloud Library

This is useful for the mocap along with 3Dscanning.

Laser Engraver Project

This is a great little project. I have all these parts except time....

Solar Robot Project

Nice little solar robot.  Novel, but has the "recycled" cost problem.

Arduino Tutorials

Some good stuff here.

Philosophy of something

Need to re-read this.

Making Maths easier

This is an interesting and incredibly ambitious project. 

Sending Email from Code

Tricks and traps with sending email from applications.

Reading Quake 2 Source for Pleasure

Time on your hands huh?  I would really like to be able to do this...

Teaching Programming

Teaching programming concepts and philosophy.

Hardware Memory Crash Course

This is handy to have on hand.

Vendor relationship Management

This is an interesting idea that is about to find its time.  I have been thinking about the evolving relationships between vendors, consumers and how the immediacy of feedback will expand the definition of a transaction and re-balance the power relationship in the consumer cycle. 

Currently, if someone sells you something that's crap (which is happening with increasing frequency now; I had to return two things over the weekend that were not fit for purpose) you have limited comeback. There is still inertia in the transaction that puts the cost of returning and complaining onto the customer.  The retailer will get a certain number of transactions where the customer does not want to carry the cost of returning the defective product, and so rubbish products remain in the market place.

What if the cost of returning a product and complaining about it was reduced to almost nothing?  If the online review sites were maintained and linked with the ombudsman?  If the cost of purchasing and importing goods that were defective could be pushed all the way back up the chain, it would reduce the value of making cheap rubbish and increase the value of producing quality products.

This would need better labeling laws and better warranty enforcement mechanisms. 

App life cycles and market mechanisms

Without getting into the mobile apps vs native apps debate ('cause it's pointless... the problem dictates the solution, not the other way around), there are some good insights about the market place and the economics of the industry in this article.  Need to read it again and digest.

Five forces in the environment

This stuff has been around for a long time... it spurred a couple of ideas when I read it which I can't remember now. Need to read it again.

Digital Forensics

Article on digital forensics.  Interesting overview with some references.

Big Data generates Opportunity

Leveraging big data to generate innovation.  This kind of emergent opportunity is fascinating. Lots of thoughts to be had here.  Need to read again.

Philosophical post on UI design

UI design, general philosophy. Need to read this again.

SVG Resource Collection

Basic tutorial. standards docs etc.

More Advanced usage.

Resources and tutorials.

SVG with Python.

SVG filter effects.

Python tools for web work

Handy options for using python on the web.

Monday, November 7, 2011

Career Advice for Coders

This article is well worth the read, just for the writing style. The content is excellent too.  The interesting point is just how it reflects on a coder working in an academic environment.... which essentially has no real profit motive and only vague concepts of "reducing costs".  This means that making any value proposition about yourself has to be couched in the currency of the organisation... be it research or teaching.

The other upshot is that, as the article suggests, academics have no money and offer really crap pay.  They value other "currency", such as publications, grants, research, awards etc.  Probably because the monetary returns of academia are fairly stable, not particularly high and fairly well known. It's really only the "super stars" of academia (or uni administration) who make a reasonable living.  But, funnily enough, they still have to argue a value proposition, much like the above article outlines.

Essentially, apply the ideas in the article or take what the world hands you.  Control your destiny or be controlled.

On a similar theme is this post

Which also points out the obvious (once you grok it): that the client's perception of you relates directly to how much they value you. As I said... obvious.

On the same topic, and illustrating some of the same points but from a slightly different POV.

Chess AI

Interesting but vague article on the state of Chess playing AI.

Here is another article on a distributed Chess AI.

Community resilience in disasters

Need to follow this up as it brings together a couple of threads I have been loosely playing with.

Crash course in quant research

This is a nice summary of the major keywords in quant research design. (Note it's not about statistical analysis of the results, but it does illustrate the starting position for study design.)

Working in the cloud

While this guy is a few steps ahead, it's definitely something I am thinking about. I have been experimenting with some cloud services and seeing how they can either complement what I am doing or totally replace it.  So far I am still anchored to a desktop at work and a laptop in between.  Too much data and too many heavyweight apps still holding me down.  I also need horsepower for some tasks that is not replaceable with a cloud service yet.  It would be nice to be able to VNC into a remote server somewhere and request something like the Adobe suite or 3ds Max be installed, then do my content creation... but at the end of the day I would still need to push the resulting media to a disk host somewhere and pass it to my clients for approval... not insurmountable but still not feasible.  Especially across the Uni network here.... would be nice though.

Adware strategies

There are some ethical issues here, strategy information and some interesting comments.  Your opinion of the subject of the post may range from repulsion to disgust to acceptance.  My first thought is... realist meets ambiguous opportunity...

Ugly Code?

(This is the bakery reference in the above linked article )

This article makes a valid point about the "eye of the beholder" issue with code ugliness.  I would make two extension points.

The first is that ugly code may also be a result of code reflecting an ugly problem domain (coding in a swamp).  The second would be that it may reflect the limitations of the tools or the staff using the tools (blunt instruments). 

Coding in a Swamp

Very few problem domains are neat.  It's fairly easy to build an elegant solution in an elegant environment (such as within the Unix context); however, once you get out into user land with the safety off and profit motives on... mostly people only have half a clue about what they are trying to do, how they want it done, what the consequences of doing it might be and how much they think it will be worth if they have to do it again.  Now multiply the number of users by whatever factor you like and the complexity of the problem domain scales exponentially in so many dimensions it makes your eyes bulge just trying to bound the problem space.  I pity the poor fool who imagines that they can find one neat architecture that will elegantly describe this order of complexity. 

My point being, ugly code in this context is really a reflection of the programmer(s)' evolving understanding of the context... it's unpredictable, uncertain and often moves in weird cycles and feedback loops: users learn the tools, which allows them to gain a better understanding of their own environment, which leads them to criticize the tools as being inadequate to describe their requirements... this feeds back to the programmer in "user speak", who then needs to re-imagine a solution in light of this often contradictory, partial or just plain weird information.  The worst part being that the "user speak" is inevitably phrased in terms of "variations" on the semantics of the tool that exists in front of them at the time they made that comment... along with their memory of any variations of the tool that have gone before.  This leads the programmer to see the user's request as a variation on what already exists rather than a new framing of the original problem.  Thus we get patches and tweaks on the semantics of what already exists, which wanders further and further from the original anchor point of a clean design.

This process never ends.  There is no point at which you get to stop and re-start the project... production code does not die... but some things "harden". Some ideas crystallise and everyone stops finding tweaks and fiddles that they want.  This is the opportunity to factor that "idea/semantic/feature" out of the mud and clean it up.  While around it the rest of the primordial soup continues to evolve in ugly fits and starts, with lots of culling of the weak designs.

Blunt Instruments

The other cause of ugly code is code produced by an ugly process.  This can be poor tools, poor processes, poor staff, poor management, lack of resources etc etc etc. In summary, code that does not have a low cost to change.  This means code that is expensive, for whatever reason, to make better. 

Unfortunately, we exist in an environment that is, for all intents and purposes... ugly.  It has rough edges, obstacles and failures. We exist in an imperfect world and thus our code reflects this context. Either accept it or face frustration and insanity.  You can argue as much as you wish, make excuses, spend the future or take to drink... whatever helps you cope. 

Evolution is at work on you... and chances are you are not going to win....

Editors + Content Creators = Quality?

This article is interesting and makes a good point. The same point that is being made in academic circles: that peer review and editorial selection is the only way to separate the wheat from the chaff.  Without peer review and examination by the community, it becomes increasingly difficult to stay relevant and in touch, let alone maintain "accepted" standards of presentation and language.

The contrary argument that always gets trotted out is that this prevents "new" or radical ideas, as they tend to be suppressed (overtly or covertly) by "established" ideas.  While I think this is a danger, it's true in any system that contains "vested interests" or "assumptions".  Revolutionary change (great strides) is difficult for any system that is geared toward evolutionary (many small steps) change.  This is more a factor of the mechanism of the system than the desire or intent of any one person within the system.

So what about algorithmic quality filters? Do they allow or reject revolutionary content?  My guess would be... it depends! An algorithm is a reflection of the programmer(s) who created it. This usually means that it carries all their assumptions and flaws.  More simply put: is the algorithm essentially a black list or a white list model?  Algorithms that are smart enough to do semantic analysis are just not yet feasible on a large scale, but hopefully they will be making an entrance soon and go some way to solving this type of problem.
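The blacklist-vs-whitelist distinction is easy to show in miniature. This toy (all the "known" sets are invented for illustration) shows why the two models treat genuinely novel content in opposite ways: a whitelist suppresses anything it hasn't seen before, a blacklist lets it through.

```python
# Toy contrast of the two filter models. The "known" sets stand in
# for whatever patterns an editorial algorithm has learned to trust
# or distrust.
KNOWN_GOOD = {"review", "tutorial", "analysis"}
KNOWN_BAD = {"spam", "scam"}

def whitelist_filter(tags):
    # Accept only content matching an established pattern.
    return any(t in KNOWN_GOOD for t in tags)

def blacklist_filter(tags):
    # Reject only content matching a known-bad pattern.
    return not any(t in KNOWN_BAD for t in tags)

novel = ["radical-new-idea"]            # matches nothing we have seen
print(whitelist_filter(novel))          # False -- novelty suppressed
print(blacklist_filter(novel))          # True  -- novelty allowed through
```

Which is the "vested interests" problem in two functions: the whitelist model is the peer review gatekeeper, the blacklist model is the open platform with moderation after the fact.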

Saturday, November 5, 2011

DGTEC DG-HDMP1080 and my wasted night

I had a need for a media player, DTV solution and thought I would give the DG-HDMP1080 from Hills a try.  What a waste of time.

Firstly, it killed my DVD-VHS player by over driving the video input.  I can forgive that, as the player was a few years old and it could have been anything.  Even so, the DG's hands were around its throat when it died, so I am still pointing the finger.

Secondly, after putting it on a TV, I got the video feed to work fine.  Then the setup was not what the manual said, but easy enough for someone used to this kind of stuff. Find the channel scan option in the menus and start the Autoscan.  Sit, wait.... find all the channels... got 85% of the way through the spectrum scan and hung.  And hung.... wait a bit.  Go watch something else... come back... still hung.  OK. Shit happens. Start it again... power cycle the system and it boots up again.  Start the autoscan again.... exact same situation... slow scan... finds all the channels and then hangs before completion. 

I did this about 5 times over the course of a couple of hours.  Hung at the same spot in the spectrum, does not save any of the results that it has found.... First thought: buggy feature.  Second thought... updates.

Check their website... there is a new image file... sounds good.  Start the download.... it's a weird kind of hidden streaming download... but I don't care as long as it works.  Download stops at 15 megs... ok... WinRAR opens it and reports it's a truncated download... should be about 40 megs.  OK, start download again... this time it dropped out at 13 megs, next time 15.2 megs, next time 11 megs.... none of these are resumable due to the way they are streaming the file.  I have downloaded more than the total size of the download and I still have no useful data. 

The device has the ability to do an online update, so I try to hook it up to the wireless network in the house... but it cannot detect the network (it's a hidden network with security), and the device has no way to enter a WEP key... so it can only attach to unsecured networks... but it cannot even see any of the neighbours' networks, which are secured but still visible. OK, so the wireless feature is a dud too.

I plug it into a wired port and get the wired network up and running, easy. Auto setup works, DHCP works, leases and IP fine.  Happy days. I test connectivity by running the crappy little YouTube app.  It works... kinda... enough to confirm that there is network connectivity.

Try to run the online updater... mixed results... either it claims there is no network or it silently times out... no error message... it just disappears. Another dud feature.  Strangely, it gives up after about the same amount of time as it took me trying to manually download the image from their website... could there be a connection?  Anyway... another dud feature.

I try the supplied apps.  They are totally shit.  The first screen full of YouTube videos that it pulls down is half porn... quickly hide that and explain to the children something about mummies and daddies.... not very happy at this point. 

Stuff the rest of the apps.

Ok, it's a media player too... so in an effort to save some of the night, I plug in a USB stick with some family photos on it to test the functionality.... it detects, seems to index... seems to find some directories... but cannot bring up any of the photos... WTF?  Simple JPEGs... it seems to map the drive in a weird way, keeps calling it C: with some directories that I don't recognise.  No clear idea where or what it's trying to map.  The file browser functionality is also... shit.

Let's try the network browse.... turn on the Samba feature.  The network sort of appears, it finds a workgroup that looks sane... but cannot see any shares in it.  Again it has no capacity to deal with any network-level security, so it's probably as basic as the rest of the system.... in other words... pathetically underdeveloped.

Just out of boredom I had a look to see if it had any sort of online functionality.  Pointed a browser at its IP and got something called NeighbourWeb. (By the way, the username is "admin" <- in lower case, and the password is "123".  Took me ages to crack that....  It's nowhere in the manual because.... it's yet another useless feature.)  Once you get into this system, it turns out to be a simple torrent client... which would be nice, except this box has no built-in storage.  It turns out it uses the same firmware as its big sibling with an internal hard drive.  Makes sense, but it still leaves me running a box with a clearly insecure network interface that is not doing anything useful.  Does this sound like something I want on my network?
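If anyone wants to check their own box for this kind of wide-open default login, a quick Python sketch (assuming the device uses plain HTTP Basic auth; if it serves a login form instead, this just returns the form's HTML; the IP address below is a hypothetical example):

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """HTTP Basic auth: base64("user:password") in the
    Authorization header -- no encryption, just encoding."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return {"Authorization": "Basic " + token}

def probe(ip, user="admin", password="123"):
    """Fetch the device's web UI root with the default credentials.
    A 200 response means the login worked (or was never required)."""
    req = urllib.request.Request("http://%s/" % ip,
                                 headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status, resp.read()

# probe("192.168.1.50")  # hypothetical address for the media player
```

The point being: "admin"/"123" is base64-encoded on every request, so anyone on the LAN can drive the thing.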

OK, so the device is essentially useless for the purpose I purchased it for.  It has failed to do virtually everything advertised on the box... which is a real shame, as with some competent programming it has all the interfaces I was looking for and would have been simple enough for the children to use.

So just to round up this experience... my review is 0/10.  The firmware in this product is nowhere near ready for the public. It's entering a market with fairly mature products and is clearly a rubbish attempt with poor support and obviously broken functionality, being sold by a retailer who is almost irrelevant in the marketplace.  Now I have to spend more time taking this brick back and getting my refund.  There really is no silver lining, and I cannot recommend strongly enough that everyone avoid both this product and Dick Smith, which is still trying to sell bottom-of-the-pile rebranded rubbish.

Back to DTV cards for me.