Tuesday, September 25, 2012

Sketching ideas for an RPG social system


Firstly, it has to be an agent-based model.  No other options.

Use a small-world network based on proximity, but with travel and some mixing functions over time. This network can provide the information propagation system, the friends-and-enemies mechanism, the trading model, etc.
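Something like this is what I have in mind as a starting point; a rough Python sketch assuming networkx is available, with all the numbers pulled out of thin air:

    import random
    import networkx as nx   # assumed dependency, just for the sketch

    def build_social_graph(num_agents=500, neighbours=6, rewire_p=0.1, seed=42):
        """Small-world graph: mostly local (proximity) ties plus a few long-range links."""
        return nx.watts_strogatz_graph(num_agents, neighbours, rewire_p, seed=seed)

    def mix_over_time(graph, swaps=5, rng=random):
        """Crude 'travel' step: occasionally move one end of an edge to model agents moving around."""
        nodes = list(graph.nodes)
        for _ in range(swaps):
            u, v = rng.choice(list(graph.edges))
            w = rng.choice(nodes)
            if w not in (u, v) and not graph.has_edge(u, w):
                graph.remove_edge(u, v)
                graph.add_edge(u, w)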

The other option is to use different network models for different "classes" of agents. Perhaps a mix, to represent different concepts: a small-world network for information propagation, with a context-free network for trade?  Have to have a play... maybe invent some other topologies.

The various agents' decision-making models will be dynamic, with LOD based on proximity to the player.  When the player is distant from the agent in both space and network terms, the agent will degrade to a very simple state machine (essentially going through the motions and just doing the "big stuff") until the player gets closer, at which point the complexity of the decision-making model can be increased to improve the fidelity of the player's local experience.  When the player is actually interacting with the agent, the agent will move up to some sort of voting-based, multi-layer utility model.  This should provide the highest-fidelity interactions.
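In pseudo-Python the LOD switch might look something like this (the Agent type, its helper methods and the thresholds are all invented for the sketch):

    SIMPLE, SCRIPTED, UTILITY = range(3)

    def pick_decision_model(agent, player):
        """Degrade distant agents to a cheap state machine; promote nearby ones."""
        spatial = agent.distance_to(player)           # world-space distance
        social = agent.network_distance_to(player)    # hops in the social graph
        if agent.is_interacting_with(player):
            return UTILITY    # full voting-based, multi-layer utility model
        if spatial < 200 or social <= 2:
            return SCRIPTED   # mid-fidelity scripted/behaviour-tree layer
        return SIMPLE         # "going through the motions" state machine

    def tick(agent, player):
        agent.update(pick_decision_model(agent, player))   # only pay for the fidelity needed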

(It would be funny if there were a feedback system where the agents could identify their own LOD as part of their utility model, to see whether a behaviour of trying to stay near the player emerged. See if we could get all the agents in the game to "flock" around the player.)

In an effort to simplify the environment for the mobile agents, I think it's best to represent the environment as a set of abstractions (or simpler, less mobile agents).  Specifically, represent social environments such as towns as agents.  This layer of agents can then act to recruit new residents, manage all the functions of the town using some heuristic model, and generally handle all the bits that we don't want to spend the effort getting to work through emergent behaviour: things like trade, markets, festivals, law and order, development, reconstruction, etc.
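A town agent could be something as dumb as this (sketch only; the methods and numbers are placeholders):

    class TownAgent:
        def __init__(self, name, treasury=1000):
            self.name = name
            self.treasury = treasury
            self.residents = []
            self.damage = 0.0    # 0..1, how beaten-up the town currently is

        def tick(self):
            # Heuristic "big stuff" only: recruit, repair, then celebrate.
            if len(self.residents) < 50 and self.treasury > 100:
                self.recruit_resident()
            if self.damage > 0.3 and self.treasury > 200:
                self.treasury -= 200
                self.damage -= 0.1
            elif self.treasury > 500:
                self.hold_festival()

        def recruit_resident(self): ...
        def hold_festival(self): ...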

As processor power becomes available (player inactivity), additional resources can be handed out to improve various locations based on player activity. We could also use it for "sweeps" that check for agent stupidity and "fix" it before the player finds it (agents stuck in walls, buildings built on top of each other, etc.).

Social units such as families, gangs, tribes, associations, brotherhoods/sisterhoods, etc. can all be represented by an abstract "group" agent.  These again act to model all the functionality of the group, such as recruiting, maintaining order, creating missions, trade, group knowledge, friend and foe, etc.

Using some sort of delegate model, large social group agents can spawn and delegate to smaller social group agents as temporary holders.  For instance, when a large social group needs to divide (imagine a scenario where a tribe of goblins separates: one sub-group pursues the player, while the second sub-group returns to the goblins' cave to keep the home fires burning), the agent that was driving the group can sub-divide and create a sub-agent with orders to pursue the player (within some conditions) and then return to the main group.  This allows the main group agent to degrade to a lower LOD if the player moves away from the main group, and allows the smaller pursuit group to act in a much more complex way than if a single, fairly simple agent were trying to drive a whole tribe after the player.
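A minimal sketch of the split/merge mechanics (member handling is hand-waved and the names are invented):

    class GroupAgent:
        def __init__(self, members, parent=None):
            self.members = members
            self.parent = parent
            self.children = []
            self.orders = None

        def split(self, member_subset, orders):
            """Spawn a sub-group agent with its own orders (e.g. 'pursue the player')."""
            sub = GroupAgent(member_subset, parent=self)
            sub.orders = orders
            self.members = [m for m in self.members if m not in member_subset]
            self.children.append(sub)
            return sub

        def merge_child(self, child):
            """Collapse a sub-group back into this group when its job is done."""
            self.members.extend(child.members)
            self.children.remove(child)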

As the group agents divide and delegate roles to their sub-agents, this can create a great deal of complex behaviour, with the group agents driving the big stuff and individual agents handling issues like local pathfinding, combat, ambient behaviour, etc. It should also allow for some interesting (and cheap) distributed behaviour among groups.
Internal group communication can be modelled as almost instantaneous, which could be used for all sorts of things: effective flocking behaviour, but also mob attacks, distributed "watch" systems, and coordinated attack and defence.

Some control will need to be placed on the rate at which these sub-groups spawn, to make sure there is no "chatter" effect where agents on boundaries cause sub-groups to be defined, re-attached, defined again, re-attached again... and so on.  Forcing the sub-group to exist for a minimum lifetime would be a simple mechanism, even if it comes back within re-attachment proximity.
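The gate could be as simple as this (the numbers are arbitrary):

    MIN_LIFETIME = 30.0     # seconds a sub-group must exist before it may re-attach
    REATTACH_RANGE = 50.0   # made-up proximity threshold

    def may_reattach(sub_group, parent_group, now):
        """Stop groups on the boundary from splitting and merging every frame."""
        old_enough = (now - sub_group.spawn_time) >= MIN_LIFETIME
        close_enough = sub_group.distance_to(parent_group) <= REATTACH_RANGE
        return old_enough and close_enough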

How do we deal with orphan groups?  Imagine a scenario where a goblin tribe is split in two, one part being the original "tribe" agent, while the other is a delegated "hunting party" group.  While the hunting party is out, the main tribe group is wiped out in a horrible kitchen-fire accident and its group agent enters some sort of termination state.  I guess if it retains a link to all sub-agents, and they still have members who are alive, then it has no reason to terminate.  OK... not really an orphan as long as they stay linked.

What about preventing massive fragmentation of groups?  Minimum-member rules? (A group cannot be spawned with fewer than X members.)  Recruitment rules? (When they are out of sight of the player they can re-spawn at a given rate to replenish their numbers?)

There is only so much trouble the player can get into at any one time, so perhaps this would not happen anyway: once the player moves out of range, the sub-groups would collapse back into the main group and the fragmentation disappears.

There is still the chance of fragmentation when two groups collide.  In the event that the player creates some sort of massed fighting group and has a large-scale battle with another group, then it should be possible for both groups to break apart into smaller units and potentially generate a large number of sub-group agents.  At some point in a battle, most of these sub-groups are going to be decimated, and the remaining members will need some mechanism to collapse back to the grandparent agent.  The individual agents could all maintain a link back to the grandparent agent and, if they are isolated, simply make their way back as best they can (or control returns to the grandparent agent, which allocates them to another existing sub-group that may be nearby; kind of cheating, but it should work convincingly).

The interesting possibility of dynamic, learning agents at these various levels will create some level of dynamic change in the game.  It would be interesting to allow the town agent to respond to the level of damage to buildings by changing the type of buildings, or re-modelling existing buildings to be more robust.  The converse is also true: for a town that is rich and prosperous without any damage, make it more decorative and less secure.
Giving significant agents some sort of memory/learning variables that can evolve them in different ways in response to the player's and other groups' activities, as well as the game's general level mechanic, will allow the game to display some emergent behaviour.  Where there is simple inter-group predation without the interaction of the player, it should generate an environment that represents that reality: buildings should be fortified, people should be wary, and there should be some dynamic friction between the groups.  By combining a mix of cyclical factors, oscillators and boundary-seeking algorithms, it will be possible to generate a range of fragile, stable or unstable scenarios for the player to disturb.  More interesting situations could be created where the player acts as a damper or a suppressor (a local-lawman type of scenario), where every time the player leaves the area the problems flare up again.  The player will then need to learn the dynamics of the situation and figure out ways to solve them.
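For the oscillator part, a damped, driven oscillator is probably enough to get the "flares up when the lawman leaves" feel; this is just a sketch with made-up constants:

    def tension_step(tension, velocity, dt, player_present, provocation=0.0,
                     natural_freq=0.5, base_damping=0.05, player_damping=0.8):
        """One integration step of inter-group friction as a damped oscillator.
        High damping while the player is around lets things settle; once the
        player leaves, the low damping (plus any provocation) lets it flare up."""
        damping = player_damping if player_present else base_damping
        accel = (provocation - (natural_freq ** 2) * tension
                 - 2 * damping * natural_freq * velocity)
        velocity += accel * dt
        tension += velocity * dt
        return tension, velocity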

This kind of model will generate a whole mass of emergent behaviour, highly complex group dynamics, inter-group dynamics, all the way down to individual dynamic behaviour.

Now, to add some really emergent fun, provide a general library of strategies and behaviours and allow all agents to pick from them.  Then allow a collective learning algorithm to "learn" the player and weight the shared behaviour library in terms of its value against the player's style.  This will allow the game to adapt to the player's style and skill, and hopefully raise the challenge level for the player over time.  The other major use of this library should be to enrich the behaviour of any companions that the player hangs out with.  They should learn to play alongside the player, gain experience, pick up better strategies and become more helpful as they see the player adapt.  There could be an order system, or the player just does their thing and the companion learns how to support that behaviour (cautious, attacking, stand and fire ranged weapons, circle and attack, move to high ground, put their back to something, run and hide... etc.).
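The behaviour library could start out as nothing more than a weighted pick, with the weights nudged by how well each behaviour went against this particular player (the behaviour names and the outcome signal are placeholders):

    import random

    class BehaviourLibrary:
        def __init__(self, behaviours):
            self.weights = {name: 1.0 for name in behaviours}

        def pick(self, rng=random):
            names = list(self.weights)
            return rng.choices(names, weights=[self.weights[n] for n in names])[0]

        def reinforce(self, name, outcome, rate=0.1):
            """outcome > 0 when the behaviour worked well against this player."""
            self.weights[name] = max(0.05, self.weights[name] + rate * outcome)

    library = BehaviourLibrary(["circle_and_attack", "move_to_high_ground",
                                "stand_and_fire", "run_and_hide"])
    choice = library.pick()
    # ... after the encounter resolves ...
    library.reinforce(choice, outcome=+1.0)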








Friday, September 21, 2012

Strengths and Weaknesses as characteristics for selection


I have been considering the issue of describing identity by catalogs of strengths and weaknesses.

I have come to the conclusion that strengths/virtues/assets are only interesting when they are in some fashion unique.  Rather, it's the weaknesses/failings/lacks that characterise people much more interestingly.

Here is the train of thought:

In a workplace where most people are able to "do" the items of work that they are collectively engaged in, it's the inability to do the work that marks someone out as different. Similarly, in a workplace where the work is unusually challenging, there may only be a few people who can do that work.  However, the recruitment process will select for those people with capacity (hopefully), so the population should remain generally capable.  (Yes, there are situations out in the tails of the bell curve that will always be exceptions... that's missing the point.)

First assumption: more of the population can do the "thing" than cannot.  This makes the "weakness" (the inability to do that thing) the exception rather than the rule.  Being able to do the "thing" becomes convention, assumed... etc.  It's the "norm".

However, the "abnormal" is a noteworthy characteristic.  Inability to satisfy the norm is "different", while the ability to "over-satisfy" the norm criteria is a bit interesting but generally not big news.

My point being that, apart from people who are superheroes at stuff, it's those who are "less than", or weak at being "normal", that get commented upon.

Assumption 2: normal is a spectrum with a fairly well determined center.
I think there has to be a "satisfactory" point on the normal spectrum, above which you are "normal" and below which you are "abnormal". It probably has a reasonable grey zone around it, as all things to do with human judgement do... but it's there in some form.  And it's only those who fall below it that will be commented upon.

My takeaway is essentially: it's nice to be a normal or super-normal person, but it doesn't really get you much (in comparison), while it's pretty bad to be below the "satisfactory" point, as that probably gets you a whole lotta pain.  You are essentially "out".

So, to round this train of thought out: it's probably not worth the effort to catalog people's traits where they fall above the "satisfactory" line, and only worth keeping track of those that fall below the line, simply because it's less data to keep track of in your head.

If you have watched any reality television where the participants are actually challenged (not the ones where they sit around in a big pathetic goldfish bowl), then it's more often a case of understanding the participants' weaknesses in any given scenario than it is of keeping track of their strengths.

Again, as I said earlier... strengths are only interesting when they are "Unique" or rare in some fashion and so are not "normal". (Topic for a different ramble)

(Can you tell I have been reading "The Theory of Evolution" again...)

How can this be used to create efficient selection algorithms?  Well that depends on the application and the assumptions about the task I guess.
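One cheap version, just to make the idea concrete (trait names and the threshold are invented): store nothing for anyone at or above the satisfactory line, so the record stays tiny and the entries that do exist are exactly the interesting ones.

    SATISFACTORY = 0.5

    def catalogue_deficits(people):
        """people: {name: {trait: score 0..1}} -> {name: [traits below the line]}"""
        deficits = {}
        for name, traits in people.items():
            below = [t for t, score in traits.items() if score < SATISFACTORY]
            if below:
                deficits[name] = below
        return deficits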





Thursday, September 20, 2012

Article on NFC...

http://www.macdrifter.com/2012/09/nfc-is-a-crutch.html

This article has one interesting insight on the use of, and a replacement for, NFC.  I have always been a little distrustful of anything like an automated payment system that involves nonchalantly waving your computing device near something.

Talk about out of sight, out of mind.

I really like the proposed solution much better.  Use the device and the face for two-factor authentication.  Display the amount and some one-time handshake encryption on the screen.  Record the transaction.  How simple is that?
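Just to make the idea concrete (this is a toy sketch, not any real payment protocol): bind a short one-time code to both the time window and the amount being displayed, and have the user confirm only when the phone and the terminal show the same thing.

    import hashlib, hmac, struct, time

    def one_time_code(shared_secret: bytes, amount_cents: int, period: int = 30) -> int:
        """A six-digit code tied to the current time window and the displayed amount."""
        counter = int(time.time() // period)
        msg = struct.pack(">QQ", counter, amount_cents)
        digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") % 1000000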


Monday, September 17, 2012

Statistics Software Resources for Psychology Students

This is a list of all the stats packages that I have some level of interest in, or need to manage. This reference list is simply my way of trying to keep track of them all.


The Statistics software list for the various labs I manage:
Microsoft Excel (with various add-ins: Solver, GailsTools, NodeXL, etc.) - cleaning and processing
IBM SPSS and AMOS - lots of varied trickery
G*Power - shonky post-hoc power analysis. See Here
STATA - nothing much
PSY - not sure why, but it's become flavour of the month; I would guess for confidence intervals
SurveyMonkey - online surveys and analytics
Qualtrics - online surveys and analytics
BrainVision Analyzer 2 - EEG recording and analysis
Biopac AcqKnowledge - EEG recording and analysis
MATLAB - experiment construction, stimuli generation and data wrangling
SigmaPlot - presentation graphics
Epi Info - specialised study analysis
NVivo - qualitative data management and analysis


Tools I have not used/seen/been forced to learn yet, but which are mentioned by other psych labs:
SAS
BMDP
LISREL and HLM
Mplus
R
Lumenaut
JMP
EViews
Maple
Minitab
Statistica
SYSSTAT
UNISTAT
XLStat


Other interesting packages I have found while compiling this list:
Empirisoft - seems to be another competitor to E-Prime / SuperLab / PsyScope / Inquisit, etc.

KrackPlot for Network Analysis (May complement NodeXL?)

Thursday, September 13, 2012

Systematic Futility...

Picture this: a business unit is under new management (an internal promotion), and the new manager is reviewing the abysmal performance of one of its sections.  They decide to ask only the other senior people that they know, rather than any of the coal-face people doing the work.

Since they have only recently been promoted and have, up till now, had regular lunch dates with these people... why are they expecting to suddenly gain new insight by talking to the same people they have always talked to, while continuing to ignore the same people who have the information?

Did I mention there is a culture of old, sexist male academics involved?  Talk about endless fucking pointlessness.  This is the same business unit that has been dysfunctionally unable to get anything done right for the past few years.  Now that the VC's mate has finally been given the arse, he has been replaced with another specimen of dead wood from the same bad forest.

This is the same business unit that last year conducted a survey about their performance... and then suppressed the results.

This is the same business unit that was audited by an external auditor and then ignored the vast majority of the report.

It's sad when the senior management starts to look like a retirement home for aged and failed academics who are so out of touch with everything that they just act as a retardant on the efficiency of the system: suppressing change, unable to foster communication, engaged in work-avoidance behaviour, and generally sucking up money and exhaling nothing but stale air.

I must congratulate the VC on his moves to flatten the system and remove a lot of the dead wood that was sitting around sucking up free lunches... but there are still holdout sections of the Uni that need a bright light shone up their dark recesses and a bit of objective rot removal.
My suspicion is that the VC is moving only when his hand is forced, since he probably finds some use for these types.  Someone has to go and talk to the state and federal pollies, and soak up the external audits and the various other bodies that the Uni has to interact with. But that does not make them functional parts of the system when they are at home.

Still, the VC has managed to fire the completely insane one, although it was his personal hire... so it's right that he pulled the plug.

That's an anecdote that needs some telling. 

Hire a person (who has history with at least one senior staff member; bad history, BTW) who then proceeds to rapidly, stunningly and completely irritate, offend and flummox, with her obvious stupidity, a large number of the academic staff in the University.  She demonstrates a complete inability to manage a team, stay focused on anything, play the long game or do anything involving budget constraints.  She speaks in a constant train-of-thought fashion... kind of hoping that if she says enough shit, something may sound intelligent.  She has no ability to functionally answer a question, lacks the trust or faith of anyone around her, and demonstrates incompetence with money, budgets, time and promises to a stunning degree.  She then proceeds to play games with the whole school's planning for the next year by stopping all hiring of casual staff for the next semester (until week three of the semester), and blocks all resource allocation for every unit except on her personal say-so (while having no fucking idea what the units are, how they are run, or why they are being run; essentially trying to micromanage a system that currently takes about 60 or so administrators).  She generally tries to pull all authority for anything up to her level and prevents people from doing the jobs they were hired to do, and have been doing for quite some time... does this strike anyone as anything other than a FUCKUP?

HOW THE FUCK DID THIS PERSON GET HIRED?  VC's personal selection... that's how.

Oh, and then she got a golden payout as she left the building.  Oh, and she had a well-documented history of doing the same shit in her last position... where she was also shit-canned.  Was any due diligence done on this fact? I leave that as an exercise for the reader... (since I don't actually know).

Wonder what sort of reference she got?

The funniest bit was watching her online profile as she crowed about being hired... got fired... then stripped all references to the University out of all her profiles. Completely whitewashed it.

Yet another example of a functioning researcher who should not have been asked to manage anything.

Tuesday, September 11, 2012

Post Hoc Power Analysis Article

http://wiki.math.yorku.ca/index.php/Statistics:_Post_hoc_power_analysis

A collection of insights on the pointlessness of post-hoc power analysis as a way to try to make something out of a non-significant result from (insert virtually any Fisher-style significance test here).


I've seen this style of argument made regularly over the past few years at the honors presentations... it goes something like this.

".... I did a (massivly complex research design) and then did a MANCOVA/ANOVA etc which failed to show any significance relationships between any of my variables due to tiny effect sizes.  So I did a power analysis that showed if only I had x number of more participants it would have worked out differently...."

It always bugged me.  Firstly, because of the usual infatuation with testing the significance of the variance between derived numbers based on (often) questionable data processing... while being oblivious to the issue of effect magnitude.

I have come to like confidence intervals as a better way to present arguments about whether or not an effect really occurred.  I just wish some more attention was paid to effect magnitudes.
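For what it's worth, the confidence-interval version of the argument is only a few lines (a sketch using scipy; the Cohen's d at the end is the rough version based on the average of the two variances):

    import numpy as np
    from scipy import stats

    def mean_diff_ci(a, b, confidence=0.95):
        """Welch-style CI for the difference in means, plus a rough Cohen's d."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        diff = a.mean() - b.mean()
        se = np.sqrt(va + vb)
        df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
        t_crit = stats.t.ppf(0.5 + confidence / 2, df)
        d = diff / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return diff, (diff - t_crit * se, diff + t_crit * se), d

The interval tells you the plausible range of the effect's magnitude, which is exactly the thing the post-hoc power ritual dances around.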


Monday, September 10, 2012

Yet more data recovery

Got a hard drive that has confounded ZeroAssumptions. Doh! It has obvious bad sectors.  Time to Clone.

Found another that is the same size and grabbed a copy of HDD Raw Copy Tool from Here.

Cloning the drive to the good one.  Just waiting for that to finish, then I can run ZA on the clone.  Fingers crossed I get it mostly back.

Update:

HDD Raw Copy Tool seems to work... up to a point. It has crashed twice (requiring a complete restart) and gracefully exited once (at 30% complete) claiming that the device was removed.

The interesting thing is that even with a partial image recovered, the destination disk will mount and seems to have a readable partition table and file allocation table.  Shame it will not get all the way through.  Anyway, after wasting four days on it... it's time to move on.

The interesting thing from HDD Raw Copy was that it consistently hit the same five patches of errors, after which it would happily copy at a good pace without missing a beat... until it stopped working.  I would love to be able to re-start the copy at arbitrary offsets.  Which leads me to...

I have been looking at ddrescue and decided it was the next option. It has the ability to stop and restart, and lots of other interesting options. Fairly cryptic UI... but that's fairly normal for command-line tools in Linux land.  I also have issues with how vague some of the help files are... but then I'm not really spoilt for choice at this stage.

First attempt: boot an Ubuntu live CD and install gddrescue... failed miserably with various errors, mostly to do with not being able to find the repository.  Still a bit new at that, so I figure it's more to do with it being a live CD than a full install.

Second attempt: got a copy of Ubuntu-rescue-remix, which already has ddrescue installed.  Boot... to a bit of a weird terminal screen as the shell.  Fair enough. (Not exactly luxury, but functional enough.)

Introduced the USB drives (the damaged one and the one I want to copy to) one at a time so I could figure out what the device names were.

/dev/sdb1 is my source device
/dev/sdc1 is my destination device

I had to bugger around with mounting and unmounting them until ddrescue seemed happy to proceed. This was a bit of trial and error, as the error messages were, as usual, completely cryptic.

My final setup was to allow the destination device to automount, which sorted it out as read/write with all the usual mounting options.  The source device automounted when I plugged it in, but I then unmounted it, which made ddrescue happy.

sudo umount /dev/sdb1

Then it's just a case of figuring out how to run ddrescue.

The command I ended up using was straight from the manual example.

sudo ddrescue -f -n /dev/sdb1 /dev/sdc1 logfile

In hindsight (after reading a bit more) it may have been interesting to add the -v flag (for verbose) but I can live with it as it is.  It would also have been better to give the logfile a more meaningful name.

Anyway, it's rolling and seems happy at the moment.  It has already found one error...

Update:

So after about five days of copying... (there was a weekend in there, so my estimate is ~4 days of actual work) the clone has finished successfully. Only about 8 MB and change that it could not copy.  Not bad for about 8 reported errors.
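(If you want to tally the rescued versus unreadable bytes from the ddrescue logfile yourself, something like this should do it, assuming the usual mapfile layout: comment lines start with '#', the first data line is the current status, and the rest are "pos size status" lines where '+' means rescued and '-' means bad sectors.)

    def tally_mapfile(path):
        totals = {}
        with open(path) as fh:
            data_lines = [line.split() for line in fh
                          if line.strip() and not line.startswith("#")]
        for fields in data_lines[1:]:                  # skip the status line
            size, status = int(fields[1], 0), fields[2]
            totals[status] = totals.get(status, 0) + size
        return totals

    if __name__ == "__main__":
        for status, size in tally_mapfile("logfile").items():
            print(status, "%.1f MB" % (size / 1e6))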

Anyway,  now back to Windows and ZeroAssumptions. 

ZA is totally ripping through this scan.  It's done the first two passes in about two hours and is well into the third already.  (For comparison, getting to this stage on the old disk took about three days... which got shorter once I tuned the timeout period and the bad-sector skip factors... down to two days or so.)

Feeling quietly confident...

Computational Social Science

http://www.nature.com/news/computational-social-science-making-the-links-1.11243

This is an interesting article that touches on a couple of fun items.  Good general background piece. Couple of names to chase up on Academia.edu


Thursday, September 6, 2012

Pedantic arguments...

http://www.economist.com/blogs/johnson/2012/07/language-and-computers

The above article has been sitting in my to-read pile for a little while.  I tend to superficially agree with the general sentiment about the topic that the author is arguing with.  (The comments make a better case about that particular issue) 

My particular take, however, is more about the fact that the author is fairly illiterate about the subtler issues of coding.  I think this is clear from the very basic nature of the quotes and comments that he uses to describe the nature of code.  It seems clear that he has not spent any time reading large bodies of code written by multiple authors.

I have read large amounts of code: some of it by students still learning the art, some by working programmers, and a little by various "masters".  The idea that there is only one way to express something in code is quite funny.  It demonstrates a complete lack of understanding of what a program is or does.

I find that more substantive programs (a bit beyond the "Hello World" that the author quotes) are much closer to a dialogue between the programmer and their user(s), be they man or machine.  There are an infinite number of ways to begin the conversation, deal with the twists and turns of the process, and finally complete the exercise (successfully or otherwise).

The subtle issues of choice of architecture, how to structure modules or sections, choices of decomposition, or the investment in comments within the code are all expressions of taste and style that have nothing to do with the fact that a computer is going to crunch the machine code.  They are individual expressions of the people who crafted the code, the constraints they worked within, and the limitations of man, machine and language.  But most essentially, they communicated with audiences: the compiler, the machine, the frameworks, future maintenance programmers, their code reviewers, their future selves... they expressed everything from hope, optimism, frustration and clarity to insight, skill, mastery and obsessiveness.

How can this be anything but prose?

How to avoid malware (a bit...)

I was asked how to get an offline copy of a video from a popular online video site. 

The solutions that used to work no longer do as the sites are using multiple flash wrappers to try to obfuscate the video stream and prevent all the usual tricks from working. 

I had a trawl through some of the online services that used to do the trick using some fairly straightforward JavaScript.  They used to be ad-supported.  Now they have far fewer ads, and they all require Java for their functionality... hello?

Could there be a correlation with the number of exploits recently found in Java?  Could it be possible?

Anyway, since I did not have time to fiddle... the solution is to create a clean virtual machine using your favorite VM base OS (something like XP or Ubuntu should do the trick). Make sure you enable undo disks. Create a shared folder for passing files back and forth to the VM.

Then boot up the VM, pass your YouTube links to the VM via a shared folder in a text file. 

Inside the VM, open a browser, go to your favorite video file scamming website and download the files to the shared folder. (Install Java if required)

Shut down the VM and do not commit the changes to the VM. 

Tada.  Offline copy saved, malware deleted, problem solved.

Now explain that to your grandma....

Hmmm strategy...



http://www.technologyreview.com/news/428649/hey-hackers-defense-is-sexy-too/




The title of the above post caught my fancy.  The content is pretty small but the idea is funny.

This is just stupid.  It's the most poorly thought-out marketing attempt... well, not "ever"... but it's still pretty weak.

Attack is always easy.  Defense is always hard.  But now it's sexy... lol. That should convert the masses.

Attacking a stationary/static target is simply a matter of trial and error.  Success is a factor of time, effort and a bit of cunning. Hence its popularity with "security researchers". 

Defending a stationary target is an exercise in "preventing the unknown".  You have no capacity to prevent what's going to happen in a simple attack-defend scenario, unless you can brute-force "prevent" the attacker's vector from functioning.  But to do this you either need to know what the vectors are beforehand, prevent all possible vectors, or... "other".
The first is essentially the "attack" strategy just going in the opposite direction (see all current signature-based mechanisms).
The second is theoretically impossible, but heuristics offer partial solutions (mechanisms such as DEP, mutable loading and calling, behaviour-monitoring mechanisms, white lists).
The third is.... "unknown".

But now it's sexy!

It's also going to be damn hard to "show" in the way exploits have traditionally been demonstrated at the various conferences.  An exploit either works or fails.  Defending against an unknown and possibly non-existent attack is... harder to demonstrate.

"Evidence of  defence against non-existant threat may have been successfully demonstrated... audience baffled and bored!"

At least if you find an exploit and then demonstrate a fix, people "get it".  Doing this for broad classes of attack strategies may be harder, simply because doing so just moves the goalposts for the attackers. It does not eliminate the goalposts, although it may in the mind of the purchaser.

I think being under a state of constant attack forces people to adopt a conservative approach to their computing activities.  Purchasing a "solution" simply promotes a false sense of security.

But I ramble...