Wednesday, June 29, 2011

Debt reimagined

I have been thinking about the concept of debt and seeing it in different ways.

The idea of technical debt has crossed my radar a couple of times recently and I have started looking for other types of debt around my life.

Sleep debt. (Not getting 'enough' sleep in a night supposedly accrues a 'debt' of unpaid sleep.)
Time debt. (Owing time to something/someone, or borrowing time from one project or job and putting it into another.)

Mess debt. (Not cleaning up and letting stuff pile up without putting it away/back where it goes.)

OK, so that's all pretty obvious, but for me it expands the abstract idea and lets me see how the improved abstraction relates to other concepts that I work with.

If you follow this abstraction, making a mess at home and not cleaning it up incurs a debt, while cleaning up returns you to equilibrium. So what counts as an 'investment' (something that moves you into positive space above equilibrium) in terms of the home?

Improving storage space? Having cleaning supplies more accessible? Reducing the amount of stuff? Some of these are really risk-mitigation or risk-increasing activities.

This helps me to improve my economic model. I currently see it as a value dimension (value credit and value debt on either side of the equilibrium) with a second dimension being risk (increased risk and mitigated risk on either side of equilibrium). It's easy to map this onto a two-dimensional Cartesian space with risk and value on the X and Y axes. If you are bored, you can map time onto a third axis and have a whole modelling party.
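
To make the model concrete, here is a minimal sketch of that mapping in Python. The activities and their scores are invented purely for illustration; the only idea carried over from the post is that each activity sits somewhere in a value/risk plane around the equilibrium point (0, 0).

    # Each activity is a point in a 2D Cartesian space:
    # X = value (credit above zero, debt below), Y = risk (increase above, mitigation below).
    # The activities and numbers are placeholders, not measurements.
    activities = {
        "leave dishes in the sink":   (-1.0, +0.5),  # value debt, slight risk increase
        "wash the dishes":            (+1.0,  0.0),  # pays the debt back to equilibrium
        "install better shelving":    (+2.0, -0.5),  # investment that also mitigates risk
        "stack boxes in the hallway": (-0.5, +2.0),  # small debt, big trip-hazard risk
    }

    def describe(value, risk):
        """Name the quadrant an activity falls into relative to equilibrium."""
        v = "value credit" if value >= 0 else "value debt"
        r = "mitigates risk" if risk <= 0 else "increases risk"
        return f"{v}, {r}"

    for name, (value, risk) in activities.items():
        print(f"{name:28s} -> {describe(value, risk)}")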

Friday, June 17, 2011

Tools vs Complexity

I just found a comment in a thread on Coding Horror that articulated something for me:

I worked at one company in the 1990's (before the days of CMS's) where I maintained web pages for a knowledge base about the product I supported. The official website team at this company periodically changed the design of the website, and then they had a huge task editing hundreds of pages one by one, to match the new design.
Of course, to update the pages I was responsible for, I wrote a Perl script as a crude form of HTML templates, and my pages were done in five minutes. I offered my script to them to help them get their work done. They refused, saying, "we don't have time to learn new tools, we have hundreds of pages to edit!"
I was appalled at the time, but I've learned something since then: There are all sorts of people working with data, with HTML, and with code. To some people, it doesn't make a task easier to learn a new library -- it makes the task HARDER. To them, using a tool they know how to use already is a huge win, even if that tool solves the task inefficiently.
Eventually, a person trying to manipulate HTML with a regular expression hits a wall, where their tool simply can't solve the task. Some people will simply not be able to do some things. That's why they need to hire someone who has more tools.

Bill Karwin on November 16, 2009 10:43 AM

from http://www.codinghorror.com/blog/2009/11/parsing-html-the-cthulhu-way.html

I have always tried to explain people's upper limits in terms of complexity, i.e. "people tend to be able to manage an upper limit in the amount of complexity within a task/situation/scenario", etc. Different people express different levels of capacity.

However, this is too simplistic. I think the key difference between people's capacities is abstraction. People with higher capacity are better at abstracting the complexity into simpler artifacts. This is a factor of both practice and innate capacity. Look at the way you can train executive functions and working memory. I was watching a doco on people who practice memory tricks, such as remembering the order of a deck of cards. They do this by abstracting and relating an essentially meaningless string of facts to a known, concrete visualisation.

So, getting back to the post above, I think we are expressing a similar sentiment, but the use of a more concrete term like "Tool" rather than the vaguer "complexity" provides a better conceptual handle.  Isn't abstraction fun!

Tuesday, June 14, 2011

How to change the "boat people" game

The easiest way to mess up the political debate around the arrival of boat people is to change the identity terms that are being used to shape the debate. Simply call them "New Australians" from the second they arrive. If the media stuck with this strategy, it would frame the argument in ways that make it very difficult to demonise the immigrants.

Can you imagine a politician trying to turn the phrase "New Australians" around? Just imagine the foot-in-mouth opportunities for some of the less mentally agile elected members and public commentators.

If the new arrivals are refugees who only want a haven until things settle down in their own country, call them tourists or some other term that blends them into the population and reduces the us-and-them naming conventions used to focus on difference and the unknown in order to foster fear and uncertainty.

Challenge based skills drill

I have been playing some "Tux of Math Command" for amusement. The obvious application of this kind of game model is to any small, atomic knowledge element that can be simply tested.

Basic math operations, times tables, etc. have been covered well by "Tux of Math Command". TuxTyping applies the same idea to typing games. I'm not as happy with it because it has a linear increase in speed.

Spelling: missing letter, missing word, extra letter, rotate the letters, etc.

Geometry and shapes get a little too close to Tetris.

Anyway, the point is that this model could be applied to anything that needs drilling.  It could be applied to facts that a student wants to study.

The format modification I would add is to keep track of the RTs (reaction times) for the various stimuli in the game and repeat the elements with the highest RTs more often. The point is that the harder items get more repetition.

This is a simple feedback system that would need upper and lower thresholds for sanity checking.
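
Here is a minimal sketch of that feedback loop in Python. The stimuli, the running-average update, and the threshold values are all assumptions made up for the example; the only idea taken from the post is "repeat the slowest items more often, clamped by upper and lower sanity thresholds".

    import random

    # Hypothetical mean reaction times (seconds) per stimulus; values are invented.
    mean_rt = {"7 x 8": 4.2, "6 x 7": 3.1, "2 x 2": 0.9, "9 x 6": 3.8}

    # Sanity thresholds: discard impossibly fast responses and cap very slow ones
    # so a single bad trial cannot dominate the selection weights.
    RT_FLOOR, RT_CEILING = 0.3, 6.0

    def record_trial(stimulus, rt, alpha=0.3):
        """Fold a new reaction time into a running average for that stimulus."""
        rt = min(max(rt, RT_FLOOR), RT_CEILING)
        mean_rt[stimulus] = (1 - alpha) * mean_rt[stimulus] + alpha * rt

    def next_stimulus():
        """Pick the next item, weighting slower (harder) items more heavily."""
        items = list(mean_rt)
        weights = [mean_rt[s] for s in items]  # bigger mean RT -> more repetition
        return random.choices(items, weights=weights, k=1)[0]

    record_trial("7 x 8", 5.0)
    print(next_stimulus())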

The other response properties that could be tracked are accuracy, errors of commission, errors of omission, etc. These could then be used to inform the choice of speed, duration, and the amount of pressure generated by the number of stimulus elements on screen at the same time.
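
In the same spirit, a sketch of how those response properties might drive the on-screen pressure. The accuracy bands and the speed/stimulus-count settings are placeholders chosen for illustration, not tuned values.

    def difficulty_from_responses(hits, misses, false_starts):
        """Map simple response counts to game pressure.

        hits         - correct responses
        misses       - errors of omission (the stimulus expired unanswered)
        false_starts - errors of commission (responded to the wrong stimulus)
        """
        total = hits + misses + false_starts
        accuracy = hits / total if total else 0.0

        if accuracy > 0.9:    # comfortable: raise speed and stimulus count
            return {"fall_speed": 1.2, "max_on_screen": 4}
        if accuracy > 0.7:    # holding steady: keep the pressure as-is
            return {"fall_speed": 1.0, "max_on_screen": 3}
        return {"fall_speed": 0.8, "max_on_screen": 2}   # struggling: back off

    print(difficulty_from_responses(hits=18, misses=1, false_starts=1))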

The other addition would be setting some kind of threshold of achievement. This could be used to add additional reinforcement.

So the reinforcement happens through repetition and success, which gets you to a degree of mastery, which you can then use to manipulate the elements in some way that forms a meta-game. Then, as the mastery improves, the side effect is that the skill being drilled is no longer the focus; it becomes the mechanism underpinning the mastery and the higher-level manipulation.

Example.

Simple spelling/typing task: type the word displayed on screen. The basic skill is to type the word correctly. The mastery comes from typing it more and more quickly under pressure (as in TuxTyping). While this is a fairly simple mechanism, it has little complexity to push the mastery beyond any particular level. Instead, think of a ladder. As the word moves down the screen it moves down the ladder, and by completing the word at the right time it is scored at the value of the ladder section it is over. The values move in a predictable pattern on the ladder, which allows more complexity in the mastery.
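
A small sketch of the ladder mechanic, just to make it concrete. The rung values, their pattern, and the scoring rule are placeholders; the point is only that the score depends on where the word sits on the ladder when it is completed.

    # Rung values from the top of the screen to the bottom. The predictable
    # rise-and-fall pattern is what gives the timing meta-game its depth.
    LADDER = [1, 2, 4, 8, 4, 2, 1]

    def score_word(word, rung_index, typed_correctly):
        """Score a completed word by the ladder rung it was over when finished."""
        if not typed_correctly:
            return 0
        rung_index = min(rung_index, len(LADDER) - 1)
        return len(word) * LADDER[rung_index]

    # Typing "banana" correctly while it sits over the 8-point rung:
    print(score_word("banana", rung_index=3, typed_correctly=True))  # 48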

By focusing on degrees of mastery rather than degrees of the basic function of the skill, the basic skill gets reinforced in a different way. The question is how effective this is.

So start with a basic skill and add some executive function on top of the basic skill.

Monday, June 13, 2011

Post Economic Meltdown Analysis

I was watching "Inside Job" and thinking about some of the big-picture issues that it raises. While there is a lot of explicit and implicit suggestion that lack of regulation allowed this horrible event to happen, which is true, it also highlighted for me the structural effect of regulation.

Don't get me wrong, I don't think a completely deregulated market is a perfect market.  The point I am trying to make is slightly more subtle.

A regulation is a rule. A rule encodes knowledge about what's permitted, based on assumptions. Enforcing a rule allows what's permitted and relies on the assumptions. A rule cannot work the way it was envisaged if the assumptions no longer hold.

Now all this is fine, except for the fact that the assumptions are rarely stated. This is a problem for both dungeon masters and financial regulators, mainly because both tend to forget this simple fact.

If we use the iceberg metaphor, the rule is the visible 10% and the assumptions are the hidden 90%. The thing to take away from this is that the assumptions change easily and constantly while the rule text stays static.

You could see it in a different way.

A set of rules defines a marketplace. If the rules are fairly comprehensive and prescriptive, they make for a very structured and "known" game. There are still plenty of variables, but most of the big ones are locked. It's a game most people could play.

The rules are the encoded wisdom of the market regulators about what is and is not allowed. They encode knowledge about what has worked and what has not. In effect, they try to freeze time: the market is frozen at a point where the regulators are happy with how it's performing. This gives the economy around the market a very stable base, and everything adapts to the stability of the market.

Happy days..

But what happens if one of the commodities in the market is no longer available? Easy enough: just don't use any rules that apply to it. But what about the capital that used to be invested in that commodity? It moves into other commodities and changes the prices and the values. More people trade in those commodities. OK, it's a subtle change, but that's what markets are about.

Now take a more radical change in the assumptions. A whole new currency is in use and competing with the existing currency. It's an electronic currency, and everyone can print their own money. It has a wildly fluctuating exchange rate with the existing standard currency, but people are still willing to exchange between them. (Don't ask why or how; think Dutch tulip bubble if you want an example.) Yes, it may be a bubble, but it is still violating the assumptions of the marketplace and has introduced changes that the existing rule set was not designed to handle. Much like any social rule set, except that in a market the dependencies between the rules are much more immediate. It's a dynamic system, closer to fluid dynamics than Newtonian clockwork.

You need to remember that a market does not have a fixed equation like an energy equation. It has fluffy edges that at any time may introduce more or fewer products onto the playing field; players on the field may put value on the table or take it out of circulation at random; there may be idiot situations where structures allow emergent bubbles or drains to occur. The point is that a static rule set that is highly prescriptive will be much easier to break in a dynamic environment.

If you want a look at a fixed environment, have a look at the Japanese banking system. It has been stuck in the 1930s for the best part of a century. It has begun to change, but it is way past its use-by date.

So what's the point? One of the interesting points I took away from watching Inside Job was a throwaway comment someone made about allowing the market to take on more risk and to explore more risk.

The interesting point, I think, was that removing some of the regulations allowed the market to explore new ideas. This generated a whole new set of opportunities and threats. The biggest problem was that no one did anything about the emerging threats: the regulatory systems were completely corrupted, and all the watchdogs had their teeth systematically removed.

Now imagine the same situation with a strong hand at the tiller. As the economy started to explore all these new opportunities, and made some exploratory moves into toxic CDOs, a good regulator would have pointed out that these were a ticking time bomb, helped the market to unwind the situation gracefully, and prevented more of them from being generated. This would have been the encoding of new wisdom into the market regulation mechanism.

The consistent failure of the US administration is not that it has failed to shackle the market with lots of regulations; the failure is that it has not built a set of dynamic regulatory mechanisms with teeth.

Regulations have a use-by date. A good market needs to adapt to change and allow some exploration of both old and new ideas to see if they may be valuable in the moment.

Around the edges there will be theft, fraud, ethical failures, conflicts of interest, lies, deceit, etc., but these can be dealt with by a strong market regulator. It just needs some flexible rules that allow it to rein players in, give them a clear talking-to, impose a penalty, and continue playing. Markets do not need bursting bubbles, where many lose and few win.

Friday, June 10, 2011

Corporate culture's effect on product type and design

http://arstechnica.com/apple/news/2011/06/fourth-times-a-charm-why-icloud-faces-long-odds.ars

This is an insightful look at the difference between the designer-driven culture at Apple and the culture at Google. It links this to the success of the respective companies' implementations of various types of product.

I think that an interesting example in the middle is the iTunes service. It's got massive scaling issues and an interface that started out graceful and clean and has degenerated into clutter and crap functionality (IMHO).

On the other hand, the iPhone app marketplace is evolving and scaling fairly well. So perhaps it's just the culture of the various business units within Apple that are either centralised or decentralised... either way, the basic idea expressed in the article seems valid to me.

Monday, June 6, 2011

Software Design cycle and cost to change.

Just thinking about some of the issues within the software design cycle and the cost of making changes.

The simplest start point for the design cycle might be:

1. Start working on the design
2. Hack up a prototype (to help clarify requirements)
3. Display it to the client in some form

At this point, what are the implications of the choices you have made or of misunderstandings between you and the client?

I would suggest that at this point the cost of reversing a decision in your head is relatively small. (It probably has a few high-level dependencies that have to be refactored and untangled, but it's essentially still a clean slate.)

How many of the client's people have seen the prototype and talked about it (blogged about it with screenshots??) determines how far the ideas you have communicated have become semantically embedded in the client's understanding of the project.

Now begin a cyclical design/build/show process. At each turn of the cycle, the client gets more engaged (you hope) with the semantics of the solution. What is the cost to add or remove these semantics? What's the cost to "tweak" them? What's the cost to "improve or clarify" them?

At some point the project is moved into beta testing. At this point the cost to change semantics in the project gets heavier. Addition of new semantics is cheapest, changes to or replacement of existing semantics are more complex, and removal of existing semantics is the most expensive.

Once the project moves into general use by the client(s), the cost to change the semantics of the design is significant: time in re-training, time in learning new ways to talk about tasks and processes, time to update documentation and remove old references.

The only way we currently know to do this is to explicitly replace the semantics with a completely new semantic network (change the version number and attach the new ideas to the new version number).

So what's the idea here?

Like everything, once we abstract it, we can manipulate the abstraction.

We can rearrange the size of the costs, and when we have to pay them, by manipulating the way the semantics are exposed to the client and how "concrete" they are, reducing how firmly the client absorbs them early in the lifecycle of the project.

Rather than describing the project features with specific names like "Print Receipts", use vaguer abstractions such as "feature to print receipts from a sales transaction". This reduces the specificity of the semantic network that can be built from the information and keeps it malleable. Once you start reducing the words it takes to describe an idea, the term gets more and more semantically specific and takes on a more and more defined role in the semantic network that can be formed from the semantic soup of the whole design.

By keeping the semantic network loose and malleable, it's much cheaper to change. However, the cost in complexity of the network is higher, i.e. the client has less clarity about the semantics of the design. (Some good, some bad... depending on the client, I guess.)

That being said, you as the designer need to have a clear semantic network... so does this help you, or does it bog you down in vague language that you will be tempted to clarify and thus harden the design? Tradeoffs are a bitch.

Needs more thinking.

 

Bitcoin as scam

http://www.quora.com/Is-the-cryptocurrency-Bitcoin-a-good-idea/answer/Adam-Cohen-2

This is an interesting analysis of the Bitcoin system. It looks at the economic fundamentals and identifies the weaknesses that undermine Bitcoin as an old-school "currency". The question is whether Bitcoin is an old-school currency. Do the same rules apply? Are the weaknesses really valid for a system that spans nations and avoids many of the assumptions that rule old-school currency markets?

I am not claiming Bitcoin is anything. Just asking some interesting questions.

Wednesday, June 1, 2011

Analysing Installation Dependencies

My two favorite tools for understanding why an installer is all tangled up are:

Dependency Walker http://support.microsoft.com/kb/256872

and

MSI Explorer http://blogs.technet.com/b/sateesh-arveti/archive/2010/11/21/msi-explorer.aspx

If you want to unpack or understand the activities of a setup.exe or .msi file, these are the only tools I know of.
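
If you just need a quick look inside an .msi from a script, Python's standard-library msilib module (Windows only, and deprecated in recent Python versions) can query the installer database directly. A minimal sketch, assuming a placeholder package called example.msi:

    import msilib  # Windows-only standard library module

    # Open the installer database read-only and list the files it would install.
    db = msilib.OpenDatabase("example.msi", msilib.MSIDBOPEN_READONLY)
    view = db.OpenView("SELECT FileName, Component_ FROM File")
    view.Execute(None)

    while True:
        try:
            record = view.Fetch()
        except msilib.MSIError:   # some Python versions raise when the rows run out
            break
        if record is None:        # others return None instead
            break
        print(record.GetString(1), "->", record.GetString(2))

    view.Close()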

It's ON. Lodsys is eating the peasants...

http://www.slashgear.com/apple-letter-disregarded-lodsys-sues-app-developers-anyway-31155994/

It's so ON. 

Following on from my previous rant about the pattern between Lodsys, Apple, and the developer ecosystem: the last round was Apple threatening Lodsys... now Lodsys has ignored the threat and is going after a couple of fat sheep. Time to see what Apple will do.

Google, Microsoft, RIM and all the other app store owners will be watching with interest (or naked fear), and outside the firelight the trolls will be straining to see what happens. If Apple flinches, they will pile in and tear the developers to bits.

Apple now really has to fight. Get your whacking stick out, Steve, because you CAN NOT lose this one.

And now to my thoughts on developing an app for iOS.... LOL.... 

Typography in eBooks and the revenge of the Reader

http://www.lunascafe.org/2011/04/typography-is-about-reading-and-so-are.html

Hopefully this kind of problem will cause the return of copy editors and all the other "quality" control mechanisms in book publishing.  There is only so much that software can do before you have to actually apply the subtle art of "care and attention" to the product. 

It highlights the fact that no two writers or readers use text or written language in the same way. Publishers are accountable for the quality, or lack of it, that their products display. They need to find where the artificial intelligence in the software can complement their human staff rather than replace them.

AI is for the drudgery while human skill is for the subtle. 

Reversing Cocoa apps

http://38leinad.wordpress.com/2011/05/12/cracking-cocoa-apps-for-dummies/

This is another old-school discussion that brings back some memories.

Reversing a firmware image

http://www.devttys0.com/2011/05/reverse-engineering-firmware-linksys-wag120n/

This is a nice post on reversing a firmware image.  Well described.