Monday, January 31, 2011

Current Reading List

I am trying to refine my regular reading palate, both in terms of quality and topicality. I figure that all the lists of bookmarks I never have time to visit in my browsers are dead air, so I have turned to RSS feeds and email newsletter subscriptions to bring topical, curated material to me. I am being ruthless in culling low-quality feeds. If I don't read it, or the volume is too high, or the quality is too low... it's toast. It's also fun to play the finite resource game. I only have so much time to read in a day and I want it all to count. This means a feed has to keep the noise down.

I would prefer a better source on robot hacking, but have not found one yet.

CodeProject's daily Newsletter
Usually good. I tend to read about 75% of the articles.

AAAI's AITopics
http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/HomePage
Lots of robot goodness. Keeps the brain ticking over. About 90%.

Hack a Day
http://hackaday.com/
This is usually high quality and interesting. About 80%.

Lifehacker
http://www.lifehacker.com.au/
The signal-to-noise ratio here is not good. Most things are interesting but of fairly low utility, so it's distracting but not particularly enriching. About 30%. This one is sliding off the end.

Science Daily
http://www.sciencedaily.com/
This is a general roundup of science articles. There's usually something interesting, but it's a crap RSS feed as it duplicates and repeats stuff all over. About 25%. This is too much work and too hard to manage, so it will probably get dumped.


I have recently discarded GeekDad because it has turned to crap. I was reading about 1 in 20 articles and the signal-to-noise ratio was constantly falling.

100 books that shaped a century of science

http://www.americanscientist.org/bookshelf/pub/100-or-so-books-that-shaped-a-century-of-science

Some interesting reads here. Surprisingly few that I have actually read. Probably a measure of just how few of these books are on topics that I'm interested in. The ones I have read were generally boring, so perhaps I should look up some of the other ones... when I have a whole pile of time to burn...

Orbo overunity system

http://www.steorn.com/

This is an interesting bit of gear. I have been thinking about utilising magnets as an energy accumulation system. This one is more useful because it's cyclical and can be used to generate work directly.

The obvious application for this system is to first bootstrap it with a passive energy source (photovoltaic cells), then drive it from its own generated current while tapping it for energy. This becomes a simple power source that needs nothing more than enough sunlight to bootstrap it.

The heat dissipation could either be harvested or dumped. It may be a small problem in an enclosed system, but heat can easily be radiated through a case. You would also need to be careful with inertia effects from the rotor if you were using it to run a high-speed robot.

The next problem is power-to-weight ratio. Can you develop a system light enough that it can become self-propelled? In that case you have a bootstrappable battery for small robots. If you could harvest additional power and store it using a non-degrading energy accumulator like a magnetic zip starter, then you have something that can bootstrap itself, store power indefinitely and be self-propelled.

The only issues to deal with now are mechanical wear on the rotors and other mechanical parts.

The first solution is magnetic bearings in the rotor for the Orbo and some sort of frictionless propulsion for the robot. Otherwise, make the robot self-repairing. A bit more complex, but within scope if you can solve the power problem.

Later....

Friday, January 28, 2011

Plotter project Notes

These are resources for my next project: building a CNC XY plotter using an Arduino and some old steppers and printer parts I have. I have not decided yet if it will be a paper-and-pen plotter or something more interesting like an eggbot or a sand table.

The endpoint will either be a pen or some sort of simple cutter. The problem with the cutter is that it takes a little more precision and has a higher cost per mistake if I jam it into a table or something else goes awry. Still, the learning is in the doing and mistakes are part of the fun.

http://www.tigoe.net/pcomp/code/circuits/motors/stepper-motors

The system will have three axes of control for starters, with limit switches at both ends of travel on each axis. This gives me six switches to read from. I also want to put in some debugging lines using LEDs so I know what should be happening even if it's not actually doing anything. There will also be 2 calibration switches for the head position. The last (or first) major feature is a master kill switch which will terminate all action on the plotter but will not kill power to the Arduino.

The overall features will be (see the sketch after the list):

UP_X, UP_Y, UP_Z - Moves the given axis to the top of its travel, away from the work piece. (Is 0 best as the top or the middle of the travel?)

Home - Returns the head to (0,0,0) in an orderly way. Slowly.

Calibrate - Measure the dimensions of the cutting head.

CutterON and CutterOFF - Power up or down the cutter head. (Not used for a pen obviously)

LOAD - Measure the load on the cutter.

RUN - Start the current cut/draw script.
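
Something like this minimal Arduino sketch covers the switch and LED plumbing (pin numbers and names are placeholders for whatever the real wiring turns out to be):

// Rough sketch of the switch/LED plumbing only. All pin numbers are
// placeholders for the real wiring.
const int LIMIT_PINS[6] = {2, 3, 4, 5, 6, 7};  // min/max limit switch per axis
const int KILL_PIN = 8;                        // master kill switch
const int DEBUG_LED = 13;                      // heartbeat/debug LED

bool killed = false;  // motion commands would check this before moving

void setup() {
  for (int i = 0; i < 6; i++) {
    pinMode(LIMIT_PINS[i], INPUT);
    digitalWrite(LIMIT_PINS[i], HIGH);         // enable internal pull-up
  }
  pinMode(KILL_PIN, INPUT);
  digitalWrite(KILL_PIN, HIGH);
  pinMode(DEBUG_LED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // The kill switch latches off all motion but leaves the Arduino running.
  if (digitalRead(KILL_PIN) == LOW) killed = true;

  for (int i = 0; i < 6; i++) {
    if (digitalRead(LIMIT_PINS[i]) == LOW) {
      Serial.print("Limit switch hit: ");
      Serial.println(i);
      // stop the motor for axis i/2 here
    }
  }

  // Blink so I can see the loop is alive even when nothing is moving.
  digitalWrite(DEBUG_LED, (millis() / 500) % 2 ? HIGH : LOW);
}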

Software Testing Stack

http://www.matthewbussa.com/2011/01/evolution-of-automated-testing.html

This is a good article that summarizes the software testing stack as I visualize it. The problem I keep coming back to is that there are no turnkey systems to implement this kind of stack for most developers. You can find fragments of the solution in some languages. Others have much better sets of tools, but they are still not integrated or complete.

As a developer I just want a testing suite that I can turn on, build tests in, and run in various obvious ways (selectively, by group, by layer, etc.). I want a back-end reporting system that will give both simple and rich stats, coverage metrics, etc. And most importantly, I want the system to support my favorite languages, libraries and frameworks.
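
To make that concrete, here is a toy sketch of the selective-running part in C++: every test carries a group tag, so you can run everything or just one layer. All the names are mine, not from any real framework:

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A test is just a name, a group tag, and a function returning pass/fail.
struct Test { std::string name, group; std::function<bool()> fn; };
static std::vector<Test> g_tests;

void addTest(const std::string& name, const std::string& group,
             std::function<bool()> fn) {
  g_tests.push_back({name, group, fn});
}

// Run everything, or only the tests in one group (layer).
int runTests(const char* group = nullptr) {
  int failed = 0, ran = 0;
  for (const auto& t : g_tests) {
    if (group && t.group != group) continue;
    ran++;
    bool ok = t.fn();
    if (!ok) failed++;
    std::printf("[%s] %s/%s\n", ok ? "PASS" : "FAIL",
                t.group.c_str(), t.name.c_str());
  }
  std::printf("%d run, %d failed\n", ran, failed);
  return failed;
}

int main(int argc, char** argv) {
  addTest("adds", "unit", []{ return 1 + 1 == 2; });
  addTest("roundtrip", "integration", []{ return true; });  // stand-in test
  return runTests(argc > 1 ? argv[1] : nullptr);            // e.g. ./tests unit
}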

I figure it should be illegal to use a language/framework/library that cannot be comprehensively tested using an open set of tests on a known testing framework. Things like that are just not business ready.

It seems like the .NET testing environment is maturing quickly, but there is still a bit of a "roll-your-own" mentality, which makes tests less portable between developers.

Friday, January 21, 2011

AI playing games

http://www.wired.co.uk/magazine/archive/2011/02/play/the-brain-game

http://www.npr.org/2011/01/08/132769575/Can-A-Computer-Become-A-Jeopardy-Champ

http://www.npr.org/2011/01/16/132959095/googles-artificial-intelligence-translates-poetry

These are some interesting applications, but fairly specialized solutions to bounded problems.
Even the translation engine is fairly straightforward, ignoring the internal complexity of the parts of the solution. All these AIs are still a matter of selective brute-force attacks. There is some variation in the ways they optimize the data before, during or after the brute force is applied, but it's not particularly novel.


The Go engine optimizes the search tree by selective pruning. The fact that they are using a trained algorithm to generate the pruning system is a good solution to a large permutation problem. It would be interesting to see what the training data set was derived from. The article suggested it was 250,000 games played since somewhere back in the day.
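
My guess at the shape of the pruning step, as a toy sketch (not their actual algorithm): the trained policy scores each candidate branch, and anything below a cutoff never gets expanded.

#include <vector>

struct Node {
  double policyScore;            // score from the trained pruning model
  std::vector<Node*> children;
};

// Keep only the children the trained policy rates above the cutoff,
// so the search never expands the rest of the Go-sized tree.
void pruneChildren(Node& n, double cutoff) {
  std::vector<Node*> kept;
  for (Node* c : n.children)
    if (c->policyScore >= cutoff) kept.push_back(c);
  n.children.swap(kept);
}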

I would be more interested in an algorithm that classified each game played by length in moves and "cleverness".

It would also be interesting to break each game up into blocks of, say, 20 moves, then evaluate each block for how each player "felt" about the strength of their position in that block. You could even go down to the strength of each particular move made in the block. This would give you the ability to select moves and whole games based on the way the games progressed: the ebb and flow of dominant vs weak positions.
It would be much more interesting to build an engine that could take a weak position and turn it into a strong position, or deliberately play a strong position into a weak one while knowing it was still possible to take a win from it.

This kind of strategy would not always produce an unbeatable computer opponent, but it would produce an AI that could "play" with the game in a more whimsical fashion. Think of it more as a training partner than an opponent. You could ask it for a hard game with a winning end, or an easy game with an uncertain end, for instance.

Having the AI choose from a set of possible moves to engage and excite the opponent would be so much more interesting than just having an AI that beats you by being faster at processing a search tree.
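
A toy version of that idea: rather than taking the evaluator's argmax, pick the legal move whose evaluated win probability sits closest to a target level. The Move struct and the evaluator feeding winProb are hypothetical:

#include <cmath>
#include <vector>

struct Move {
  int id;          // whatever identifies the move
  double winProb;  // evaluated win probability from the engine's evaluator
};

// Instead of always playing the strongest move, play the legal move whose
// win probability sits closest to a target "engagement" level.
// target = 0.95 plays to crush; target = 0.55 keeps the game tense.
// Assumes `legal` is non-empty.
Move pickMove(const std::vector<Move>& legal, double target) {
  Move best = legal.front();
  double bestGap = std::fabs(best.winProb - target);
  for (const Move& m : legal) {
    double gap = std::fabs(m.winProb - target);
    if (gap < bestGap) {
      best = m;
      bestGap = gap;
    }
  }
  return best;
}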

DNA, Game Theory and Risk in partner selection....

http://blog.okcupid.com/index.php/the-mathematics-of-beauty/

This is quite thought-provoking. There are a slew of confounds and different directions to take the research... but that's not what interests me initially.

I was interested in the two types of graph patterns being discussed. Not whether they were correct, just the thoughts I had while trying to explain them.

Firstly, the two profiles are essentially the "Normally Attractive" and the "Split Hot or Hate" shapes. It got me thinking about DNA and partner selection along with some game theory.

The first point is how to explain the "Split Hot or Hate" graph.

Assume that the "Normally Attractive" pattern represents a female partner whose genes are effective enough to be successful in the current environment. This means her DNA has been selected for and reinforced across a large enough part of the population to be considered the "normal" look. (This also assumes that appearance, and whatever else was being overtly or covertly evaluated by this fairly simple measure, is tied directly to DNA or social DNA... anyway.)
So people who are outside this "normal" are aberrant in some way, because they express non-normal characteristics. Now, if the environment is static, the conservative strategy for picking a mate is to always pick a "normal" mate, on the assumption that others have taken all the risks and normal is good enough to survive. The higher-risk strategy is to look for someone who exceeds normal in some fashion and try to breed with an exceptional partner.

However, if the environment is not stable, picking a "normal" is no longer the risk-free or conservative strategy, because "normals" are adapted to an environment that was previously stable and may not be so tomorrow. (The question is whether our DNA is smart enough to express that kind of strategy via testosterone, or just uses blind selection to deal with this scenario... which is simpler, do you think?)

So how does this explain the "Split Hot or Hate" pattern? I would suggest that risk taking in mate selection is essential for a healthy population and the ability to explore different mutations, but it's probably only valuable if a small-ish part of the population engages in these risky ventures. The rest of the population should be steered toward breeding only with "normal" partners and avoiding anyone "abnormal". This way the population has a large central pool of "normal" stock with a smaller pool of mutants ready to take advantage of any useful mutations or environmental changes.

So these graphs represent the difference between "normals" and "abnormals". While the "normals" get a response pattern that suggests they are acceptable to both conservative and risk-taking partners, the "abnormals" are either highly attractive to risk-taking partners or highly repulsive to conservative ones.
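
This is easy to play with in a toy simulation. The numbers (a 20% risk-taking minority, a normal spread of middling scores) are pure assumptions, but they reproduce the two graph shapes: one hump for the "normal" profile, a split love-it-or-hate-it histogram for the "abnormal" one:

#include <cstdio>
#include <random>

int main() {
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> coin(0.0, 1.0);
  std::normal_distribution<double> middling(3.0, 0.8);

  const double riskTakerFraction = 0.2;   // assumed small risk-taking minority
  int normalVotes[6] = {0};               // indices 1-5 used
  int abnormalVotes[6] = {0};

  for (int i = 0; i < 10000; i++) {
    // Everyone rates the "normal" profile somewhere near the middle.
    int n = (int)(middling(rng) + 0.5);
    if (n < 1) n = 1;
    if (n > 5) n = 5;
    normalVotes[n]++;

    // The "abnormal" profile splits the raters: love it or hate it.
    int a = (coin(rng) < riskTakerFraction) ? 5 : 1;
    abnormalVotes[a]++;
  }

  for (int s = 1; s <= 5; s++)
    std::printf("score %d: normal %5d  abnormal %5d\n",
                s, normalVotes[s], abnormalVotes[s]);
  return 0;
}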

This study is possibly more interesting because it was done on the male response to female potential mates. The asymmetry between male and female illustrates just how wildly different mate selection is on the two sides of the fence.
The cost to a male of breeding is relatively tiny if there is no husbandry involved, while the cost to a female is potentially a year or so of her life, or roughly 1/15th to 1/20th of her possible breeding events (assuming breeding starts around 15 and fertility lasts until the mid-to-late 30s). So for males, breeding with a female outside the "normal" pool is fairly low cost and low risk, while for females it carries the same risk at a much higher cost.

Once you involve husbandry and the social conventions of mating, it gets much more complex but the underlying biology is the same.


I would predict that a similar study from the female point of view would show the same pattern for "normal" males, but a much smaller percentage of "hot" (high risk but worth it) types and a much larger, clearer "hate" group.


This reflects the percentage of "conservative" choices being higher for women, because the cost and risk of breeding with an "abnormal" is higher.

It would be interesting to break the score given to each partner down into a set of finer-grained plus or minus scores, e.g. (see the sketch after the list):

Like their profile information (score 1-5)
Like their picture (score 1-5)
Overall like (score 1-5)
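
Even just as a data structure, keeping the components separate lets you ask which one drives the split (field names are my own invention):

struct Rating {
  int profileScore;  // like their profile information, 1-5
  int pictureScore;  // like their picture, 1-5
  int overallScore;  // overall like, 1-5
};
// With the components separated you could test whether the "hot or hate"
// split comes from the picture, the profile text, or both.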

Later....

Thursday, January 20, 2011

Article on source code commenting

http://visualstudiomagazine.com/articles/2011/01/06/to-comment-or-not-to-comment.aspx

There are some good points in here about commenting code. Some fairly obvious.

It does suggest one idea to me that is not covered: if comments are not part of the run-time construct, while source code is completely part of the run-time construct, what are unit tests and asserts? They sit somewhere between comments and source code. They make assertions that can be interpreted as meaningful commentary on the program, although sometimes cryptic and wrong.

The point being that they are both parsed and evaluated, while not being part of the final run-time construct. (Obviously you could come up with scenarios to prove this wrong... but that misses the point.)

I would contend that there is a spectrum running from comments through tests and asserts to source code, with human-readable at one end and the final compiled construct at the other. Everything has meaning, but there are still differences between what is intended and what is actually happening in the code.
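
A trivial illustration of that spectrum, with the same intent expressed three ways: as a comment (never checked), as an assert (checked at run time in debug builds) and as a unit test (checked whenever the suite runs, but absent from the shipped construct):

#include <cassert>

// Comment: "a withdrawal must never drive the balance negative".
// Pure human-readable intent; nothing ever checks it.

int withdraw(int balance, int amount) {
  int result = balance - amount;
  assert(result >= 0);  // assert: the same intent, checked in debug builds
  return result;
}

// Unit test: the intent again, checked whenever the suite runs but
// not part of the final shipped construct.
void test_withdraw() {
  assert(withdraw(10, 3) == 7);
}

int main() {
  test_withdraw();
  return 0;
}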

Information on traffic shaping for local network with bittorrent

http://apenwarr.ca/log/?m=201101#14

Read this again when my head is clear.

Interface design concept - Subjects and Tasks

Just kicking around thoughts on how my latest plowing of the interface layout and organisation has ended up. I now have a little more clarity on how I am organizing access to the various features.

BTW, the app is an Access DB based enterprise tool.

Basically, I had initially built very basic forms to work with particular elements of the data (think one form per table). With each table representing essentially one entity in the schema, it was a nice clean design. BUT it was focused on designing the UI around the schema, not around what the users wanted to do with the schema elements.

So it's a simple but unintuitive design that forces the users to understand and work with the database schema, rather than letting them build their own schema of their tasks. Not good! But still important; sometimes the users just want to mess with one of the entities (add a new one, modify it, etc.). So I had added additional task-focused features that cross the boundaries between entities to provide a more task-focused schema.

So my current thoughts are to build a UI based on a matrix of subjects and tasks.

The whole problem is the root node in the schema. Does the user understand the schema in terms of subject then task (find the "person" and modify them) or task then subject ("I want to modify some people, so I want the modify tool, and I apply it to some people")?


This seems to be a common pattern in my experience of UI design. Usually we end up with a mix of tools: some are task-first, while others are subject-first. The question is whether either is always superior, whether both have their value, or whether one is consistently worse for some things.

I also wonder how all this applies when either the task or the subject is more complex. For instance, when a compound subject (multiple subjects glued together, such as an ad-hoc group of people that need to be modified) has a simple task applied, and conversely when a simple subject has a complex task applied.

Another problem is composing a complex task from multiple simple tasks in a fixed order. The point of a UI is that the user can compose their desired task without the UI having to have a button that already does it. So we end up with hundreds of simple task controls that can be composed into a sequence by the user to achieve their objective. The task units must be orthogonal in various ways, and it's nice if there is some discovery mechanism to allow try/fail/undo semantics, which allow low-cost exploration and experimentation.

Also, how can a user create an ad-hoc complex subject, both in parallel and in serial? (A group of "people" subjects need to be modified at the same time (parallel), or a set of "orders" need to be updated in sequence to reflect moving some "units" of stock from one order to another while still respecting business rules about not returning too much stock to the warehouse (serial).) In some systems this can be done by picking an arbitrary list from a multi-select list box or something similar (search filters, etc.).

So in an ideal system you would be able to compose either the task or the subject first, and then compose the other side of the activity second. Both need to be able to compose arbitrarily complex groups and sequences, apply them in controlled ways, correct errors when encountered, roll back if required and save for later if desired. Sounds like a macro system for the task and some sort of data description/selection language for the subject (SQL perhaps?). All this in a nice clean high-level interface...
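
As a sketch of the shape of it (every type name here is hypothetical), the task side looks like the classic command pattern with undo, and the subject side is just an ad-hoc group:

#include <memory>
#include <string>
#include <vector>

struct Subject { std::string id; };        // a person, an order, ...
typedef std::vector<Subject> SubjectGroup; // ad-hoc compound subject

struct Task {                              // one simple, orthogonal task unit
  virtual ~Task() {}
  virtual bool apply(Subject& s) = 0;      // try...
  virtual void undo(Subject& s) = 0;       // ...and roll back on failure
};

struct RenameTask : Task {                 // example of a simple task
  std::string to, saved;
  explicit RenameTask(const std::string& t) : to(t) {}
  bool apply(Subject& s) { saved = s.id; s.id = to; return true; }
  void undo(Subject& s)  { s.id = saved; }
};

// A complex task is just simple tasks in a fixed order. On failure this
// unwinds the partially applied sequence on the current subject.
bool runSequence(const std::vector<std::shared_ptr<Task> >& seq,
                 SubjectGroup& group) {
  for (size_t g = 0; g < group.size(); g++) {
    for (size_t i = 0; i < seq.size(); i++) {
      if (!seq[i]->apply(group[g])) {
        while (i-- > 0) seq[i]->undo(group[g]);
        return false;
      }
    }
  }
  return true;
}

int main() {
  SubjectGroup people;
  people.push_back(Subject()); people.back().id = "alice";
  std::vector<std::shared_ptr<Task> > seq;
  seq.push_back(std::make_shared<RenameTask>("renamed"));
  return runSequence(seq, people) ? 0 : 1;
}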

Thinking....

Wednesday, January 19, 2011

Hacking the OWI 535 Robotic Arm Edge 5 axis

I have started compiling information on ways to extend the OWI 535 Robot Arm. The general idea is not to turn the arm into anything unreasonable, just to get the most learning possible out of a nice, accessible kit.

So what's the todo list?

1) USB interface - Easy: there are two commercial ones available, which takes the effort out of it. Otherwise you can hack in a PIC interface if you are keen.

Solution (3)
RAI-Pro USB interface kit
http://www.imagesco.com/robotics/owi-535.html#usbinterface

OWI USB Interface kit
http://www.robotcombat.com/products/0-OWIX0525.html



2) Position sensing - Currently there is no useful position control built into the system (or the commercial USB interface), so the arm drifts pretty badly when it's under any load or the batteries start to fade... or really anytime.

Partial solution
http://www.instructables.com/id/Modifications-to-Robot-Arm-for-Opto-Coupler-Feedba/

3) Power system - Replace the batteries with some more reasonable power source to drive the motors at a more consistent rate.

Solution (?)

4) Better software interface - Having the ability to code against a standard DLL wrapper around the driver would be more useful than the current black-box software. There is a driver "elanusb.sys", but I am still trying to work out how to interface with it.

The other option is to write a new USB driver for the system. Once I remember to look at the USB IO chip on the USB interface board, I will hopefully be able to figure out what its capabilities are and hack up a driver for it. It should be fairly simple, as there are only five axes and one light, so it should pretty much be on/off for each axis, with forward and backward, plus the light on/off. So I estimate there are about 12 possible commands. That's assuming the timing logic is not encoded into the IO chip, which seems unlikely.

Solution (?)
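
If the 12-command guess holds, the command set could be as simple as one enum. The names are mine, and the actual byte values the chip expects are still unknown:

// Guess at the command set, based purely on counting: five axes with two
// directions each, plus the light, is 12 on/off commands.
enum ArmCommand {
  AXIS1_FWD, AXIS1_BACK,   // base rotation
  AXIS2_FWD, AXIS2_BACK,   // shoulder
  AXIS3_FWD, AXIS3_BACK,   // elbow
  AXIS4_FWD, AXIS4_BACK,   // wrist
  AXIS5_FWD, AXIS5_BACK,   // gripper
  LIGHT_ON,  LIGHT_OFF     // 12 commands in total
};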


Resources
http://www.owirobot.com/

http://filmmania.ru/video/index.php?key=OWI-535

Another group of hackers on the same path
https://agora.cs.illinois.edu/display/cs125wiki/OWIRoboticArm

Data Sheet for the Main Elan USB chip (EM78M612DDMJ) here
http://pdf1.alldatasheet.com/datasheet-pdf/view/113363/EMC/EM78M612DDM.html

Tuesday, January 18, 2011

Stop adding Features - Why open source projects do not have to grow endlessly.

I was busy installing the latest bloated version of some application the other day when it suddenly occurred to me what was wrong with that picture.

Commercial applications need to grow endlessly because they are competing. They're the part of the business that the user has the most contact with, so they embody all the biological strategies for competition. There's also the problem that software features get cloned by the competition. This is the nature of software in the user and enterprise space: if someone is doing it well, all the competitors need to converge toward the features of the dominant player, because users expect similar functionality... and there are only so many ways of doing the same job. It's similar to biological competition for food niches, where everyone ends up with similar strategies to exploit the food resource.

Anyway, the realisation quickly followed that Open Source projects do not (usually) have the same imperative. True, there are some Open Source projects that are trying to compete, either because they are being driven by a commercial business or because some other competitive urge has taken over the developers... But the point is that it's not an imperative. An Open Source project can "finish". It can get a set of features that are fine for the job, debug them and get it all squared away... and then stop. Job done. (Ignore the maintenance issues and any residual bugs that come to light, platform porting, updating libraries, etc.) Conceptually, they can stop adding features.

This almost never happens because of the chaotic nature of the development. Developers leave, get distracted by other projects, the platforms change, user expectations push the project in different directions etc.

But it could happen....

Friday, January 7, 2011

Designing the User around the Tool, or how software design is like a child's birthday party

I have been thinking about my design philosophy for a database system. (By db I mean a database-centric application. The actual scenario is not relevant, so ignore it except in the most abstract way: there are users and some purpose.)

This thought relates indirectly to a post I read a few days ago by a programmer who had given his all to a project and had come to resent the users for not using the product the way it was designed. They did all sorts of stupid things, and his attempts to cope with that stupidity proceeded to turn the back-end code into rubbish.

Anyway, there are so many things wrong with the point of view expressed in that post that it's hard to know where to start, but the point is not to attack the writer; it was just another seed for the idea I actually want to talk about.

The point I am trying to get around to is the idea of designing the tool to fit the user, rather than trying to fit the user to the tool (training, using it the "right way", etc.)... seems obvious when I say it this way... but read on...

There is always a certain amount of tension between training the user to use the system and modifying the system to fit the users. There are economic arguments for and against both propositions, and they underpin various business, design and engineering decisions... however, all this aside, and ignoring the limitations of interfaces, the lack of computing resources and all the other limits and assumptions that come to the design table... there is still a point worth making. And it is...

A data-centric application is, at its simplest, a sheet of blank paper. This is the fundamental metaphor upon which a database application is built. We will call this Layer 0 of the abstraction stack.

Layer 0. Medium.

The user can write on it; they can change things and erase things and freely scrawl whatever they like. The paper is infinite in size and there are no limits on where or what or why. They can share it and let others draw on it as they like. Everyone's drawings are mixed together. Think of a sheet of butcher's paper on a table at a children's birthday party. Hand around the crayons and see what happens...


Layer 1. Structure.

The next simplest idea is to begin applying some simple, helpful structure. Divide the paper into two areas with a single line (again infinitely long, etc.). Anything written on one side of the line has one meaning, and everything on the other side has another. (The meanings are only in the head of the user at the moment; the paper does not "enforce" any meaning in any way.) The only useful rule in this case is that things should not cross the line or exist on both sides at the same time, as this would be ambiguous. However, the paper in no way enforces this concept, so it's still quite possible for the user to put their stuff wherever they like, line or no line.

Obviously, one can then add more lines to the paper and arbitrarily assign meaning to the areas that are starting to be defined. These fields have no intrinsic relationships, except mutual exclusivity due to the dimensional nature of paper. However, it's possible to create relationships using ideas like Venn diagrams. Once you reach the level of sets and set theory you're wandering off into abstractions that deal with other ideas and not the point I am working on... so back up a bit. Keep it at the simple level of a 4-year-old child with a sheet of paper.

Layer 2. Limits.

Rather than drawing lines on the paper, cut it into pieces. It's now much harder to draw or write something that spans two areas: simply by moving the two pieces of paper apart, anything over the edges loses coherence and is divided. This is the simplest concept of enforcing a limit within this metaphor.

The things on the paper are now "in" or "out". Venn diagrams with scissors start to not work out so well. The divisions become much clearer and unambiguous. It's still possible to write the same information or draw the same scrawl in two places, but they are two clearly and functionally separate places.

Another useful property is that the pieces of paper can be shared more easily. They can be passed around individually and manipulated separately. (I think I just invented the foundation of data security in the metaphor... but let's not rush ahead.)

Layer 3. Tools.

Ink versus pencil. One is permanent while the other is malleable. By introducing this idea, some drawings become impossible to erase or change, while others are still free to be added, removed, changed, reworked, etc.

The concepts of read, write, edit, delete, modify, etc. have suddenly become more clearly defined. We are no longer talking about the surface on which the writing is done but about the properties of the writing itself. How permanent is it? What can happen to it in the future? What has happened to it in the past? These concepts and others can now be determined from intrinsic properties of the writing.


Layer 4. Properties.

Color. This is an intrinsic property of the writing tool, but it does not carry a semantic meaning beyond what we ascribe to it. Put down a handful of colored pens and pencils at a party. Each child picks one up, and you can immediately start to determine authorship of each potato-headed figure on the paper. However, every time they swap pens with someone else, the whole system collapses. So authorship is difficult to identify or verify from the data itself, no matter how much everyone wants to believe they can do it... it's just layers of "proof" based on nothing... there is no intrinsic property of the system that can identify authorship... without something additional and external.

Try it at a party: be vigilant and write the name of the children next to each drawing, and get all the parents to do the same. Once you start this system, I figure it will be about 30 seconds before the children start writing their own names and the parents start helping them... then a few minutes later the children are writing each other's names and the parents have stopped supervising them... after that... how's your system going now?



So what's the point? Am I saying all users are like 4-year-old children? Not quite; the value of this comparison is that it's a useful thought exercise. The aggregate behavior of many "qualified" and mature staff in a large organisation is amazingly similar to the aggregate effect of 4-year-olds at a birthday party (even before the red cordial hits their systems).

The point is that a business system needs to be built from Layer 0 upward, not imposed with the expectation of the users being other than they are...

The other interesting point is that the layer model stops being useful pretty quickly, as the ideas are not dependent upon each other in any particular order... (might need to polish that idea a bit later)

So how does this gel into a useful idea?

Rather than building a system that is a mess of fields, data entry boxes and business rules, start with a sheet of paper and some pens and slowly add limits and boundaries that are realistic. Evolve the design by adding one simple concept at a time, rather than starting with complexity and expecting people to use it the right way.

Damn, but this is probably one of the most rambling posts I have written so far. And you know what... it's working. I have turned over the ideas and tried to articulate them, and most importantly I have captured the raw flow as it developed.

These writings are not intended to be a polished paper; they are raw ideas that are bubbling around. I write here to capture them while they are fresh. I can come back and edit them later when they gel again.

Simply letting the ideas flow out illustrates part of the point of this post. People use their sheets of paper differently. The role of the designer is to accommodate that while finding ways to extract business value from the process.

This may be by allowing the content on the paper to be stored by machine, read by machine, shared with others, structured in small or large ways, edited, preserved, reported on, etc. Thus the role of the application designer grows, but at its heart is the idea of a sheet of paper that someone wants to write on. The balance comes from making the process of writing easy and fluid while still being able to extract the business value.

And then the rubber hits the road with all the force of a cheap special effect, and you get hit by all the compromises and limitations of the environment we work in. Price, time, tools, resources, compromise, decisions, etc... all the bullshit and complexity of software engineering that obscures the reality...

..... of that simple sheet of paper and the brightly colored crayons in the hands of a 4yr old.


Edit.

The following casts a different perspective on the same issue: the Redmond Reality Distortion Field. Same idea, different source.
http://www.stepto.com/Lists/Posts/Post.aspx?ID=486