http://www.mathworks.com/academia/arduino-software/?nocookie=true
Learn something new every day. This is the first time I have poked around Simulink, and it looks interesting. Totally inaccessible due to the price... but interesting nonetheless. The price of "research" IO boards and software is just a joke. LabVIEW is in the same category: great gear and adequate software, but totally inaccessible due to the price.
I honestly think Arduino and similar gear are going to completely eat this market segment. When you can get a data acquisition device up and running for a few hundred with an Arduino and some modules and bits, or spend a couple of thousand just to get a box for LabVIEW or Simulink... it's not really reasonable.
I respect where they are coming from and the cost to develop and support their monolithic systems... but I still feel that, as a business model, it's had its day.
These kinds of systems are just not feasible when compared with the capacity and richness of an open model.
Small, purpose-built open systems that interact via open standards are much simpler and less bug-prone than the massive libraries and learning curves of what are archaic monsters with decades of cruft.
I unearthed an old apparatus in the back of a lab the other day that's a marvel of electromechanical engineering. It interfaces with a 486 DX2-66 running some Unix variant via some multi-channel IO system from before I was born. The time and energy that went into making it is just wonderful. (It's a Skinner box system for pigeons, BTW.)
I really respect that it's about 50 kg of gear that probably cost someone a few thousand to have custom made 15 years ago, but the same functionality can be had from an Arduino for about $50 today.
I just cannot justify MATLAB or LabVIEW for most projects. They're not even contenders. The only reason MATLAB is still around is to support some aging infrastructure (code for older academic staff...) that cannot be updated.
Life is change....
Friday, April 27, 2012
Article on the Netduino
http://10rem.net/blog/2012/04/07/first-experiences-with-the-new-netduino-go-and-how-it-relates-to-net-gadgeteer
Now for an article on the Netduino and the Gadgeteer modules. Some useful information, but more for the plug-and-play minded folk. There is a reference to an IO shield that will interface all this stuff with the Arduino.
Labels:
Arduino,
Hardware Hacking
Article on basic arduino programming
http://www.linuxjournal.com/content/learning-program-arduino#
This is a brief but nice article on some basic Arduino programming.
Labels:
Arduino,
Hardware Hacking
Tuesday, April 24, 2012
MFD to Scanner hacking
http://entropia.kapsi.fi/blog/2012/04/the-transformation-of-samsung-scx-4200/
Nice recycling of a scanner from a multi-function device.
Labels:
Hardware Hacking
Thursday, April 5, 2012
Fallacy classes
http://blog.8thlight.com/colin-jones/2012/04/04/a-few-fallacies-for-your-consideration.html
Interesting list of common classes of fallacies and some examples.
Labels:
logic
Cheating via DirectX and OpenGL article
http://altdevblogaday.com/2012/04/02/extravagant-cheating-via-direct-x/
This is an interesting breakdown of cheat mechanisms, and it points to some of the interesting meta-games being employed by coders on both sides.
My interest in this is both from a technical standpoint of learning about new ways to access and interact with games (useful for all types of research projects) and for the strategies being proposed and their counter-strategies.
I guess my interest is more about finding functional mechanisms that can allow AIs and bots to interact with game environments, simply for research purposes. My personal interest in cheating at a game is kind of the inverse. I can understand why people cheat, and I'm truly fascinated by it as an emergent phenomenon... I just value playing too much to actually do it myself. Then again, I do remember some truly frustrating games that I have happily played in god mode, or searched the net for cheats to get around some bug or irritant in the game.
My ego is not bound up in my social status, and the fact that I don't play any of the big social games probably keeps me out of touch with just what's going on... anyway... back on topic.
There are some really useful techniques here for AI research. It's probably a good thing that I'm not involved in this kind of scene, as there are so many more things I can think of to do than seem to be being done at present... lol. Best we don't explore any of those paths, I think.
It was all so much simpler when we were hacking save files with a hex editor on single-player games on the 486 DX2-66. Then it was just us against the computer. No networks, no Google... just slow, painful puzzle solving in the wee small hours of the night until finally we cracked the file format and... basically wrecked the game for ourselves. Blatantly cheating in a game with other people... just really sucks.
Thought experiments for Automated Starcraft Players
http://graphics.stanford.edu/~mdfisher/GameAIs.html
Starcraft 2 Automated Player
This is a brilliant writeup of both the architecture and some of the issues to consider when building an automated player for a game.
There are so many little bits of wisdom in this article that I want to spend time thinking about and hacking on.
First thoughts.
Interesting AI architecture. It seems (if I read it correctly) to use a kind of switched swarm intelligence for some of the decision making. Which makes sense when you consider the intricacy of trying to build a single unified decision-making engine that could handle all the different individual units and their micro objectives, and integrate that into a macro plan. (I would do that for a PhD with bells on...) Kind of a Skynet for Starcraft...
The emergent strategies that have been implemented to deal with chaotic and imprecise events within the environment are interesting. Kind of obvious now that it's been pointed out. Again, this moves a solution toward a swarm model, where each agent/thread/program/entity deals with a small part of the map or a small objective, as opposed to a large monolithic algorithm that would be a monster to debug. The trade-off is that each of these small programs is only loosely coupled with all the others. They cannot share their internal micro state and need to implement some form of inter-entity communication if they need to cooperate. Which gets us back to resource competition, group decision making and all the other fun of peer agent models.
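The loose coupling described above can be sketched as message-passing agents. This is my own illustration, not the article's code, and the agent names and messages are invented:

```python
import queue

# Minimal sketch of loosely coupled agents: each agent owns a small
# objective and exposes only a message inbox -- no shared internal state.
# Cooperation happens through explicit messages, which is exactly where the
# resource-competition and group decision-making problems come back in.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()   # the only way in from outside

    def send(self, other, msg):
        # Inter-entity communication: push a message into another agent's inbox.
        other.inbox.put((self.name, msg))

    def step(self):
        # Drain and return this tick's messages; internal state stays private.
        messages = []
        while not self.inbox.empty():
            messages.append(self.inbox.get())
        return messages

scout = Agent("scout")
builder = Agent("builder")
scout.send(builder, "enemy expansion at (42, 17)")
print(builder.step())   # the builder reacts only to what it was told
```

The point of the sketch is the trade-off: the builder never sees the scout's internal state, only the messages it chooses to send.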
It's interesting to note that there are some higher-level constructs in the architecture that are delegated to task switching and managing the stream of data coming in about the game state. Currently these are very single-purpose, which makes debugging much simpler. But another task could be developed to run above the agents to take on a coordinating role and implement the "master plan" for the play session. (Yet another PhD...)
The mirror driver implements the equivalent of a "sense" mechanism. We could describe it as "sight", but it does not share the same type of information stream as we understand "sight" in a biological form. Still, it's a fairly wide channel of information that is then internalised and used to generate a state representation of the AI's environmental reality.
This state representation will also include (explicitly or implicitly) some encoded knowledge about the artifacts in that environment: the number of different possible states each individual artifact can exist in, the inter-artifact relationships, communications and interactions, etc. It would be an interesting datascape to look at. (Yet another PhD...)
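As a toy illustration of internalising a sense stream into a state representation (my own sketch; the real system reads actual frame-buffer data via the mirror driver, and the palette below is entirely made up), a fake frame of colour codes can be binned into artifact types and positions:

```python
# Hypothetical palette mapping colour codes to artifact types.
ARTIFACTS = {1: "marine", 2: "mineral", 3: "base"}

# A fake 4x4 "frame" standing in for captured screen data; 0 is empty ground.
FRAME = [
    [0, 0, 2, 2],
    [0, 1, 2, 0],
    [0, 1, 0, 0],
    [3, 0, 0, 0],
]

# Internalise the raw stream into a state representation: artifact -> positions.
state = {}
for y, row in enumerate(FRAME):
    for x, code in enumerate(row):
        if code in ARTIFACTS:
            state.setdefault(ARTIFACTS[code], []).append((x, y))

print(state)
```

Even this trivial version shows the shape of the problem: the wide, raw channel collapses into a structured map of artifacts that downstream agents can reason about.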
Dealing with scene perception and understanding is interesting. It would be fun to offload this information to a different processor (or network) and see exactly how rich an internal representation could be generated from the information stream.
Dealing with the chaotic nature of the interaction with the opposing player, the state changes of particular units, and the static and dynamic elements of the environment is an interesting set of problems (called "housekeeping" in the article), which again would generate an interesting set of rules and emergent behaviours in a good learning system.
It would be an interesting exercise to run a few hundred games on the same map and build up an understanding of the potential of each map. This may be an effective mechanism for debugging maps: both functional-level debugging (finding holes and bugs) and logical debugging (balancing, removing or adding choke points, tuning difficulty, etc.). Something I have been wanting to do for chess for a while is to calculate the true value of every square on the board. Probably someone has already done it... I just want to do it as a multi-dimensional analysis, to see how many hidden variables I can tie to simple positional space, then to particular pieces, and finally to particular strategic choices. Since there are only 32 pieces, it should not take too long. Just have to raid a couple of databases for some game records. (Yet another PhD...)
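A first cut at the per-square analysis might just tally destination squares across game records. The mini-records below are invented for illustration; a real run would parse moves out of a PGN database:

```python
from collections import Counter

# Hypothetical mini-records: each game reduced to a list of destination
# squares. A real analysis would extract these from a chess game database.
games = [
    ["e4", "e5", "f3", "c6", "b5"],
    ["d4", "d5", "c4", "e6", "c3"],
    ["e4", "c5", "f3", "d6", "d4"],
]

# Raw visit count per square: the crudest "positional value" dimension.
# Further dimensions (per-piece, per-strategy) would be layered on top.
visits = Counter(sq for game in games for sq in game)

for square, count in visits.most_common(3):
    print(square, count)
```

With real data the same tally could be split by piece type and by game phase, which is where the multi-dimensional part of the idea comes in.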
There is a fun little element mentioned called "Personas", which seems to be an undeveloped mechanism to mess with opposing players' minds during the game. It's a simple blind system that uses a strategy of typing complex chat sentences during play...
Not sure how effective this would be, but it's an interesting strategic possibility: attack the player outside the game environment.
Using a console overlay seems cute but wasteful. I would build a whole separate GUI for monitoring the AI. (Benefit of multiple monitors, I guess.)
At the end of the day, this is a great framework to build on, but it seems to consist of a lot of fairly hard-coded routines and strategies played out by a quite rigid system. Yes, it's fast; yes, it's beyond human capacity... but I don't feel like it's moving beyond being a very complex script. I want to see my artificial children grow beyond my capacity... not just leverage hardware speed as a blunt instrument.
Labels:
AI,
AI Design,
Emergent systems
Tuesday, April 3, 2012
Professional Associations
http://atsip.herts.ac.uk/
Association of Technical Staff in Psychology
http://www.pnarchive.org/s.php?p=105
Psychology Network Technicians Awards
And joining the two together
http://www.heacademy.ac.uk/events/detail/2012/04-July-ATSiP-conference-Plymouth
http://www.anta.asn.au/otherlinks.htm
Association of Neurophysiology Technologists of Australia
http://www.sleeptechnologists.org/
Australasian Sleep Technologists Association
Labels:
professional resources
Emergent intelligent behaviour in the hive
http://www.scientificamerican.com/article.cfm?id=you-have-a-hive-mind
This is an interesting article on decision making in the bee hive.
The interesting aspects are both the mechanism and the result.
The mechanism of using inhibition to prevent deadlock is quite useful and I will play with it later.
The result, however, is even more interesting. In the article they talk about the two groups of bee scouts returning from scouting and starting to recruit undecided individuals.
If we assume that both scouting parties found suitable locations (i.e. they were not unsuitable), the question is whether there is any mechanism to select between locations based on their quality, or whether the bees are simply deciding randomly between locations that all exceed some minimum specification (dry, enclosed, no bears?).
I would assume that a bee scout can only report on a single location at a time. So I would assume that a scout out looking will stop as soon as it finds a location that meets the minimum spec. There is no point in continuing to look, as the scout would then have to decide which location was most worth reporting.
Another factor is distance: if two scouting parties depart at the same time, the one that finds a viable location first will return first. Thus it gets to recruit other individuals for a period until the second party returns. I would guess that this may be a factor and would favour moving the hive a short distance rather than a long one.
Another bit of information that may play out is the strength of the scout's dance. If the scout is not feeling good about the location, perhaps it will not fully invest in its dance and so may be easier to inhibit. This may be influenced by the stress levels of the scout. If it finds a location that's a bit crappy and some of its fellow scouts get eaten... would it still dance as strongly?
If you think about it, the undecided individual has to make a decision. Is what the scout tells me a better option than what I already know (the status quo)? Is what this scout tells me better than what another scout has already told me? (They can only get one lot of information at a time.) Or is this better than what another scout may tell me in the future?
So there are still lots of aspects of the decision-making process that are not clarified by this article. It just describes an anti-deadlock mechanism for group decision making.
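The anti-deadlock mechanism can be sketched as a toy simulation (all parameter values below are my own guesses, not measured bee data): dancers for each site recruit undecided bees in proportion to their numbers, while stop signals from the rival camp knock dancers back into the undecided pool. The symmetric tie is unstable, so one camp eventually wins even when both sites are equally good:

```python
import random

random.seed(42)

a, b, undecided = 10.0, 10.0, 80.0   # dancers for site A, site B, uncommitted
recruit, inhibit = 0.10, 0.05        # made-up recruitment and stop-signal rates

for _ in range(500):
    # Small per-step variation in dance effectiveness breaks the symmetry.
    noise_a = 1 + 0.2 * (random.random() - 0.5)
    noise_b = 1 + 0.2 * (random.random() - 0.5)
    new_a = undecided * recruit * a * noise_a / 100
    new_b = undecided * recruit * b * noise_b / 100
    stopped_a = inhibit * a * b / 100    # B-dancers silence A-dancers...
    stopped_b = inhibit * b * a / 100    # ...and vice versa
    a += new_a - stopped_a
    b += new_b - stopped_b
    # Inhibited dancers return to the undecided pool; total is conserved.
    undecided += stopped_a + stopped_b - new_a - new_b

print(round(a, 1), round(b, 1), round(undecided, 1))
```

The interesting property is that the inhibition term grows with the product of the two camps, so a dead-even split is punished hardest, and any small recruitment advantage compounds until one site holds a clear majority.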
Labels:
AI Design,
Emergent systems,
Psychology