Friday, February 15, 2013

Access RecordSource & Dynaset not showing records

Just had a weird moment in Access land.

I have two forms that have been in use for a couple of years; suddenly they stopped displaying records as I expected them to.

I was setting the RecordSource property using an SQL string, which was working a couple of hours ago.  But now the forms are not showing any of the records.  However, if I set the RecordsetType to Snapshot... tada... all the records show up as expected.

After reading some forums, I found the solution here

http://bytes.com/topic/access/answers/763599-form-does-not-show-records-dynaset-mode-only-snapshot


The thing that is bothering me the most is: how did this come to be a bug I am only just finding?  Have the properties been reset somehow?  Has a patch fixed something?  I swear it used to work as expected... now... weird.

Anyway, I have updated my codebase to explicitly set the DataEntry property to False when I am "reviewing" records, and it seems to work fine.  I just have to figure out if my users have different ideas about "fine".
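For reference, a minimal sketch of what the fix looks like in a form's module. The table name and SQL here are hypothetical; RecordSource, RecordsetType and DataEntry are standard Access form properties:

```vba
' Hypothetical form module: open the form for reviewing existing records.
Private Sub Form_Load()
    ' RecordSource set from code, as before (this SQL string is illustrative only)
    Me.RecordSource = "SELECT * FROM tblJobs WHERE Completed = False;"
    ' Without this, a form stuck in data-entry mode opens on a blank new
    ' record and appears to show "no records", even in Dynaset mode.
    Me.DataEntry = False
End Sub
```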


Friday, February 8, 2013

Minesweeper Solver AI with a flawed algorithm

http://luckytoilet.wordpress.com/2012/12/23/2125/


The above is an interesting article on how to build a minesweeper solver.  There is a good discussion of various algorithmic solutions that can solve everything except the "must guess" states in the game.

The result of the AI in the article is a solver that achieves a win ratio of about 50%, which is effectively chance.

It takes about five seconds to come up with a couple of ways to beat the AI's win ratio using "indirect" strategies.

1) Only "complete" games that do not result in a "guess" state.
2) Generate a "result" screen without actually playing the games.
3) Throw a tantrum - blue-screen the computer, throw a critical error... whatever the AI equivalent is. This allows the AI to avoid registering a less than optimal win ratio.
4) Use the AI to fiddle the number-of-mines and board-size variables to increase its win ratio. (Set the number of mines to 1 and the board size to expert and rack up a huge number of wins... tada.)
5) Take a state snapshot of the game, clone the snapshot, play each position until a successful solution is found, then move forward with that winning solution. This is a brute-force mechanism to achieve a perfect 100% win ratio.
6) Use the "Replay" function to increase the proportion of games that are won, using partial knowledge rather than starting with a fresh board every time.



This in itself is strange.   If we assume that some proportion of games are solvable without ever reaching a "must guess" state, then those should be 100% solvable using the AI's methods.  The rest of the games must involve one or more "must guess" situations.  Obviously, to win, every guess must come off, which means that a game involving more than a very small number of guesses becomes improbable for the AI to "win".  If we assume that the proportion of guess-free games is fairly fixed (say 5%) and the games involving guesses are essentially coin flips, then we should still see a win ratio of about 52.5% (5% certain wins plus half of the remaining 95%).  But like any random walk, this will only show up with a sufficiently large sample of games.  (Approaching infinity?)

So are the games being presented to the AI really random, or is there another factor that is hiding this effect?

We can assume that as the number of mines increases relative to the size of the board, the number of "must guess" events required to "win" would also increase.  So is there a sweet spot for these variables that allows games with a minimum of guess events but still produces a "credible"-seeming AI? Probably the easy levels.

If memory serves, there are a fixed number of mines in each of the three pre-set difficulty levels. (Someone has uninstalled the Windows games from my machine image... dammit.  OK, fixed that.)

Beginner: 10 mines on a 9x9 grid.
Intermediate: 40 mines on a 16x16 grid.
Advanced: 99 mines on a 16x30 grid.

There is no clear way to calculate the probability of a particular number of "must guess" configurations occurring at a given board size and mine count (well, none that springs to mind anyway), so I guess we could just do some sampling and build a distribution.  This would require completing a reasonable number of games and seeing the board layouts, so I could in fact count the guess events while manually solving each game.  Either that, or get the AI to solve a set of games and have it count the guess events itself... mmm, automation.

Anyway, assume we came up with the probability of guess events for each board size.  This would only give us a sense of what the true win ratio should be over a large enough set of games. 

However, the probability of solving a board will be:

No guess events: 1 (100%)
1 guess event:  0.5 (50%)
2 guess events: 0.5 × 0.5 = 0.25
3 guess events: 0.5 × 0.5 × 0.5 = 0.125
etc.

Do you notice a pattern?  My point is that if each of these game types has an equal probability of being presented to the AI, then we get the probability of solving any game simply by averaging them.  The average of the four game types above is 0.46875... which is below chance.  The further we extend the table of possible game types, the lower the expected outcome gets.  However, the fact that the article reports a win ratio of about 50% using the published strategies suggests that the game types are not distributed evenly.  With some simple spreadsheeting, the distribution turns out to be a simple falloff curve.
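That averaging argument is easy to sanity-check in code.  This is just a sketch of the arithmetic above, assuming game types with 0 to n guess events are equally likely and each guess is an independent 50/50:

```vba
' Average win probability over game types with 0..maxGuesses guess events,
' assuming each type is equally likely and each guess is a 50/50.
Public Function ExpectedWinRatio(maxGuesses As Integer) As Double
    Dim k As Integer
    Dim total As Double
    For k = 0 To maxGuesses
        total = total + 0.5 ^ k   ' 1, 0.5, 0.25, 0.125, ...
    Next k
    ExpectedWinRatio = total / (maxGuesses + 1)
End Function

' ExpectedWinRatio(3) = (1 + 0.5 + 0.25 + 0.125) / 4 = 0.46875,
' and the value only drops as maxGuesses grows.
```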


Based on the reported win ratio of about 50% I suggest that the games that are being presented to the AI probably involve only a small number of guess events.

However, we are only dealing with the ratio of games that were won.  We cannot really draw conclusions about the games the AI lost; they could in fact have contained a much larger number of guess events.  The above curve only shows the ratio of games the AI can win even when it's guessing; this is simply the law of chance starting to bite.  It doesn't tell us what the distribution of guess events looks like in the games being presented to the AI.  Inference go suck....

Does this give us any useful ways to optimise the strategy?  Is there an angle we are not trying simply because we want a general solution? Are we chasing a white rabbit down the wrong hole?  Can we be the first to beat the law of chance? Bzzzzt. Wrong answer.  There is no way to remove the effect of chance from these games.  The AI must sometimes guess, and each guess will always be a 50/50 chance of being correct. The number of guess events will generally be small (as argued above), but they are inviolate.  So how do we beat it?  Simply examine your assumptions.

As I proposed above, there are lots of ways to get a better win ratio through out-of-the-box thinking.  Many of these strategies and approaches would be considered "cheating" by humans, who are crippled by social ideals of "fair play".  Keep in mind that the only rule given to the AI was to maximise the number of wins, so these are all successful strategies that the programmer simply failed to implement.  The AI has not been given these tools, and it certainly was not given a sense of "fair play" through which it might have decided not to employ some of them.  So, in conclusion, the only reason the AI does not have a 100% win ratio is that the programmer failed to implement any strategies that would achieve it.  Essentially crippling the AI.

A better AI would have a result that looked like this:


So what's the point?

The point is that the AI is handicapped by the "humanity" of the designer. 







Tuesday, February 5, 2013

Access 2010 corrupt database "File not found"

I had the unpleasant experience of a corrupt database yesterday.  After doing some work, suddenly everything started generating a "File not found" error.  Clicking buttons, using the ribbon bar, trying to run any macros.  My database was fucked.

I tracked it back to a module that I had added earlier, put a little bit of code into, and then deleted.  When I closed and re-opened the database the module still showed up in the tree in the VBA editor, but it seemed to be empty; every time I tried to open it, it flicked up the "File not found" error.  All the other modules worked as normal.


Anyway,  after trying to delete it, over-write it, modify its name etc... I gave up and created a new database, imported everything from the old one and got moving. The steps are:

NOTE. Make a list of the "References" in the VBA editor for your database at this point. (See below for more details to avoid getting screwed when you close the corrupt database. Just write them on a piece of paper or something low tech. They cannot be exported.)

1) Create a new database
2) In the new database go to "External Data" > Access
3) Select "Import tables, queries, forms, reports, macros, and modules into the current database".  Select the old database file using the browse button and then click the OK button.
4) You will get a mega select dialog where you can select all the objects you want to import.  Generally you will want to use the "Select All" button on each of the tabs to make sure you get "Everything" from your old database.  Make sure you do not have the "Corrupt" item selected... if you are not sure what is corrupt... take it in a couple of bites and test the new database between bites.

5) Once the import has finished and you save the new database, you may still need to "Wire up" a couple of things in the new database. (Go to File > Options > Current Database) The things I had to setup were:

Application Options Section
* Application Title
* Application Icon
* Display Form

Ribbon and Toolbar Options
* Ribbon Name

These are just the options for my "Development" database. I turn off the navigation pane and the Default ribbon before I deploy the database to the users.
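The import can also be scripted rather than done through the wizard.  A rough sketch using DoCmd.TransferDatabase follows; the path and object names are made up, and you would repeat the call per object, skipping the corrupt one:

```vba
' Run from the NEW database. Imports named objects from the old file.
Public Sub ImportFromCorruptDb()
    Const OLD_DB As String = "C:\path\to\OldDatabase.accdb"   ' hypothetical path

    DoCmd.TransferDatabase acImport, "Microsoft Access", OLD_DB, acTable, "tblJobs", "tblJobs"
    DoCmd.TransferDatabase acImport, "Microsoft Access", OLD_DB, acForm, "frmMain", "frmMain"
    ' Do NOT import the corrupt module; bring modules over one at a time
    ' and test the new database between "bites".
    DoCmd.TransferDatabase acImport, "Microsoft Access", OLD_DB, acModule, "modHelpers", "modHelpers"
End Sub
```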

Hope this helps someone.


In the Visual Basic Editor, I had to recreate all the References.  I had to open the old database, make a list of everything, and then manually add the references to the new database.
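While the old database still opens, the reference list can at least be dumped to the Immediate window rather than copied by hand.  A small sketch using the Access References collection:

```vba
' Run in the OLD database while it still opens: prints each VBA reference
' so you have a record to recreate them in the new database.
Public Sub ListReferences()
    Dim ref As Reference
    For Each ref In Application.References
        Debug.Print ref.Name; "  "; ref.FullPath
    Next ref
End Sub
```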

Unfortunately, when I came to re-open the old database it was now throwing a much bigger error:

"The database cannot be opened because the VBA project contained in it cannot be read. The database can be opened only if the VBA project is first deleted. Deleting the VBA project removes all code from modules, forms and reports. You should back up your database before attempting to open the database and delete the VBA project. To create a backup copy, click Cancel and then make a backup copy of your database. To open the database and delete the VBA project without creating a backup copy, click OK."  mmmm shit!

Well after making a backup...  I opened the Database and deleted the VBA project and found that the references had gone with it..... fuckcicle!

I tried going back to one of the production copies but they have been compiled and the VBA project will not show the references.... fucked again.

Don't bother trying to decompile a previous version either:
http://stackoverflow.com/questions/3266542/ms-access-how-to-decompile-and-recompile

When I try this it gives me the error message "The VBA project in the database is corrupt".  Basically, when it was compiled it was stripped of all the symbols etc., and they cannot be re-created.

Fucked again.

So my options are to "discover" the references that I need by trial and error.  I remember I had about 8-10 references, so it should not be tooooo hard.  I think the calendar control was the worst to find.

Where you have used early binding in your VBA, you can discover missing references simply by running Debug > Compile.  This will highlight any types you are using that have not been defined.  You can then search the net for which object library contains that type and reference it correctly.




Bugger... just have to test everything in the damn database again.






FOSS Compensation Mechanisms

http://www.datamation.com/open-source/9-things-that-are-never-admitted-about-open-source-1.html

The above article and its associated comments are interesting.  There is plenty of kindling for flame wars in there, but it's not those particular aspects that I find my attention drawn to.


Free Software.... System

The arguments about what FOSS is, or who runs it, or what happened, are fun.  The point is that "something", a system of some form, generates software which happens to have the property of "free"ness.  We have the software artifacts as evidence that this is not just some teenage RPG fantasy.  The fact that this has been happening for some decades suggests that it's a systemic process.  At this point I conclude that it's worth using the label "system" to collectively describe it.  The fact that just about every other heuristic about "systems" is violated somewhere means it's probably not a good term.  It's certainly an informal system that has displayed a wide range of emergent structures and processes.  I like emergent systems.  I like seeing how structure forms and fragments within a pool of chaotically swirling components.  I like seeing the effects of context and environment.  It's fun to see small niche opportunities appear, be filled, and then fade away as other actors either colonise the niche and exhaust its resources, or change the environment and dissolve the niche altogether.

So is it an ecosystem?  Let's drill into it a little more.

An ecosystem is defined by its environmental constraints, resources and actors.

The actors in this ecosystem are fairly easy to generalise:
* Producers of software
* Consumers of software
* Everyone else on the planet who knows them, sells stuff to them, talks to them, is even remotely connected to them in any distant way etc etc... living or dead.... 

The Environmental constraints are:
* all the computers in the past, present and some way into the future.
* all the users past, present and a bit into the future.
* the legal, social and cultural contexts of the environments for both producers and consumers of software.

The resources within this environment would include:

User (Consumer) Resources
* Time
* Money
* Knowledge
* Need
* Frustration
* Attention
* Interest

Developer (Producer) Resources
* Time
* Money
* Knowledge
* Frustration
* Attention
* Interest
* Resilience
* etc etc etc

Shared Resources
* Communication mechanisms
* Computing resources (time, storage, redundancy, etc.)



There are probably millions of other variables within the system that could be listed.... we can play that game later.  The point, I feel is to move on...

So, in an effort to move the thought along... who are the big players in the ecosystem? Same rule as any ecosystem: look at the most "successful".  Look at the species or groups with the largest numbers, look for those with the longest lifespans, look at the ones with the most resources under their control, look at the ones with the biggest teeth... pick some quality that you feel is noteworthy and then make a game out of counting it.   When you're done... keep reading.

The point is that measuring "success" in an ecosystem is a game of "eye of the beholder".  Once you start to look at a system with millions of variables and millions of actors, keeping score gets hard.  Ask a biologist.  Ask any academic.  After a while, people develop coping mechanisms to deal with the complexity.  They focus on something they think is valuable and "fixate" on it.  They make up their own scoring system and their own game and play it to the exclusion of everyone else.  They fight about their section of the bigger game and generally downplay everyone else's as unimportant (that's being polite), but at the end of it, it's just a coping mechanism for dealing with something more complex than they can cope with.  It's too big.

Kinda like the "flame wars" and strong opinions that manifest in the software world.

So, in summary, I would suggest that the bulk of the discussions are coping mechanisms and should be respected as such.  One thing I have learned is not to mess with anyone else's coping mechanism... it's not like I have something to replace it with.

Once we move past the sound and fury of the coping mechanism bullshit storm... we can have an intelligent discussion about the software ecosystem.

The first point worth observing is that the software ecosystem is not divisible into FOSS and "other".  It's a single ecosystem where everything is connected.  Many projects exist because of holes in other projects.  This symbiotic relationship cannot be decomposed, but the projects can evolve away from each other.

The next point is that even within a subset of the ecosystem, such as the FOSS area, the complexity and variability of the people and projects is just too vast to classify with a simple schema.  There must be millions of instances of FOSS software running and, here I am totally guessing, millions of people making choices about FOSS software on a regular basis.  It's just silly to argue that anyone has a good handle on the decision-making processes those people are engaged in.  The variables are exponential, not linear.



Like any good religion, it's hard to change anyone's mind... simply because displaying a coping mechanism means you are already reacting to the stress of the situation you perceive.  You're in the tar pit.  "Helping" someone else will simply pass your tar to them.

I guess the key point, rather than simply rambling on, is that the software ecosystem is, and will continue to be, incredibly complex.  Trying to map it, argue about it, or control sections of it simply demonstrates one's ignorance of just how vast and variable the whole show is.

So what does the future hold?  More of the same, probably.  More pointless arguments by people freaked out by the complexity of a system they cannot understand.  More chaos, more evolution by projects reacting to "local" conditions without being able to make "big picture" plans.  There are probably very few players with the capacity to make really "big picture" arguments, and even fewer with the capacity to act at a high enough level to make any particular impact.  The big platform players might be able to send ripples through the ecosystem, but none of them have enough monolithic impact to really drive much.  Microsoft has for years been a big player, big enough to distort the ecosystem around it, but there have been thousands of others, of larger and smaller degree, who have created sub-ecosystems around themselves: tools vendors, hardware platforms, game engines, social platforms, phone makers, tablets, consoles, vehicle systems, retro platforms... etc.  Each one has created niches and relationships that have splashed in the pond and mattered for someone, somewhere.

Take a deep breath people... stop trying to keep score and get on with what is important to you.