Hmmm... pulling down a website is a sad thing. Deleting the packages, eviscerating the file system that took so long to put together and get working. Ripping the content to another location... seeing just how little it matters now.
Life is change.
Monday, April 26, 2010
End of the backup
Doesn't seem like much to back up. That being said, the original blog was intended to be used for notes on various topics that didn't fit anywhere else. Still, it seems lean.
When I look at the hard drive and see all the piles of doc files with random thoughts and ideas it makes me feel better. It's not like the past couple of years have been empty. Quite the reverse. It just depends what I use to try to quantify it.
There is a certain degree of melancholy in the air at the moment. Just to set the scene, I have decided to leave my current job. This is a big deal for quite a few reasons. I feel a sense of ownership of the job because I basically created it. The job has evolved from some casual development work through to being the technical backbone for H&HS on the Coffs Campus. I do everything from front-line tech support for all the H&HS staff, through managing the research labs, developing all the infrastructure and software for the science units in the Psychology program, managing all the H&HS assets (both at Coffs and Port Macquarie, and formerly at the National Marine Science Center, but that is ending), supporting various research programs and researchers, through to designing and building enterprise systems.
At last count I was supporting about 27 full-time staff, a dozen or so part-timers, between 20 and 45 honours students, up to 12 PhD students and random other bodies as required.
I manage 5 specialist research labs for Psychology and 2 prac labs for Nursing (only the IT for the Nursing labs).
I manage 3 separate work websites, not counting various research function pages that have been set up for specific programs.
I wrote and maintain a codebase of about half a million lines of code across a few platforms, in multiple languages and using a slew of frameworks (C, C++, Perl, Python, Lua, VB.Net, EBasic, Matlab, HTML, PHP, Maxscript, VBA, LabVIEW, Office macros, etc.). I think I am at or near a terabyte of media files I have generated for various projects: images, animations, movies, audio tracks, sound files and so on. I could not count the number of spreadsheets I have built to process different data sets or the graphs and visualisations I have generated. (Well, I probably could, but it seems like more...)
The current todo list has about a dozen development projects on it. Two are enterprise, two are static websites, one is a dynamic app, and the rest are either stand-alone apps or modifications of existing stand-alone apps. There is a codebase coming that has over a million lines of Objective-C in a CodeWarrior project for Mac OS 9.x which needs to be ported (to at least OS X 10.5 if possible).
So if anyone is hiring a research engineer with a broad and varied background... call me.
Labels:
Life
Archived from 23_4_10 - Scary Dates
Just noticed the last time I actually wrote to this blog was two years ago.
Time flies when you have children.....
Later all.
Labels:
Philosophy
Archive from 23_4_10 - Long time no type
Hi Anyone,
Weeeelll... it's finally done. I am now moving toward a new job. The position has been classified by the classification committee and... well, let's just say that it's not for me. It has taken me a while to come to terms with the result and to accept that it's over. Today was another closure moment for me.
The word "irony" keeps coming up in my internal dialogue, but not in an Alanis way... more in a sad and broken way. There are so many projects on the go here that I have not been able to complete (or in some cases start) that I would have liked to throw a week or three at. However, it's not to be. And that is quite depressing. I finally had a good position with good tools and resources and no bloody time to get anything done.
I honestly spend the bulk of the week doing technical administration, and finally on Friday, when it gets quiet about 4pm, I can spend an hour or so actually doing what I feel is core work. "Frustration" is such a short word to describe a concept that has so much effect.
The air conditioning has just clicked off, so it must be 5pm. Time to move on to other things.
My computer at home has died with a single bloated capacitor on the motherboard. One lousy cap... probably costs 30c to replace. That being said, the mobo is probably getting on to being a few years old now. Such is life.
Another thing that is bugging me is a corrupt reminder in Thunderbird/Lightning. It will not die.
Time to go. Must get back in the habit of blogging. Very cathartic.
Labels:
Life,
Philosophy
Archive from 21_10_08 - Using Environment Variables in Visual Studio Search Path
Hi All,
After setting up wxWidgets it suddenly occurred to me that their solution of using an environment variable (WXWIN) in the search path was just as applicable to all the other sets of libraries I need to keep up to date. So I have just created a set of environment variables for:
* Xerces XML parser $(XERCES_DIR)
* Boost Libs $(BOOST_DIR)
* Lua $(LUA_DIR)
* CppUnit $(CPPUNIT_DIR)
Ogre already has one defined (OGRE_HOME), which I have used even though it's not as intuitive.
This lets me abstract the library paths and stop having to fiddle with settings in the project files each time I update a library.
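For anyone wanting to replicate this, the setup is just a handful of per-user environment variables plus the usual $(VAR) expansion in the Visual Studio project settings. The paths below are illustrative examples, not my actual layout:

```shell
:: Define the variables once (Windows, per-user). Paths are examples only.
setx XERCES_DIR  "C:\libs\xerces-c-2.8.0"
setx BOOST_DIR   "C:\libs\boost_1_36_0"
setx LUA_DIR     "C:\libs\lua-5.1"
setx CPPUNIT_DIR "C:\libs\cppunit-1.12.1"

:: Then reference them in the project settings, e.g.
::   Additional Include Directories: $(BOOST_DIR);$(LUA_DIR)\include
::   Additional Library Directories: $(BOOST_DIR)\lib;$(LUA_DIR)\lib
```

One gotcha: Visual Studio only reads the environment at startup, so restart the IDE after defining or changing a variable.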
Later,
Labels:
CPPUnit,
Programming Tools
Archive from 20_10_08 - Compiling wxWidgets
Hi All,
Oh what a tangled web we weave when first we try to get anything done outside Visual C++....
I have been trying to get wxFormBuilder to compile from source... it's been painful. Getting it up and running in Code::Blocks was straightforward, but then getting a version of wxWidgets compiled with g++ turned into a whole day's work.
I finally got it working by updating the wxWidgets source to 2.8.9, manually updating gcc-core to 3.4.5-2006117-3, gcc-g++ to 3.4.5-2006117-3 and, most crucially, mingw32-make to 3.81-20080326. Now it all builds without the dreaded "CreateProcess(NULL blah blah) failed." errors I have been staring at all day.
Previously all I could build were the native MinGW .a files, which were useless. These worked using configure & make in the root directory, but using the makefile.gcc in the build directory would consistently dump these freaking CreateProcess errors that were driving me up the wall.
I did find a number of indirect references to the same thing on the web, but they were all pretty murky and seemed to point toward mingw32-make.exe being the cause. Seemed easy enough to fix... except the version numbering system ran me around in circles for a while until I figured out I had Cygwin in the environment path ahead of MinGW, which was masking my attempts to patch g++. Got that sorted out. (I couldn't find an update for Cygwin... so I might ditch it. But that's a side issue.)
Anyway... long story short... patch mingw32-make.exe to resolve the CreateProcess errors, then the build will work fine.
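For reference, the 2.8-series MinGW build I ended up with runs roughly like this. Directory names are examples and the options are the standard makefile.gcc variables, so treat this as a sketch rather than a recipe:

```shell
:: Make sure MinGW (not Cygwin) is first on the PATH for this session.
set PATH=C:\MinGW\bin;%PATH%

:: Build wxWidgets 2.8.x with the native makefile rather than configure.
cd C:\wxWidgets-2.8.9\build\msw
mingw32-make -f makefile.gcc BUILD=release SHARED=0 UNICODE=1
```

The key point from the post stands regardless of the options: with the old mingw32-make this invocation dies in CreateProcess; with the patched one it completes.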
Out...
Labels:
Frameworks,
Programming Tools
Archive from 15_10_08 - Updating CppUnit for Visual Studio 2008
Hi,
If you are trying to compile CppUnit 1.12.1 with Visual Studio 2008, here are the changes you will need to make to the project settings to get it all to work without problems.
BTW, my build target is Windows XP SP2. If you are building for a different SDK you may need to tweak things. See this page on MSDN for the relevant hex codes.
http://msdn.microsoft.com/en-us/library/aa383745(VS.85).aspx
For the Debug Static Build
Add WINVER=0x0502 to the pre-processor symbols for each of the following projects:
* DLLPlugInTester
* DSPlugIn (only if you want to build this for some strange reason)
* TestPlugInRunner
* TestRunner
Add OEMRESOURCE to the pre-processor symbols for:
* TestPlugInRunner
Change the manifest file name in the Manifest Tool output for DSPlugIn from
$(IntDir)\$(TargetFileName).embedded.manifest
to
$(IntDir)\$(TargetFileName).intermediate.manifest
Again, this is only relevant if you want to build DSPlugIn. Remember this is a plugin for Visual Studio 6.0.
For the Release Unicode Build
Add WINVER=0x0502 to the pre-processor symbols for projects:
* TestRunner
* DLLPlugInTester
* TestPlugInRunner
Add OEMRESOURCE to the pre-processor symbols for project:
* TestPlugInRunner
And it should all build nicely.
I would assume the same holds true for the other build configurations.
Enjoy,
Labels:
CPPUnit,
Programming Tools
Archive from 27_2_08 - Father Again
By the way, I am a father again. It's been about two weeks and I'm still playing catch-up... but it's a good experience. I was a bit more prepared this time. Not so many surprises.
Labels:
Life
Archive from 15_10_08 - Fresh is best...
Hi,
Seems like the day to be updating websites and posting to blogs... must be getting close to the end of semester again. I have given my spike pages a quick freshen, mostly just deleting some of the more dated content. So much has changed since this time last year: a new addition to the family, changing roles at work (again), the current financial crunch... so much fun rolled into one year. As they say, "May you live in interesting times"...
Later.
Labels:
Life
Archive from 27_2_08 - Fast way to rip radio interviews
Hi,
Today's subject is converting radio interviews from CD to MP3 for hosting on a website.
First, use Audiograbber to rip from CDA format to WAV format (stereo, 44 kHz, etc., so Audiograbber can convert them to MP3 later). Make sure you enable normalization.
Step two: open each WAV file in Audacity and trim to size. Export as a WAV to a new directory.
Step three: use Audiograbber to convert the WAV to the final MP3 format using Lame_enc.dll. Use VBR and set the output profile to voice.
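If you would rather script steps one and three than click through Audiograbber, the same rip-then-encode flow can be done with command-line tools (cdparanoia and LAME here, standing in for Audiograbber; the exact flags are illustrative):

```shell
# Rip track 1 from the CD to a WAV file.
cdparanoia 1 track01.wav

# (Trim in Audacity as in step two, then encode.)
# -V 7 is a low-bitrate VBR setting that suits voice recordings.
lame -V 7 track01.wav track01.mp3
```

The Audacity trimming step in the middle stays manual either way, since deciding where an interview starts and ends isn't something a script can do for you.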
Life is good.
Labels:
Software Tips
Archive from 30_11_07 - SCU Papers and pubs Repository
Finally we have an online repository of publications from the Uni.
http://epubs.scu.edu.au/comm/
So far its a work in progress but a much needed step into the digital age.
Labels:
Research Resources
Archived from 19_10_07 - C++ Parser..
Had another look at ANTLR for some reason the other day and found the ANTLRWorks IDE. This is looking very fine. I now have a usable parser generator that doesn't hurt my head every time I try to do even a simple parser.
Elsa and Elkhound are also looking more fleshed out... still a bit of a horror to integrate with anything.
Oink looks interesting as a static analysis toolkit but is a bit too raw at the moment...
Labels:
Programming Tools
Archive from 19_10_07 - Static Analysis
I feel like I have run around in circles with the static analysis issue. Having spent a frustrating time testing Parasoft tools (very nice static analysis bit, BTW), I ended up rejecting their C++Test as it's just a messy interface. It seems to be intended for large-build regression testing, as it's just too cumbersome for test-driven development. The interface is just slow and confused.
I just can't live with that sort of overhead from a tool that is intended to make my life easier.
So I have come back to PC-lint. The integration with Visual Studio is OK; I can capture the output into the IDE and use it easily... it doesn't check for all the fun stuff that Parasoft does, but then again it's not trying to be all things to all people (and failing).
Still need to get a copy of Visual Lint to see if that makes life any easier. Otherwise I will just hack up my own version. What a waste of time... I have gotten little done for a whole week trying to get their tools to work the way I want them to.
Labels:
Programming Tools
Archived from 19_10_07 - Search for the perfect tool chain...continues
Hi All,
On the subject of my perfect programming tool chain... the never ending search for decent support continues.
In summary, my perfect toolchain is something like...
* Code editor - Must have all the tasty goodness of a modern IDE (IntelliSense, code highlighting, code folding, file navigation... etc.)
* Compiler/Linker - GUI interface and must have lots of different optimisations and be standards compliant.
* Debugger - Must have a solid GUI and work with whatever I throw at it.
* GUI editor - Create RAD GUI's and dialogs.
Basically Visual C++ 2005 is working for me on all these points... now, where it doesn't work for me are:
* Memory leak checking
* Refactoring
* Code formatting
* Unit Testing
* Static Analysis
* UML Design (Not really round trip but with a nod to wizard generated classes etc.)
These are things I am coming to consider as basic tools for any working programmer. Especially for a one-man show.
So in my recent searching for such tools I have found:
* Leak checker
- Visual C++ contains a basic leak detector. It's primitive but will sort of do a crappy job. It's a total pain to use, especially with the STL libraries that allocate and don't return memory. (Not a leak, by the way, but it shows up anyway.)
- Glowcode - This turned out to be a pretty crude tool that doesn't integrate with Visual C++, so I consider it crap.
- Various free memory checkers which plug into your code and intercept the new/delete and malloc/free operators etc. These sort of work, but are prone to spew the same STL messages and bury you in false positives that take forever to work through, basically increasing your workload... yuck, but if you are desperate check CodeProject or Google.
- Parasoft Insure++ - I have given this a short workout and it looks good. The integration with Visual C++ is a bit basic and it has definite Unix tendencies in the GUI, but it works well and detects all sorts of issues. Full marks for the detection, and it's way ahead of the competition. Some rough edges, but the winner so far.
* Refactoring
- VC++ macros. This is an ongoing pain. I have written a bunch of crude refactoring macros for my own use, but it's a lot of time and effort to keep them working and accessible. Parsing C++ is often painful and I have full respect for anyone who puts together a commercial package.
- Ref++ - This is a nice little package. It's only got a few refactorings and can sometimes miss a few changes, but it generally works. I recognise that it's doing a fairly major job. In the absence of competition this was the only show in town.
- Refactor! for Visual C++. I have just run across this. A free tool with a whole bunch of refactorings. My whinging has finally been answered. http://msdn2.microsoft.com/en-us/visualc/bb737896.aspx I am going to test this till it bleeds.
* Code Formatting
- VC++ autoformat. This works but it's not that configurable. There are things I like and things I don't like about its style. Would it be so hard to provide some access or a style sheet?
- VC++ macros. There is a great little formatter on CodeProject. It's been developed for a couple of years, but for some reason if you use it in VS2005 it just uses the autoformat. So I modified it not to use the autoformat, and now I use it (with some tweaking) to format code I download or receive from students. It's a lifesaver.
- Basically, I am looking for a tool that has a couple of style sheets: one that I can set for my personal style and one that I can set for "work" style, so I can work on the code in my preferred layout, then click a button, convert to the official layout and submit the code to CVS or whatever. Still have not found a good solution to this problem.
* Unit Testing
- CPPUnit is where I started. Works well, integrates well with Visual C++ once you set up all the bits manually. Problem is, you have to write all the tests manually. Yuck. I know there is no way to test logic automatically, but even so...
- I have written some macros to write stubs for tests automatically but its not saving much time...
- Parasoft C++Test - This works over CPPUnit and has automated test running and test generation. The test generation is the most valuable part. The rest works well, but the generation is the gold. Still, they can't test logic... dammit.
- Other free C++ unit test frameworks... there are a couple. They do about the same as CPPUnit... who cares; it's the test generation that is the unsolved problem.
* Static Analysis
- PC-lint - The old standard. Works well; integrates with Visual C++ crudely, or via the third-party "Visual Lint" from Riverblade Software.
- Parasoft C++Test has a static checker included. This thing is very nice. Does the same sort of job as PC-lint. Has about 800 rules. Very good integration with Visual C++. Very nice.
- Microsoft PREfast - Don't know much; I only found out about it today... but it's looking promising. Sounds like it checks for Microsoft-specific stuff, so it will probably be good to run as well as, rather than instead of, either of the above. Currently command line, but with increasing integration. To be watched...
* UML Design
- I still have no useful contender for UML in Visual C++. I have looked at Rational Rose and Paragon's offering, but both have way too much complexity. I am not willing to allocate that much of my time and brain to learning yet another tool package. I only need UML for about 10% of the work; I don't want to spend 50% of the time stuffing around with their bloatware.
- I have written a few macros to generate class files of different types, but as yet have not written any macros to construct the interface definitions. It's not complex, just useless, as the interface would be very crappy. It needs a better GUI than just dialog boxes.
- Still looking. There are a few plug-ins that I have checked but nothing that has come close to a solution.
The search continues... Parasoft are looking good, but the cost is heavy...
On the subject of my perfect programming tool chain... the never ending search for decent support continues.
In Summary my perfect toolchain is something like...
* Code Editor - Must have all the tasty goodness of a modern IDE (Intellisense, code highlighting, Code folding, file navigation... etc.)
* Compiler/Linker - GUI interface and must have lots of different optimisations and be standards compliant.
* Debugger - Must have a solid GUI and work with whatever I throw at it.
* GUI editor - Create RAD GUI's and dialogs.
Basically Visual C++ 2005 is working for me on all these points... now where it doesnt work for me are:
* Memory leak checking
* Refactoring
* Code formatting
* Unit Testing
* Static Analysis
* UML Design (Not really round trip but with a nod to wizard generated classes etc.)
These are things I am comming to consider as basic tools for any working programmer. Especially a one man show.
So in my recent searching for such tools I have found:
* Leak checker
- Visual C++ contains a basic leak detector. Its primative but will sort of do a crappy job. Its a total pain to use especially with the STL libraries that allocate and don't return memory. (not a leak by the way but shows up anyway)
- Glowcode - This turned out to be a pretty crude tool that doesnt intergrate with Visual C++ so I consider it crap.
- Various free memory checkers which plug into your code and intercept the new/delete malloc/free operators etc. These sort of work but are prone to spew the same STL messages and bury you in false positives that take forever to work through. Basically increasing your workload... yuck but if you are desperate check codeproject or google.
- Parasoft Insure++ - I have given this a short workout and it looks good. The intergration with visual C++ is a bit basic and it has definite Unix tendencies in the GUI but it works well and detects all sorts of issues. Full marks for the detection and its way ahead of the competition. Some rough edges but the winner so far.
* Refactoring
- VC++ Macors. This is an on-going pain. I have written a bunch of crude refactoring macros for my own use, but its a lot of time and effort to keep them working and accessible. Parsing c++ is often painful and I have full respect for anyone who puts together a commmercial package.
- Ref++ this is a nice little package. Its only got a few refactorings and can sometimes miss a few changes but generally works. I recognise that its doing a fairly major job. In the absense of competition this was the only show in town.
- Refactor! for Visual C++. I have just run accross this. A free tool with a whole bunch of refactorings. My whinging has finally been answered. http://msdn2.microsoft.com/en-us/visualc/bb737896.aspx I am going to test this till it bleeds.
* Code Formatting
- VC++ autoformat. This works but its not that configurable. There are things I like and things I don't like about its style. Would it be so hard to provide some access or a style sheet?
- VC++ macros. There is a great little formatter on CodeProject. Its been developed for a couple of years but for some reason if you use it vs2005 it just uses the autoformat. So I modified it to not use the autoformat and now I use it with some tweaking to format code I download or receive from students. Its a life saver.
- Basically, I am looking for a tool that has a couple of style sheets. One that I can set for my personal style and one that I can set for "work" style, so I can work on the code in my prefered layout then click a button, convert to the official layout and submit the code to CVS or whatever. Still have not found a good solution to this problem.
* Unit Testing
- CPPUnit is where I started. Works well, intergrates well with Visual C++ once you set up all the bits manually.
Problem is you have to write all the tests manually. Yuck. I know there is no way to test logic automatically but even so...
- I have written some macros to write stubs for tests automatically but its not saving much time...
- Parasoft C++Test - This works over CPPUnit and has automated test running and test generation. The test generation is the most valuable part. The rest works well but the generation is the gold. Still they cant test logic... damit.
- Other free c++ unit test frameworks... there are a couple. They do about the same as CPPUnit... who cares, its the test generation that is the unsolved problem.
* Static Analysis
- PC-Lint - the old standard. Works well, integrates with Visual C++ crudely, or via the third-party "Visual Lint" from Riverblade Software.
- Parasoft C++Test has a static checker included. This thing is very nice. Does the same sort of job as PC-Lint. Has about 800 rules. Very good integration with Visual C++. Very nice.
- Microsoft PREfast - Don't know much; I only found out about it today... but it's looking promising. Sounds like it checks for Microsoft-specific stuff, so it will probably be worth running in addition to, rather than instead of, either of the above.
Currently command line but with increasing integration. To be watched...
* UML Design
- I still have no useful contender for UML in Visual C++. I have looked at Rational Rose and Paragon's thing but both have way too much complexity. I am not willing to allocate that much of my time and brain to learning yet another tool package. I only need UML for about 10% of the work; I don't want to spend 50% of the time stuffing around with their bloatware.
- I have written a few macros to generate class files of different types but as yet have not written any macros to construct the interface definitions. It's not complex, just useless, as the interface would be very crappy. It needs a better GUI than just dialog boxes.
- Still looking. There are a few plug-ins that I have checked but nothing that has come close to a solution.
The search continues... Parasoft is looking good but the cost is heavy...
Labels:
Programming Tools
Archive from 17_9_07 - Another weekend another... what would make sense here?
Hmm... my hands are constantly aching at the moment. Apart from beating in star pickets, digging holes for a hedge and swinging the mattock to clear a whole lot of asparagus fern (bastard stuff... thorns with some instant infection coating... nice design, mother nature!) I spent the late evening and early dusk last night trying to get control over a 20-year growth of agave (however you spell it; cactus to us other folk). It's turned into a huge mound with giant florets all over the surface; once you get in under the big florets there are dozens of medium-sized florets. Then once you get under the medium-sized florets there are hundreds of little ones... get the picture. So I spent an hour or so pulling out all the little ones and a few of the medium ones that I could, along with 20 years of leaf litter. I use the term "leaf" as I don't know the term for a dried-up cactus frond thing... They are tough and lightly abrasive and kink up like a leather corkscrew... now imagine shoving your hands repeatedly into these for an hour or so. Seem like a good idea? Live and learn...
Anyway... suffice to say my hands are aching. A cross between abrasion, some minor infection and vibration aggravating what might be the early onset of arthritis. Another gift from years of manual labour... particularly working in the forge and pounding metal that was not hot enough... Oh, and chopping firewood... and trees, and working in the garden with soil like concrete... it's a wonder I still have hands... Mental note to look after them much better.
Back to the weekend that was. We got a lot done, so it's all good. General cleaning and washing.... never ends.
The hedge is the major change... oh, and a compost bin so we stop bothering the neighbours with wafting smells of decomposition. Actually we asked them and they hadn't noticed it, but it was bothering me, so... problem solved via Bunnings again. Must buy some shares in that place... they get enough of my cash.
Back to the hedge. This is the most significant change we have made to the house on the "big" scale of things. It runs across the open side of the yard below the house. Ultimately it will separate the verge and road from the yard, which used to be all open and felt really exposed. Just by putting in the temporary trellis (star pickets and clothes line with danger tape all over it) and the row of lilly pilly, suddenly the yard feels more enclosed and protected from the road. At least it will be a bit of an obstacle to the children and stop them being able to run straight onto the road. Bring on the cousins....
I found another tree/bush in the garden that has thorns. Seems to have been a nice topiary kind of thing at one stage but has gone rampant and sprouted uncontrollably. So there is this layer of close-packed branches where the old topiary outline was, which is now all dead, with lots of long straggly branches erupting from it... time for a power haircut. This is the point I discovered the fine thorns... what is it with spiky stuff in this garden? Half the plants are out to get me in one way or another. Makes me want to borrow the flame thrower and get hostile with them. Nice way to freak out various people... and probably get a fine from the fire brigade. Hmm... perhaps not. Back to the loppers and the shredder I think.
Thinking about vicious herbage, I have:
An orange tree with thorns.
This nasty thing with yellow berries and thorns. (Don't know what it is, just wish it was dead.)
Half the plants along the back fence are toxic (so says the neighbour).
The agave is pointy but not with true spikes. But all its dried leaves are abrasive, so that has to count.
Bloody bindi-eyes are swarming through the lawn.
The hoop pine drops nature's version of barbed wire all over the lawn.
The jasmine doesn't have thorns but is just a pain in the... *insert least favored spot here*
As for just generally pointy and twiggy... I have three species of cypress with dead twigs galore.
There is a reasonable swarm of cobbler's pegs around and about... but they are fairly easy to decimate...
There is some succulent thing that is popping up all over the place (and getting pulled out) that is toxic (according to the landscaping guy that came to visit).
Did I forget to mention the asparagus fern... my least favorite of all the fern family. Little bastard is everywhere. Well, fewer places now... it's getting grubbed up progressively. But it's just designed mean. The barbs on that stuff break off in your skin and seem to instantly fester.
Ok.. enough... work time.
Labels:
Philosophy
Archive from 17_9_07 - Microsoft Phoenix Compiler tools
Hi All,
https://connect.microsoft.com/Phoenix
This looks like a fun package for finally building some refactoring tools for Visual C++. It will expose an AST for the code via the Visual C++ front end and the Phoenix APIs. Once that's done, there is no longer any need to hand-build a C++ parser/lexer and AST representation. It might even be a *gasp* fairly standardised model. Perhaps MS can even get the code DOM in Visual C++ to work properly. We live in hope anyway...
Now all I need is about a year of free time to play with this.... Oh, and someone to finance my extravagant lifestyle while I do it... yeah, right.
Labels:
Programming Tools
Archive from 12_9_07 - 3ds Max doesn't like DirectX debug drivers
Hi All,
I have discovered that 3ds Max 9 doesn't like the DirectX debug drivers. It causes a strange disappearing-object effect where some objects appear while others are overwritten.
I tried swapping to OpenGL and reinstalling my video drivers. Neither fixed the original problem, although the OpenGL render was fine.
Finally twigged that I was still using the debug version of the DirectX libs. So I cracked open dxcpl.cpl, swapped the drivers over, restarted 3ds Max and everything is now sweet. Way to waste a whole day.... bugger.
later...
Labels:
3ds max,
Software Tips
Archive from 10_9_07 - C++ Parsers
Following on with my interest in parsing C++ for various reasons, I have come across this little gem....
http://www.visualco.de/cstor.html
The cstor parser and document generator. v. Interesting.
Labels:
Programming Tools
Archive from 7_9_07 - Word Macro to extract a Unique list of words from a document
This MS Word macro strips all words from a document and creates a list of unique words in a new document. It dumps any word that starts with a non-letter character or is less than 2 characters long.
It's based on some code from:
http://www.microsoft.com/technet/scriptcenter/resources/qanda/jun06/hey0628.mspx
Public Sub MakeUniqueList()
    'This will make a list of all unique words in the document
    ' and stash them in a new document
    Dim cleanWord As String
    Set objDictionary = CreateObject("Scripting.Dictionary")
    Set objDoc = Application.ActiveDocument
    Set colWords = objDoc.Words
    For Each strWord In colWords
        'clean up the word - work on a copy, as assigning back to the
        ' Word Range writes through its default property and would
        ' lowercase the document text itself
        cleanWord = Trim(LCase(strWord))
        'see if we want to add it to the dictionary
        If KeepWord(cleanWord) Then
            If Not objDictionary.Exists(cleanWord) Then
                objDictionary.Add cleanWord, cleanWord
            End If
        End If
    Next
    'create a new document to hold the list
    Set objDoc2 = Application.Documents.Add()
    Set objSelection = Application.Selection
    For Each strItem In objDictionary.Items
        objSelection.TypeText strItem & vbCrLf
    Next
    Set objRange = objDoc2.Range
    objRange.Sort
End Sub
Private Function KeepWord(ByRef word As String) As Boolean
    'function to try to remove some of the rubbish words from the list
    'get rid of short words (also guards the Asc call below against "")
    If (Len(word) < 2) Then
        KeepWord = False
        Exit Function
    End If
    'drop words that don't start with a letter
    'Asc function: 65-90 is ucase, 97-122 is lcase
    If (Asc(word) < 65) Or (Asc(word) > 122) Or ((Asc(word) > 90) And (Asc(word) < 97)) Then
        KeepWord = False
        Exit Function
    End If
    KeepWord = True
End Function
Used in Microsoft Word 2003.
Labels:
Macro,
Programming,
Word
Archive from 27_6_07 - Task Objects
Just musing on OOD and looking at all the books on my desk. Some of the ideas I have about the way to apply objects seem to be beyond the scope of the books' ability to conceptualize.
I see objects being used to encapsulate the concepts in the domain, not just the structural concepts but the ephemeral concepts. None of the books have yet made this kind of link for me. I am hoping one of them will already have covered the material so I don't have to explain it all. Maybe they are explaining it in their own way and I am just not getting it.
I have restricted my reading to only books published since 2000, along with a couple of well-referenced ones from before that. My hope is that the more recent authors will have "gotten" it and be more on top of explaining OO. So far it's just more of the same. Some have clever ways or pretty explanations but they all tread a similar path. If you substitute the concept of ADT (abstract data types) where they use the term Object, it still makes the same amount of sense. Build the static structure of the app, pass data around in response to events (or messages), exit the app. Etc.
The true power of object-oriented design is the ability for objects to encapsulate both the fine-grained semantics of the domain and the dynamic activities. I believe that the relationship between use cases and the object design is not nearly as direct as it should be. Each use case should be reflected with a 1:1 relationship with an object in the design. Obviously these are objects that express a dynamic concept. They are instantiated on demand, encapsulate the concept of the task being performed, and then are deleted once the task has terminated or succeeded.
These objects would be instantiated with parameters of all the structural objects they needed to interact with. This removes the need for all the structural objects to be interwoven with pointers and references to each other, to the extent that (I) have traditionally done, to get the app to work.
There will still need to be a common pool of pointers to the structural objects that can then have some factory method which creates these dynamic objects and sets them up with the correct references. Fairly easy once the Interfaces are defined.
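A minimal sketch of the idea in C++; the Account and TransferFundsTask names are invented for the example, not from any real system. The task object is constructed with references to the structural objects it needs, runs once, and is then thrown away:

```cpp
#include <cassert>

// A structural object the task gets wired up with (illustrative only).
class Account {
public:
    explicit Account(int balance) : m_balance(balance) {}
    int  balance() const { return m_balance; }
    void withdraw(int amount) { m_balance -= amount; }
    void deposit(int amount)  { m_balance += amount; }
private:
    int m_balance;
};

// One use case == one object. The structural objects never need to
// hold pointers to each other; the task carries the wiring instead.
class TransferFundsTask {
public:
    TransferFundsTask(Account& from, Account& to, int amount)
        : m_from(from), m_to(to), m_amount(amount) {}

    // Encapsulates the whole use case; returns false if it cannot proceed.
    bool run() {
        if (m_from.balance() < m_amount)
            return false;          // task failed; structural objects untouched
        m_from.withdraw(m_amount);
        m_to.deposit(m_amount);
        return true;
    }
private:
    Account& m_from;
    Account& m_to;
    int      m_amount;
};
```

In use, a factory holding the pool of structural objects would build one of these on demand, call run(), and let it die.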
Anyway, back to work.
Edit.
This kind of dynamic expression will render sequence diagrams practically irrelevant. They now are just expressing the internal (encapsulated) concepts within the constructor of a dynamic object. They are probably still excellent for expressing the actual sequence of the calls, just not in the way I have been thinking of them to this point. So to re-cap, this is probably more about me finally "getting" it than everyone else not "getting" it. All I need now is a book that expresses this and is available from one of the big distributors. Please....
Labels:
Philosophy,
Programming Tools
Archive from 22_6_07 - OOP Research
I have begun work on re-writing Object Oriented Program Development. Having been writing in what I feel is an object-oriented manner for some time, I still find it interesting to read all the various viewpoints about OOP. There are the nay-sayers who purport that OOP is a useless fad without any benefit. (Surprisingly enough they are generally old-school procedural programmers.) There are others who argue that OOP is not formal enough to be considered as an identifiable "thing". (Academics?) Then there are the swarms of programmers who are out there actually using object-oriented techniques, for better or worse, and getting things done.
Like most things in life, it's your perspective that defines your reality.
My interest in OOP is both in doing it "well" and in the emergent phenomena that using it permits to occur. In simple terms, "stuff that can be built on top of the OOP ideas".
Procedural programming has its own emergent phenomena. Huge APIs, function libraries, DLLs etc. All these things made procedural programming easier.
For me, object-oriented programming provides a way to conceptualise the problem domain and extract the semantics of the entities within the solution more cleanly than procedural code. I also enjoy the elegance of the exception system. Once I turned my head inside out enough to think in these terms it suddenly all made sense as a simple, clean system. The most difficult period for me was when I had a foot in both camps, having come from procedural programming and moving towards OOP (even though I still didn't have a clear idea what it was all about). But once I was more in the OOP camp than the procedural camp, so many of the design decisions became so much easier.
I am still struck, reading OOP books, by how much of the concepts and code is written in what I would consider a "procedural" style. Especially the old course notes that I have inherited. It seems so much clearer to me now. I remember going through the course as a student before it was in its current incarnation and struggling with the material. In retrospect it was mainly due to the lecturer not really having any idea what they were talking about. It's not that they didn't know about classes and objects, it's just that they still had both feet firmly planted in the procedural camp. Everything they looked at and gave examples of was constructed to work well in a procedural program. They had still not made the leap into using objects with other objects. Rather, they were still trying to use objects as slightly more complex data types in a procedural program.
I remember the first major program I wrote using classes in VB. It was basically using classes as huge function libraries. Honestly, I had still been working that way up until recently, when I started a large burst of programming that involved churning out a number of programs quite quickly. It gave me the concentrated experience to see what I was doing and try new things with each one. I feel like I have "got" a little more of the picture.
I still have problems establishing the static structure of a program to facilitate easy message passing, but less so now. I am also wrestling with the coupling issue. I have found that reading about and applying unit testing has cleaned up the coupling issues a lot in my designs. Building mock objects, and having to keep each object loosely enough coupled that it can be instantiated for testing, has forced me to think "cleaner". It has also prompted a whole new line of thought on refactoring (which I have loved since I first heard about it). I have found that with a good refactoring tool such as Ref++ or any of the Java refactoring tool sets, it's possible to really clean up and isolate all the semantics of an object. Every single atomic operation that is logically valid within the object can be isolated in its own private method. Each of these can then have pre-conditions and post-conditions established, and a much cleaner style of exception handling introduced.
These semantic functions (as I call them) are simple one- or two-line functions that clearly and simply express a possible operation on a data element. There is also often a lot of common functionality that can be expressed as template functions. These, however, are still wrapped in a tiny private method, as this carries the domain semantic (the method name) that clearly expresses the logical operation in terms of the domain it's occurring in.
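A rough illustration of what I mean by semantic functions; the TemperatureLog class and its method names are invented for the example. Each atomic operation on the data member gets its own tiny private method, named in the language of the domain rather than the implementation:

```cpp
#include <cassert>
#include <stdexcept>

// Illustrative domain object: every atomic operation is isolated
// in a one- or two-line private "semantic function".
class TemperatureLog {
public:
    void record(double celsius) {
        rejectIfBelowAbsoluteZero(celsius);  // pre-condition
        storeReading(celsius);               // mutation
    }
    double latest() const { return m_latest; }
private:
    // The method names carry the domain semantics, so the public
    // method body reads as a description of the operation.
    void rejectIfBelowAbsoluteZero(double celsius) {
        if (celsius < -273.15)
            throw std::out_of_range("impossible temperature");
    }
    void storeReading(double celsius) { m_latest = celsius; }

    double m_latest = 0.0;
};
```

Each private method is trivial on its own; the payoff is that pre-conditions, mutations and exception points are all individually named and testable.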
I have also been exploring some of the ideas of "onion skin" objects. This is the idea of wrapper classes taken to a whole new level. To understand it, it's useful to conceptualise the operations within a class. A method call (or event handler if you prefer) can do only a couple of things: it can "mutate" a data element (potentially change it), it can "do something else" (make a call to another object - raise an event or message), or it can throw an exception in response to an illogical message or from having caught an exception from a downstream object. There are also tests. The object can perform a logical test on its data members, on the messages it receives and on the exceptions that may be generated or caught from below. Another common function is transformation of messages. This may be transformation of type or transformation of values (clipping to a range, for example). In a nice clean design this does not need to happen, but when interfacing with other systems it appears to be inevitable. (Win32 API for example.)
All these tasks can be isolated in a common "layer": the top layer might be the public interface of the object, the next layer might be the "check incoming messages" layer, the layer below that contains the logic for any message transformation. Below that may be core logic tests. Below that might be exception handling for any outgoing calls, etc.
This is just a different way of stratifying the functionality that is already found in most objects. However, rather than these functions occurring in a single method body, they are split over these thin-walled wrapper objects. This provides some interesting properties. There is a great deal of shared code that can be factored out and placed in these thin walls, where it can be generalised and simplified even more. Then there is the fact that each message in and message out has to pass through each layer explicitly. This means you cannot forget to deal with one of the steps of good programming habits (check data range, check all parameters etc.) as you are forced to address each step individually.
I have found success with using inner classes for the construction of these "onion skin" objects. This keeps all the tightly coupled code in the one file pair and makes it fairly easy to maintain. I am still exploring the idea of trying to share some of the "layers" between different domain objects. At the moment I am sharing some template functions in libraries to help with repetitive tasks.
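A toy sketch of the layering using inner classes as described; the VolumeControl domain and the particular layers chosen are illustrative assumptions, not a real design. Each "skin" does exactly one job (validate, transform, mutate) and delegates inward, so a message cannot skip a step:

```cpp
#include <cassert>
#include <stdexcept>

// Onion-skin layering with inner classes: the tightly coupled layers
// all live inside the one class, but each message must pass through
// every layer explicitly.
class VolumeControl {
public:
    void setVolume(int level) { m_validate.setVolume(level); }
    int  volume() const { return m_core.level; }
private:
    // Innermost layer: the raw mutation, no checks at all.
    struct Core {
        int level = 0;
        void setVolume(int v) { level = v; }
    };
    // Transformation layer: clip values into range before the core sees them.
    struct Clip {
        Core& core;
        void setVolume(int v) {
            core.setVolume(v < 0 ? 0 : (v > 100 ? 100 : v));
        }
    };
    // Incoming-message check layer: reject the truly illogical.
    struct Validate {
        Clip& clip;
        void setVolume(int v) {
            if (v < -1000 || v > 1000)
                throw std::invalid_argument("absurd volume");
            clip.setVolume(v);
        }
    };

    // Declaration order matters: each skin is built around the one below it.
    Core     m_core;
    Clip     m_clip{m_core};
    Validate m_validate{m_clip};
};
```

The shared shapes here (range clipping, sanity checks) are exactly the "thin walls" that could be factored into templates and reused across domain objects.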
My last trick at the moment is working on self-documenting code. This ties in with the semantics above. If you extract every single logical operation within the object to a little private method with a well-chosen name, the code starts to read like a set of comments. Sometimes this takes a little bit of creative massaging of the names chosen for the types and methods, but the need for comments inside the object becomes effectively zero. The public interface still needs to be clearly documented, and that's it.
Must write some papers on this stuff at some point....
Like most things in life, its more a question of your perspective, that defines your reality.
My interest in OOP is both in doing it "well" and in the emergent phenomena that using it permits to occur. In simple terms, "stuff that can be build on top of the OOP ideas".
Procedural programming has its own emergent phenomena. Huge API's, function libraries, DLL's etc. All these things made procedural programming easier.
For me, Object Oriented programming provides a way to conceptualise the problem domain and extract the semantics of the entities within the solution more cleanly than procedural code. I also enjoy the elegance of the exception system. Once I turned my head inside out enough to think in these terms it suddenly all made sense as a simple, clean system. The most difficult period for me was when I had a foot in both camps, having come from procedural programming and moving towards OOP. (Even though I still didn't have a clear idea what it was all about) But once I was more in the OOP camp than the procedural camp, so many of the design decisions became so much easier.
I still find reading OOP books, how much of the concepts and code is written in what I would consider as a "procedural" style. Especially the old course notes that I have inherited. It seems so much clearer to me now. I remember going through the course as a student before it was in its current incarnation and struggling with the material. In retrospect it was mainly due to the lecturer not really having any ideas what they were talking about. Its not that they didn't know about classes and objects, its just that they still had both feet firmly planted in the procedural camp. Everything they looked at and gave examples of was constructed to work well in a procedural program. They had still not made the leap into using objects with other objects. Rather they were still trying to use object as slightly more complex data types in a procedural program.
I remember the first major program I wrote using classes in VB. It was basically using classes as huge function libraries. Honestly I have still been working that way up until recently when I started a large burst of programing that involved churning out a number of programs quit quickly. It gave me the concentrated experience to see what I was doing and try new things with each one. I feel like I have "got" a little more of the picture.
I still have problems establishing the static structure of a program to facilitate easy message passing, but less so now. I am also wrestling with the coupling issue. I have found that reading about and applying unit testing has cleaned up the coupling issues alot in my designs. Building mock objects and having to keep each object loosely enough coupled that it can be instantiated for testing has forced me to think "cleaner". It has also prompted a whole new line of thought on refactoring ( which I have loved since I first heard about it ) I have found that with a good refactoring tool such as Ref++ or any of the Java refactoing tool sets, its possible to really clean up and isolate all the semantics of an object. Every single atomic operation that is logically valid within the object can be isolated in its own private method. Each of these can then have pre-conditions and post-conditions established and a much cleaner style of exception handling be introduced.
These semantic functions ( as I call them ) are simple one or two line functions that clearly and simply express a possible operation on a data element. There is also often a lot of common functionality that can be expressed as template functions. These however are still wrapped in a tiny private method as this carries the domain semantic ( the method name ) that clearly expresses the logical operation in terms of the domain its occurring in.
I have also been exploring some of the ideas of "Onion skin" objects. This is the idea of wrapper classes taken to a whole new level. To understand this idea, its useful to conceptualise the operations within a class. A method call ( or event handler if you prefer ) can do only a couple of things, it can "mutate" a data element ( potentially change ) it can "do something else" ( make a call to another object - raise an event or message ) or it can throw an exception in response to an illogical message or from having caught an exception from a downstream object. There are also tests. The object can perform a logical test on its data members, on the messages it receives and on the exceptions that may be generated or be caught from below. Another function that is common is transformation of messages. This may be transformation of type or transformation of values ( clipping to a range for example ) In a nice clean design this does not need to happen, but when interfacing with other systems it appears to be inevitable. (Win32 API for example)
All these tasks can be isolated in a common "layer": the top layer might be the public interface of the object, the next layer might be the "check incoming messages" layer, and the layer below that contains the logic for any message transformation. Below that may be core logic tests. Below that might be exception handling for any outgoing calls, and so on.
This is just a different way of stratifying the functionality that is already found in most objects. Rather than these functions occurring in a single method body, they are split over these thin-walled wrapper objects. This provides some interesting properties. There is a great deal of shared code that can be factored out and placed in these thin walls, where it can be generalised and simplified even more. Then there is the fact that each message in and message out has to pass through each layer explicitly. This means you cannot forget to deal with one of the steps of good programming habit (check data range, check all parameters, etc.), as you are forced to address each step individually.
I have found success using inner classes for the construction of these "onion skin" objects. This keeps all the tightly coupled code in the one file pair and makes it fairly easy to maintain. I am still exploring the idea of trying to share some of the "layers" between different domain objects. At the moment I am sharing some template functions in libraries to help with repetitive tasks.
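A minimal sketch of the layering, with all names invented for illustration (the post does this with C++ inner classes kept in one file pair; Python nested classes show the same shape):

```python
class Thermostat:
    """Hypothetical "onion skin" object: each inner class is one thin
    layer, and every incoming message passes through them in order."""

    class _Validation:
        # Top layer: reject illogical messages before they reach the core.
        def check(self, celsius):
            if not isinstance(celsius, (int, float)):
                raise TypeError("temperature must be numeric")
            return celsius

    class _Transformation:
        # Middle layer: clip values to a legal range, as when wrapping an
        # external API that only accepts bounded input.
        def clip(self, celsius, low=5.0, high=30.0):
            return max(low, min(high, celsius))

    class _Core:
        # Innermost layer: the actual mutation of the data element.
        def __init__(self):
            self.setpoint = 20.0
        def set(self, celsius):
            self.setpoint = celsius

    def __init__(self):
        self._validate = self._Validation()
        self._transform = self._Transformation()
        self._core = self._Core()

    def set_temperature(self, celsius):
        # The message cannot skip a layer: check, then transform, then mutate.
        checked = self._validate.check(celsius)
        clipped = self._transform.clip(checked)
        self._core.set(clipped)
        return self._core.setpoint
```

Because the public method can only reach the core through the validation and transformation layers, a forgotten range check or type check is structurally impossible rather than merely a habit.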
My last trick at the moment is working on self-documenting code. This ties in with the semantics above. If you extract every single logical operation within the object to a little private method with a well-chosen name, the code starts to read like a set of comments. Sometimes this takes a little creative massaging of the names chosen for the types and methods, but the need for comments inside the object becomes effectively zero. The public interface still needs to be clearly documented, and that's it.
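A small Python sketch of the effect (all names invented): the public method body is just a sequence of named semantic functions, so it needs no inline comments at all.

```python
class Account:
    """Toy domain object; every name here is illustrative."""

    def __init__(self, balance=0):
        self._balance = balance

    # The public method reads like prose -- each step is a well-named
    # private semantic function, so the body needs no comments.
    def withdraw(self, amount):
        self._require_positive(amount)
        self._require_sufficient_funds(amount)
        self._reduce_balance(amount)
        return self._balance

    def _require_positive(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")

    def _require_sufficient_funds(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")

    def _reduce_balance(self, amount):
        self._balance -= amount
```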
Must write some papers on this stuff at some point....
Labels:
Philosophy,
Programming
Archive from 5_6_07 - Knowledge harvesting
Just had a thought about harvesting knowledge in a public space.
Articles on web sites are written and posted by the author. In general there is no structure or expectation for feedback or update. They are "published". The only knowledge captured is that of the author. A hierarchy of one.
Compare that with a site like codeproject (www.codeproject.com), where an article (specifically about some programming topic) is written and posted by the author but has a threaded forum attached to the bottom, where comments and discussion can be collected and retained. The author remains the top-level arbiter of the discussion with (I assume) edit rights to both the article and the discussion. The discussion participants retain edit rights to their contributions. A two-level hierarchy.
Look now at something like slashdot (www.slashdot.org). Here the author of the post contributes it, then it's chewed upon by the community. The signal-to-noise ratio is so low in the following discussion that the author has little hope of managing or replying to the discussion posts. The site employs a voting system to help the signal be elevated above the noise. The author loses edit rights to the article, which are taken over by one of the site editors. Discussion posts are editable by their contributors, and there is influence by moderation and voting. This creates a complex three-level hierarchy with some complicated and non-deterministic rules for predicting what the final knowledge collection might look like. This in itself is an interesting example of a system to harvest knowledge out of an essentially chaotic community. The beauty of the system is that it works. The drawback is that the voting system can be 'gamed', and this will corrupt the process if it happens on a large scale.
One of the key elements is that the dynamic updates promote the signal upward while 'modding' down the rubbish. It is not discarded but is visually diminished and tagged so it can be filtered out if desired. However at any time later, this rubbish can still be examined by researchers.
Compare this to a wiki system. Again it uses an article as the cornerstone of the knowledge accumulation. This article is then modified directly by random contributors, who can add, edit or delete information. Most wikis seem to have the capacity for a simple comments system attached to the page. This is un-threaded and, in my experience, relatively unused. However, it does make an ideal place to ask questions and make requests. The problem being that without a responsible maintainer, there is no one to service those requests reliably. It's the responsibility of the 'community' to chaotically make improvements.
Wiki systems have an interesting 'rollback' feature that allows changes to be 'undone'. This mitigates the damage of an author's work being deleted maliciously. However, as the author and the subsequent contributors are effectively peers, who is to say that the original author's contribution is better than the deletion made by the peer?
Wiki systems spawn pages. These are hyperlinked to older pages, and the resulting knowledge base becomes a standard hyperlinked mess. There is no way to automatically assess quality, topic or relevance to any particular subject beyond a normal keyword search or page rank system.
Now look at discussion forums, commonly called 'forums'. These usually have a threaded, date-sorted post structure. A number of separate 'forums' are often collected under a major subject heading or site. These sub-forums are usually just arbitrary separations of the topic area or activity of the 'community' they are servicing. The membership of the community is often fairly chaotic but shares some commonality: either a topical interest or some common property.
A community member can make a 'post' at the root level of the forum. This post can then be commented upon by other forum members, who add to the thread. The original author has edit control over the post and any subsequent comments they might make in the resulting thread. These threads can branch into sub-threads, making for a hierarchy of comment and response.
The root of the forum has three classes of post: 'recent' posts that are sorted to the top of the list by date, 'older' posts that have moved down the list simply due to the date they were posted, and what are called 'stickies'. These 'stickies' are often posts addressing common topics or rules of conduct for the forum. They are fixed at the top of the forum and remain there irrespective of the date sorting.
The only other metadata that can be automatically derived from a post-thread object is its activity: number of posts, frequency of posts, statistics on who has posted, size of the posts and the number of thread branches. Otherwise the knowledge encoded in the structure will slowly drift away from the root of the forum.
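That automatically derivable metadata can be sketched as a small function over a thread, assuming a thread is just a list of (author, timestamp, text) tuples; the field names are my own invention, not from any real forum engine:

```python
from datetime import datetime

def thread_metadata(posts):
    """Derive the automatically computable activity metadata from a
    thread, given as a list of (author, timestamp, text) tuples."""
    times = sorted(ts for _, ts, _ in posts)
    # Elapsed time between first and last post, in hours.
    span_hours = (times[-1] - times[0]).total_seconds() / 3600 if len(times) > 1 else 0.0
    return {
        "post_count": len(posts),
        "authors": {author for author, _, _ in posts},   # who has posted
        "total_size": sum(len(text) for _, _, text in posts),
        "posts_per_hour": len(posts) / span_hours if span_hours else float(len(posts)),
    }
```

Everything else about the thread (topic, quality, relevance) needs either human judgement or a ranking system layered on top, which is exactly the gap the voting and moderation schemes above try to fill.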
Should write a paper on this at some point....
Labels:
algorithms,
Big Data
Archive from 4_6_07 - Notes on the weekend
Spent the best part of a day (spread over Saturday and Sunday) feeding green waste through the shredder. Nice little tool, built by Ryobi. (Note to self - do some sort of re-design on their shredder internals.) The most frustrating part of the shredder is the process you need to go through to clear the blades when they get jammed. It involves undoing a threaded bolt, which takes 30 seconds or so. Then you can open the top and clear the blades. There is a safety cutout with a little flange built into the top that must be engaged with the engine body before the thing will start again. However, the slot is in the bottom part, so it gets full of chips and grit as soon as you open the top.
Now, the reason the blades are getting jammed is that the guides that help feed the material to the blades are in two pieces and have a little shelf beside the blades. This allows material to be guided into the gap between the two guides instead. It's only a little gap, but bits of twig and bark get jammed in there. Would it have been so hard to cast the pieces as a single unit? Would it be so hard to eliminate that little extra shelf?
Once the jam has been cleared, it's time to put it back together. This seems simple: close the lid and screw up the bolt closure. However, the bolt is in a bit of an awkward location and is certainly not self-guiding, so after some fiddling and lots of turning it closes tight. Would some sort of obvious catch not be more use? I get that it should be hard to undo until the blades have stopped... but a more usable design seems easy enough.
Anyway, suffice to say I am having to clear the blades about every 5-10 minutes. This is probably due to what I am trying to shove through the shredder, but even so, every time I need to spend a couple of minutes opening, cleaning, un-jamming and then screwing around to get back to work. Not its best feature.
Otherwise I am quite happy with the little bugger. It's turned a huge mound of branches into a much more manageable pile of chunks and shreds. Now if only the garden bed was that easy.
The garden bed....
Imagine a mound of Jasmine vine about two metres high. Imagine it about 5 metres long by about 3 metres wide... now imagine what could be in it.
On the first excavation with secateurs, it appeared there was only more Jasmine. On the second exploration with a rake handle, it turned out to hold lumps and objects of metal and wood.
On returning with the whipper snipper and a whole lot of patience... it was revealed that we were the proud owners of a fairly aged compost bin and a huge willow stump. (Which, upon being given a slight push, promptly fell over onto one of the walls of the compost bin and crushed it.)
Two weeks later, I have returned with more tools to try to break up the remaining Jasmine and make some headway into digging a garden bed. The Jasmine is laborious and takes a lot of pulling, but is being beaten back. I keep finding hundreds of runners that have taken off under the leaf litter and are working their way into every other part of the garden... anyway, I have established a scorched-earth policy around the former compost bin and am negotiating with the remaining Jasmine to respect my request. So onto the digging...
Concrete! You must be f$&*%ing joking. There is an old slab right where I wanted to dig a vege patch. No one knows what the slab was for; not even my neighbour, who has been here longer than the house we bought. Anyway, it's there, so I have to make the best of it. My guess is it was the base for a compost bin or incinerator. I will use it for a worm farm or something. It's handy being right in the garden... so it will work out. I just have to dig up more of the lawn to replace the lost space.
Why doesn't anyone sell long-handled garden forks anymore? What's with all the spade-handled forks? Why in hell would I want to be that close to the ground when trying to lever my way through a mat of roots? Boggles the mind.
Otherwise, fixed a few things, did some house work, watched the young-un and a couple of movies.
Labels:
Life
Archive from 4_6_07 - More Blog configuration
Like all software, there is a learning curve attached to blogware. There is all the conventional set-up ("just get it working...") stuff, along with the personalisation ("I want it to work this way" or "This reflects me a bit more"), along with the external functionality settings ("Spammers are scum", "link to me", "search me", etc.).
Some of the missing functionality is the personalisation of function ("I want it to work like this..."). Like most software, there is a set of assumptions and resulting constraints on the way the software works, derived from the designers' understanding (either poor or good) of how "users" want their product to work. This may be either poorly implemented or quite polished (in this case it seems quite polished), but at the end of the day the software encodes a particular workflow and mindset on the task. The assumptions are drawn from the designers' experience with their own activities.
I find these sets of assumptions interesting. I am coming to see them more and more in my own work. Developing many quite different tools for disparate users in a short span of time has provided some quite condensed experience.
My observation is that most users who are approaching a new task are very dependent on their previous experience. You can either borrow from that ( both good and bad ) or make a clean break and provide something new and simple to learn. If they can have easy and early success with something then you have a winner. By this I mean, software should work straight out of the box. It should do something that is recognisable to the user as being sort of like what they want. Only then can you start to offer them options to modify the experience to be "more like what I want". At this point you have achieved the all important step of "engaging" your user. They are "inside" the experience. They and the software are now collaborating on getting the experience and process they want.
To bring this back to the subject of blogging software: my recent experience has suggested that my understanding of "normal" blogging processes is effectively zero. The conventions of blogging and the process of managing a blog are totally new to me.
I consider this a good thing. I have few pre-conceived notions about how the "blog" experience should be. The closest I can come is in managing online learning environments and discussion forums.
Another observation is on the learning-curve subject. If you assume a learning curve is a factor of the difficulty of the subject matter as well as the complexity imposed by the interface design, then blogging is currently a fairly painful exercise. Not impossible, but still relatively complex. Perhaps a 6 on a scale of 1 to 10, with 1 being transparent and 10 being "invent your own tools". My conclusion would be that the software still has a fair way to mature. The feature set is still growing, as are the challenges facing the designers. There are still conventions being worked out and new ways of using the blog being invented. As such it's far from a mature experience, but is also hardly raw anymore. For a couple of years in existence... that's pretty good going. (When I say a couple of years, I mean a couple of years of being known to 'popular' culture. Obviously writing web logs has been around a tad longer than that.)
I feel the need to examine why I have started a blog. Really, it was the urge to try to simplify my life a bit. I am coming to use online systems more and more as file repositories. The learning systems we use at Uni are fine for study materials and all the associated files, but are unsuitable for the addition of personal files or files I need access to from multiple places. They are also unsuitable for taking notes on random stuff. I have the habit of writing tracts of, basically, rambling garbage about topics that spark my interest and saving them on various hard drives, thumb drives, email systems, etc. It's a pain to find them afterward, and more of a pain to make sure they don't fall into the hands of anyone else. It's not that they are sensitive or contain unkind thoughts about others; it's just that they are my rambling garbage and I wish to control access to it.
The next step in my attempt to solve this problem was to post it all to one of my websites. However, this required having an FTP client and a decent HTML editor installed and accessible on all the computers that I use. Quite impossible. Having three offices, working in the labs, working from other people's houses... not going to happen. I thought about trying to set it all up on a thumb drive, but even that means I need the drive with me 24/7. Another piece of junk to get lost, damaged or just in the way.
So the perfect solution is a web app. Something that I can have access to anywhere there is a connection and will just work.
While the page formatting might not be stupendous, and the lack of some of the features of a word processor might be a pain, it's good to be able to just log in and work. Upload... finished. No pain. Speed, done.
I am trying to simplify things so that there is less of a barrier between me and what I am trying to get done. I am finding this an elegant solution, learning curve notwithstanding.
It will be interesting to see how my habits change with access to slightly more transparent tools. Will I be inclined to write more often? Will my signal-to-noise ratio rise or fall? Will my interest wander to something else now I feel like it's taken care of?
My guess would be a slightly higher noise-to-signal ratio. (No spell checking or grammar checking; it's easier to click the save button without thoroughly checking what you have written.)
Like now...
Labels:
Philosophy,
Software Tips
Archive from 1_6_07 - Blog Software Test
I have just spent the day testing various blog packages to see what will work for me. The choices were:
bBlog - turned out to be 2 years since any major updates = bad. Also couldn't figure it out, and the documentation was a bit light (none useful available).
WordPress - installed fine but would not work. Spat out an error about the MySQL version not being high enough, even though the version installed is higher than the spec requires. Go figure!
b2evolution - the one I ended up using (because it's the first one that would install + work + let me post).
I only had one more option after this... Nucleus... but I don't need to try it (the day is already wasted anyway). I am over this.
Night all.
Labels:
Software Tips
First!
Like any good blog... this one needed a first post. Trivial but essential.
The purpose of this blog is essentially to take my blogging off the machine at the Uni that it has lived on for three years and move it into the cloud. I may be leaving my current employer, and I don't really want to try to put this on another fragile host somewhere. Essentially it's time to bite the bullet and stop avoiding cloud services. Just to add insult to injury, my workstation at home has finally blown a capacitor on the mobo. So I have hacked together an antique Compaq P4 and mounted the drives from my old machine via an external IDE-to-USB interface I scavenged from an external CD drive. It's crude but functional. Actually, I really like these old Compaq machines. They are indestructible. (True, I have had to cannibalise a couple to get one working, but in general they just work.) Even the drivers on the Compaq page at HP are still in place and reliable.
First job is to copy the archive of my old blog. Onward....
Labels:
Hardware Hacking,
Philosophy