Monday, April 27, 2015

Pointless philosophical debate - Disengagement, Debugging and Futility


I listened to an Honours presentation a few years ago called:  "Knowing When To Quit: Self-Focused Attention Enables Disengagement, Positive Affect Predicts Reengagement Tendencies. (Tristan Hazelwood)".

It was on one of those aspects of psychology that I had never considered before.  That of "task disengagement"... or the ability to quit doing something. I found this quite personally intriguing because I had always seen "quitting" as a failure state, and even though I have quit a lot of things in my life... it was always with this sense of personal failure.

Tristan's experiment looked at how long people will continue to attend to a "futile" task before stopping.  There were some manipulations as well... which are not relevant here.

I had never really asked myself this question or the extension questions.

How long do I keep doing futile tasks? 
How do I know a task is futile?
What if a task seems futile but is ultimately not?
What if the effort required to complete the task is disproportionate in comparison to the reward?

Anyway, this gets back to today's issue: fixing bugs in other people's code.

I have just spent way too long trying to debug why the wrong favicon was being displayed in a particular browser (Firefox).  This required me to:
  • Verify my assumptions about the local copy of the site files (Still broken)
  • Verify my assumptions about the server copy of the site files (Still broken)
  • Reload everything in Firefox
  • Test in multiple other browsers
  • Read up on how Firefox handles FavIcons
  • Break into the SQLite db used by Firefox to cache the favicons (a rough sketch of this is below)
  • Clear the cache, reload, test, clear again, clear a different way... test-fiddle cycle until I got the combination correct and finally expunged the old favicon from the chain of server-browser-cache rubbish.
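For the record, the SQLite spelunking looked roughly like the sketch below (Python, since that is the quickest way to poke at a database).  It assumes a Firefox profile of that era, where the favicons live in the moz_favicons table inside places.sqlite; the profile path and site pattern are placeholders, not the real ones.

    import shutil
    import sqlite3

    profile = "/home/me/.mozilla/firefox/abc123.default"   # placeholder profile path
    site    = "%example-client-site.com%"                  # placeholder LIKE pattern for the site

    # Firefox locks the live database, so close the browser and work on a copy.
    shutil.copy(profile + "/places.sqlite", "places-copy.sqlite")

    con = sqlite3.connect("places-copy.sqlite")
    cur = con.cursor()

    # See which favicons are cached for the site.
    for row in cur.execute(
            "SELECT id, url, mime_type FROM moz_favicons WHERE url LIKE ?", (site,)):
        print(row)

    # Expunge the stale entries, then copy the database back over the original.
    cur.execute("DELETE FROM moz_favicons WHERE url LIKE ?", (site,))
    con.commit()
    con.close()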

In hindsight, knowing what I know now, this was a worthless task.  It was a combination of a stupid caching policy/bug in Firefox and my uploading "placeholder" images before I had the final design for the favicons. The problem would not have been visible to the client because it was in my local browser cache.  But at the time all I could see was an error!

When is a debugging task "worthwhile"?

I have a long history of tenacity with debugging... and in hindsight many of the more intractable bugs now seem to be examples of time wasted and needless distractions.  Others were important and meaningful battles that I needed to fight to get projects over the line. All were bugs.... are there any clues to differentiate between valuable battles and needless distractions?

When the bug is in someone else's code... 


I think the first possible clue is where the bug lies. I think debugging is often a losing proposition, even if you do pin it down.  I have spent way too long proving, documenting and reporting bugs in products that I use in the hope that they will be fixed.  Some have been... but I think the majority of them resulted in a discussion with the product programmer/support minion that not only wasted more time, but did not resolve the bug in any way that got me a fix when I needed it (i.e. very soon after identifying it).
I would suggest that while these activities were probably noble they were essentially of no value to me or the work I was trying to get done. (Except where they resulted in me clearly identifying that the bug was not in my code)

My point?  Stop debugging other people's code where possible.  There is just not enough time in my life to fix all the broken software (and get a usable result).  Once I see that the bug is not in my code... give up.  Find a work-around, hack it, abandon the tool/framework/etc... but move on.

When there is more than one way to achieve a result...


The difference between "the right way to code" and the "expeditious way to code" can sometimes be clear... however, I find there are usually lots of ways to get to the same result.  Unfortunately, I often find that the "recommended" way in the docs/tutorials/book/forum post/back-of-the-toilet-door is either out of date, incomplete, partially thought out or just plain wrong.  The time I have spent trying to get stuff working that I found on the internet... that eventually turned out to be rubbish, is painful to recount.  (This is mostly my own fault for trying to work in too many languages/toolkits/frameworks etc. without taking the time to be an expert first)
I think I need to be willing to abandon broken code a bit faster than I currently do. This often comes back to time pressure: I'm trying to get something working quickly rather than taking a bit more time to learn the docs first.  False economy.

That being said, I would say that more than 50% of the time when I have gone to the docs for whatever I'm working on, the docs are wrong in part or whole.  So, I'm not sure if there is any actual win with reading the docs first... but it might help in some cases.   To paraphrase... go back to the source Neo...

My advice?  Don't hold onto any ideas of the right/clean/ideologically sound/best practice way of doing something.  Get it working as best you can and clean it up later if it proves to be a problem.  Future-proofing is a never-ending piece-of-string argument.... you cannot win, so don't fight the battle.

Usually I find that the next time I try to solve the same problem, I inevitably have a different approach anyway... so I tend to end up rewriting rather than fixing... but that's me.  The key point is to let go of something that is not working, no matter how ideologically "right" it may be.

The curiosity trap...


Debugging to figure out how something works is a really painful way to learn.  There are probably lots of scenarios where it's the only way (security work, reverse engineering malware, deep bugs in critical systems etc.) but for most of the desktop stuff I do, there is just no reason to be trying to learn that far down the levels of abstraction. It's just too slow.

My point?  I think when it's curiosity driving the activity and you find yourself trawling through code... "you're doing it wrong".  Go feed your brain on Wikipedia or get a pet or play Sudoku.... the urge to solve puzzles can lead programmers into some really pointless time wasting.

Neat and tidy.... 


The urge to "finish" something or make it "tidy" is a seductive beasty!  It can lead you to make beautiful things... or take you to crazy places.

I find that neatness in coding is important so you can stay on top of the project... but there is a slippery slope beyond that point where neatness for the sake of it becomes a procrastination exercise (he types on a blog....laughing ironically as the keystrokes land...)

Basically, there is no limit to how far you can take neatness in coding.... I think the best advice is to go the other way... encourage messiness to the point you cannot function... then take a tiny step back. Minimize how much extra time you need to spend on the housekeeping...

This ties in with our human pattern recognition systems, so it can be a bit of a two-edged sword (triple-edged swords just mess up this metaphor... so just don't...) in that it can be very valuable but can also lead to the dark side.

Neatness in the code allows us to scan repeating structures in ways that don't involve fully conscious "thought".  This can be a really valuable behaviour; it has personally spotted issues for me more times than I could guess.  However... the other side of this is that it only works with repetition.  To get maximum value, you need repetitive patterns.  Once these patterns get larger... you start to run into the 'code smell' that DRY (Don't Repeat Yourself) warns about.   So, I think it's probably best used for "small stuff": how I order operators, how I use whitespace in a statement, the way brackets are laid out (this may be why code formatting is such a universal issue for code jockeys)... but once I start to see repeating blocks of statements... it's time to refactor or shop for a red lightsaber. 
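To illustrate the threshold I mean, here is a made-up Python fragment: the first few lines are the kind of small, aligned repetition the eye scans for free; the second part is what repeating blocks should collapse into once DRY kicks in.  All the names and numbers are invented for the example.

    cfg = {"width": 1024}   # toy config so the example actually runs

    # Small, aligned repetition: cheap to scan, the odd one out jumps off the page.
    width  = cfg.get("width",  800)
    height = cfg.get("height", 600)
    depth  = cfg.get("depth",   24)

    # Repeating *blocks* are the smell DRY warns about - factor them out instead.
    def summarise(name, values):
        total = sum(values)
        return "%s: total=%s, mean=%.1f" % (name, total, total / len(values))

    print(summarise("widths",  [width, 640, 320]))
    print(summarise("heights", [height, 480, 240]))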

I think there is a bit more to think about in this area of pattern recognition in coding, but that's another post for another day.

In summation your honour... 


Estimating is always easier in hindsight.  Knowing when to let go of a task is an NP-hard problem.  Having written all these ideas down, it's still not clear that there is any generalisable wisdom in any of it.  But I feel a bit clearer having articulated some of the issues, which is really the point of writing it down....








Friday, April 17, 2015

Automated Passenger Aircraft

In light of the Germanwings crash, I have just seen proposals for remote control of passenger aircraft.   Almost choked with laughter (at the solution... not the horrible problem).

Lol.... Called it!

See my random rant from 2013... http://stratasphere.blogspot.com.au/2013/04/plane-hacking-or-not.html

Ok, work with me here. 

1)  Plane with a single pilot who goes bad can crash/hijack/fly into towers etc..  

2) Plane has two pilots (captain and co)... one goes bad.  The other is supposed to take control (physically... 50/50 chance I guess, unless the bad one did some forward planning)... same result in some cases.

3)  Plane has two pilots who collude and go bad together.....

4) Plane has one remote pilot.... who goes bad.  Result... whatever the pilot wants.

5) Plane has multiple remote pilots who collude and go bad together....

6) Plane has one pilot and one remote pilot.... they disagree.

7).... other permutations


These are all examples of the two clocks problem... which is essentially a trust problem.  I have had to explain this to so many students I can do it in my sleep now. Not sure where I first heard it, but it was a while ago.  I have repeated it and mangled it so much that it's probably not even recognisable.  However it is very useful for instrument design and trust in black boxes.

My version of the two clocks problem

So, on old-school sailing ships they used to calculate their map position using clocks as a reference.  The captain would wind the clocks every day and, by plotting angle and speed against the clock's time, he could fairly well calculate his position on the ocean.  However, clock-making technology was not always wonderful... which resulted in clocks running slow, running fast or stopping. 

What do you do if you are out on the ocean and the clock fails?  You're lost!

So one solution was to have two clocks on board.  Wind them both at the same time, keep a log of any time differences, keep them in the same place so the conditions were the same etc.(Control your variables) 

What  if you wake up and the two clocks disagree? Which one is right?  (Which do you trust?)

So the solution is to have three clocks.   If at any point the clocks disagree... the odds are that only one will fail at a time, so the other two should agree and you can reset the bad one (and make some notes in the log about not to trust the shoddy clock).
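If it helps, here is a toy sketch of the voting logic in Python, with made-up readings and a made-up tolerance.  Two readings can only tell you that there is a conflict; three let you out-vote the odd one out, assuming only one clock drifts badly at a time.

    # Readings are hours in decimal form; tolerance is how far apart two
    # "agreeing" clocks are allowed to drift. All numbers are made up.
    def consensus(readings, tolerance=0.05):
        """Return (agreed_time, outliers), or (None, readings) if no majority agrees."""
        for value in readings:
            agreeing = [r for r in readings if abs(r - value) <= tolerance]
            if len(agreeing) > len(readings) / 2:
                outliers = [r for r in readings if abs(r - value) > tolerance]
                return sum(agreeing) / len(agreeing), outliers
        return None, list(readings)

    # Two clocks: all you can say is that they conflict.
    print(consensus([12.00, 11.20]))                         # -> (None, [12.0, 11.2])

    # Three clocks: the odd one out gets voted down and can be reset.
    agreed, bad = consensus([12.00, 12.01, 11.20])
    print("consensus %.2f, reset these: %s" % (agreed, bad))  # -> consensus 12.00, reset these: [11.2]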

The point being that in a trust game... you cannot differentiate between two conflicting but trusted positions.  How do you know which pilot is bad? 

So is the solution to have three pilots on each plane?  Remote pilots? Robot Pilots?  Which do you trust?

The problem is not that you cannot design a robust system... the problem is that a robust system will appear to be inefficient while it's operating.  The pressure to cut costs will always be a factor in free-market economics... so any system with two redundant parts will eventually be simplified down to one.  Simply because nothing bad ever happened.....

Keep in mind that trust is a dynamic between components... it's not a property of any single component.  This is the opposite of the profit principle... which says cost is a factor of each component, and reducing cost is so easy....


Duncan's first law of system design

An economist will always fuck up any given system they are given control over.  

Why?  Because their minds do not work right.  They suffer the human frailty of trying to simplify and generalise based on perceiving repeating patterns in their perception of the system.  This lets our mere mortal brains make sense of overwhelming complexity.  It gives them the idea that they can "optimise" or get efficiencies... but remember that their perceived patterns are based only on the amount of observation/data they have access to, rather than a complete mapping of all possible cases.


The secret of any trust system is not preventing it from getting into a conflict situation, but designing for the inevitable undesirable cases and having an elegant way to get out of conflict.  (Different to risk mitigation, which is an economist's way of trying to cope with edge cases)

If the air safety groups were not economists, they would design a flight system that could be flown safely by a suicidal pilot.  But once they start thinking around that corner... they cannot be economists any more.

The economist mindset will always try to eliminate/replace/fix the "bad" component in the system and assume that everything else will remain the same.  This is such Newtonian thinking.  The universe is not a giant clock.

Bad is very very very relative....

Imagine an aircraft flight system that could be flown by a healthy happy pilot, a suicidal pilot, a hijacker or a child.... all in total safety for the passengers.  Once we crack that, we are probably ready to call aircraft "safe". Is this the same as driverless cars?  Are there still situations where the skill of a "human" is our last hope?  (Probably, given the state of hardware system design....)

The point being that the passengers should be safe even when the system is in a failure state.  Why are there no "eject" seats?  Why no ability to separate the passenger compartments/pods and fly/land them with chutes or whatever?  If you were designing another system with a massive single point of failure (pilots and cockpit) you might think about some sort of backup... but aircraft designers seem to have some sort of catastrophic inability to learn from any other industry.....

Moving on....

Thursday, April 16, 2015

Thought Viruses

I have been doing some reading about "Narcissistic personality disorder" and "Borderline personality disorder" for various reasons.    While there are all sorts of aspects to these conditions, they are essentially a set of "thought patterns" which are expressed as a grab bag of symptoms of varying intensity by the victim. 

The key point being that these "thought patterns" are "trained in" by an abusive parent (usually).  They do not stem from a physical injury or anything else.  Essentially the child is fine beforehand, then afterward is broken by exposure to the parent's condition.  Similar to PTSD.  

The complex issue that I keep coming back to is that these particular conditions are repeated down the generations unless interrupted during transmission.  The condition is self-replicating and self-maintaining.  I.e. a parent with a mild case of NPD can damage a child who manifests a strong case of NPD... so the condition does not "weaken" over generations.  

To me, this is a perfect example of a "Thought Virus".    The same sort of pattern seems to happen with bullying (although I have not read as much about it) where the victim of a bully may go on to become a bully themselves, thus replicating the condition. 


I'm sure there are a bunch of these kinds of "thought patterns" that are transmitted from generation to generation. Some we call "wisdom", "habits", "myths", "family culture" etc.  But like all symbiotic relationships, it's the negative ones that get called names.  


The interesting aspects of these constructs are that:

A) If they can be given to someone, they can be taken away. (More or less cured... in theory.  This ignores any damage done while the victim was carrying it, which could be substantial)
B) Can we develop an immunisation for this crap that will remove it from society?
C) Is there a liability for society in allowing parents to infect their children with this kind of negative thought pattern?  If so, should society identify and treat before the infection can jump to the next generation? 
D) These things can only work in a particular context.  NPD depends on isolating the victim(s) and creating and reinforcing an alternative "reality bubble" around them.  Can this be defeated simply by not leaving infected parents alone with un-infected children? (This is probably the cure for all sorts of shitty parenting....)
E) These kinds of thought patterns could potentially be transmitted to artificial intelligence and back again.  Something to think about... 


Once you start looking at thought patterns as a transmittable "thing" you start to see all sorts of passive and active mechanisms that may be playing large or small roles in this process.  I have seen a few articles of the "Playing computer games is re-wiring my brain" kind of thing.  I know logically this is true... I have just never considered the full extent of this kind of massed, repetitive rewiring.  It's funny to see social conventions emerge and propagate on game message boards... and then to see them make the jump into social memes... but in reality this is how society has evolved... a collective set of thought patterns that are self-sustaining, transmissible and self-reinforcing.  Keep in mind that all thought patterns are emergent (random, chaotic and useless until reinforced by utility); it's easy to think of "intelligence" as one big virus.....

Once you conceive of the human brain as simply a virus laden organ which can be infected by other viruses.... it gets weird. 

But as a mechanism to explain intelligence, thought, society etc... it's pretty neat and tidy.
 

Wednesday, February 18, 2015

Why Solar power systems suck


I have just finished my yearly research into the current solar schemes and system prices, and I have come to the conclusion that it's a big scam and will continue to be for some time. 

Let me be clear that I am talking about urban, grid-connected systems with easy availability of grid electricity.  This is not about remote area power or fridge power (where the quality of the grid is a bit wobbly).

1) Pricing

The pricing model for solar systems is always pitched in comparison to the grid.  The sales people are simply trying to "beat the grid" rather than pricing the systems based on any intrinsic quality of the system.  Usually they are only trying to be "just" better than the grid price.  The other problem is that the way they try to calculate that price is by comparing an upfront cost against a "best guess over time" cost.  This all gets pretty wobbly once you start trying to guess what your usage will be in the future, what the grid price will be in the future, etc.  

2) Power output

The models used to predict power output from a solar system are pretty simple.  Amount of sun per day, number of sunny days for your region, potential output from the panels, decline in panel output over time, time of day you want to use the power.... but you try getting a straight answer from a salesman.  Once you do the math from first principles it's not rocket science.  But you also realize just how little actual usable power you will have at any point in time. 

Then you realize just how much power you will be returning to the grid for free; which leads to the realization that you will be buying most of it back at 6 times what you were paid for it.
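To make that concrete, here is the sort of back-of-the-envelope arithmetic I mean, as a Python sketch; every number is an illustrative guess, not a quote from any real system.

    # All numbers are illustrative guesses, not quotes from any real system.
    panel_kw       = 4.0     # nameplate array size, kW
    sun_hours      = 4.5     # average effective full-sun hours per day
    derate         = 0.80    # inverter, temperature and soiling losses
    self_use_frac  = 0.30    # fraction of generation used while someone is home
    grid_price     = 0.30    # $/kWh you pay the retailer
    feed_in_tariff = 0.05    # $/kWh they pay you - roughly a sixth of the grid price

    daily_gen = panel_kw * sun_hours * derate      # ~14.4 kWh/day on paper
    self_used = daily_gen * self_use_frac
    exported  = daily_gen - self_used

    daily_value = self_used * grid_price + exported * feed_in_tariff
    print("Generated %.1f kWh/day, worth about $%.2f a day" % (daily_gen, daily_value))
    print("Exporting %.1f kWh earns $%.2f; buying it back in the evening costs $%.2f"
          % (exported, exported * feed_in_tariff, exported * grid_price))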

3) Storage

Storage is the only thing that makes a solar system make sense. But once you look at the additional cost and risk it just gets stupid quickly.

The storage systems that are available are expensive, high risk and high maintenance.  You require yet another box on the wall to manage the storage, which increases system complexity and risk.  Trying to do anything like a gravity battery or hydrogen system is a joke.  There is just nothing viable unless you go for old school battery banks with the associated problems. 

4) Risk

Take 15 different pieces of equipment, wire, frames and a connection to the grid and figure out the failure rate of them, the cost to replace, downtime, potential side effects of collateral damage in the event of catastrophic failure, the additional risk of high voltages floating around the house, and the general issues of dealing with more trades, small businesses and 15 to 25 year warranties from companies that change their names every two years or only supply under short-term contracts....  It's just a massive pile of risks that are difficult to mitigate.  Some are physical risks, some are economic, some are reliability risks.... and they are all your problem once you buy the system.

About the only way that you can mitigate some of these risks is to buy insurance (or add the system to your existing policy).  However, as we are again playing guessing games with the future, I think it's hard to know if the insurance will actually cover all these issues.  Even if it does now, it's possible that this could change at some point during the period you own the system.... yet another risk.  

5) Investment return

Honestly the investment returns I have seen are rubbish.  Depending on how you massage the spreadsheet and how much wishful thinking you inject you can get a flat payoff period somewhere between 6 and 10 years.  IF you play the system very hard. 

If you live in the real world and work outside the home during the day... then you are pretty much screwed.  Unless you can either store or use the power during the day, your ability to recover your initial costs is seriously diminished. The power going into the grid during the day will not be nearly enough to offset what you use during the evening.  

This is yet another risk that is not disclosed in any of the literature that is easily available. Your system dictates your lifestyle.   
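To put rough numbers on the payback claim above, here is a naive flat-payback sketch in Python; every figure is a guess for illustration, but it shows how hard the "who is home during the day" question hits the payback period.

    # Naive flat payback; every figure is a guess for illustration only.
    system_cost         = 6000.0   # installed cost, $
    savings_out_all_day = 650.0    # rough annual bill reduction when most output is exported at ~5c
    savings_home_by_day = 1000.0   # rough annual reduction when most output is self-consumed

    print("Out all day:  %.1f year payback" % (system_cost / savings_out_all_day))
    print("Home by day:  %.1f year payback" % (system_cost / savings_home_by_day))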

The return from the gov rebate or a grid feed rebate is always going to be a game run by the big players.  YOU CANNOT WIN.  There is no interest in making it a fair game. Even if it was, it would still be a game of SCALE.  Big generators will be able to get efficiencies that small players cannot.  The overheads will always push small producers out of volume markets. 

6) Reliable data

Just trying to get hard enough data to differentiate between two products in the solar market is way too hard. It's just bullshit.  Try differentiating between two inverters based on anything other than the published price and the colour of the box... you have virtually nothing to go on.  I have not found anything like an independent testing body or any useful data to base a decision on performance.  Even then, the sales people will finally admit that once the system is installed they will need to "tune" it to get it to perform adequately.  This may continue for a year or more.  (So for the first year or so of your system's lifetime, it may not work properly.  Should this be part of the product disclosure?  What effect will this have on your payback period?  Does this add more costs?  What is the risk that they will never get it working "adequately"?)   

7) Inflexibility

Once you commit to purchasing the system and all the associated unknowns... you are stuck with it for the future.... If you have had a look at the resale value of an installed system... unless you sell the house with it installed, it's going to be a lot smaller.  Keep in mind that you need a sparky to extract the system and a way to deliver it to the new owner.  Then you have the loss and liability involved in selling something. (Check consumer law) So, in summary, there is no cheap way to change your mind without taking a bath.

Now think about the technology. The panels degrade over time and the inverter is a computer. The batteries do not get better with use.  Everything gets much less valuable the longer you own it.  At some point everything you have purchased will reach a zero-value point and you will need to dispose of it.  The batteries are actually the only thing that will have much value at that point, as the lead will still get a good price from the scrap metal dealer.  Old panels might be able to be sold as-is, but it will take a sparky to evaluate them, so no one will be buying them on sight.  This will add an overhead to disposal unless you simply sell them for scrap value (about 1 kg of aluminum and some steel), or about $10 for a panel that cost you $500 new.

So, if you look at the system as an investment, it's a very rigid deal.  You either stick with it for 15 years or you lose. And your chances of getting any sort of pay-off are minimal.  Your best hope will be to get your money back, but after 15 years of indexing at between 2 and 4%... it's going to be eaten up either way.

8) Ideological Bullshit

The amount of lying involved in the solar debate is just gobsmacking.  Both lies of omission and lies of commission.  Once you get past the crap, the only substance left is the ideology.  The idea that in some existential way, solar systems are somehow better.  They produce less pollution (check the production systems and the factories in China, the labour conditions, the social systems that support all this, and finally the exploitation required to keep the labour cheap), then make the ethical arguments...

From a technical standpoint I like the idea of collecting solar energy, but the technology is just not here yet to store it effectively (hydrogen) and recover it rapidly (fuel cells) without having a massively parallel system that is outrageously expensive and a maintenance headache. The inefficiencies in the available technology just make it impractical.

Hopefully this situation will change over time, but for now, the grid is cheap, efficient, low risk, ethically neutral and very low maintenance.   Economy of scale is very hard to beat.




Tuesday, February 10, 2015

Updating my web editor - or the great dreamweaver debate....

I have been using Dreamweaver CS4 since... a while now.  As I was only doing a bit of simple web GUI work, mostly tricky client-side JavaScript but fairly basic hand-coded interfaces... CS4 was fine.

I have recently landed a whole pile of projects that are very GUI dependent: active websites and quite complex web apps.  Trying to hand-code all this stuff is just painful. 

So, this got me looking at my tooling. What I wanted was a fairly high level way to compose the GUI and then good support for the inevitable coding under the bonnet.  Dreamweaver is a nice editor but the CS4 designer is just broken on modern coding styles.

Thus began the quest to get something better.

The starting point is just to upgrade to the latest version of Dreamweaver... but Adobe have continued their mission to make their software both ridiculously expensive and even more entangled in their ecosystem. 

Firstly, I don't have a problem with paying for software.  I recognise that Adobe make a simply exceptional product line and the amount of working code that you get for your money is gobsmacking.  That being said, their pricing model is just mean unless you are a full-time production user.  (I also recognise that they are the most pirated software... ever.  Which I think might explain a bit of the "meanness" in their attitude.)  Anyway.... the pricing was a bit of a shock to the system. 

The level of entanglement in their online ecosystem and the frailty of that system is a bit worrying.  Honestly, it's not very worrying... just a bit new, and it took a bit of time to think about. (Again, there was a hint of meanness in the attitude from Adobe customer service about their handling of service outages... but that could be just me projecting onto what I have read) 

However, in summary, I suspect that the monopoly position that Adobe has occupied for some years is getting ingrained as arrogance throughout the organisation.  This should be a cause for concern among its management.

Anyway, back to the problem.

After some thrashing around and getting my head back in the game of current generation web design I started looking at the contenders.

My first realisation was that working at the HTML/CSS level is a mug's game.  It's just not feasible to hand-write a current generation web app.  (It is, but you are not going to get much done and it will consume your life.) There is no way to avoid frameworks now.  It's just not even a question.  The only question now is which framework(s) you choose to use for the projects.

That put me onto a side detour to play "pick a framework".... talk about down the freak'n rabbit hole!

As I have dabbled in a couple of back and front end frameworks before, I had some starting places.  After a really messy period of reading and test playing... I settled on:

PHP (4-5 frameworks still to be refined down) for the server side
Angular.js + bootstrap for the client side

Why did I pick these?  In the end it was mostly because those were the ones I liked best and the tutorials were pleasant.  (Honestly, that was about it.  I stopped looking for compelling reasons as it was all bullshit.... the tradeoffs are just too many and varied for me to make an informed choice yet... so I just stopped trying to "be right". I will build a couple of projects and then re-evaluate as I get more literate.)

In the time since I made those choices... even more choices have come to light... its just a mad mess out there.  (Which supports the main theme of this train-wreck of a post)

Editors.

To recap, I wanted an editor that would allow me to do high-level assembly of pages and support the inevitable coding with all the bells and whistles of coding editors like Visual Studio (big ask, I know).

Drag and drop, WYSIWYG designer, syntax highlighting, project management, debugger... etc.

Coffeecup HTMLEditor (and other tools)

This is an interesting little suite of tools... but they are almost novelty tools.  The editor is pretty but has no support for backend coding.  Just look at its JavaScript handling.  This is dead technology from a decade ago.

Ignore these tools. There is nothing here anymore.  No.

Amaya

Just don't bother. It's an academic toybox.

No.

Kompozer

Yet another web editor from the past. WYSIWYG....bullshit.  This is a very competent editor for about 10 years ago.

NO.

Webstorm and PHPStorm

Code editors without any GUI designer. Very nice code editors which should be rolled together... but I want a GUI designer.

High quality code tools... no help for the design stuff.  Could be a contender if they ever get the graphical nature of the web.

No. 

Microsoft Webmatrix

Yet another in Microsoft's long line of half-assed attempts to almost commit to having a web development toolbox. FrontPage, ASP, Expression, Metro, Silverlight, blah blah blah.  I just cannot understand why they always manage to totally stuff it up so completely. Get the politics out of the tools, you idiots!  I have a lot of trust issues with MS and this is not giving me any warm feelings.  It looks like yet another abandonware project that has no commitment behind it.

So, this is a No from me until the whole company grows the fuck up.

Microsoft Visual Web Developer (Express, Pro... whatever)

Already dead by the time I got there.

No.

Microsoft Visual Studio with plugins 

Only if you want to use MS backend tech.  And you are happy with really shit rendering of the page you are working on....

No.

Microsoft Expression Web

Go the fuck away MS.

NO you fucktards.  Commit to something and stop fucking around or just die and get out of the road.  Do you have any idea just how much effort has been wasted on your endless inability to commit?

Google Webdesigner

Almost a mirror image of Microsoft's poor effort.  In-house tool that is not getting much love. Beta feature set. 

No. No. NO.

There are more...

BlueFish, Open Bexi, Namo, Aptana, Text Mate, Blue Griffon, Top Styler... etc.  Many are good code editors.  None that I spent time looking at are good GUI designers. It all became a bit of a blur honestly... I may have missed stuff, but the fatigue of trying to honestly evaluate this many similar bits of software and seeing the same repeated patterns of failure, the same decade-old model, got a bit depressing. 

However, it did bring the problem into sharp focus.

I also think that the buffet of half-assed editors should sound a real warning to the industry.  The tools are lagging well behind the needs of the people building this generation of web-apps. 

The editors are not keeping pace with the frameworks. Very few of them are what I would describe as "framework aware".  There are some that have "plugins" to handle the framework classes and make their syntax checkers work with your framework of choice.  This is a BIG PLUS in their favour... but it's a trick that has been around for years.

In my survey, I did not find many editors that were able to do "high level" manipulation of common stuff like Bootstrap.  The fact that Bootstrap is fragmenting and evolving as soon as you look hard at it is true of all the frameworks.  It must be a nightmare for the groups working on these editors.

Support for big-ticket middleware frameworks like jQuery or Angular was also a bit underwhelming.  What I did find was still very "low level"... essentially... "here's a nice code editor"... hand code it yourself.

The other trend I noticed was the reliance on external browsers to render a "preview" of the page. This is good. Any serious web editor needs this... but it's not a high-level design tool.  There is no drag... no resizing of grids... no direct editing of properties or adding of directives.  This is just a better preview system.


This is bullshit. 

So back to Dreamweaver. 

Get a trial of Dreamweaver CC.  Start testing.  Jaw hangs open in shock.  Fuck me!  All the problems I have found in "the other packages" are in Dreamweaver too.  (Except the design surface is a bit better)

There is still really crappy awareness of frameworks and middleware.  You still have to crack open the code editor to get much done. The GUI designer still plows the code into its own way of thinking.  Honestly, it's hard to say that this is any sort of step upward from CS4.  (Lots of stuff is better... but the high-level productivity is not really there.)

This is so broken. 


Where to go from here?  

 Time to change the way I'm thinking about the problem. Certainly time to review my expectations. 

My (new) expectations....

Productive IDE for code and project management.  Needs to be FTP/server aware for rapid round-trip testing.  Should have easy round-trip local browser preview for different screen/device configs. 

Ability to plug in frameworks and packages.  I have seen a bunch of package management systems and none are pretty.  But the idea is solid.  

The browser based debuggers are really where the state of the art is at.  Get used to them. 

Generic GUI design surfaces/toolboxes are dead.  I struggle to imagine how they can become relevant again unless they pair with a specific set of frameworks and widgets and work at being very aware of their conventions, models and idiosyncrasies.  (Almost back to the Microsoft model... *vomit*rage*rant*blame*cry*acceptance*)

A CSS designer is a different beast.  I think this is now essential in a modern web editor. This, at least, Dreamweaver has... but I am still working on being happy with it.  There has to be a better way.  LESS and Sass are now essential to making CSS manageable.  These need to be supported and compiled/minified at runtime.

Database management, SQL code generation, PHP etc... templates?

Templates were great back in the day.  But unless I start building static sites like it's 1995 again, I think that tech is gone.  Generating a page using JavaScript on the client side is much more valuable to what I need to do now.

The more I google, the more I find that other people have traveled the same road ahead of me.  The more I find that their outcomes are .. variable. 

http://www.theopensourcery.com/keepopen/2014/replacing-adobe-dreamweaver/


The way I read it, there are two camps. Those who are looking for a "better" Dreamweaver. That is, a better graphical editor for making static pages.  This is the graphic designer's mindset.  They need visual tools with high-level control.

The second camp are people who need to build current generation sites, and for that Dreamweaver is irrelevant (and has been for some time).  This is the programmer's mindset.  They need a web-focused IDE.

At the moment I cannot honestly say there is any replacement for Dreamweaver for the designers.  There are a couple of interesting things emerging... but nothing compelling. 
The interesting thing with this is how Adobe themselves have changed course.  Their "Edge" suite has already lost its code editor package in favour of Brackets, with some glue code to allow export from PSD files.  This looks very like a tool path for "Designers" and basically replaces the "visual" design tools in Dreamweaver (which have not really progressed in the past decade).

http://designbump.com/top-alternatives-to-adobe-dreamweaver/


For people who are coding websites.... there are a slew of contenders.  Most are solid IDEs, but the better ones are tuned for web dev and integrated with browsers and remote debuggers etc.  

The best of the crop are going to be the ones that make it easy to work with the frameworks, CSS, LESS, HTML5 Canvas and the sound APIs, while still being able to hack on a database and remote debug JavaScript.  This stuff is beyond a standard single-language IDE.  Even IDEs that can handle the syntax of multiple languages struggle; a web app is a ball of multiple languages all mushed together.  The fact that there is a buffet of options just makes it a really ugly problem.



Brackets

This is the first editor that I have seen that seems to get the idea that CSS and HTML are no longer separate.  (I don't mean inline styles)  Being able to directly step between the HTML and the CSS file is a big deal... being able to step to the LESS file is even more of a deal.