Sunday, November 27, 2011
http://www.insidegamingdaily.com/2011/11/10/the-elder-scrolls-v-skyrim-review/
There are some interesting feedback fragments in the comments on this review.
Monday, November 21, 2011
The Stylus as a UI
http://techcrunch.com/2011/11/11/in-defense-of-the-stylus/
This is an interesting post on the state of the art in stylus/screen/UI development.
Labels:
UX
Tuesday, November 8, 2011
Article on Siri
http://labs.vectorform.com/2011/10/the-impact-of-apple%e2%80%99s-siri-release-from-the-former-lead-iphone-developer-of-siri/
More voice control background....
http://searchengineland.com/head-to-head-siri-vs-google-voice-actions-96998
Analysis of what Siri can and cannot do along with Google Voice...
Labels:
Interface Design,
UX,
Voice Control
Microsoft TellMe UI
http://www.microsoft.com/presspass/Features/2011/aug11/08-01KinectTellme.mspx
This is an interesting article on the voice interface for the Xbox... which I see more and more in my future.
Labels:
Game Design,
Interface Design,
Programming,
Research Resources,
UX,
Voice Control
Philosophical post on UI design
http://www.practicallyefficient.com/2011/09/29/flat/
UI design, general philosophy. Need to read this again.
Labels:
To Read Again,
UX
Tuesday, July 12, 2011
Degrees of abstraction in UI design
Just reading the help file for a particular piece of research software (which will remain nameless). The point is that the help file, and the UI it describes, operate at a very low level of granularity. Each control does some tiny, cryptic operation that only makes sense to the original programmer/designer. So to configure one high-level function, you need to manipulate a whole slew of tiny, fine-grained controls that by themselves are semantically meaningless and completely logically disconnected.
Don't get the idea that my software is so much better; the very fact that this is a bit of a revelation to me probably means I have been committing the same crimes... (probably? Definitely!) But admitting you have a problem is the first step....
So how would I do it better?
Well, for starters, I think that rather than having many low-level settings that can be used to construct a higher-level abstraction, you should start with the high-level abstraction (like a template or profile of settings) that can then be modified if required. This provides context for each of the settings and allows the user to build an easy mental schema, then modify it and derive variations easily. It makes the system much easier to understand and gives the developer a simpler way to cater to the needs of the users.
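To make that concrete, here's a minimal sketch of the template/profile idea in Python. The profile names and settings are hypothetical, purely to show the shape; a real tool would use its own vocabulary.

```python
# A minimal sketch of "start from a high-level profile, then override".
# The profile names and settings below are hypothetical examples.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ExperimentProfile:
    """A named bundle of low-level settings with sensible defaults."""
    name: str
    sample_rate_hz: int = 1000
    trial_count: int = 20
    randomise_trials: bool = True
    log_raw_data: bool = False

# Ship a handful of high-level templates instead of a pile of bare settings.
TEMPLATES = {
    "quick_pilot": ExperimentProfile("quick_pilot", trial_count=5),
    "full_study": ExperimentProfile("full_study", trial_count=100, log_raw_data=True),
}

def derive(template_name: str, **overrides) -> ExperimentProfile:
    """Start from a template, then tweak only the settings that differ."""
    return replace(TEMPLATES[template_name], **overrides)

# The user reasons in terms of the template, not twenty disconnected knobs.
my_run = derive("full_study", sample_rate_hz=500)
print(my_run)
```

The point isn't the dataclass; it's that every setting the user sees already has a context (the template it came from), so a variation is described by its difference from something familiar.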
The other issue is documentation. By working from the high-level abstraction down to a low level, it's easy to build the user's mental schema. However, it's hard when the task the user is trying to accomplish is not like the high-level abstraction you are using as the basis for the explanation. So in the case where the research software is a kind of "build an experiment" toolkit, it's important to do both: provide a number of high-level case studies to communicate the context that all the settings and bits fit into, as well as a low-level description of each individual component and how it might interact with every other component. Easy... lol.
Labels:
Interface Design,
UX
Monday, June 6, 2011
Software design cycle and the cost of change
Just thinking about some of the issues within the software design cycle and the cost of making changes.
The simplest starting point for the design cycle might be:
1. Start working on design
2. Hack up a prototype (to help clarify requirements)
3. Display it to the client in some form
At this point, what are the implications of the choices you have made, or of misunderstandings between you and the client?
I would suggest that at this point the cost of reversing a decision in your head is relatively small. (It probably has a few high-level dependencies that have to be refactored and untangled, but it's essentially still a clean slate.)
How many of the client's people have seen the prototype and talked about it (blogged about it with screenshots??) will determine how deeply the ideas you have communicated become semantically embedded in the client's understanding of the project.
Now begin a cyclical design/build/show process. At each turn of the cycle, the client gets more engaged (you hope) with the semantics of the solution. What is the cost to add or remove these semantics? What's the cost to "tweak" them? What's the cost to "improve or clarify" them?
At some point the project is moved into beta testing. At this point the cost to change semantics in the project gets heavier. Adding new semantics is cheapest, changing or replacing existing semantics is more complex, and removing existing semantics is the most expensive.
Once the project moves into general use by the client(s), the cost to change the semantics of the design is significant: time in re-training, time in learning new ways to talk about tasks and processes, time to update documentation and remove old references.
The only way we currently know to do this is to explicitly replace the semantics with a completely new semantic network (change the version number and attach the new ideas to the new version number).
So what's the idea here?
Like everything else, once we abstract it, we can manipulate the abstraction.
We can rearrange the size of the costs, and when we have to pay them, by manipulating how the semantics are exposed to the client and how "concrete" they are, slowing the rate at which the client absorbs them early in the lifecycle of the project.
Rather than describing a project feature with a specific name like "Print Receipts", use a vaguer abstraction such as "a feature to print receipts from a sales transaction". This reduces the specificity of the semantic network that can be built from the information and keeps it malleable. Once you start reducing the words it takes to describe an idea, the term gets more and more semantically specific and takes on a more and more defined role in the semantic network that forms from the semantic soup of the whole design.
By keeping the semantic network loose and malleable, it's much cheaper to change. However, the cost in complexity of the network is higher, i.e. the client has less clarity about the semantics of the design. (Some good, some bad... depending on the client, I guess.)
That being said, you as the designer need to have a clear semantic network... so does this help you, or bog you down in vague language that you will be tempted to clarify and thus harden the design? Tradeoffs are a bitch.
Needs more thinking.
Labels:
Interface Design,
UX
Thursday, January 20, 2011
Interface design concept - Subjects and Tasks
Just kicking around thoughts on how my latest plowing of the interface layout and organisation has ended up. I now have a little more clarity on how I am organizing access to the various features.
BTW, the app is an Access DB based enterprise tool.
Basically, I had initially built very basic forms to work with particular elements of the data (think one form per table). With a table representing essentially an entity in the schema, it was a nice clean design. BUT it was focused on designing the UI around the schema, not around what the users wanted to do with the schema elements.
So it's a simple but unintuitive design that forces the users to understand and work with the database schema rather than being able to build their own schema of their tasks. Not good! But still important: sometimes the users just want to mess with one of the entities (add a new entity, modify it, etc.). So I had added additional task-focused features that crossed the boundaries between entities to provide a more task-focused schema.
So my current thought is to build a UI based on a matrix of subjects and tasks.
The whole problem is the root node in the schema. Does the user understand the schema in terms of subject then task (find the "person" and modify them) or task then subject ("I want to modify some people, so I want the modify tool, and I apply it to some people")?
This seems to be a common pattern in my experience of UI design. Usually we end up with a mix of tools: some are task first, while others are subject first. The question is whether one is always superior, whether both have their value, or whether one is consistently worse for some things.
I also wonder how all this applies when either the task or the subject is more complex. For instance, when you have a compound subject (multiple subjects glued together, such as an ad-hoc group of people that need to be modified) with a simple task applied, and conversely when a simple subject has a complex task applied.
Another problem is the issue of composing a complex task from multiple simple tasks in a fixed order. The point of a UI is that the user can compose their desired task without the UI having to have a button that already does it. So we end up with hundreds of simple task controls that can be composed into a sequence by the user to achieve their desired objective. The task units must be orthogonal in various ways, and it's nice if there is some discovery mechanism that allows try/fail/undo semantics, which allow low-cost exploration and experimentation.
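As a rough illustration (not the actual app; the Person record and the specific tasks here are invented), small orthogonal task units with try/fail/undo might look something like this:

```python
# A minimal sketch of composable task units with try/fail/undo semantics.
# The Person record and the specific tasks are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    department: str
    active: bool = True

class Task:
    """One small, orthogonal task unit that knows how to undo itself."""
    def apply(self, subject: Person) -> None:
        raise NotImplementedError
    def undo(self, subject: Person) -> None:
        raise NotImplementedError

@dataclass
class MoveDepartment(Task):
    new_department: str
    _old: dict = field(default_factory=dict)

    def apply(self, subject: Person) -> None:
        self._old[id(subject)] = subject.department
        subject.department = self.new_department

    def undo(self, subject: Person) -> None:
        subject.department = self._old.pop(id(subject))

@dataclass
class Deactivate(Task):
    _old: dict = field(default_factory=dict)

    def apply(self, subject: Person) -> None:
        self._old[id(subject)] = subject.active
        subject.active = False

    def undo(self, subject: Person) -> None:
        subject.active = self._old.pop(id(subject))

def run_sequence(tasks: list, subject: Person) -> None:
    """Apply tasks in order; on any failure, undo the ones already applied."""
    done = []
    try:
        for task in tasks:
            task.apply(subject)
            done.append(task)
    except Exception:
        for task in reversed(done):
            task.undo(subject)
        raise

# Low-cost experimentation: compose a sequence, try it, roll back if it fails.
alice = Person("Alice", "Sales")
run_sequence([MoveDepartment("Support"), Deactivate()], alice)
print(alice)
```

Each unit only knows how to do and undo its own tiny change; the sequencing and the rollback on failure live outside the units, which is what keeps them orthogonal and composable.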
Also, how can a user create an ad-hoc complex subject, both in parallel and in serial? (In parallel: a group of "people" subjects need to be modified at the same time. In serial: a set of "orders" need to be updated in sequence to reflect moving some "units" of stock from one order to another, while still respecting business rules about not returning too much stock to the warehouse.) In some systems this can be done by picking an arbitrary list from a multi-select list box or something similar (search filters, etc.).
So in an ideal system you would be able to compose either the task or the subject first and then compose the other side of the activity second. Both need to be able to compose arbitrarily complex groups and sequences, apply them in controlled ways, correct if errors are encountered, roll back if required, and save for later if desired. Sounds like a macro system for the task and some sort of data description/selection language for the subject (SQL perhaps?). All this in a nice clean high-level interface...
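For the subject side, here's a rough, self-contained sketch using an in-memory SQLite table as a stand-in: the WHERE clause plays the role of the subject-description language, a saved list of statements plays the role of the task macro, and a transaction gives the controlled apply/rollback. Table and column names are invented for the example.

```python
# A minimal sketch of "SQL for the subject, a macro for the task".
# The people table and the offboard macro are hypothetical examples.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, department TEXT, active INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [("Alice", "Sales", 1), ("Bob", "Support", 1)])

# Subject side: the selection language describes the ad-hoc group.
subject_filter = "department = 'Sales'"

# Task side: a saved macro is just an ordered list of parameterised steps.
# (String-building SQL like this is fine for a sketch, not for real input.)
offboard_macro = [
    "UPDATE people SET department = 'Alumni' WHERE {subject}",
    "UPDATE people SET active = 0 WHERE {subject}",
]

# Bring the two halves together inside a transaction so the whole composed
# activity either applies cleanly or rolls back if any step fails.
try:
    with conn:  # commits on success, rolls back on exception
        for step in offboard_macro:
            conn.execute(step.format(subject=subject_filter))
except sqlite3.Error:
    print("macro failed; changes rolled back")

print(conn.execute("SELECT * FROM people").fetchall())
```

In a real tool the user would compose the filter and the macro through the UI rather than typing SQL, but the shape is the same: compose either half first, then bring them together under a transaction so that errors are cheap.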
Thinking....
Labels:
Interface Design,
Rant,
UX
Friday, January 7, 2011
Designing the User around the Tool, or how software design is like a child's birthday party
I have been thinking about my design philosophy for a database system. (By db I mean a database-centric application. The actual scenario is not relevant, so ignore it except in the most abstract way of having users and some purpose.)
This thought relates indirectly to a post I read a few days ago by a programmer who had given their all to a project and had come to resent the users for not using the product the way it was designed and for doing all sorts of stupid things; his attempts to deal with them proceeded to turn the back-end code into rubbish as it tried to cope with user stupidity.
Anyway, there are so many things wrong with the point of view expressed in the post that it's hard to know where to start, but the point is not to attack the writer; it was just another seed for the idea I actually want to talk about.
The point I am trying to get around to is the idea of designing the tool to fit the user rather than trying to fit the user to the tool (training, using it the "right way", etc.)... seems obvious when I say it this way... but read on...
There is always a certain amount of tension between training the user to use the system and modifying the system to fit the users. There are economic arguments for and against both propositions, and they underpin various business, design and engineering decisions... however, all this aside, and ignoring the issues with the limitations of interfaces and lack of computing resources and all the other limits and assumptions that come to the design table... there is still a point worth making. And it is...
A data centric application is, at its simplest, a sheet of blank paper. This is the fundamental metaphor upon which a database application is built. We will call this Layer 0 of the abstraction stack.
Layer 0. Medium.
The user can write on it, they can change things and erase things and freely scrawl whatever they like. The paper is infinite in size and there are no limits about where or what or why. They can share it and let others draw on it as they like. Everyone's drawings are mixed together. Think of a sheet of butchers paper on a table at a children's birthday party. Hand around the crayons and see what happens...
Layer 1. Structure.
The next simplest idea is to begin applying some simple, helpful structure. Divide the paper into two areas with a single line (again infinitely long, etc.). Anything written on one side of the line has one meaning, and everything on the other side of the line has another. (The meanings are only in the head of the user at the moment; the paper does not "enforce" any meaning in any way.) The only useful rule in this case is that things should not cross the line or exist on both sides at the same time, as this would be ambiguous. However, the paper in no way enforces this concept, so it's still quite possible for the user to put their stuff wherever they like, line or no line.
Obviously, one can then add more lines to the paper and arbitrarily assign meaning to the areas that are starting to be defined. These fields have no intrinsic relationships, except that of mutual exclusivity due to the dimensional nature of paper. However, it's possible to create relationships using ideas like Venn diagrams. Once you reach the level of sets and set theory you're wandering off into abstractions that deal with other ideas and not the point I am working on... so back up a bit. Keep it at the simple level of a 4yr old child with a sheet of paper.
Layer 2. Limits.
Rather than drawing lines on the paper, cut it into pieces. It's now less possible to draw or write something that spans two areas. Simply by moving the two pieces of paper apart, anything over the edges loses coherency and is divided. This is the simplest concept of enforcing a limit within this metaphor.
The things on the paper are now "in" or "out". Venn diagrams with scissors don't really work out so well. The divisions become much clearer and unambiguous. It's still possible to write the same information or draw the same scrawl in two places, but they are two clearly and functionally separate places.
Another useful property is that the pieces of paper can be shared more easily. They can be passed around individually and manipulated separately. (I think I just invented the foundation of data security in the metaphor... but let's not rush ahead.)
Layer 3. Tools.
Ink versus pencil. One is permanent while the other is malleable. By introducing this idea, some drawings can no longer be erased or changed, while others are still free to be added, removed, changed, reworked, etc.
The concepts of read, write, edit, delete, modify, etc. have suddenly become more clearly defined. We are no longer talking about the surface on which the writing is done but the properties of the writing itself. How permanent is it? What can happen to it in the future? What has happened to it in the past? These concepts and others can now be determined from intrinsic properties of the writing.
Layer 4. Properties.
Color. This is an intrinsic property of the writing tool, but it does not carry any semantic meaning beyond what we ascribe to it. Put down a handful of colored pens and pencils at a party. Each child picks one up, and you can immediately start to determine the authorship of each potato-headed figure on the paper. However, every time they swap pens with someone else, the whole system collapses. So authorship is difficult to identify or verify from the data itself, no matter how much everyone wants to believe they can do it... it's just layers of "proof" based on nothing... there is no intrinsic property of the system that can identify authorship... without additional, external... something.
Try it at a party: be vigilant and write the name of each child next to their drawing, and get all the parents to do the same. Once you start this system, I figure it will be about 30 seconds before the children start writing their own names and the parents start helping them... then a few minutes later the children are writing each other's names and the parents have stopped supervising them... after that, how's your system going now?
So what's the point? Am I saying all users are like 4yr old children? Not quite; the value of this comparison is that it's a useful thought exercise. (The aggregate behavior of many "qualified" and mature staff in a large organisation is amazingly similar to the aggregate effect of 4yr olds at a birthday party... even before the red cordial hits their systems.)
The point is that a business system needs to be built from Layer 0 upward, not imposed with the expectation of the users being other than they are...
The other interesting point is that the layer model stops being useful pretty quickly as the ideas are not dependent upon each other in any order.... (might need to polish that idea a bit later)
So how does this gel into a useful idea?
Rather than building a system that is a mess of fields and data entry boxes and business rules, start with a sheet of paper and some pens and slowly add limits and boundaries that are realistic. Evolve the design by adding a simple concept rather than starting with complexity and expecting people to use it in the right way.
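If I had to reduce that to code, it might look something like the following sketch: start with a free-form medium and add one layer of structure, limits and tools at a time. The areas and rules are hypothetical, just to show the layering.

```python
# A minimal sketch of growing a data store layer by layer, following the
# paper metaphor above. All the areas and rules here are hypothetical.

# Layer 0 - Medium: a free-form scrawl; anything goes, nothing is enforced.
scrawl = ["call Bob about the order", "????", "meeting 3pm"]

# Layer 1 - Structure: draw a line; the meaning of each area lives only in
# the user's head, so nothing stops an entry landing in the "wrong" place.
note = {"area": "orders", "text": "call Bob about the order"}

# Layer 2 - Limits: cut the paper into pieces; now an entry must live in
# exactly one known area, and the system enforces it.
AREAS = {"orders", "people"}
store: dict = {}

def add_note(area: str, text: str) -> None:
    if area not in AREAS:
        raise ValueError(f"unknown area: {area}")
    store.setdefault(area, []).append({"text": text, "permanent": False})

# Layer 3 - Tools: ink versus pencil; some entries can no longer be changed.
def erase_note(area: str, index: int) -> None:
    entry = store[area][index]
    if entry["permanent"]:
        raise PermissionError("written in ink; cannot erase")
    del store[area][index]

add_note("orders", "call Bob about the order")
store["orders"][0]["permanent"] = True      # write it in ink

try:
    erase_note("orders", 0)
except PermissionError as exc:
    print(exc)                              # the limit now lives in the system
```

Each layer is one small concept added to a medium that already works, rather than a wall of fields and rules imposed up front.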
Damn, but this is probably one of the most rambling posts I have written so far. And you know what... it's working. I have turned over the ideas, tried to articulate them, and most importantly I have captured the raw flow as it developed.
These writings are not intended to be a polished paper, they are raw ideas that are bubbling around. I write here to capture them while they are fresh. I can come back and edit them later when they gel again.
Simply by letting the idea flow out illustrates part of the point of this post. People use their sheets of paper differently. The role of the designer is to accommodate that while finding ways to extract business value from that process.
This may be by allowing the content on the paper to be stored by machine, read by machine, shared with others, structured in small or large ways, edited, preserved, reported on, etc. Thus the role of the application designer grows, but at its heart is the idea of a sheet of paper that someone wants to write on. The balance comes from making the process of writing easy and fluid while still being able to extract the business value.
And then the rubber hits the road with all the force of a cheap special effect and you get hit by all the compromises and limitations of the environment that we work in. Price, time, tools, resources, compromise, decisions etc..... all the bullshit and complexity of software engineering that obscures the reality...
..... of that simple sheet of paper and the brightly colored crayons in the hands of a 4yr old.
Edit.
The following casts a different perspective on the same issue. Redmond Reality Distortion Field. Same idea, different source.
http://www.stepto.com/Lists/Posts/Post.aspx?ID=486
Labels:
Philosophy,
Programming,
Rant,
Software Design,
UX