http://labs.vectorform.com/2011/10/the-impact-of-apple%e2%80%99s-siri-release-from-the-former-lead-iphone-developer-of-siri/
More background on voice control...
http://searchengineland.com/head-to-head-siri-vs-google-voice-actions-96998
Analysis of what Siri can and cannot do, compared with Google Voice Actions...
Tuesday, November 8, 2011
Microsoft TellMe UI
http://www.microsoft.com/presspass/Features/2011/aug11/08-01KinectTellme.mspx
This is an interesting article on the voice interface for the Xbox... which I see more and more in my future.
Labels:
Game Design,
Interface Design,
Programming,
Research Resources,
UX,
Voice Control
Monday, July 18, 2011
Two types of interesting
http://spectrum.ieee.org/static/hacker-matrix/
The content in this piece is interesting, but there are a couple of other novel aspects to it as well.
The first aspect I found interesting is that this is a novel way to present a literature review on a particular topic. Displaying a simple graphic summarising the literature is both elegant and engaging. It prompts me to ask: "Do I agree with the placement of the nodes in the graph?", "What are the extremes in the graph? Can I think of anything more extreme?"
The second aspect is that making the graph clickable turns it into an interesting sorted menu (in two dimensions) of topics I may wish to examine further. Again, this is novel and engaging.
Good information design, presentation and analysis, all rolled into one simple graph.
Labels:
Data Analysis,
Interface Design,
Research Resources
Tuesday, July 12, 2011
Degrees of abstraction in UI design
Just reading the help file for a particular piece of research software (which will remain nameless). The point is that the help file, and the UI it describes, operate at a very fine level of granularity. Each of the controls does some tiny, cryptic operation that only makes sense to the original programmer/designer. So to configure one high-level function, you need to manipulate a whole slew of tiny, fine-grained controls that by themselves are semantically meaningless and completely logically disconnected.
Don't get the idea that my software is so much better; the very fact that this is a bit of a revelation to me probably means I have been committing the same crimes... (Probably? Definitely!) But admitting you have a problem is the first step....
So how would I do it better?
Well, for starters, rather than having many low-level settings that can be used to construct a higher-level abstraction, start with the high-level abstraction (like a template or profile of settings) that can then be modified if required. This provides context for each of the settings, allows the user to build a mental schema easily, and then lets them modify it and derive variations. The system becomes much easier to understand, and it gives the developer a simpler way to cater to the needs of the users.
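As a minimal sketch of the template/profile idea (in Python; the profile names and settings here are invented for illustration, not taken from any particular package):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TrialProfile:
    """A named, high-level bundle of low-level settings."""
    duration_ms: int = 500
    contrast: float = 1.0
    jitter_ms: int = 0

# Named templates give the user a semantic starting point...
PROFILES = {
    "rapid-presentation": TrialProfile(duration_ms=100),
    "threshold-test": TrialProfile(contrast=0.2, jitter_ms=50),
}

# ...and variations are derived by overriding only what differs,
# instead of assembling every fine-grained control from scratch.
my_condition = replace(PROFILES["threshold-test"], duration_ms=250)
print(my_condition)  # TrialProfile(duration_ms=250, contrast=0.2, jitter_ms=50)
```

Each setting is now read in the context of a meaningful whole, and a variation documents itself as "threshold-test, but with a 250 ms duration".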
The other issue is documentation. By working from the high-level abstraction down to the low level, it's easy to build the user's mental schema. However, it's hard when the task the user is trying to accomplish is not like the high-level abstraction you are using as the basis for the explanation. So in the case where the research software is a kind of "build an experiment" toolkit, it's important to do both: provide a number of high-level case studies to communicate the context that all the settings and bits fit into, as well as a low-level description of each individual component and how it might interact with every other component. Easy... lol.
Labels:
Interface Design,
UX
Monday, June 6, 2011
Software Design cycle and cost to change.
Just thinking about some of the issues within the software design cycle and the cost of making changes.
The simplest start point for the design cycle might be:
1. Start working on the design.
2. Hack up a prototype (to help clarify requirements).
3. Show it to the client in some form.
At this point, what are the implications of the choices you have made, or of misunderstandings between you and the client?
I would suggest that at this point the cost of reversing a decision in your head is relatively small. (It probably has a few high-level dependencies that have to be refactored and untangled, but it's essentially still a clean slate.)
How many of the client's people have seen the prototype and talked about it (blogged about it with screenshots?) determines how deeply the ideas you have communicated become semantically embedded in the client's understanding of the project.
Now begin a cyclical design/build/show process. At each turn of the cycle, the client gets more engaged (you hope) with the semantics of the solution. What is the cost to add or remove these semantics? What's the cost to "tweak" them? What's the cost to "improve or clarify" them?
At some point the project moves into beta testing, and the cost to change semantics gets heavier. Adding new semantics is cheapest, changing or replacing existing semantics is complex, and removing existing semantics is the most expensive.
Once the project moves into general use by the client(s), the cost to change the semantics of the design is significant: time spent re-training, time spent learning new ways to talk about tasks and processes, time spent updating documentation and removing old references.
The only way we currently know to do this is to explicitly replace the semantics with a completely new semantic network (change the version number and attach the new ideas to the new version number).
So what's the idea here?
Like everything, once we abstract it, we can manipulate the abstraction.
We can rearrange the size of the costs, and when we have to pay them, by manipulating the way the semantics are exposed to the client and how "concrete" they are, slowing the rate at which the client absorbs them early in the lifecycle of the project.
Rather than describing a project feature with a specific name like "Print Receipts", use a vaguer abstraction such as "a feature to print receipts from a sales transaction". This reduces the specificity of the semantic network that can be built from the information and keeps it malleable. As you reduce the number of words it takes to describe an idea, the term becomes more and more semantically specific and takes on a more and more defined role in the semantic network that forms from the semantic soup of the whole design.
By keeping the semantic network loose and malleable, it is much cheaper to change. However, the cost in complexity of the network is higher, i.e. the client has less clarity about the semantics of the design. (Some good, some bad... depending on the client, I guess.)
That being said, you as the designer need a clear semantic network... so does this help you, or does it bog you down in vague language that you will be tempted to clarify and thus harden the design? Tradeoffs are a bitch.
Needs more thinking.
Labels:
Interface Design,
UX
Thursday, January 20, 2011
Interface design concept - Subjects and Tasks
Just kicking around thoughts on how my latest plowing of the interface layout and organisation has ended up. I now have a little more clarity on how I am organising access to the various features.
BTW, the app is an Access DB-based enterprise tool.
Basically, I had initially built very basic forms to work with particular elements of the data (think one form per table). With each table essentially representing an entity in the schema, it was a nice clean design. BUT it was focused on designing the UI around the schema, not around what the users wanted to do with the schema elements.
So it's a simple but unintuitive design that forces the users to understand and work with the database schema, rather than letting them build their own schema of their tasks. Not good! But still important: sometimes the users just want to mess with one of the entities (add a new entity, modify it, etc.). So I had added additional task-focused features that crossed the boundaries between entities to provide a more task-focused schema.
So my current thoughts are to build a UI based on a matrix of subjects and tasks.
The whole problem is the root node in the schema. Does the user understand the schema in terms of subject then task (find the "person" and modify them) or task then subject ("I want to modify some people, so I want the modify tool, and I apply it to some people")?
This seems to be a common pattern in my experience of UI design. Usually we end up with a mix of tools: some are task-first, while others are subject-first. The question is whether either ordering is always superior, whether both have their value, or whether one is consistently worse for some things.
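To make the two orderings concrete, here is a hypothetical sketch (in Python for brevity, with invented names; the app itself is Access-based, so this only illustrates the shape of the two entry points):

```python
class Person:
    def __init__(self, name):
        self.name = name

def modify(person, **changes):
    """The same underlying operation, whichever way the user reaches it."""
    for attr, value in changes.items():
        setattr(person, attr, value)

# Subject first: find the "person", then apply the task to them.
alice = Person("Alice")
modify(alice, name="Alice B.")

# Task first: pick the tool, then choose the subjects it applies to.
def modify_tool(**changes):
    return lambda subjects: [modify(p, **changes) for p in subjects]

anonymise = modify_tool(name="Anonymous")
anonymise([Person("Bob"), Person("Carol")])
```

The operation is identical either way; the question is purely which composition order the UI makes natural.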
I also wonder how all this applies when either the task or the subject is more complex: for instance, when a compound subject (multiple subjects glued together, such as an ad-hoc group of people that all need to be modified) has a simple task applied, and conversely when a simple subject has a complex task applied.
Another problem is composing a complex task from multiple simple tasks in a fixed order. The point of a UI is that the user can compose their desired task without the UI having to have a button that already does exactly that. So we end up with hundreds of simple task controls that the user can compose into a sequence to achieve their objective. The task units must be orthogonal in various ways, and it's nice if there is some discovery mechanism allowing try/fail/undo semantics, which make exploration and experimentation cheap.
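A minimal sketch of what such composable task units might look like, as a plain command pattern with undo (Python; all names are hypothetical):

```python
class AddTag:
    """One small, orthogonal task unit with an explicit inverse."""
    def __init__(self, tag):
        self.tag = tag

    def execute(self, record):
        record.setdefault("tags", []).append(self.tag)

    def undo(self, record):
        record["tags"].remove(self.tag)

def run_sequence(steps, record):
    """Apply task units in order; on failure, undo the ones already applied."""
    done = []
    try:
        for step in steps:
            step.execute(record)
            done.append(step)
    except Exception:
        for step in reversed(done):  # rollback makes experimentation cheap
            step.undo(record)
        return False
    return True
```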
Also, how can a user create an ad-hoc complex subject, both in parallel and in serial? In parallel, a group of "people" subjects might need to be modified at the same time; in serial, a set of "orders" might need to be updated in sequence, to reflect moving some "units" of stock from one order to another while still respecting business rules about not returning too much stock to the warehouse. In some systems this can be done by picking an arbitrary list from a multi-select list box or something similar (search filters, etc.).
So in an ideal system you would be able to compose either the task or the subject first, and then compose the other side of the activity second. Both sides need to support arbitrarily complex groups and sequences, applied in controlled ways, corrected if errors are encountered, rolled back if required, and saved for later if desired. Sounds like a macro system for the task and some sort of data description/selection language for the subject (SQL, perhaps?). All this in a nice, clean, high-level interface...
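Continuing the sketch above (and reusing AddTag and run_sequence from it), the "selection language for the subject, macro for the task" composition might look something like this:

```python
# Subject side: an ad-hoc complex subject is just a selection over records.
# (In the Access app this would be a SQL WHERE clause or a multi-select list.)
def select_subjects(records, predicate):
    return [r for r in records if predicate(r)]

# Task side: a saved "macro" is a named, reusable sequence of task units.
saved_macros = {
    "flag-overdue": [AddTag("overdue"), AddTag("needs-review")],
}

people = [{"name": "Alice", "balance": -20}, {"name": "Bob", "balance": 5}]
for record in select_subjects(people, lambda r: r["balance"] < 0):
    run_sequence(saved_macros["flag-overdue"], record)  # subject first, then task
```

Composing task-first would just mean building the macro before choosing the predicate; the pieces are symmetric.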
Thinking....
Labels:
Interface Design,
Rant,
UX