Ideation around the six design values
In this phase of the project, the six design values are used as starting points to develop and explore ideas that meet the values set earlier. After conducting ideation sessions on each design value, the final concept will combine what can be done to address both the values and the user needs.
What natural input is about
The small size of mobile devices makes input a challenging part of the design and one of the downsides of the experience. From the user observations and the ideation sessions it was apparent that enabling users to provide input in a natural way would add great value to the experience. There are two aspects to natural input: one is inputting media such as text, voice, and pictures; the other is the way we navigate through the interface itself.
Input media: Text, voice, and pictures are the main media we enter into mobile devices. The small size of the device makes inputting text a challenge. The current solution is simply to scale down the keyboard, put it on the mobile device, and type on a tiny pad. Even assuming the intention is to input text in the format we have today, there are other ways that do not need a keyboard and could be done naturally. We need to get off the screen and use the whole device and what is around it.
UI navigation: Click, swipe, pinch, drag: these all come from the way we use computers, and computer interaction was developed when the technology was limited. I believe we can move beyond touch screens and buttons and use natural ways of interacting with the interface. The design of the interface and the way navigation works also need to be redesigned to some degree to facilitate this natural interaction.
Handwrite and draw on the device
The natural way of input
Using pen and paper was the most common way of inputting text until computers came along and changed the way we input text. Typing is not natural. Why not use a real pen to handwrite and draw on the phone, or go a bit more high-tech and use a finger to draw or write text on the device?
There are already technologies that detect a pen on any surface; we just need a screen that does not absorb ink, so users do not need to wipe it off. You can already draw and write on any touch screen today. What is missing is a UI designed for inputting longer texts and drawings.
Phone as pen
Using the device itself as input
What would be more natural than using the phone itself as the input device? How would it be if we could just write on any surface, or did not even need a surface and could draw things in the air? The freedom and the larger scale here are valuable qualities that would free users from that tiny keyboard.
Gesture recognition is far more advanced today than it was a year ago. Even now, any surface can be used for input with ultrasonic markers. When using the device as a pen, there needs to be some kind of feedback on the surface to make writing possible. This feedback could take the form of moisture on the surface or lighting that creates a temporary trace. When using the phone in the air, today's technology already allows for accurate gesture recognition.
Use outside the device
Out-of-device interaction
Using surfaces outside the device to write and draw makes it possible to move the hand and fingers naturally on a surface, without being attached to the device to provide input. Users can write and draw in any orientation, in any place and context.
There have already been prototypes that use the area outside the device for input and interaction, based on image recognition and movement tracking. It is possible to make this happen; it is just a matter of the time and effort that needs to go into this area.
The way we interact with the world
It might sound strange, but how often do you blow to get rid of dust on your desk or to move something out of the way? How would it be to use the same metaphor to actually interact with your mobile device? Or to use an even more natural gesture and scratch your phone, or shadow it with your hand? These last ones are especially valuable when it comes to privacy on mobile devices in public places.
It is simple: we just need to add some extra sensors to the device that can detect the user's natural actions on the phone. The only reason this has not been done so far is that it might not look cool, and we are used to pushing buttons and touching screens.
Natural way of interacting with the UI
Navigating the UI in a natural way becomes possible by looking at the device as something more than a screen that can detect touch. By using a flexible material for the device, the whole device becomes an input, and new natural interactions can replace what we use to navigate smartphones today. This kind of input also requires less accuracy than touching the screen, and there is tactile feedback when you interact with the object itself.
Physical objects to interact
Expanding the interface to the objects
Before screens came along, we used to interact with the objects around us. Technology has enabled us to bring computers into the real world and use more tangible interfaces. What if our mobile devices were smart enough to detect the objects we keep close to them, and we could use those objects to interact with the device?
There are many new opportunities when the interface expands beyond the screen and objects can be used to interact with the device. On many occasions the touch interface is not the most desirable interaction method. Being able to perform basic interactions with the device through objects it detects allows the development of specific applications where users get a physical interface on their mobile devices. This could range from a media controller in the car to a specific medical device.
What makes a device alive
There are two requirements for a mobile device to make users feel it is alive. One is that the phone can behave like a human: the user's interaction with the device becomes more human-like, the device has a personality, and it could even be seen as a pet. The other characteristic is that the phone is genuinely smart: smart enough to be aware of the user's intentions and the context, an awareness that makes users feel the phone really knows them, so that they are not disappointed by the phone doing stupid things.
What makes a phone really smart
Smart is a term used for phones that can do a little bit of what your computer can do. We need to redefine what smart means when it comes to smartphones. The three examples shown in these images illustrate scenarios that define what I mean by smart.
MAKE CALL: What if our mobile devices were so smart and aware that they knew when we pulled them out of our pocket and held them next to our ear, meaning that we want to make a phone call? Then simply saying the name of the contact would start the dialing and the phone conversation.
DON'T LOSE ME: What if we never lost our phones because they were smart enough to detect when we leave them in public places, and could notify us before we walk away? After being lost, the phone could call our close friends and tell them it was abandoned in public, or send us an email with the location and time. Meanwhile, the phone would not allow anyone else to access our information.
IGNORE CALL: What if we did not have to press buttons to tell our phone we want to ignore a call? What if just looking at the phone and putting it back in the pocket meant we want to ignore the call, while putting it in another pocket told the phone to take a message and tell the caller we cannot really talk right now?
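The three scenarios above all come down to mapping sensed context to an action. A minimal sketch of that mapping, with entirely invented event and action names (no real sensor API is assumed):

```python
# Hypothetical rule engine for the "smart" behaviors described above.
# Event and action names are made up for illustration.

def decide_action(events):
    """Map a set of sensed events to a device action."""
    e = set(events)
    # MAKE CALL: pulled from pocket, then held to the ear
    if {"out_of_pocket", "held_to_ear"} <= e:
        return "listen_for_contact_name"
    # IGNORE CALL: glanced at the screen, then returned to the same pocket
    if {"incoming_call", "screen_glanced", "back_in_pocket"} <= e:
        return "ignore_call"
    # take a message when slipped into a different pocket instead
    if {"incoming_call", "screen_glanced", "other_pocket"} <= e:
        return "take_message"
    # DON'T LOSE ME: stationary in a public place while the owner walks away
    if {"public_place", "owner_leaving"} <= e:
        return "alert_owner_and_lock"
    return "no_action"
```

The point of the sketch is that none of these behaviors need a button press: the rules fire on what the device senses about how it is being handled.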
Feel the interactions
More than touch points and contacts
A phone could be smart enough to understand not only the existence of the user and the input, but also the way the input is given, which has a direct connection to the user's feelings. A big part of human-to-human communication happens through body language, and we are missing this when it comes to mobile devices.
FEEL THE TOUCH: We constantly touch the phone's screen, but the phone only recognizes the touch as on/off and nothing else. The pressure we put on the screen and the time we hold the touch could mean something to the phone; that is how we could really put our feelings into the phone.
GET GESTURES: Gestures, the way they are performed, and the timing and intensity of a gesture are things human beings can easily recognize and interpret. Our mobile devices are not there yet. There is a great opportunity and an emerging need for interacting with the phone in this way.
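The "feel the touch" idea can be pictured as a small classifier over pressure and hold time. The thresholds and labels below are invented purely for illustration:

```python
# Illustrative sketch only: classifying a touch by pressure and duration.
# Thresholds and labels are assumptions, not measured values.

def classify_touch(pressure, duration_s):
    """Return an expressive label for a touch, given normalized
    pressure (0..1) and how long the finger stayed down, in seconds."""
    if pressure > 0.8 and duration_s < 0.3:
        return "urgent"        # hard, quick tap
    if pressure < 0.3 and duration_s > 1.0:
        return "gentle"        # light, lingering contact, e.g. a stroke
    if duration_s > 2.0:
        return "deliberate"    # long hold regardless of force
    return "neutral"
```

Even this crude two-dimensional reading carries far more of the user's state than the on/off touch of today's screens.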
Treating the device
Interacting through different treatments
A phone can be so alive that the way we treat it makes a difference: the way we hold it and the way we orient it. The device in the figure above is placed on the same surface but in many different orientations and in relation to another object. When the phone is alive, any of these could mean something different to it. As an example, the way we orient the mobile device can tell it how we want to be notified of incoming notifications. When I put my wallet on my phone, that means I do not want to be distracted at all, so the device holds the incoming messages and notifies me when I want to see or hear them. The same behavior could apply to the way we keep our mobile device in our pocket and carry it around.
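The "treatment as input" idea amounts to a lookup from sensed placement to notification behavior. A minimal sketch, where the placement states and policy names are assumptions made up for this example:

```python
# Sketch: mapping how the device is placed to a notification policy.
# State and policy names are invented for illustration.

NOTIFICATION_POLICY = {
    "face_up_on_table":   "sound_and_screen",  # fully available
    "face_down_on_table": "silent_queue",      # hold notifications quietly
    "covered_by_object":  "do_not_disturb",    # e.g. a wallet placed on top
    "in_pocket":          "vibrate_only",
}

def policy_for(placement):
    """Look up the notification behavior for a sensed placement,
    defaulting to the least intrusive option for unknown states."""
    return NOTIFICATION_POLICY.get(placement, "silent_queue")
```

Defaulting to the quiet policy reflects the spirit of the concept: when in doubt about the user's intent, the device should stay out of the way.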
The alive part of the device
For a device to be alive, it needs some human-like behaviors of the kind we are used to when interacting with humans or animals: temperature, texture, force feedback, moisture, airflow, and smell. In some cases light can also be an element of being alive. The lively pad is a surface on the back of the device that can provide all these natural, human-like elements. Users can interact with the phone in a very natural way and can feel the device.
Using other senses
Interact with an alive device
By giving the device the capability to emit light, create smells, apply force, release moisture, blow air, and generate heat or cool down, many new interaction opportunities open up that do not need the screen and can engage the other senses. How would it be if your device simply became colder when it was running out of battery, or if you did not have to pull your device out and could instead interact with the alive surface while the phone was in your pocket?
Human to Human not device to device
How communication could be the way we interact
Mobile devices have made us adapt to a new way of communication that is not associated with our nature in any way. The way technology makes us communicate with each other lacks what we have in face-to-face communication. At the same time, technology is advancing and enabling us to add things to our devices that were impossible in the early years when mobile devices were introduced to the market.
In natural human-to-human communication there are three steps: approaching, giving and receiving a message, and leaving or ending the communication. What happens on mobile devices also consists of three steps, but in a different way. Approaching takes the form of notifications such as ringing and message alerts. The message exists only as voice, text, or images and video. And when a communication is ended, it is simply finished; we need to initiate another communication to get in touch again.
Approaching in natural way
Bringing the natural approach methods into our mobile experience
The way we approach each other to start a communication very much depends on the context and the situation we are in. When it comes to mobile devices, however, technology has made us adopt a way of approaching and initiating communication that is very different from who we are and what we need. Making a phone call is about dialing a number, making the receiver's phone ring, and waiting for them to pick up: a totally machine-to-machine approach, something we are simply used to by now. But technology is advancing, and it is already possible to bring the natural way of initiating a communication into our digital life.
I suggest that we can expand the approach methods beyond just calls and text messages. Observing how we start communication in the real world, we can see that much of it happens at the nonverbal level, through body language. Gestures like eye contact initiate most of our conversations. Sometimes we only need to check the situation before starting a communication, and that can be done with a quick look at the person we want to talk to. In some contexts you need to distract someone to get them involved in a conversation. At other times, leaving a note on somebody's desk or on the fridge is the way you want to reach them. All of these are missing in today's mobile devices. The following are examples of the tools that can help bring natural communication back to our mobile devices.
Low level approach to start communication
On one side of the approach spectrum are the very low-level and subtle methods, mostly nonverbal and in the form of body language. What if we could just poke people on their phones and use that to get feedback on when and how a communication could start? Or maybe that alone would be enough, and we would not have to make calls or send text messages at all. A mobile device can give users tools for these low-level approach methods as part of the communication experience. Imagine we could just knock on somebody's phone to get into a conversation, or slightly move somebody's phone to signal our intention to initiate something. There could also be open channels that keep you always connected with somebody, as if you were next to them.
Control the notifications
How users could be notified in natural ways
When it comes to being notified of someone's intention to start a conversation, there are also many ways that are very natural to us. These could be as extreme and open as a channel through which callers reach you with their voice and you hear them no matter what situation you are in; in the example above, the caller can shout into the phone and wake the receiver up. On the other hand, notification could be so passive that the receiver is notified only when she actually wants to be. In the example above, the user receives messages but is never interrupted until he grabs the phone to move it into his bag. Some passive, non-visual feedback makes him aware that there are messages he can check, and he decides whether or not to check them.
How rich or basic our communication could be
Do we really need to hear each other's voices? Why do we call each other? How many times has it happened that we could not communicate what we wanted because it was impossible to put it into words and send it as a text message? I believe a device that lets users communicate at a very low resolution would provide a richer experience, since in the real world we can convey messages to each other in very basic ways, without anything verbal or written. At the same time, there are occasions when we need more than text and voice to express ourselves and make the communication meaningful. I suggest that we expand this resolution and bring other tools and senses into our communication. Mobile devices can have this capability, and design can create more desirable experiences with these tools.
Low resolution for rich experience
Using basic tools for meaningful communication
Considering that most human-to-human communication happens at the nonverbal level, without verbal or visual tools, we can enrich the experience by limiting the communication channels to the basic media humans are used to. The examples illustrated show how we can use smell, light, weather, background noises, and even tactile sensations. Which is more meaningful to you: to send a text message that says "kiss", or to actually kiss the phone and send that? To see a text message that says "love you", or to feel through your phone that someone actually gave you a stroke? Even if this seems low resolution to us, the experience and the feeling communicated could be far richer than with today's methods.
Using the combination of tools to enrich the experience
On the other hand, if we add these low-resolution communication tools to the existing methods (voice, video, and text), the result could be a very rich experience. In natural communication we use all of our senses. The same thing said in different ways, with different body language, can mean different things to people.
In the example above, the lively pad on the back of the device makes it possible to engage other senses during a phone conversation. This could take the form of touch, stroke, force feedback, heat, background light, smell, or texture. These parallel channels can make the experience far richer and more meaningful.
Expanding the time period
How communication could be extended in time
"On/off"! This is how we communicate on our mobile devices today. We get connected and then get disconnected. We send a message and that is it; we just wait to receive something from the other side. But in the real world we do not communicate this way: there is an engagement that starts well in advance and ends after the actual communication is finished. This could be implemented in our mobile devices. If we want to walk up to somebody and start a conversation, we first check whether it is appropriate to do so. In the same situation, with the same intention, when using a mobile device we just try to reach that person without any such consideration. What if the system could tell us about the person we are about to connect with in advance? What if we could see where that person is and with whom they are spending time at that particular moment?
Before and after the connection
Expansion of communication time period
By letting users engage in the communication to some degree before and after the full engagement, the experience becomes more natural and human-like. An example would be the chance to see the orientation of the phone of the person you are trying to reach before you call, and to see where the device is stored: in a pocket, on a table, or in a bag. This might not communicate much, but it is still valuable information in many cases. In the images above, the users are able to continue a voice conversation through a touch feeling (finger on finger) for a while after the conversation is over. Depending on the conversation and the relationship between the users, these tools can bring different values to the experience.
Goals instead of Apps
A goal-based UI rather than apps
Apps have already failed! We have apps because that is how the marketing side of the mobile ecosystem works. The way users naturally think about doing a task is different from using apps. Users think of the goal and the tools to accomplish it, and then figure out the steps to take. In today's phone operating systems, users follow a different model, removed from the human mental model: they must first think of the app, find it, and then follow what the app tells them, going wherever the app takes them.
I believe the interface could be closer to the human mental model if the system were designed to let users do what it takes to reach their goals, free from apps. A simple example: if a user intends to take a photo and put it on Flickr, there are two ways. One is to start the Flickr app and then go to its capture-and-upload section. The other is to capture the photo first and then decide what to do with it. The latter is closer to the way we think and take action.
Capturing vs initiating the app
The main part of a goal-based UI is capturing the media that serves as the element to initiate a task. For example, if users want to send a message to a group of people, the task could start by inputting the text and then deciding how to send it to the group. Many other kinds of media could initiate a task: time, location, voice, smell, video, image, light, temperature, touch, objects, and people could all be captured to start a new task on the mobile device. This is a different mental model from today's app-based operating systems.
Capture can be done in two ways. One is to enter a manual capture mode; the other is the natural way, where the device is smart enough to know what the user intends to capture at any time. Examples of these two capture methods follow.
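The capture-first flow can be sketched as a simple dispatch: a captured piece of media becomes the starting element, and the system offers the tasks it can initiate. The media types and actions below are assumptions made up for illustration:

```python
# Hypothetical goal-based dispatch: capture comes first, the task follows.
# Media types and action names are invented for this sketch.

ACTIONS_BY_MEDIA = {
    "photo":    ["share", "upload", "attach_to_message"],
    "text":     ["send_to_group", "save_note", "search"],
    "location": ["share_location", "save_place", "navigate"],
    "voice":    ["send_voice_message", "transcribe", "set_reminder"],
}

def suggest_tasks(captured_media):
    """Given a captured media type, return the tasks it can initiate.
    No app is chosen up front: the capture itself starts the flow."""
    return ACTIONS_BY_MEDIA.get(captured_media, ["save"])
```

This inverts today's model: instead of the user locating an app and following its steps, the system works backward from what the user has just captured.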
Bend to capture and initiate tasks
The default way to start a task is to initiate it manually by bending the device backward. Since this is a physical interaction, it can be done at any time, meaning users can start a new task as the need arises without thinking about apps anymore. Bending back puts the device into a mode that is ready to capture new media and use it as the starting element of a task.
Capture can be done in an even more natural way, without the step of entering a capture mode. The examples above show capturing voice and images, where the user performs a certain gesture to make the device realize what is intended to be captured and used to initiate a task.
Capture passive info
Start a task by capturing passive information
The other way to start a task is to capture, in a natural way, the passive information the user needs. This passive information could be time, location, temperature, and so on. Again, this could be done using gesture recognition. The two figures above show examples of how location and time can be captured in a natural way to initiate a task.
Layering the interface resolution
How to have rich and basic experience together
The resolution of a mobile device's interface very much depends on the context. In many cases the device is used in environments with a lot of distraction, where the simpler the UI, the better the experience. On the other hand, mobile devices are also used in contexts with less distraction, where a rich UI can be very valuable for the user. The idea of an interface that changes between a normal (rich) mode and a basic mode is the core of this concept.
Bend forward to change the mode
Switching between basic UI and normal UI
To move from the normal UI to the basic one, a physical, tangible method makes better sense, as it can be independent of the UI content. It is also a way to visually distinguish the modes from each other. Moreover, it makes switching between the two modes possible at any time, with no need for physical buttons or navigating through the interface. By slightly bending the device, the interface converts to a very basic version that can be used with minimum attention.
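The two bend gestures in this concept have different semantics: the forward bend latches the device into the basic UI until bent back, while the spring-like backward bend (used for capture) is only momentary. A small sketch of that state logic, with invented event names:

```python
# Illustrative state logic for the two bend gestures.
# Event names and the class API are assumptions for this sketch.

class BendController:
    def __init__(self):
        self.ui_mode = "normal"    # "normal" (rich) or "basic"
        self.capturing = False

    def on_bend(self, direction, released=False):
        if direction == "forward":
            # forward bend is a latch: it toggles the mode and stays
            self.ui_mode = "basic" if self.ui_mode == "normal" else "normal"
        elif direction == "backward":
            # backward bend is spring-like: capture only while held
            self.capturing = not released
```

Keeping the mode switch physical means it stays available regardless of what the UI is currently showing, which is exactly why the tangible method is preferred here.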
New applications of basic mode
Using the bend function to switch to the basic mode also adds a new physical property to the device, which, combined with the UI behavior, creates a stationary basic mode. When bent, the device can be placed in landscape orientation on any flat surface, turning it into a stationary device with a very basic UI. There could also be value in being able to stick the device to any surface in this mode. The example above shows how a music player on the device can be switched to the basic mode with only the essential functions, and how the device can be placed on a flat surface, in this case in the kitchen, so the experience can continue at a different resolution.
One experience across multiple devices
How to make the experience independent of the device
With the cloud becoming the source of our digital content, we are moving toward experiences that are completely independent of device, hardware, and location. Mobile devices will be the main part of this system, making content accessible on the go. The design challenge is to make the experience feel as independent as possible from the devices and more associated with the content. Three aspects could be considered in designing such an experience: tasks moving from one device to another seamlessly; having access to the hardware of any device from any other device; and making devices work together. These are the three areas to be explored as part of the "One experience, multiple devices" value.
Same task different devices
How performing a task could be independent of the devices
When talking about moving content from one device to another, the definition of the content and the way the transition happens are design details with a huge influence on the experience. I believe this is more than synchronizing the data and having access to the same program on multiple devices. At a higher level, the experience could take the form of moving tasks from one device to another. For example, when a user is composing an email on a computer and then moves to a mobile device on the go, the experience could be more than just having a saved draft or opening the mail application on the device. She could actually continue writing the email without thinking of the email application at all, simply as the task of composing and sending a message!
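Task-level handoff can be sketched as saving the task itself, not an app's document, to a shared store and restoring it elsewhere. In this toy sketch a dict stands in for a real cloud synchronization service; all names are invented:

```python
# Minimal sketch of task-level handoff via a shared store.
# CLOUD is a stand-in for a real synchronization service.

CLOUD = {}

def suspend_task(user, task_type, state):
    """Save the in-progress task so any of the user's devices can resume it."""
    CLOUD[user] = {"task": task_type, "state": state}

def resume_task(user):
    """On another device, restore the user's task where it left off."""
    return CLOUD.get(user)
```

Usage: the laptop calls `suspend_task("anna", "compose_email", {...})` when she walks away, and her phone calls `resume_task("anna")` to continue the message, without either device knowing which mail application the other used.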
Hardware across devices
Access to the hardware of any device from any other device
"One experience, multiple devices" becomes really meaningful when the hardware of all the devices a user has access to behaves as one big system, whose components talk to each other and together create a range of new hardware capabilities. In the example above, the keyboard and digital camera are used as input tools for the mobile device. This way, the camera and keyboard act as components supporting a bigger, fully interconnected system.
Multiple devices together
New opportunities with combined hardware
"One experience, multiple devices" is not only about moving content from one device to another; it is also about providing new configurations where devices can be used together to create new, richer experiences. Designing mobile devices that are smart enough to recognize other devices nearby and suggest new applications to the user is a powerful design detail that can change the way we use our devices together.
In the first example above, the mobile device is used as a companion device that works with the computer and functions as an input tool (in this case, a mouse). The mobile device can switch to the basic UI mode and become a companion device. The second example shows how multiple nearby devices detect each other and the system works to meet the users' needs in that specific context: here, users with the same devices get access to the same content (in this case, the same navigation map).
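The nearby-device idea can be sketched as devices advertising capabilities and the system suggesting combined uses. The capability names and suggestions below are assumptions invented for this example:

```python
# Sketch: suggesting combined applications from nearby devices' capabilities.
# Capability names and suggestion strings are made up for illustration.

def suggest_pairings(devices):
    """Given nearby devices with their capabilities, suggest combined uses."""
    caps = {c for d in devices for c in d["capabilities"]}
    suggestions = []
    # a phone with a pointer surface next to a large screen -> companion mouse
    if "pointer_surface" in caps and "large_screen" in caps:
        suggestions.append("use phone as a mouse for the computer")
    # several display-capable devices with GPS -> shared navigation map
    if {"gps", "display"} <= caps and len(devices) > 1:
        suggestions.append("share the same navigation map across devices")
    return suggestions
```

Because the suggestions come from the capabilities actually present, the same phone proposes different roles in different contexts, which is the point of the concept.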
Form and interaction
The experience and the interactions define the form factor of the product. To start with, the product needs a screen of some form for displaying visuals. The back of the device is the lively part and needs to be distinguished in some way. The bend function of the device works in both directions (backward and forward), with different properties: the backward bend is spring-like and is used only to trigger the capture mode, while the forward bend is a mode switch, where the device is bent and stays in that mode until it is bent back again.
The device display is an e-paper that covers the whole top surface, so from the top view only the display area can be seen. The material of the body is fabric or leather, keeping a crafted feel to the product (no plastic or glass is used). There are no buttons on the device, as physical input is done through the physical properties of the whole device (bending, twisting, squeezing, and so on).
The size of the device is not the most important factor, as long as it meets user needs and falls within the mobile device category. The thickness, however, matters more for the physical interaction; in this product it could vary from 4 to 8 mm. The product has the character of a craft object, and since the interactions happen independently of buttons or specific touch areas, its look can vary in color, size, and material, making customization a major characteristic of the product. Users should be able to choose or make their own body style for the device. Style and fashion in mobile electronic devices have not been seriously addressed, and a form factor this flexible can create a new connection between the user and the product.