Master Thesis – Introduction 2


Be careful when copying information from this article! The paper has already been published, so copying from it could constitute plagiarism!

The advent of multitouch interfaces takes the interaction between people to new levels of complexity. At the same time, it has had an impact on software development: instead of interpreting input from just a couple of devices, software must now react to multiple fingertip contacts, tags, and objects placed on the device. This is quite a big jump from the classical combination of mouse and keyboard. Because of the technical difficulties, multitouch interfaces first appeared on small devices that only one user could control at a time. Later they became available on large devices as well, usually horizontally oriented, which allow multiple users to interact with the same set of objects. Although some users had additional devices, such as a special mouse, tablet, or joystick, the interaction remained essentially the same: one user, one computer, and usually just one device in use at a time.

Along with increasing computing power, multitouch (and, in some cases, multiuser) devices became more and more complex, not only with respect to the underlying hardware, but mostly regarding the software responsible for driving the experience. New software controls and frameworks were created to support multiple simultaneous interactions, but this was not all: a new user interface paradigm was needed.

The natural gesture of touching, employed by these new devices' interfaces, led to the interface becoming ubiquitous. The interface dissolved into the device we are using, creating a more natural interaction, and so the term Natural User Interface was born. This interaction paradigm came to be supported by a few different companies who understood that customers appreciate their experience more when it is simple and as natural as possible. People are not required to learn complex functions or sequences of key presses to get a simple result: a swipe of a finger on the screen might be enough. Replacing learned cognitive processes with choosing based on understanding and pointing allows users to concentrate on what they do, as opposed to how to do it. This lets them express their imagination without the interruptions caused by recalling sequences of actions from memory.

The questions that arise from this approach are many: some concern implementation, due to the novelty of these systems and the relatively small number of resources for creating applications; others concern social aspects, such as how participants feel when using the system compared with traditional approaches.

My thesis sets out to study:

  1. Whether the user experience of brainstorming applications running on multiuser multitouch tabletops is engaging;
  2. Whether post-brainstorming organization of ideas is useful;
  3. Whether the brainstorming process is successfully supported by mashups with other systems (e.g. Twitter);
  4. Whether integration with small mobile devices is a good solution to the physical limitations of the tabletop;
  5. Whether it is important to be able to persist the results of the brainstorming session.

Let’s see what other scientists did before!
