For those in the gaming industry, pressure- (or force-) sensitive touch interfaces are likely the most desirable change in modern human-computer interfaces, particularly on mobile.
On 9 March 2015, Apple announced the "Force click" feature on the new trackpad, which lets the user provide not just positional touch but also pressure as meaningful input. Pressure sensitivity already exists in several devices, most famously pen tablets, where it has long been a fundamental feature for artists doing digital painting. Rumors say the iPhone 6s will finally bring Force Touch to mobile interaction.
The new technology will change how we interact with our everyday applications in OS X, but that topic already has quite a lot of coverage across the internet. This article is slightly projected into the future and envisions how mobile games (and apps) could take advantage of force touch, making clear why developers are eagerly awaiting such a feature.
Now, what will it mean for gaming? In technical terms, developers will be able to capture more of the user's intent, because every touch will carry more information than its mere position. We are going from a boolean touch (like a light switch) to a floating-point touch (more like a control that dims a light), which adds a degree of freedom to the interaction.
We will see entire games based on this!
As a meaningful example, think of any car racing game where you have a virtual gas pedal to control your car's acceleration. We have all experienced situations where we had to control that pedal with just a keyboard button or with a touch that would simply activate it. A player can eventually become pretty good and smooth with this kind of interface, tapping on it to modulate the overall intensity, but that is nothing near how you control acceleration in real life. This is the kind of problem that pressure-sensitive touch interaction will solve.
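The difference can be sketched in a few lines. Assuming a hypothetical API that reports a normalized force value between 0 and 1 (the names and scale here are illustrative, not any real SDK), the throttle becomes a continuous mapping instead of an on/off switch:

```python
def throttle_binary(is_touched: bool) -> float:
    """Boolean touch: the pedal is either fully pressed or released."""
    return 1.0 if is_touched else 0.0

def throttle_pressure(force: float) -> float:
    """Pressure-sensitive touch: the normalized force (0.0-1.0)
    maps directly onto the pedal position, like a real gas pedal."""
    return max(0.0, min(1.0, force))  # clamp to the valid range
```

With the first function a light tap produces full throttle; with the second, the same light tap produces gentle acceleration, and the player modulates speed continuously.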
Players will be able to control the intensity of a punch in beat-em-up games or of a kick in soccer games; they'll be able to master the slalom in skiing games and precisely control jump height; there is even room to let players unleash their rage onto the screen, whose resistance will be the only (quite functional) limitation.
Some already famous games are likely to be released in new versions that take advantage of this feature. Imagine playing Flappy Bird (a tired example by now) while controlling the flapping force, or setting the shooting power in Worms with the pressure you put on the screen.
Developers will get creative and players even more!
What about apps? Apps will definitely be affected too, but the way they make use of finger pressure will be less immediate and will require greater design awareness and specific human-computer interaction knowledge. A common question that pops up in discussions on this topic is "Could a menu be navigated through pressure?" Yes, it could. Assume, for example, that we are navigating a menu "in depth", almost like moving back and forth along a tunnel; there are at least a couple of methods that could drive this kind of interaction. One translates the force you apply into a navigation direction, where pressing hard enough means forward and touching with no force means backward; the other is positional, meaning that each pressure level is mapped to a specific position in the menu. Because humans control force rather imprecisely, though, the number of menu levels that can be selected this way is quite limited: the average person can reliably produce only four or five distinct levels of intensity, more like no pressure, light, medium, strong, or very strong pressure than a number between 0 and 1024. Nonetheless, four levels are quite enough to both activate and navigate the small menu that lets you select a special character (from a to à or ä, for example) on virtual keyboards, which we currently do on iOS by holding for about a second and sliding to the desired character. In a slightly different scenario, letters could be capitalized intuitively, just by putting more pressure on the touch point.
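The positional approach above can be sketched quite simply. Assuming, again, a hypothetical normalized force reading between 0 and 1, the continuous value is quantized into the handful of bands a user can reliably reproduce:

```python
LEVELS = ["none", "light", "medium", "strong", "very strong"]

def pressure_level(force: float) -> str:
    """Quantize a normalized force reading (0.0-1.0) into one of the
    few intensity levels an average user can reliably distinguish."""
    force = max(0.0, min(1.0, force))
    # Map [0, 1] onto five equal-width bands.
    index = min(int(force * len(LEVELS)), len(LEVELS) - 1)
    return LEVELS[index]
```

Each band would then correspond to one menu position (or one accented character); a real design would also add some hysteresis so the selection doesn't flicker at band boundaries.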
More generally, force will be used to map users' intentions more precisely, reducing execution errors. In many cases virtual buttons will require some force to be actuated, mimicking a real button more closely, so there will be fewer chances for users to press one unintentionally; moreover, while regulating the force, users gain an extra time slot in which to think "am I pushing the right button?", "am I doing the right thing to achieve my goal?", reducing the time spent in what HCI calls the gulf of execution.
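One way such a button could be implemented is with a force threshold plus a bit of hysteresis, so a grazing touch never fires and the user has a window to back out before committing. This is only a sketch; the threshold values are arbitrary, not taken from any real toolkit:

```python
class ForceButton:
    """A virtual button that actuates only past a force threshold.

    A lower release threshold (hysteresis) prevents the button from
    flickering when the force hovers around the activation point.
    """
    PRESS_THRESHOLD = 0.6    # force needed to actuate (arbitrary)
    RELEASE_THRESHOLD = 0.4  # force below which the press ends

    def __init__(self):
        self.pressed = False

    def update(self, force: float) -> bool:
        """Feed the current normalized force; returns True on actuation."""
        if not self.pressed and force >= self.PRESS_THRESHOLD:
            self.pressed = True
            return True  # actuates exactly once per press
        if self.pressed and force < self.RELEASE_THRESHOLD:
            self.pressed = False
        return False
```

The gap between a light resting touch and the press threshold is exactly the "extra time slot" described above: the user feels the force build and can still withdraw.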
One sure thing is that every painting app will be pressure enabled, making the experience of drawing with a finger much closer to drawing on a Wacom tablet with its pen.
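A painting app might, for instance, map the force reading onto brush width and opacity, much as pen tablets drive digital brushes. A toy sketch under the same normalized-force assumption as before (the parameter names and defaults are illustrative):

```python
def brush_params(force: float,
                 max_width: float = 24.0,
                 max_opacity: float = 1.0) -> tuple[float, float]:
    """Map a normalized force (0.0-1.0) to a (width, opacity) pair.

    A light touch yields a thin, faint stroke; full pressure yields
    a thick, opaque one.
    """
    force = max(0.0, min(1.0, force))
    return (force * max_width, force * max_opacity)
```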
Apple has made a big step toward this technology, opening a window onto how interaction will change on laptops. Hopefully, soon enough we'll be able to get our hands on the first pressure-sensitive handset.