Sorin Pintilie · http://www.sorpin.com
A designer with a cross-disciplinary curiosity for what the world has to offer.

Speculative Interactions: The Harmonogram
Sat, 15 Mar 2014 · http://www.sorpin.com/writings/harmonogram

For some people, project management fills a big chunk of their lives. A third of it, actually, and it's called work. And for those people, the future of project management can't get here soon enough.

But the thing about the future is that it always comes through the side door. It never comes banging. If everything's there, it will just seem like a natural progression of things.

And I think everything is already here.

There is a vision.

About a hundred years ago, a man in Poland drew the first network diagram, which he called the harmonogram. Besides creating what is certainly one of the building blocks of project management, Adamiecki — the man in question — also laid out an overarching principle for the future of the discipline: harmony.

Working in harmony is what matters: from building a good team, to coordinating it well, to giving it good tools. He literally wrote the book on the Law of Harmony.

However, harmony is contextual. And Adamiecki's solutions were not. Creating a rhythm, a dynamic, and reaching a utopian harmony depends entirely on the specifics of the environment.

Workflows are such a delicate problem of project management that we are only just — a hundred years later — starting to move away from that thinking.

There is a way

The best project management software created transparency, an ambience in which everybody on the team was aware of everyone else. Because most of the time, work actually means working with someone. And that's why a process of designing with empathy gradually started to get the interface out of the way and to focus on what people do best: working with each other.

But most of the time, working with different people means working on multiple projects at different stages. That's a lot of extra information that needs to be managed — notifications, messages, documents, priorities, chats, discussions, events. All things that can create friction and interrupt concentration.

It's our job, as designers, to manage all that information in such a way that it doesn't feel like a burden. To figure out workflows and design empathically around them. But if there were a way to build a system that is sensitive to that information and reacts to it automatically, our job would be a lot easier. And people would communicate more easily.

It needs context

If you think about it, in real-world conversations we leave out the context under the assumption that our communication partner knows the context as well. It's the machine's job to fill in that context. And the only way to do that is to use AI methods.

Ambient awareness is a powerful vision for productivity if it's enhanced by a bit of machine intelligence. And I'm not talking about general AI here; I'm talking about a concept that has helped a lot of companies stay ahead of the curve and innovate: fractional AI.

When you give existing products and environments “a kind of domesticated, not-very-intelligent artificial intelligence”, as Matt Webb calls it, they effectively become something else. They become part of the future, pushed through the side door by fractional AI.

Things that release

What I'm saying is that we can solve real-world, small, practical problems that users face, but use the big guns of AI to do it.

A system that could automatically align all of that information — documents and discussions — to the context you're in would be a polite system.

Objects that draw attention create friction. But objects that release, that set the mind free to wander, objects that are polite: those are the objects that delight, that have a personality.

Time awareness, task awareness, social, cultural and organisational settings: all can be used to semantically infer meaning. To understand intent and respond accordingly.
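To make that a little more concrete, here is a minimal sketch of what such contextual inference could look like: a toy scorer that ranks incoming items against the context you're currently working in. Every name and weight here (the WorkContext fields, the thresholds) is a hypothetical stand-in, not a real system.

```python
from dataclasses import dataclass, field

@dataclass
class WorkContext:
    """A snapshot of what you're doing right now (hypothetical model)."""
    project: str
    task_keywords: set = field(default_factory=set)
    collaborators: set = field(default_factory=set)
    working_hours: range = range(9, 18)   # organisational setting

@dataclass
class Item:
    """An incoming piece of information: message, document, event."""
    project: str
    keywords: set
    sender: str
    hour: int   # hour of day it arrived

def relevance(item: Item, ctx: WorkContext) -> float:
    """Score how well an item aligns with the current context.
    The weights are arbitrary; a real system would learn them."""
    score = 0.0
    if item.project == ctx.project:                        # task awareness
        score += 0.5
    score += 0.1 * len(item.keywords & ctx.task_keywords)  # semantic overlap
    if item.sender in ctx.collaborators:                   # social setting
        score += 0.3
    if item.hour in ctx.working_hours:                     # time awareness
        score += 0.1
    return score

# Items above a threshold surface now; everything else waits politely.
ctx = WorkContext("atlas", {"launch", "copy"}, {"ana", "dan"})
inbox = [Item("atlas", {"launch"}, "ana", 10),
         Item("zephyr", {"budget"}, "sam", 22)]
urgent = [item for item in inbox if relevance(item, ctx) > 0.6]
```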

So now, all that's left is to make the transition from a tool to an intelligent tool: project management software will gradually become a personal assistant that knows your context and reacts accordingly, removing all obstacles between you and your work.

Giving an environment like a project management system contextual awareness is the last step in bringing a 100-year-old vision as close to reality as possible: people working with other people in complete harmony.

I wanted to see what it would take to put this all together. And to do that, it takes a few things done differently: context awareness, a personalised experience (it's not the same for everybody) and adaptive UIs.

All new concepts for productivity tools, all new design challenges.

Siri, chess, and prostheses
Tue, 25 Feb 2014 · http://www.sorpin.com/writings/siri-chess-and-prostheses

Intelligent machines.

There was a time when the mere mention of artificial intelligence was wrapped in constant debate and triggered images of Hollywood-crafted products, like HAL 9000. The concept itself is quite controversial; it challenges human thought as Darwin once challenged human origins. But we moved on, and now we carry these intelligent machines in our pockets.

There’s a 38.9% chance you have one, too. Siri, the out-of-sight personal assistant from Apple, delivers an amazing experience. It listens to you, understands you, does what you say, and even talks back to you.

Sounds simple enough for us humans, but these are remarkable achievements for a machine. It has to process language, interpret context, understand intent, and orchestrate multiple services and information sources. And it brings together technologies that rely on dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, and service delegation to do it.

Spin back the clock 50 years and all of this wasn’t even remotely possible. But just two years after Turing published the first documented idea of intelligent machines, three people were already working on the first system capable of speech recognition, named Audrey.

It could only process digits. Spoken by a single voice. With pauses in between. And it occupied a six-foot high relay rack.

Not exactly a marvel of technology, by today’s standards. But back then, when computers had only 1 KB of RAM, it was an impressive achievement. More impressive still when you think about how such a system came to be.

It all started with an illusion act

Many elements from very different spheres come together in the story of Siri, and it all starts with a man doing some magic.

Tracing Siri’s ancestry takes us back roughly 250 years, to Austria, when Vienna still had an empress. The story begins with a man known mostly for what was perhaps the most famous illusion in history: the Mechanical Turk, a machine that could supposedly play chess on its own and beat any opponent.

In reality, it was just a wooden cabinet with a life-size, mustache-wearing doll on top and a man inside, playing chess. It tricked people into thinking the machine was intelligent, but the idea itself was enough to intrigue the likes of Napoleon. (He played the Turk and he lost.)

And while the Turk made its creator—Wolfgang von Kempelen—popular, it is another of von Kempelen’s inventions that marks the beginning of Siri’s story.

The first speaking machine was a pretty straightforward concept that tried to simulate the human vocal tract—it had lungs and everything. Nevertheless, it was the first machine that could replicate whole words and sentences. It was this machine that would set the stage for Audrey.

Chess, the game that made it all possible

Von Kempelen’s speaking machine was the first that could replicate human speech. Audrey was the first that could recognize it. But Siri is the first machine that can understand human speech.

Understanding is the unique ability that swings the story back to the Turk. The machine’s connection with chess isn’t random. Chess is more than a game; it’s an entirely mental activity. And it’s a perfect metaphor that would allow for the birth of a new scientific discipline, artificial intelligence.

A machine capable of defeating a human opponent at a mind game is an intelligent machine, by any logical standards—or, at least, that was the premise.

While the Turk was the first real image of a machine that could be better than us at something, it was just an illusion with a man operating it. But ever since, the idea of an intelligent machine has been slowly morphing into physical technologies.

The next obvious stage would certainly seem to be a machine that could play chess and operate itself. In 1912, the real thing followed: El Ajedrecista, the first computer game. Only without an actual, you know, computer.

Making this happen required a deep understanding of how we think when we play chess.

Every move weaves together an amazing chain of mental processes: Perception transforms the pieces on the board into a series of symbols, and long-term memory overlaps perceptions with previous knowledge. Logical thought then searches for variations, and decision-making is needed for the actual move.

Move after move, the chess game becomes a sequence of decision-making events governed by strict logical rules. And it is this logic module in our brain that chess heavily stimulates, so much so that it can be simulated. It doesn’t take a big leap to imagine that thought itself can be simulated.

This realization gave way to wonderful theoretical breakthroughs. Concepts like algorithms, recursion and programming were born. Having to analyze how we think about chess quickly led to computer thinking.

AI: A new, old way of designing experiences

A special group of people made a great imaginative leap. They realized that a game holds the secret to human thought. For people like Edward Feigenbaum, Marvin Minsky, Allen Newell, Herbert Simon, Alan Turing, John von Neumann, and Norbert Wiener—the founders of AI as a scientific discipline—pinpointing all the mental processes that are necessary to generate high-level cognitive activities played a very important role in the development of simulated thought processes through computer programming.

Logic and process alone weren’t enough, though. We expanded our concepts to expert systems, knowledge engineering, neural networks, and so on. The subsequent knowledge-based models of thought are nothing short of amazing. But the real breakthrough came from the opposite kind of approach, one that the father of expert systems, Edward Feigenbaum, called representation. This approach supported the idea that modeling knowledge of the real world was much too difficult; instead, systems should adapt and respond effectively to real interactions with the world.

This is important because it has finally allowed for the development of a truly human-centered approach to designing systems, an approach initially articulated by Bill Moggridge and one which inspired a major shift in design thinking that we see maturing today.

AI and HCI have been described as having opposite views on how humans and computers should interact. Human-centered computing brings all that together by combining intelligent systems, human-computer interaction, and contextual design. Instead of trying to imitate (or substitute for) the human, the goal is to amplify and extend human capabilities, much like a prosthesis does—not in the sense that it compensates for the specific disabilities of any given individual, but because it enables us to overcome the biological limitations shared by all of us.

Above all else, a prosthesis needs to fit; otherwise, it will be rejected. In the same manner, systems designed to assist, rather than replace, need to be personal and contextual. They need to be intelligent in order to fit.

In terms of actual capabilities, Siri wouldn’t pass a Turing test. But it doesn’t set out to. It doesn’t try to imitate our abilities, but rather to extend them.

For example, say you want to go to the best restaurant around. You know you can do that: with the help of technology, you can combine information from different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and your own favorites).
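Done by hand, that combination is real plumbing. Here's a rough sketch of what it might involve (every function and data source below is a hypothetical stand-in, not a real API):

```python
# A toy aggregation of "the best restaurant around" from several sources.
# All functions and data here are hypothetical stand-ins for real services.

def nearby(directory: list, lat: float, lon: float, radius_km: float) -> list:
    """Filter a local business directory by rough distance (geospatial step).
    Uses a crude degrees-to-km approximation (~111 km per degree)."""
    return [r for r in directory
            if abs(r["lat"] - lat) + abs(r["lon"] - lon) < radius_km / 111.0]

def best_restaurant(directory, reviews, favorites, lat, lon):
    """Merge directory, review scores and personal favorites into one ranking."""
    candidates = nearby(directory, lat, lon, radius_km=2.0)
    def score(r):
        s = reviews.get(r["name"], 0.0)   # review sources
        if r["name"] in favorites:        # your own favorites
            s += 1.0
        return s
    return max(candidates, key=score, default=None)

directory = [{"name": "Osteria", "lat": 45.42, "lon": 28.04},
             {"name": "Noma-ish", "lat": 45.43, "lon": 28.05}]
reviews = {"Osteria": 4.2, "Noma-ish": 4.8}
print(best_restaurant(directory, reviews, {"Osteria"}, 45.42, 28.04))
```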

But why would you want to? You want to use technology as a tool, not get immersed in the experience of interacting with it.

Siri delegates everything you don’t want to do. It lets you use technology as it’s supposed to be used: as a tool. By doing so, it becomes a digital prosthesis. As a result, the experience is truly human-centered, built for humans based on real human needs.

Final lessons

The story of Siri is full of great achievements of the human mind. It shows us how the power of thought can fuel great technological breakthroughs. And it ends with the same man who started it all: von Kempelen, whose kind of thinking gave birth to the first speaking machine, a truly amazing technological achievement. More importantly, it is the kind of thinking that creates genuine human experiences.

The Turk’s biggest achievement was to challenge how we think about machines. This is the type of thinking that I like to call design thinking.

Yes, Siri still has its shortcomings, starting with the fact that it’s voice-controlled. But the mechanisms behind it are nothing short of amazing. Properly pairing machine intelligence with true contextual awareness is what created the first conversational interface that actually works.

And simply because it works, it marks an important milestone: it becomes a template for all future voice-controlled interactions. Even Google has updated its products to include conversational and contextual interfaces. What Siri did was show the world a bright idea and make it stick.

More importantly, for professionals, the story behind Siri offers valuable lessons in true experience design, vital lessons in times clearly dominated by form instead of content, where an excessive preoccupation with formalism can impede further developments.

Experience design is more than numbers, boxes, and diagrams. It’s emotional, invisible at the time of inception, innovative, developed intelligently, and deeply contextual. A complex multiplex, feeding on a variety of different disciplines, such as neuroscience, psychology, linguistics, logic, biology, social sciences, computer science, software engineering, mathematics, and philosophy.

Much in the same way that Siri forges new tools from old technologies, good design feeds on AI for the raw materials to conquer human experience. To add function to experience. To add personality.

Avoid fields. Jump fences.

Disciplinary boundaries and regulatory regimes are attempts to control the wilding of creative life. They are often understandable efforts to order what are manifold, complex, evolutionary processes. Our job is to jump the fences and cross the fields.

—Bruce Mau

If interface, then navigation
Thu, 20 Feb 2014 · http://www.sorpin.com/writings/if-interface-then-navigation

Usually, I can find my way to things. And sometimes, I even help other people find their way. I'm lucky to do that for a living. But when things find me, I feel completely lost.

One of those things was a piece of writing from Anne Burdick, called “If words, then reading”. It's a small piece so I'm going to quote it here in full so you can appreciate the playfulness of it all.

If interface, then navigation. If disguise, then disclosure. If map, then itinerary. If resource, then use. If environment, then wayfinding. If plan, then practice. If erasure, then reconstruction. If sketch, then animation. If architecture, then dwelling. If capture, then release. If trace, then archeology. If program, then process.

Some things, like that first line — “If interface, then navigation” — find their way into your mind, lodge themselves there and start nesting.

The connection, inside Anne Burdick's mind, managed to colonise the word interface so completely for me that whenever I mumble it, I automatically connect it to the end of that line — then navigation. It's concept imperialism, is what that is. And each time that happens, doubt sets in and questions start flooding in. Are we doing the right thing? Have we become so used to leaving breadcrumb trails for our users — a word I'm not at all comfortable with — that its effect has escaped us?

We like to think we're building things people are interacting with. But are we really making stuff that users have to navigate through?

Coming back to that first line, it stayed with me because most of the people using what we build make the same connection. That's why it haunts me. Because Anne's association comes from the real world, not the world of design theory or practice.

But that can't be right, can it? Navigation is merely a part of a structure-making process we use to handle information, mould it into a thing and release it into a space in which people can interact with it and make other things — the interface.

Shannon Christine Mattern recently published a solid overview of the term, covering a good spectrum of ideas from both an engineering and humanistic mindset. In that spectrum, the end of “If interface, then” could be a lot of things: space, zone, threshold, translation, semantics, interaction, even allegory and metaphors. Yet, we seem to have locked ourselves in one dimension, in which the sense of orientation — across time or space — absolutely dominates. A dimension that inherits its vocabulary from HCI and engineering, where clicks, tasks, hierarchies, usability and effectiveness are the norm. Where people are users.

And it's this kind of jargon that our vocabulary leans on like a crutch, instead of being augmented by it.

Maybe Jason Santa Maria is right. Maybe we don't have the language to talk about these things. If that's the case, maybe we need a bit of a re-think, because — using Paul Dourish's words — the whole "world can become an interface to computation". And the world is a big word.

Looking through that lens, I can't help but wonder: is this what we have done to the people using the things we make? Have we invaded the word interface so thoroughly, that they can only equate it with navigation?

We need a new design language because science and society aren't polar opposites.

Daisy Ginsberg

Generative typography
Thu, 16 Jan 2014 · http://www.sorpin.com/writings/generative-typography

TYPOGRAPHY is based on a notion of encoded experiences, meaning that the typeface in which a message is set becomes part of the message itself. A font absolutely in tune with the text — comfortable legibility, as Jan Tschichold called it — is the benchmark for all typography. And the question of how you ensure a comfortable reading experience still fascinates me.

Jan Tschichold was one of many who talked about the importance of working with typefaces that are in spiritual accordance with our times. But in a time defined by a constant stream of information, proclaiming one typeface or style over another proves to be a challenging pursuit. In a never-ending stream, universality seems to lose its rule. This is a time of circumstantial solutions, a time of contextual thinking. And this is where, I think, the concept of generative typography comes in.

* * *

With one foot firmly lodged in parametric design (or, by its first name, variational geometry) and the other suspended in potential futures, generative typography, at the moment, hangs in a limbo.

The idea is fairly simple: in the old days, designers had a set of punches and chisels; now they have a mouse and a keyboard. So instead of being carved into metal, the typeface is now generated by a set of rules or algorithms which can churn out a variety of forms. And when you factor in that those algorithms can now tap into the network through APIs, the possibilities seem endless. And what amazing possibilities they must be. It used to be all about geometrical rules; now typefaces can be tied to environmental, social and individual activities, feeding on networked information.

However, all potentially generative systems can quickly go into an exponential “runaway” if uncorrected. What keeps them in check, in a stable state, is a sort of governing loop — the parameter. And in the case of typography, what usually defines the relationships between parameters is the shape of the letters themselves.

Inherited from architecture, where parametric design has a richer history, this preoccupation for form is exactly what keeps generative typography planted within a niched area. At the moment, generative typography is all about futuristic form-making and experimenting: posters, book experiments, interactive installations, and conceptual artworks. It’s still very much a playground for new technologies.

But it doesn’t have to be.

“Every piece of typography which originates in a preconceived idea of form, of whatever kind, is wrong.”

Jan Tschichold

The great misunderstanding of generative typography is to believe that only the content and the appearance matter, while its true potential — comfortable legibility — remains unrealised. Its function, readability, and its essence, clarity, are what merit further discussion. And what I propose is that we bring systems thinking into generative typography.

* * *

Good typography catches the spirit of a text and stays loyal to it. It’s like a perfect speaking voice: it neither overshadows nor patronises the content. How the text speaks to us, how it is presented to us, determines how we approach it. And unconsciously, we demand the shape to which we have become accustomed. Simply put, when reading any text, the typeface is vital to the clear transmission of ideas.

That can have tremendous implications for — since this article was initially published on Medium, say — a publishing platform, where the wide range of topics and writing styles is bewildering. Beautiful typography is essential, but a single typeface is simply not enough. And certainly, designers can’t design every piece of writing on the platform.

So, to ground this discussion into more practical terms, if every text requires a certain typographical atmosphere, why not generate that atmosphere with the help of algorithms?

I think there’s a new kind of product emerging in that space, and I wanted to see what it would take to put it together. So, if you will, let your imagination run along the following lines as I attempt to describe an integrated vision and a few speculative details.

* * *

Getting to that comfortable reading experience means starting with the thing that creates it in the first place: writing. When you write, words have value. Arguably, that value can be represented or encoded in a system. And it is that system that is at the heart of it all.

As you type, algorithms can analyse the text in real time, scanning, identifying and constantly matching keywords to sentences and paragraphs — in short, context — in order to find the best possible typographical match. They go scouring through every word to look for indicators that can be fed into that value system. Language processing algorithms, data detectors, intent understanding algorithms come together to align with the semantics of the text. A sort of semantic snap-to-grid.

Based on linguistic style interpretations, the system snaps to the typographical matches that make the most sense for the text you’re writing. The algorithms can learn to distinguish between news and personal pieces, between political writing and fiction, and ultimately between any two styles. They can then automatically set the type for you. The whole idea is about bringing a certain level of intelligence to the interface, to use Tom Gruber’s words.

Basically, as you write, the typeface changes in real time to match the style of your writing. No superfluous visual interface. Your writing is the interface. You want a more ragged look? Write more raggedly. If you’re looking for a soft look, tone down your voice. The system is there only if you need it, acting as your own aesthetic assistant, if you will.

In this scenario, generative typography becomes a real-time visualisation of context. The system reflects your writing style and adapts to it. You can see the changes happening and you can shape them. This idea of systems that show their context, proposed by Paul Dourish in 2004, is one I wholeheartedly agree with.

Under the hood, the architecture would be based on a declarative model of “which typeface is good at what?”, connected to different APIs, mapped to domain models and domain models mapped to language.

A lot of services are available on the web with structured data over APIs. For relevant services, designers can declare and model the capabilities.
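As a sketch of that declarative model, matching could start out as simply as the following. The typeface names, their declared strengths and the bare keyword matching are all invented placeholders; a real system would lean on proper language processing:

```python
# A declarative model of "which typeface is good at what?".
# Typeface names and their traits are invented placeholders.
TYPEFACES = {
    "NewsSerif":   {"news", "report", "politics", "economy"},
    "SoftSans":    {"personal", "memory", "family", "travel"},
    "LiteraryOld": {"fiction", "novel", "poetry", "story"},
}

def match_typeface(text: str, default: str = "SoftSans") -> str:
    """Pick the typeface whose declared traits best overlap the text."""
    words = set(text.lower().split())
    scores = {face: len(traits & words) for face, traits in TYPEFACES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

# Re-run on every keystroke, the typeface snaps to the writing style.
print(match_typeface("a short story about family and travel"))  # SoftSans
```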

This is an environment with new possibilities. Designers don’t just fiddle with fonts anymore. Nor do they design an overarching style or proclaim a universal solution. They can also design the system that intelligently delivers the best possible typography for any given piece of writing. It’s a fine balance of form and function.

Good typography — again, in Jan Tschichold’s words — acts as a “tactful servant to the written word”. And maybe, just maybe, the day in which we give new meaning to those words is closer than we thought.

The line of least resistance
Thu, 12 Dec 2013 · http://www.sorpin.com/writings/the-line-of-least-resistance

In chess, the psychological dimension that springs from a dialogue between two brains, two ideas, two strategic conceptions depending on the personality of each player has long been somewhat of a romantic mystery. How do Grandmasters think? What strategies do they use?

More often than not, the most successful strategies are rooted in our own very nature. And common to most Grandmasters is that they almost never take the easy way out. A different, better alternative is always available, and they go looking for it. That creativity, that compulsion, that drive to look beyond what comes instinctively is what fuels successful strategies and explains why so few Grandmasters are out there.

For most of us, however, things are simpler. We tend to favor the shortest path, the easy way out, the shortcut. We cut through the middle of the park if we have to.

We’re naturally lazy. When we think, we search for the nearest pattern, and when we find the pattern, we don’t need to think anymore — we just follow the pattern. Our brains have evolved to be — borrowing Edward de Bono’s expression — “brilliantly uncreative.”

This preference for shortcuts is what Grandmasters naturally exploit. Playing on well-known weaknesses, automatisms or unconscious actions has long been a common strategy in chess.

* * *

When it comes to using and interacting with technology, that underlying behavior seems to hold. Users unconsciously look for the line of least resistance — la loi du moindre effort, as psycholinguist François Richaudeau called it. Richaudeau’s work revolved around words, but considering how vital communication, language and words are to the design of interactions, the core principles still apply: The shortest words will be those emitted, understood and read with the minimum of effort, and also those most frequently employed due to their more profound presence — or incrustation — in our memory.

We remember shorter words better. That’s because, according to Kenneth S. Goodman, we process graphic, syntactic and semantic information simultaneously. In doing so, users carry out cycles of sampling, predicting, testing and confirming as strategies to bring in “the most reliable prediction with the minimum use of the information available.”

The process isn’t specific to words and reading, though. It underlies our behavior and interaction with the world around us.

Framing it in this way seems to suggest the generalized idea that behavioral patterns are linked to common causes — a concept expanded by behaviorism to the extent that it proposes a predictable and reliable link between a stimulus and the response it produces. That generalization is largely refuted by the cognitive revolution on the grounds that prior knowledge and mental processes intervene between a stimulus and response to reduce the predictability of human behavior — or the response — given a stimulus. Essentially, we can’t predict behavior.

While that may hold true at a micro level, there are still plenty of common paths of action in the way we work — stimulus, sensory organs, interneuron, brain, processing, motor neuron, response. It’s still safe to assume that if we design an environment in which even the subtlest of details is sympathetic to how we, as humans, work and think, and with the singular purpose of easing the experience, then users will always favor that line of least resistance. Is the environment we design sympathetic to the way we work?

Designing an experience from a perspective that is more closely related to how we naturally strategize, designing guidance through pattern prediction, as opposed to limit enforcement, might prove to be a more viable approach.

Let’s put that in perspective. Take decision fatigue.

We all make decisions every day. Some of them big, some of them small, some conscious and some so insignificant that they slip by us without our even realizing. And in a world in which we are constantly bombarded with information and media options, such as blogs, social networks, magazines and TV, the effects of so many decisions that we have to take day by day become increasingly obvious.

Our decisions, even those that are mere preferences between option A and B, gradually take their toll on our cognitive load. The mental processes of decision-making are strenuous; we have only a finite store of mental energy available to exert self-control.

Basically, we get tired if we make too many decisions. As a result, according to Barry Schwartz, one of three things is likely to happen: we end up making poor decisions, we become more dissatisfied with our choices, or we get paralyzed and don’t choose at all.

Interaction processes — regardless of the medium — are a prime example of decision fatigue in action. At a micro level, hundreds of small, subtle and, most of the time, unconscious decisions that we take every time we use an application, browse a website or even follow an ad can affect the way we experience the artefact that we’re interacting with.

At a macro level, thanks to what we know about decision fatigue, we can predict that users can be overwhelmed to the point of making a default choice or no choice at all, depending on the option they face. One example of this is Dan Ariely’s opt-in versus opt-out example for forms of the US Department of Motor Vehicles. It shows that we can basically predict how people will react in a very specific situation by controlling the design of the application form, paying close attention to the defaults.

This is a very specific prediction of behavior. We’re not normally used to thinking of behavior as being that predictable. But in a lot of situations, it is. Ariely calls it being predictably irrational.

Contextual cues — through signs or linguistic cues, because they have the potential to refer to a specific value system — can sometimes direct cognition in particular ways, either encouraging or discouraging certain patterns of behavior. Thus, it’s vital that user experience practitioners maintain complete control over and responsibly use all elements in an interface that could be used as contextual cues and predictors.

And one of the biggest cues at the disposal of designers is language.

We, as a species, have refined language through countless iterations over the span of millennia to such an extent that words are tied to complex experiences, rather than to singular instances, by way of symbolic representation.

The Power Of Words

In 1967, Paul Watzlawick described what later became a cornerstone of communication theory: metacommunication. He postulated that every communication includes, apart from the plain meaning of words, more information: information on how the talker wants to be understood and how they see themselves in relation to the receiver of the information.

Language, as a clear manifestation of the cognitive capacity of the human mind, is only a medium to carry out acts of communication. That’s one of the reasons why F. H. Bradley once said that experience is basically incommunicable.

Still, there is always something to communicate — moods, annoyances, happiness, feelings, ideas. There is always something that we need to understand — a noise, a color, a sign, a danger, an emotion. There is always some medium that is familiar to us — a grammar, an artistic language, a chess board, an interface.

While context provides a window into how people behave, words have the power to steer people towards a particular pattern of behavior through frame manipulation.

Frame manipulation invariably affects users — admittedly, to different degrees. One prime example of how the context of options and the wording of a question affects behavior is Tversky and Kahneman’s famous experiment in 1981, which demonstrated systematic reversals of preference when the same problem was presented in different ways.

The experiment required participants to make a hypothetical decision, and the researchers tried to determine whether they could steer the participants towards an answer simply by wording the questions carefully.

The experiment proposed a hypothetical outbreak of disease with 60,000 predicted deaths, and participants had to choose between two programs:

Program A: 20,000 people would be saved.
Program B: There is a 33% chance that all 60,000 would be saved, and a 66% chance that none of them would be saved.

With these options, program A was the favorite among participants. But what happened when the same question was asked again, this time counting the number of deaths instead of the number of lives saved? Here’s how the second version sounded:

Program A: 40,000 people would die.
Program B: There is a 33% chance that none of them would die, and a 66% chance that all of them would die.

This time around, the clear favorite was program B.

The interesting part is that the math is exactly the same for both: either way, 40,000 of the 60,000 are expected to die. But with the first option, participants were given the choice of a gain: the choice to save 20,000 people. So, the gamble was perceived as negative against the word “save.”

In the second option, participants were given the choice of a loss: 40,000 people would die for certain, or they could gamble, with a 33% chance, that none of them would die.
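A quick check of the arithmetic confirms that the two framings are statistically identical (reading the published 33% and 66% as 1/3 and 2/3):

```python
population = 60_000

# Framing 1: lives saved
saved_a = 20_000                # program A, certain
saved_b = (1 / 3) * population  # program B, expected value

# Framing 2: deaths
dead_a = 40_000                 # program A, certain
dead_b = (2 / 3) * population   # program B, expected value

# Same expected outcome in every case: 20,000 saved, 40,000 dead.
assert saved_a == saved_b == population - dead_a == population - dead_b
```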

Users were influenced simply by the wording. This later became known as the framing effect. The effect helps to explain why people are much more likely to buy meat when it’s labeled as 85% lean, instead of 15% fat. And it’s why twice as many patients opt for surgery when told they have an 80% chance of surviving, instead of a 20% chance of dying.

As the experiment demonstrates, words have the power to selectively influence a user’s perception of meaning. And, for most applications, words can also be used to elicit interactions.

Summing Up

Context is the barrier that makes or breaks a user’s experience. Those unconscious moments, every bit of information we discard on our way up an emotional peak, all of those fine details — they all could potentially lead to a sensible, sympathetic experience. An enjoyable experience is shaped only by context.

Some people use context and design great experiences; some don’t. Oliver Reichenstein calls it a continuum. That’s perfectly natural. We need to make the effort to glance at the future once in a while — to permit ourselves educated dreams.

We need to permit ourselves to keep the vision going — from the ones we got it from, to the ones coming to get it. We all have the tools to do it. We just need to be responsible with them.

Yes, experience can be designed
Wed, 28 Sep 2011 · http://www.sorpin.com/writings/yes-experience-can-be-designed

Experience is one of the most compressed areas of human life. It brings together so many complex factors: emotion, perception, reason, memory and intuition. In itself, it is an immensely complicated concept, and it exerts a — sometimes overwhelming — responsibility on a designer's role as a systems creator.

Each day we learn something new that helps us better understand what human experience is really about, and each discovery challenges our perception of it in some fundamental way.

But looking back at how design has shaped the tools to study, influence, mediate and sometimes even control the way we experience the artifacts we interact with raises the question of whether that experience can really be designed. And it certainly triggers lively debate.

First, semantics

An apparently simple statement like "experience can/can't be designed" requires at least a working definition of the terms at hand.

experience /ɪkˈspɪərɪəns, ɛk-/ noun an event or occurrence which leaves an impression on someone

Oxford American Dictionary

design /dɪˈzʌɪn/ verb do or plan (something) with a specific purpose in mind

Also, logic dictates that can implies the principle of alternate possibilities, while can't implies an absolute. So, on one hand we have a possibility and, on the other, a bold statement according to which an impression — admittedly, with all the associated cognitive implications — can under no circumstances be planned for and brought about. Dismissing all other possibilities seems a bit drastic.

But a dictionary definition will not suffice. The complexity involved in dealing with human experiences can be appreciated by merely considering behavior; the sheer number of functions we have to call upon is simply astonishing: stimuli, reception, expectancy, response, function, meaning, mental models, perception, encoding, memory, engagement, interaction, emotion and so forth.

This involves insights from a long array of separate disciplines like linguistics and communication, cognitive and perceptual psychology, information architecture and design, sociology and social interaction. We learn what we have to. Only to inch closer to solving a problem. And we constantly redefine ourselves to better design a system in which all these insights come together harmoniously, regardless of medium, be that a brand, a website or an application.

But complexity can also be judged by the minimum information content that can trigger an observable reaction. If we can look deep enough at the smallest, simplest entity that can further form the simplest quantifiable, controllable and repeatable sequence, we can then get an idea of how we can control and maybe design experiences.

The god helmet

Up until recently, one of the most common and most powerful experiences known to man involved religion in some form. ( 1 ) The reasons for this are varied and irrelevant in this context. What is relevant is that if we can find a way to mimic and simulate a religious experience, an experience so basic that thousands of years of social evolution have worked it directly into our brains, we just might discover the argument and inspiration we need to pursue designing experiences properly.

This is where the god helmet comes in: a device originally designed by Stanley Koren to study creativity. But participants reported a sensed presence, and about 1% claimed to have experienced God. This obviously led to media hype, which also gave the device its dramatic name. Although the experiment still awaits proper scientific peer review, it successfully showed that a subjective experience can be induced by targeting specific brain functions.

Basically, a small set of brain cells in your right temporal lobe can produce a powerful sense, a memorable experience. And if we can map out the stimuli to which our brain responds — in this case a magnet over your right hemisphere, though we can safely extend the list from physiological stimuli to psychological ones — we can determine a sequence of different stimuli, all carefully controlled to trigger a response. We can develop whole systems and procedures to induce a certain type of impression.

Even if this example explicitly outlines the physiological nature of the experiment, it nevertheless supports the idea that there are ways to design experiences. We just have to find them. Signs of patterns are everywhere.

Just look for psychological tripwires and think about how people think.

Experience designers are structuralists

Or rather, they should be. In the words of Carl Sagan, only a small group of individuals, men or women, who find all human knowledge — the arts and sciences, philosophy and psychology — interesting and, most importantly, accessible can truly look for insights and connections to coherently synthesize a system and manipulate it in such a manner that it results in a real, hopefully lasting, emotion.

Practitioners of specialized crafts like typography, usability, information architecture, interaction design and content design can greatly influence a user's perception. But experience finds its roots in systems.

Structuralism, as defined in the Oxford American Dictionary, is “a method of interpretation and analysis of aspects of human cognition, behavior, culture, and experience that focuses on relationships of contrast between elements in conceptual systems that reflect patterns underlying a superficial diversity.” Thus, structuralism straddles multiple disciplines, such as language, architecture, graphic design, sociology, and anthropology, to name a few.

It is the one trait that sets us apart as an industry and enables us to take on the role of system creators: the ability to make connections. Norman Potter refers to it as the trait that unites the very disparate standards that coexist in any one profession; ( 2 ) Milton Glaser calls it a way to unify separate occurrences and create a gestalt, an experience in which this new unity provides insight; Simon Collison calls it our spirit of inquiry; and Dan Cederholm describes us as 80 percenters.

The risk of mistaking ignorance for perspective

A structuralist approach to design is not without risk, though. Scientific observation (observe, collect, sort, analyze, postulate a theory, test) also leads to mountains of data that, more often than not, are hard to make sense of.

Getting data is easy, but selecting, storing, indexing, updating, and most importantly contextualizing the information is rather difficult.

To accurately form conjectures about possible interactions between insights obtained from brain physiology and human behavior, comparative and analytical thinking is critical. Observations need to be rigorously studied to be an adequate basis for solid reasoning.

But the benefits of churning through cognitive complexities far outweigh the costs. Mapping out common sequences of particular cognitive functions is a solid way of mediating and creating experiences, regardless of medium.

Foreword
Mon, 29 Aug 2011 · http://www.sorpin.com/writings/foreword

EVERY ARTIST STARTS a project with a problem to solve. Even an abstract painter still has some type of formalism they want to apply to the canvas. ( 1 )

I am putting forward a series of scribblings, covering a good spectrum of ideas that fit the label of my own formalism, my personal approach to design as it directly relates to human experiences.

As I explore related disciplines and their tangents, I hope to articulate how human conceptual, living structures and patterns — letters, words, language, interaction — relate to the seams and scars, the nodes, edges, tangents, trajectories, bends, pipelines ( 2 ) they create in the systems we design.

Centering on the connection between the physical and the digital world, topics will include textual interaction, personal informatics, information interfaces, software architectures, ubiquitous and pervasive technologies.
