The only rules are the limitations superimposed by the systems we collectively design. Just think of an annotation: usually found at the bottom of the page or in the margin, referencing another book or paper that you’d have to physically locate and search through. Quite a bottleneck, in terms of navigating information. The web made everything a little bit easier. It created affordances that made it easier to follow references, but we still have to dig deep to reach what we need.
So this always raised the question: how can we harness the power of the web to build tools that surface information for us, rather than forcing us to browse, explore, navigate and manage it ourselves?
* * *
Readers are innate intellectual nomads, constantly stumbling across fields they don’t own. They always seem to navigate in unanticipated directions, seeking connections that suit their own interests.
So it’s no wonder that the power of text, this source of great technological breakthroughs, is in its flexibility. What’s essentially nothing more than a sequence of symbols has been a vital force in our evolution.
Type, what McLuhan called “the prototype of all machines”, led to things like the printing press, the assembly line and the internet. And it’s fascinating to think about the systems that made it all possible. One of those mechanisms was typographical fixity: a linear, uniform and repeatable characteristic that allowed information to be reproduced in unlimited quantities, and one that has been likened, on more than one occasion, to how the web works.
Only when you get down to the most basic structures and think in very general terms do the patterns become clear. High-level systems usually have a small number of features that make everything possible. The more complex the system, the more it can be distilled into simple, elegant, constituent functions. Basically, the vaguer the rules, the more possibilities there are to create, learn and evolve. In the case of HTML, I — like many others — think that one of those characteristics was the humble <a href="...">.
The power of <a href="..."> was to turn users into makers. It allowed people — normal people, who didn’t have to know programming — to put things together. This ability to link files together is an important part of HTML’s success story, and an inspiration for hacking the paragraph. But the ability to link text together could be part of HTML’s future.
* * *
Reading is also writing.
We explore, we read from multiple sources, drawing selectively and critically from them, and weave together our own line of reasoning, reworking the material into constructions of our own. These sources, referenced through annotations, citations and quotations sit at the core of a vision that has generated a lot of interesting work and discussions over the years — the web as an annotation tool.
In its current form, it’s a vision that Ted Nelson, the father of hypertext, strongly disagrees with. His initial vision was a system that allowed authors to include portions of existing documents into their own articles, without duplicating them. Basically, a direct reference to a piece of text (not the file) that was virtually included in the document. Or what we, today, call embedded content.
In simpler terms, it allowed users to link any piece of text to any piece of text. Again, not files, not pages, but content. Turns out that was quite an ambitious vision.
His implementation back then used normal links with an address within the document appended to them, which looked something like http://documentlink.com//1942/879. Those last digits meant that the selection starts at character 1942 and goes on for 879 characters. The biggest problem with that implementation is that if you change the layout, you lose the address, and with it the text.
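A tiny sketch of why offset addresses are so brittle (the function and strings here are illustrative, not Nelson’s actual system):

```javascript
// Nelson-style offset addressing: an address like "1942/879" selects
// 879 characters starting at character 1942 of the target document.
function resolveOffsetAddress(documentText, start, length) {
  return documentText.slice(start, start + length);
}

const original = "The web made everything a little bit easier.";
const before = resolveOffsetAddress(original, 4, 3); // selects "web"

// Any edit upstream of the target shifts every offset after it,
// so the same address now silently points at different text.
const edited = "Oh, the web made everything a little bit easier.";
const after = resolveOffsetAddress(edited, 4, 3); // now selects "the"
```

Nothing in the address itself tells you the selection has drifted, which is exactly the fragility described above.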
And here we are, 50 years later, with an elegant solution to that problem from Kevin Marks, resurfaced on the occasion of this year’s W3C workshops. He calls it fragmentions, and his implementation uses search links instead of cool URLs because, arguably, they’re more resilient. His solution looks like http://samplelink.com/sample.html##annotate+the+web, where “##annotate+the+web” means search for the words “annotate the web”.
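A rough sketch of how such a search link might be taken apart (parseFragmention is a made-up name, not part of any spec):

```javascript
// Split a fragmention-style URL into the page address and the
// phrase to search for. "+" stands in for spaces, as in Kevin
// Marks' examples.
function parseFragmention(url) {
  const marker = url.indexOf('##');
  if (marker === -1) return null; // not a fragmention link
  return {
    page: url.slice(0, marker),
    phrase: url.slice(marker + 2).split('+').join(' ')
  };
}

const link = parseFragmention('http://samplelink.com/sample.html##annotate+the+web');
// → { page: 'http://samplelink.com/sample.html', phrase: 'annotate the web' }
```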
Renewed interest in the fragments problem might make things interesting. We could now — in theory at least, until browsers decide to support these features or not — link to any piece of text. But we still can’t transclude. Kevin’s implementation means that when you click a link, a new browser window opens and it scrolls down to the desired text.
The whole point of linking directly to a piece of text would be to reference it, quote it, cite it, or give it broader context. Basically, to paraphrase it. To use it in that text. So why not do it in place? In the window that’s currently open, in the text you’re currently reading? Why not transclude it? We have enough problems with fragmented reading as it is.
I think that the <p> tag can do that.
The paragraph was the first idea of an annotation. It literally comes from the Greek paragraphein — “to write by the side” — and it evolved from a note set in the margin of a paper into what we know today as a division of text that marks a change in ideas. And what are references, citations and quotations if not “ideas that are trying to escape”?
The paragraph is the perfect element for bringing a piece of text inside another piece of text. If we start thinking of the paragraph as an annotation tool, rather than simply markup, we can start adding attributes to it. Like the attribute that made the anchor such an important characteristic of the web: the href.
Essentially, the hack would look something like this:
The syntax is pretty simple: ‘^^’ marks the start of the quote, which ends either at an expression followed by ‘...’ or at a normal sentence stop (., !, ?).
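A minimal sketch of how a script might apply that syntax to a paragraph’s text (extractQuote is a hypothetical helper, not part of any implementation):

```javascript
// Hypothetical markup: <p href="http://samplelink.com/sample.html">
//   See ^^annotate the web... for details</p>
// '^^' marks where the quoted span begins; it ends at '...' or at
// the first normal sentence stop (. ! ?).
function extractQuote(text) {
  const start = text.indexOf('^^');
  if (start === -1) return null; // no quote marker in this paragraph
  const rest = text.slice(start + 2);
  // An explicit ellipsis or a sentence stop ends the quote.
  const end = rest.search(/\.\.\.|[.!?]/);
  return end === -1 ? rest : rest.slice(0, end);
}

extractQuote('See ^^annotate the web... for details');
// → 'annotate the web'
```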
This slight recasting of Kevin’s solution combined with Ted Nelson’s idea of transclusions means that the browser would search for the words “annotate the web” and bring them “in” my page. Writers wouldn’t have to manage sources or copy and paste content anymore. Readers wouldn’t have to navigate to read.
For now, due to the same-origin policy, this implementation uses a jQuery plugin that calls a proxy service on a server. The proxy looks into its internal cache to see if that particular reference has been called before. If so, it serves the content from cache. If it’s a new resource, it crawls the page, strips it down to content only, drops any formatting, puts it in the cache for later use and serves the request. That means that if the linked content changes, the text persists as long as the cache does, after which it gets updated with the modified source text. This should work pretty well in authoring systems, or even with features like Firefox/Chrome sync.
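The proxy’s behaviour can be sketched roughly like this (TransclusionProxy and fetchAndStrip are illustrative names; the real plugin and service aren’t shown here):

```javascript
// A simplified sketch of the caching proxy described above.
// fetchAndStrip stands in for the real crawl-and-clean step, which
// would fetch a page and strip it down to plain content.
class TransclusionProxy {
  constructor(fetchAndStrip) {
    this.fetchAndStrip = fetchAndStrip;
    this.cache = new Map();
  }
  serve(url) {
    // New resource: crawl once, cache the stripped content for later use.
    if (!this.cache.has(url)) {
      this.cache.set(url, this.fetchAndStrip(url));
    }
    // Cached: the quoted text persists even if the source page changes,
    // for as long as the cache entry does.
    return this.cache.get(url);
  }
}
```

Serving from cache is what gives the transcluded text its (temporary) immutability.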
However, at a browser level, one of the biggest problems that plagued Ted Nelson’s Xanadu, as well as Kevin Marks’ fragmentions, still remains: what if the linked-to text changes? Anyone can contribute to solving this problem by way of an IndieWebCamp wiki that Kevin put together, which does a pretty good job of centralizing all the work done around this topic, as well as the community discussions.
We’re still figuring out how to build systems like this, but the focus of this article isn’t exactly on the technicalities of it all, but rather on the interface.
If you think about it, it’s not so much hacking the <p> as it is a natural progression of a system that has already enabled so much evolution and transformation. If <a href="..."> expands the content outwards, then <p href="..."> could expand it inwards.
It’s fascinating to think about the semantic structures that could arise from that. But for the <p href="..."> to turn into a “semantic” thing, it first has to evolve. It has to have meaningful relationships with other bits of the system, or with users of the system. Imagining how that will happen is not easy. A few things are already obvious, like the fact that it taps directly into the linked data phenomenon. Small, direct applications in authoring tools and writing systems, such as new ways to handle quotations, citations and annotations, also come to mind. This could directly affect our interfaces for linking content and our typographical systems that handle citations, in new and exciting ways.
Being able to refer directly to a small piece of text is a powerful expression tool for people that work with multiple sources. It creates a world of opportunity to do things app authors don’t have to pre-arrange support for. Browser-provided default behavior can bootstrap massive benefits.
Linkability for very little extra effort is also a tremendous advantage for businesses. Because it makes it easy for lots of people to refer to the same data, it also makes it interesting for search engines as a machine-readable, extractable data source.
As a practical matter, it could also operate as an augmented copy/paste tool, in the sense of the copied material also carrying its source. Who knows?
I see massive benefits at a macro scale, where the <p> can do for HTML what typographical fixity did for text: by producing a uniformly repeatable commodity, it can be an archetype for any subsequent semantic structures.
It could make it easier to build successive iterations on the same text, from multiple sources, instead of trying to retrieve scattered fragments of a text, and thus support our natural, textual poaching habits. It could balance the mobility of content brought on by the <a href="..."> with some much-needed immutability.
But the predictability of these systems starts at the micro scale: developers trying to communicate things to users. And one of the primary reasons people build things on the web is because they want some other people to find it, use it and make other things with it. And I think the humble <p> can enable an ecosystem of people to do exactly that.
And maybe talking about the paragraph from this perspective will convince standards creators that the <p> needs a “href”.
Now I use a keyboard and I search.
And I used to struggle with the emptiness of a white piece of paper. That’s when thinking happens, the kind of thinking that is slow, deliberate, and effortful. Sometimes, maybe even logical.
It seems like an odd quirk that the only thing that didn’t change over the years is the emptiness of a white search box. Everything else changed: the algorithms, the interaction, and even what search really means. But that white search box still forces me to slow think sometimes.
Remember what it was like to search in 1998? You’d sit down and boot up your bulky computer, dial up on your squawky modem, type in some keywords, and get 10 blue links to websites that had those words.
— Amit Singhal, senior vice president of search at Google
That basic idea of putting in a couple of key words and getting back a result that contained those exact matches worked pretty well in the early days. Until somebody figured out that relevant documents often have relevant neighbours. This idea, known in the industry as “fish search”, is what made keywords evolve into some sort of digital magnets, pulling on other data structures to create a space of meaning, which the machine can then interpret.
Going from strings to things, search is now ubiquitous. From text-based, to voice recognition, to visual matching, search has evolved into a context based system that can recognise and understand references to actual “things,” i.e. ideas or entities. And with that, the interaction changes as well.
Now, instead of inputting unnatural, mechanistic queries, you can ask almost-human-like questions. Instead of handling the system, you talk with the system.
The future of search is a conversation with someone you trust.
— John Battelle, The Search
There’s something vaguely synaesthetic about talking in writing. As unnatural as that feels, we all adapt by transitioning into a more — what Kahneman calls — slow type of thinking, which activates the conscious, logical bit of the brain, because the fast part — the more primal, instinctive and emotional one — needs help.
I know how to talk to another person, but I don’t intuitively know how to talk to a computer. So I need to think. About keywords, queries, algorithms and generally about how the system works.
Talking to a machine like talking to a human is — for now, at least — still a thing of the future. The vision is a dialogue between an agent and a human, in which they know the context, proactively suggest things and interrupt each other. Like humans do.
But the truth is that we’re not quite there yet, and I’m not always ready to think about the mechanics of it all. For now, we’re stuck in a temporary limbo, a constant switching between slow thinking and fast thinking, forced on us by the design of our machines. And that continuous swap has a rhythm of its own.
* * *
There used to be a time when the only things gravitating around my words were my thoughts. Now, each letter I type in attracts swarms of algobots that organise themselves around my words. And it’s getting harder and harder to read what they’re doing. But maybe there’s a way to feel what they’re doing.
Getting what I say has a lot to do with integrating technologies like speech, natural language processing and semantic autocomplete, and getting them to align with the context I’m in.
The system takes into account social, cultural and organisational settings in which computing and information technology will be used. It uses contextual awareness to semantically infer meaning. To understand intent and to respond accordingly.
From location, time and task awareness algorithms to reasoning, planning and learning algorithms, they all need to work together and align with semantics, in ways that are not human readable.
And that’s ok sometimes. I’m not always thinking of the inner workings of the system. On the contrary, most of the times I start a search with whatever is available to me as a trigger: a sound, an emotion, a physical characteristic, a moment, it could be anything.
I start a search by thinking fast. And that’s usually when search fails — when my input is one you’d normally find in the longtail. But with things like semantic auto-complete and semantic snap-to-grid, the system can fail gracefully, guiding me into a slower way of thinking, so I can type in words that a machine can understand and process. Or words that others have typed in millions of times. It guides me back to the fathead, so I don’t have to slow think anymore.
Paradoxically, it makes me think so I don’t have to think. And the way that unfolds is a bit like a sine wave that repeats each time I perform a search. So there’s an important aspect of this type of interaction that we, as designers, should be talking more about: rhythm. A concept that is neither visual nor linguistic, but essential to our daily lives.
In thinking, fast and slow, we are creating rhythmic patterns. If those patterns match those that our technology imposes, then the interaction becomes fluid; it disappears from the conscious mind.
Good design communicates with the broader, faster, more emotional system.
— Joichi Ito, Director, MIT Media Lab
A sense of rhythm is fundamental for any interaction, any conversation, even one with a machine.
In real world conversations, we leave out the context under the assumption that our communication partner knows the context as well. Same goes for conversations with machines. Only that it’s the machine’s job to fill in that context.
And now that we’re teaching our machines to process information contextually, maybe we should start thinking about ways of teaching them how to use that information organically.
We’re already creating interfaces that send information to — and receive control signals from — our fast system. Wearable sensors, assistive robots, embodied technology in general: all have the potential to enrich our words by adding to what we’re already collecting — at least 57 data signals if you’re not logged in.
So maybe rhythm can be expressed as an orienting feature — rather than just another datapoint — in information-seeking systems. As a descriptive force, it can be used, once you determine a rhythm’s function, to make educated predictions about how it will manifest in a variety of situations, like daily activities that occur in regular patterns.
But so does the cyclical balance of our slow and fast minds. And the transition from one state of mind to another gets reflected in our interaction with the system. So rhythm needs to be embedded in the interaction, at the level of the interface.
If we can figure out how to do that, maybe then we can talk about a real conversation between a man and a machine.
A never-ending stream of information just waiting to spill right in front of our eyes, each time a device wakes. But our visual system is uniform, sequential and continuous. That means that we have to process information bit by bit. We have limited resources and bits add up exponentially.
It gets too much, too fast. Until it all becomes noise. A type of constant, background noise that is now an integral part of our lives.

From the front of your eyes, to the back of your mind
This is a box that moves the stream of information from your eyes to your ears. A box that plays digital noise.
It offers a new kind of interaction, a synaesthetic one rather than an analytical and linear one. The ear, unlike the eye, can process information simultaneously, so every tweet, status, post and check-in, everything that makes up your stream of information, will trigger a specific soundwave. This enables you to hear the information coming your way. It’s what Alex Dong, the maker of choir.io, calls ambient sound monitoring.
Through hearing, it’s easier to be aware of the data rather than actively engage with it. Rather than visually consume it. And in a world of information overload, things that don't crave attention, things that sit politely in the corner of the room, also sit in the corner of our eye and in the back of our minds. In our secondary attention.

Things that capture the attention focus the mind. But things that release, delight.
Russell Davies talked about a box with a simple design, an “on/off” button and a speaker playing 3 channels. Like a radio. It’s a good design principle. In fact, it’s the same design used in black box trading: all the traders had was just a monitor in front of them with some numbers on it and just a red button that said, "Stop."
But precisely because there’s just an “on/off” button, the device provides great amounts of auditory information while affording little interaction. It’s what McLuhan calls a “hot medium”: a high-definition, low-participation medium. And objects that create friction draw attention. But if the hurdles are removed, the interaction becomes fluid, hiding the object from attention.
And the best part is that only one thing needs to be changed: the toggle becomes a knob.
Almost unparalleled in its versatility, the humble knob shows up everywhere. It can be continuous or discrete (a volume knob or a selector switch), analog or digital (volume again, or on/off). It can be finely graded with a scale, used to control water flow through a faucet, shuttle through movie footage, open a door and, if it’s Griffin’s PowerMate, it can do just about anything else.

The knob is one of Wired's all-time best interfaces.
More importantly, a knob is an interface built for rhythm. It cedes control, thus empowering the user to set his own pace. A wonderful control interface.

Play some sweet music, not noisy geekiness
The question is now how to seamlessly translate the loudness of data into something that can be physically handled.
This is where some proprietary algorithms come in, looking along two axes, importance and frequency, filtering notifications and assigning different sounds to different types of data. Based on similarity and relevance, soundwaves learn to group together to form more complex and harmonious sounds. Music, almost.
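One small piece of that translation can be sketched as a mapping from a knob position to the silence between groups of sounds, on a relative scale (every number and name here is invented):

```javascript
// Illustrative sketch: map a knob position in [0, 1] to the silent
// gap between groups of sounds. The scale is relative (exponential)
// rather than linear, to roughly match perception: each equal turn
// multiplies the gap instead of adding to it.
function gapSeconds(knob, minGap = 0.1, maxGap = 30) {
  // knob = 1 → densest stream (shortest gaps); knob = 0 → longest gaps
  return minGap * Math.pow(maxGap / minGap, 1 - knob);
}
```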
The volume scale is relative rather than finely grained, as is human perception. A turn of the wheel increases or decreases the gaps between groups of sounds. So what the box essentially does is build quiet gaps into all that white noise that is the stream of data.

Boombox or platform?
Sound engineering and a bit of machine intelligence is a space with great potential.
The original plan was to move the stream of information to a box that sits politely in a corner of the room. And transforming it into an audible experience might actually work. But the keyword here is a box — as in, any box.
Because almost everybody already has a box of some kind (a radio, a stereo, anything with a volume knob will work; the screen is optional), there’s no need for a new product to replace an existing one. But there is a need for a product that will augment an existing one.
So I guess the next step would be a bridge between these two worlds of pre-existing, manufactured products and digital communications. And with platforms for connected devices, like BERG Cloud, positioned right in this space, maybe there is a way to make your own old stereo algorithmically broadcast the sounds of your network. All you need is a knob, a bit of Berg and some code. Wonder how many late nights started with this sentence?
By allowing people to extend the list of devices directly, we stand a chance to make the experience of hearing data more meaningful.
That’s the challenge, I think.
But the thing about the future is that it always comes through the side door. It never comes banging. If everything's there, it will just seem like a natural progression of things.
And I think everything is already here.

There is a vision.
About a hundred years ago, a man in Poland drew the first network diagram, which he called the harmonogram. Besides creating what's certainly one of the building blocks of project management, Adamiecki — the man in question — also laid out an overarching principle for the future of project management: harmony.
Working in harmony is what matters: from the importance of creating a good team, to good coordination, to good tools. He literally wrote the book on the Law of Harmony.
However, harmony is contextual. And Adamiecki's solutions were not. Creating a rhythm, a dynamic, and reaching a utopian harmony depends entirely on the specifics of the environment.
Workflows are such a delicate problem of project management that we are only just — a hundred years later — starting to move away from that thinking.

There is a way
The best project management software created transparency; it created an environment in which everybody on the team was aware of each other. Because most of the time, work actually means working with someone. And that's why a process of designing with empathy started to gradually get the interface out of the way and focus on what people do best: working with each other.
But most of the time, working with different people means working on multiple projects at different stages. That's a lot of extra information that needs to be managed — notifications, messages, documents, priorities, chats, discussions, events. All, things that can create friction and interrupt concentration.
It's our job, as designers, to manage all that information in such a way that it doesn't feel like a burden. To figure out workflows and design empathetically around them. But if there were a way to build a system that is sensitive to that information and reacts to it automatically, our job would be a lot easier. And people would communicate more easily.

It needs context
If you think about it, in real world conversations, we leave out the context under the assumption that our communication partner knows the context as well. It's the machine’s job to fill in that context. And the only way to do that is to use AI methods.
Ambient awareness is a powerful vision for productivity if it's enhanced by a bit of machine intelligence. And I'm not talking about general AI here; I'm talking about a concept that has helped a lot of companies stay ahead of the curve and innovate: fractional AI.
When you give existing products and environments “a kind of domesticated, not-very-intelligent artificial intelligence”, as Matt Webb calls it, they effectively become something else. They become part of the future, pushed through the side door by fractional AI.

Things that release
What I'm saying is that we can solve real-world, small, practical problems that users face, but use the big guns of AI to do it.
A system that could automatically align all of that information — documents and discussions — to the context you're in would be a polite system.
Objects that draw attention create friction. But objects that release, that set the mind free to wander, objects that are polite: those are the objects that delight, that have a personality.
Time awareness, task awareness, social, cultural and organisational settings, all can be used to semantically infer meaning. To understand intent and respond accordingly.
So now, all that's left is to make the transition from a tool to an intelligent tool: project management software will gradually become a personal assistant that knows your context and reacts accordingly, removing all obstacles between you and your work.
Giving an environment like a project management system contextual awareness is the last step in bringing a 100-year-old vision as close to reality as possible: people working with other people in complete harmony.
I wanted to see what it would take to put this all together. And to do that, it takes a few things done differently: context awareness, a personalised experience (it's not the same for everybody) and adaptive UIs.
All new concepts for productivity tools, all new design challenges.
There was a time when the mere mention of artificial intelligence was wrapped in constant debate and triggered images of Hollywood-crafted products, like HAL 9000. The concept itself is quite controversial; it challenges human thought as Darwin once challenged human origins. But we moved on, and now we carry these intelligent machines in our pockets.
There’s a 38.9% chance you have one, too. Siri, the out-of-sight personal assistant from Apple, delivers an amazing experience. It listens to you, understands you, does what you say, and even talks back to you.
Sounds simple enough for us humans, but these are remarkable achievements for a machine. It has to process language, interpret context, understand intent, and orchestrate multiple services and information sources. And it brings together technologies that rely on dialog and natural language understandings, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, and service delegation to do it.
Spin back the clock 50 years and all of this wasn’t even remotely possible. But just two years after Turing published the first documented idea of intelligent machines, three people were already working on the first system capable of speech recognition, named Audrey.
It could only process digits. Spoken by a single voice. With pauses in between. And it occupied a six-foot-high relay rack.
Not exactly a marvel of technology by today’s standards. But back then, when computers had only 1 KB of RAM, it was an impressive achievement. More impressive still when you think about how such a system came to be.

It all started with an illusion act
Many elements from very different spheres come together in the story of Siri, and it all starts with a man doing some magic.
Tracing Siri’s ancestry takes us back roughly 250 years, to Austria, when Vienna still had an empress. The story begins with a man known mostly for what was perhaps the most famous illusion in history: the Mechanical Turk, a machine that could supposedly play chess on its own and beat any opponent.
In reality, it was just a wooden cabinet with a life-size, mustache-wearing doll on top and a man inside, playing chess. It tricked people into thinking the machine was intelligent, but the idea itself was enough to intrigue the likes of Napoleon. (He played the Turk and he lost.)
And while the Turk made its creator—Wolfgang von Kempelen—popular, it is another of von Kempelen’s inventions that marks the beginning for Siri’s story.
The first speaking machine was a pretty straightforward concept that tried to simulate the human vocal tract — it had lungs and everything. Nevertheless, it was the first machine that could replicate whole words and sentences. It was this machine that would set the stage for Audrey.

Chess, the game that made it all possible
Von Kempelen’s speaking machine was the first machine that could replicate human speech. Audrey was the first that could recognize human speech. But Siri is the first machine that can understand human speech.
Understanding is the unique ability that swings the story back to the Turk. The machine’s connection with chess isn’t random. Chess is more than a game; it’s an entirely mental activity. And it’s a perfect metaphor that would allow for the birth of a new scientific discipline, artificial intelligence.
A machine capable of defeating a human opponent at a mind game is an intelligent machine, by any logical standards—or, at least, that was the premise.
While the Turk was, for the first time in history, a real image of a machine that could be better than us at something, it was just an illusion, with a man operating it. But ever since, the idea of an intelligent machine started slowly morphing into physical technologies.
The next obvious stage would certainly seem to be a machine that could play chess and be self-operated. In 1912, the real thing quickly followed. It was called Ajedrecista and it was the first computer game. Only, without an actual, you know, computer.
Making this happen required a deep understanding of how we think when we play chess.
Every move weaves together an amazing chain of mental processes: Perception transforms the pieces on the board into a series of symbols, and long-term memory overlaps perceptions with previous knowledge. Logical thought then searches for variations, and decision-making is needed for the actual move.
Move after move, the chess game becomes a sequence of decision-making events governed by strict logical rules. And it is this logic module in our brain that chess heavily stimulates, so much so that it can be simulated. It doesn’t take a big imaginative leap to imagine that thought can be simulated.
This realization gave way to wonderful theoretical breakthroughs. Concepts like algorithms, recursiveness and programming were born. Having to analyze how we think about chess quickly led to computer thinking.

AI: A new, old way of designing experiences
A special group of people made a great imaginative leap. They realized that a game holds the secret to human thought. For people like Edward Feigenbaum, Marvin Minsky, Allen Newell, Herbert Simon, Alan Turing, John von Neumann, and Norbert Wiener — the founders of AI as a scientific discipline — pinpointing all the mental processes necessary to generate high-level cognitive activities played a very important role in the development of simulated thought through computer programming.
Logic and process alone weren’t enough, though. We expanded our concepts to expert systems, knowledge engineering, neural networks, and so on. The subsequent knowledge-based models of thought are nothing short of amazing. But the real breakthrough came from an opposite kind of approach, one the father of expert systems, Edward Feigenbaum, called representation. This approach held that knowledge-modeling the real world was much too difficult; instead, systems should adapt and respond effectively to real interactions with the world.
This is important because it has finally allowed for the development of a truly human-centered approach to designing systems, an approach initially articulated by Bill Moggridge and one which inspired a major shift in design thinking that we see maturing today.
AI and HCI have been described as having opposite views on how humans and computers should interact. Human-centered computing brings all that together by combining intelligent systems, human-computer interaction, and contextual design. Instead of trying to imitate (or substitute) the human, the goal is to amplify and extend their capabilities, much like a prosthesis does, although not in the sense that it compensates for the specific disabilities of any given individual, but rather because it enables us to overcome the biological limitations shared by all of us.
Above all else, a prosthesis needs to fit; otherwise, it will be rejected. In the same manner, systems designed to assist, rather than replace, need to be personal and contextual. They need to be intelligent in order to fit.
In terms of actual capabilities, Siri wouldn’t pass a Turing Test. But it doesn’t set out to do so. It doesn’t try to imitate our abilities, but rather to extend them.
For example, say you want to go to the best restaurant around. You know you can do that. With the help of technology, you can combine information from different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and your own favorites).
But why would you want to? You want to use technology as a tool, not get immersed in the experience of interacting with it.
Siri delegates everything you don’t want to do. It lets you use technology as it’s supposed to be used: as a tool. By doing so, it becomes a digital prosthesis. As a result, the experience is truly human-centered, built for humans based on real human needs.

Final lessons
The story of Siri is full of great achievements of the human mind. It shows us how the power of thought can fuel great technological breakthroughs. It ends with the same man that started it all: von Kempelen, the man with the kind of thinking that gave birth to the first speaking machine, a truly amazing technological achievement. But more importantly, the kind of thinking that creates genuine human experiences.
The Turk’s biggest achievement was to challenge how we think about machines. This is the type of thinking that I like to call design thinking.
Yes, Siri still has its shortcomings, starting with the fact that it’s voice-controlled. But the mechanisms behind it are nothing short of amazing. Properly pairing machine intelligence with true contextual awareness is what created the first conversational interface that actually works.
And simply because it works, it marks an important milestone: It becomes a template for all future voice-controlled interactions. Even Google has updated its interfaces to include conversational and contextual interfaces. What Siri did was show the world a bright idea and made it stick.
More importantly, for professionals, the story behind Siri offers valuable lessons in true experience design, vital lessons in times clearly dominated by form instead of content, where an excessive preoccupation with formalism can impede further developments.
Experience design is more than numbers, boxes, and diagrams. It’s emotional, invisible at the time of inception, innovative, developed intelligently, and deeply contextual. A complex multiplex, feeding on a variety of different disciplines, such as neuroscience, psychology, linguistics, logic, biology, social sciences, computer science, software engineering, mathematics, and philosophy.
Much in the same way that Siri forges new tools from old technologies, good design feeds on AI for the raw materials to conquer human experience. To add function to experience. To add personality.
Avoid fields. Jump fences.
Disciplinary boundaries and regulatory regimes are attempts to control the wilding of creative life. They are often understandable efforts to order what are manifold, complex, evolutionary processes. Our job is to jump the fences and cross the fields.
One of those things was a piece of writing from Anne Burdick, called “If words, then reading”. It's a small piece so I'm going to quote it here in full so you can appreciate the playfulness of it all.
If interface, then navigation. If disguise, then disclosure. If map, then itinerary. If resource, then use. If environment, then wayfinding. If plan, then practice. If erasure, then reconstruction. If sketch, then animation. If architecture, then dwelling. If capture, then release. If trace, then archeology. If program, then process.
Some things, like that first line — “If interface, then navigation” — find their way into your mind, lodge themselves there and start nesting.
The connection, inside Anne Burdick's mind, managed to colonise the word interface so completely for me, that whenever I mumble it, I automatically connect it to the end of that line — then navigation. It's concept imperialism, is what that is. And each time that happens, doubt sets in, questions start flooding in. Are we doing the right thing? Are we that used to leaving breadcrumb trails for our users — a word I'm not at all comfortable with — that its effect has escaped us?
We like to think we're building things people are interacting with. But are we really making stuff that users have to navigate through?
Coming back to that first line, it stayed with me because most of the people using what we build make the same connection. That's why it haunts me. Because Anne's association comes from the real world, not the world of design theory or practice.
But that can't be right, can it? Navigation is merely a part of a structure-making process we use to handle information, mould it into a thing and release it into a space in which people can interact with it and make other things — the interface.
Shannon Christine Mattern recently published a solid overview of the term, covering a good spectrum of ideas from both an engineering and humanistic mindset. In that spectrum, the end of “If interface, then” could be a lot of things: space, zone, threshold, translation, semantics, interaction, even allegory and metaphors. Yet, we seem to have locked ourselves in one dimension, in which the sense of orientation — across time or space — absolutely dominates. A dimension that inherits its vocabulary from HCI and engineering, where clicks, tasks, hierarchies, usability and effectiveness are the norm. Where people are users.
And it's this kind of jargon that crutches our own vocabulary instead of augmenting it.
Maybe Jason Santa Maria is right. Maybe we don't have the language to talk about these things. If that's the case, maybe we need a bit of a re-think, because — using Paul Dourish's words — the whole "world can become an interface to computation". And the world is a big word.
Looking through that lens, I can't help but wonder: is this what we have done to the people using the things we make? Have we invaded the word interface so thoroughly, that they can only equate it with navigation?
We need a new design language because science and society aren't polar opposites.
Jan Tschichold was one of the many who talked about the importance of working with typefaces that are in spiritual accordance with our times. But in a time defined by a constant stream of information, proclaiming one typeface or style over another proves to be a challenging pursuit. In a never-ending stream, universality seems to lose its rule. This is a time of circumstantial solutions, a time of contextual thinking. And this is where, I think, the concept of generative typography comes in.
* * *
With one foot firmly lodged in parametric design (or, by its first name, variational geometry) and the other suspended in potential futures, generative typography currently hangs in a limbo.
The idea is fairly simple: in the old days, designers had a set of punches and chisels, now they have a mouse and a keyboard. So instead of being carved into metal, the typeface is now being generated by a set of rules or algorithms which can churn out a variety of forms. And when you factor in that those algorithms can now tap into the network through APIs, the possibilities seem endless. And what amazing possibilities they must be. It used to be all about geometrical rules, now typefaces can be tied to environmental, social and individual activities feeding from networked information.
However, all potentially generative systems can quickly go into an exponential “runaway” if uncorrected. What keeps them in check, in a stable state, is a sort of governing loop — the parameter. And in the case of typography, what usually defines the relationships between parameters is the shape of the letters themselves.
Inherited from architecture, where parametric design has a richer history, this preoccupation with form is exactly what keeps generative typography planted within a niche. At the moment, generative typography is all about futuristic form-making and experimentation: posters, book experiments, interactive installations, and conceptual artworks. It’s still very much a playground for new technologies.
But it doesn’t have to be.
“Every piece of typography which originates in a preconceived idea of form, of whatever kind, is wrong.”
— Jan Tschichold
The great misunderstanding of generative typography is the belief that only content and appearance matter, while its true potential — comfortable legibility — remains unrealised. Its function (readability) and its essence (clarity) are what merit further discussion. And what I propose is that we bring systems thinking into generative typography.
* * *
Good typography catches the spirit of a text and stays loyal to it. It’s like a perfect speaking voice: it neither overshadows nor patronises the content. How the text speaks to us, how it is presented to us, determines how we approach it. And unconsciously, we demand the shape to which we have become accustomed. Simply put, when reading any text, the typeface is vital to the clear transmission of ideas.
That can have tremendous implications for — since this article was initially published on Medium, say — a publishing platform, where the wide range of topics and writing styles is bewildering. Beautiful typography is essential, but a single typeface is simply not enough. And certainly, designers can’t design every piece of writing on the platform.
So, to ground this discussion into more practical terms, if every text requires a certain typographical atmosphere, why not generate that atmosphere with the help of algorithms?
I think there’s a new kind of product emerging in that space and I wanted to see what it would take to put it together. So if you will, let your imagination run along these following lines as I will attempt to describe an integrated vision and a few speculative details.
* * *
Getting to that comfortable reading experience means starting with the thing that creates it in the first place: writing. When you write, words have value. Arguably, that value can be represented or encoded in a system. And it is that system that is at the heart of it all.
As you type, algorithms can analyse the text in real time, scanning, identifying and constantly matching keywords to sentences and paragraphs — in short, context — in order to find the best possible typographical match. They go scouring through every word to look for indicators that can be fed into that value system. Language processing algorithms, data detectors, intent understanding algorithms come together to align with the semantics of the text. A sort of semantic snap-to-grid.
Based on linguistic style interpretations, the system snaps to the typographical matches that make the most sense for the text you’re writing. The algorithms can learn to distinguish between news and personal pieces, between political writing and fiction, and ultimately between any two styles. They can then automatically set the type for you. The whole idea is about bringing a certain level of intelligence to the interface, to use Tom Gruber’s words.
Basically, as you write, the typeface changes in real-time to match the style of your writing. No superfluous, visual interface. Your writing is the interface. You want a more ragged look, write more raggedly. If you’re looking for a soft look, tone down your voice. The system is there only if you need it, acting as your own aesthetic assistant, if you will.
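A toy sketch can make the idea concrete. Everything in the snippet below is invented for illustration: the marker words, the register names and the typeface descriptions are placeholders, and a real system would rely on proper language-processing models rather than word counts.

```python
# Hypothetical marker words for two writing registers (illustration only).
NEWS_MARKERS = {"reported", "according", "officials", "announced"}
PERSONAL_MARKERS = {"i", "me", "my", "felt", "remember"}

def classify_style(text: str) -> str:
    """Crude register detection: count marker words per register."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    news = sum(w in NEWS_MARKERS for w in words)
    personal = sum(w in PERSONAL_MARKERS for w in words)
    return "news" if news > personal else "personal"

# Hypothetical typographical matches for each detected register.
TYPEFACE_FOR_STYLE = {
    "news": "a sturdy, neutral serif",
    "personal": "a warm, humanist sans",
}

def match_typeface(text: str) -> str:
    """The 'semantic snap-to-grid': text in, typographical match out."""
    return TYPEFACE_FOR_STYLE[classify_style(text)]
```

Run on every keystroke, a (much smarter) version of `match_typeface` is all the interface there is: the writing itself drives the type.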
In this scenario, generative typography becomes a real-time visualisation of context. The system reflects your writing style and adapts to it. You can see the changes happening and you can shape them. This idea of systems that show their context, proposed by Paul Dourish in 2004, is one I wholeheartedly agree with.
Under the hood, the architecture would be based on a declarative model of “which typeface is good at what?”, connected to different APIs, mapped to domain models, and those domain models mapped to language.
A lot of services are available on the web with structured data over APIs. For relevant services, designers can declare and model the capabilities.
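Such a declaration could be as plain as a capability table plus a selection rule. The typeface names and capability tags below are invented; only the shape of the model is the point.

```python
# Hypothetical declarative model: each typeface declares what it is good at.
TYPEFACE_CAPABILITIES = {
    "Example Serif": {"long-form", "fiction", "print-like"},
    "Example Sans": {"news", "short-form", "screen"},
    "Example Slab": {"opinion", "headlines"},
}

def best_typeface(required: set) -> str:
    """Pick the declared typeface whose capabilities overlap most
    with what the analysed text requires."""
    return max(
        TYPEFACE_CAPABILITIES,
        key=lambda name: len(TYPEFACE_CAPABILITIES[name] & required),
    )
```

A language-processing layer would produce the required set (say, `{"news", "screen"}`), and the declaration, not a designer’s one-off decision, resolves it to a typeface.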
This is an environment with new possibilities. Designers don’t just fiddle with fonts anymore. Nor do they design an overarching style or proclaim a universal solution. Instead, they design the system that intelligently delivers the best possible typography for any given piece of writing. It’s a fine balance of form and function.
Good typography — again, in Jan Tschichold’s words — acts as a “tactful servant to the written word”. And maybe, just maybe, the day in which we give new meaning to those words is closer than we thought.
More often than not, the most successful strategies are rooted in our own very nature. And common to most Grandmasters is that they almost never take the easy way out. A different, better alternative is always available, and they go looking for it. That creativity, that compulsion, that drive to look beyond what comes instinctively is what fuels successful strategies and explains why so few Grandmasters are out there.
For most of us, however, things are simpler. We tend to favor the shortest path, the easy way out, the shortcut. We cut through the middle of the park if we have to.
We’re naturally lazy. When we think, we search for the nearest pattern, and when we find the pattern, we don’t need to think anymore — we just follow the pattern. Our brains have evolved to be — borrowing Edward de Bono’s expression — “brilliantly uncreative.”
This preference for shortcuts is what Grandmasters naturally exploit. Playing on well-known weaknesses, automatisms or unconscious actions has long been a common strategy in chess.
* * *
When it comes to using and interacting with technology, that underlying behavior seems to hold. Users unconsciously look for the line of least resistance — la loi du moindre effort, as psycholinguist François Richaudeau called it. Richaudeau’s work revolved around words, but considering how vital communication, language and words are to the design of interactions, the core principles still apply: The shortest words will be those emitted, understood and read with the minimum of effort, and also those most frequently employed due to their more profound presence — or incrustation — in our memory.
We remember shorter words better. That’s because, according to Kenneth S. Goodman, we process graphic, syntactic and semantic information simultaneously. In doing so, users carry out cycles of sampling, predicting, testing and confirming as strategies to bring in “the most reliable prediction with the minimum use of the information available.”
The process isn’t specific to words and reading, though. It underlies our behavior and interaction with the world around us.
Framing it in this way seems to suggest the generalized idea that behavioral patterns are linked to common causes — a concept expanded by behaviorism to the extent that it proposes a predictable and reliable link between a stimulus and the response it produces. That generalization is largely refuted by the cognitive revolution on the grounds that prior knowledge and mental processes intervene between a stimulus and response to reduce the predictability of human behavior — or the response — given a stimulus. Essentially, we can’t predict behavior.
While that may hold true at a micro level, there are still plenty of common paths of action in the way we work — stimulus, sensory organs, interneuron, brain, processing, motor neuron, response. It’s still safe to assume that if we design an environment in which even the subtlest of details is sympathetic to how we, as humans, work and think, and with the singular purpose of easing the experience, then users will always favor that line of least resistance. Is the environment we design sympathetic to the way we work?
Designing an experience from a perspective that is more closely related to how we naturally strategize, designing guidance through pattern prediction, as opposed to limit enforcement, might prove to be a more viable approach.
Let’s put that in perspective. Take decision fatigue.
We all make decisions every day. Some of them big, some of them small, some conscious and some so insignificant that they slip by us without our even realizing. And in a world in which we are constantly bombarded with information and media options, such as blogs, social networks, magazines and TV, the effects of so many decisions that we have to take day by day become increasingly obvious.
Our decisions, even those that are mere preferences between option A and B, gradually take their toll on our cognitive load. The mental processes of decision-making are strenuous; we have only a finite store of mental energy available to exert self-control.
Basically, we get tired if we make too many decisions. As a result, according to Barry Schwartz, one of three things is likely to happen: we end up making poor decisions, we become more dissatisfied with our choices, or we get paralyzed and don’t choose at all.
Interaction processes — regardless of the medium — are a prime example of decision fatigue in action. At a micro level, hundreds of small, subtle and, most of the time, unconscious decisions that we take every time we use an application, browse a website or even follow an ad can affect the way we experience the artefact that we’re interacting with.
At a macro level, thanks to what we know about decision fatigue, we can predict that users can be overwhelmed to the point of making a default choice, or no choice at all, depending on the options they face. One example of this is Dan Ariely’s opt-in versus opt-out example for forms of the US Department of Motor Vehicles. It shows that we can basically predict how people will react in a very specific situation by controlling the design of the application form and paying close attention to the defaults.
This is a very specific prediction of behavior. We’re not normally used to thinking of behavior as being that predictable. But in a lot of situations, it is. Ariely calls it being predictably irrational.
Contextual cues — through signs or linguistic cues, because they have the potential to refer to a specific value system — can sometimes direct cognition in particular ways, either encouraging or discouraging certain patterns of behavior. Thus, it’s vital that user experience practitioners maintain complete control over and responsibly use all elements in an interface that could be used as contextual cues and predictors.
And one of the biggest cues at the disposal of designers is language.
We, as a species, have refined language through countless iterations over the span of millennia to such an extent that words are tied to complex experiences, rather than to singular instances, by way of symbolic representation.

The Power Of Words
In 1967, Paul Watzlawick described what later became a cornerstone of communication theory: meta-communication. He postulated that every communication includes, apart from the plain meaning of words, more information: information on how the speaker wants to be understood and how they see themselves in relation to the receiver of the information.
Language, as a clear manifestation of the cognitive capacity of the human mind, is only a medium to carry out acts of communication. That’s one of the reasons why F.H. Bradley once said that experience is basically incommunicable.
Still, there is always something to communicate — moods, annoyances, happiness, feelings, ideas. There is always something that we need to understand — a noise, a color, a sign, a danger, an emotion. There is always some medium that is familiar to us — a grammar, an artistic language, a chess board, an interface.
While context provides a window into how people behave, words have the power to steer people towards a particular pattern of behavior through frame manipulation.
Frame manipulation invariably affects users — admittedly, to different degrees. One prime example of how the context of options and the wording of a question affects behavior is Tversky and Kahneman’s famous experiment in 1981, which demonstrated systematic reversals of preference when the same problem was presented in different ways.
The experiment required participants to make a hypothetical decision, and the researchers tried to determine whether they could steer the participants towards an answer simply by wording the questions carefully.
The experiment proposed a hypothetical outbreak of disease with 60,000 predicted deaths, and participants had to choose between two programs:

Program A: 20,000 people would be saved.
Program B: There is a 33% chance that all 60,000 would be saved, and a 66% chance that none of them would be saved.
With these options, program A was a favorite among participants. But what happened when the same question was asked again, but this time, instead of counting the number of lives saved, counting the number of deaths? Here’s how the second option would have sounded:

Program A: 40,000 people would die.
Program B: There is a 33% chance that none of them would die, and a 66% chance that all of them would die.
This time around, the clear favorite was program B.
The interesting part is that the math is exactly the same for both: 40,000 out of 60,000 would die anyway. But with the first option, participants were given the choice of a gain: they were given the choice to save 20,000 people. So, the gamble was perceived as negative against the word “save.”
In the second option, participants were given the choice of a loss: 40,000 people would die, or they could gamble on the 33% chance that none of them would die.
Users were influenced simply by the wording. This later became known as the framing effect. The effect helps to explain why people are much more likely to buy meat when it’s labeled as 85% lean, instead of 15% fat. And it’s why twice as many patients opt for surgery when told they have an 80% chance of surviving, instead of a 20% chance of dying.
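That the two framings are mathematically identical is easy to verify. Reading the experiment’s 33%/66% figures as the underlying 1/3 and 2/3, the expected number of deaths is the same under every option:

```python
TOTAL = 60_000  # predicted deaths in the hypothetical outbreak

# "Lives saved" framing
a_deaths_framing1 = TOTAL - 20_000             # Program A: 20,000 saved
b_deaths_framing1 = (1/3) * 0 + (2/3) * TOTAL  # Program B: 1/3 chance all saved

# "Deaths" framing
a_deaths_framing2 = 40_000                     # Program A: 40,000 die
b_deaths_framing2 = (1/3) * 0 + (2/3) * TOTAL  # Program B: 2/3 chance all die

# Every option comes out at 40,000 expected deaths.
assert a_deaths_framing1 == a_deaths_framing2 == 40_000
assert b_deaths_framing1 == b_deaths_framing2 == 40_000.0
```

Only the wording changes; the expected outcome never does.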
As the experiment demonstrates, words have the power to selectively influence a user’s perception of meaning. And, for most applications, words can also be used to elicit interactions.

Summing Up
Context is the barrier that makes or breaks a user’s experience. Those unconscious moments, every bit of information we discard on our way up an emotional peak, all of those fine details — they all could potentially lead to a sensible, sympathetic experience. An enjoyable experience is shaped only by context.
Some people use context and design great experiences; some don’t. Oliver Reichenstein calls it a continuum. That’s perfectly natural. We need to make an effort to glance at the future once in a while — to permit ourselves educated dreams.
We need to permit ourselves to keep the vision going — from the ones we got it from, to the ones coming to get it. We all have the tools to do it. We just need to be responsible with them.
With each day, we learn something new that helps us better understand what human experience is really about, and each new insight challenges our perception of it in some fundamental way.
But looking back at how design has shaped the tools needed to study, influence, mediate and sometimes even control the way we experience the artifacts we interact with, the question arises of whether that experience can really be designed. And it certainly triggers lively debate.

First, semantics
An apparently simple statement like "experience can/can't be designed" requires at least a working definition of the terms at hand.
experience /ɪkˈspɪərɪəns, ɛk-/ noun: an event or occurrence which leaves an impression on someone. (Oxford American Dictionary)

design /dɪˈzʌɪn/ verb: do or plan (something) with a specific purpose in mind.
Also, logic dictates that can implies the principle of alternate possibilities, while can't implies an absolute. So, on one hand, we have a possibility, and on the other, a bold statement according to which, under no circumstances, can an impression — admittedly, with all the associated cognitive implications — be planned for and brought about. Dismissing all other possibilities seems a bit drastic.
But a dictionary definition will not suffice. The complexity involved in dealing with human experiences can be grasped by merely considering behavior; the sheer number of functions we have to call upon is simply astonishing: stimuli, reception, expectancy, response, function, meaning, mental models, perception, encoding, memory, engagement, interaction, emotion and so forth.
This involves insights from a long array of separate disciplines like linguistics and communication, cognitive and perceptual psychology, information architecture and design, sociology and social interaction. We learn what we have to. Only to inch closer to solving a problem. And we constantly redefine ourselves to better design a system in which all these insights come together harmoniously, regardless of medium, be that a brand, a website or an application.
But complexity can also be judged by the minimum information content that can trigger an observable reaction. If we can look deep enough at the smallest, simplest entity that can further form the simplest, quantifiable, controllable and repeatable sequence, we can then get an idea of how we can control and maybe design experiences.

The god helmet
Up until recently, one of the most common and also powerful experiences known to man involved religion in some form. ( 1 ) The reasons for this are varied and irrelevant in this context. What is relevant is that if we can find a way to mimic and simulate a religious experience, an experience so basic that thousands of years of social evolution have wired it directly into our brains, we just might discover the argument and inspiration we need to pursue designing experiences properly.
This is where the god helmet comes in: a device originally designed by Stanley Koren to study creativity. But participants reported a sensed presence, and about 1% claimed to have experienced God. This obviously led to media hype, which also gave the device its dramatic name. Although the experiment still awaits proper scientific peer review, it successfully showed that a subjective experience can be induced by stimulating specific brain functions.
Basically, a small set of brain cells in your right temporal lobe can produce a powerful sense, a memorable experience. And if we can map out the stimuli to which our brain responds — in this case a magnet over your right hemisphere, though we can safely extend the list from physiological stimuli to psychological ones — we can determine a sequence of different stimuli, all carefully controlled to trigger a response. We can develop whole systems and procedures to induce a certain type of impression.
Even if this example explicitly outlines the physiological nature of the experiment, it nevertheless supports the idea that it is possible to design experiences. We just have to find the ways. Signs of patterns are everywhere.
Just look for psychological tripwires and think about how people think.

Experience designers are structuralists
Or rather, they should be. In the words of Carl Sagan, only a small group of individuals, men or women, who find all human knowledge — the arts and sciences, philosophy and psychology — interesting and, most importantly, accessible can truly look for insights and connections to coherently synthesize a system and manipulate it in such a manner that it results in a real, hopefully lasting, emotion.
Practitioners of specialized crafts like typography, usability, information architecture, interaction design and content design can greatly influence a user's perception. But experience finds its roots in systems.
Structuralism, as defined in the Oxford American dictionary, is “a method of interpretation and analysis of aspects of human cognition, behavior, culture, and experience that focuses on relationships of contrast between elements in conceptual systems that reflect patterns underlying a superficial diversity.” Thus, structuralism straddles multiple disciplines, such as language, architecture, graphic design, sociology, and anthropology, to name a few.
It is the one trait that sets us apart as an industry and enables us to take on the role of system creators: the ability to make connections. Norman Potter refers to it as the trait that unites the very disparate standards that coexist in any one profession, ( 2 ) Milton Glaser calls it a way to unify separate occurrences and create a gestalt, an experience in which this new unity provides insight, Simon Collison calls it our spirit of inquiry and Dan Cederholm describes us as 80 percenters.

The risk of mistaking ignorance for perspective
A structuralistic approach to design is not without risk, though. Scientific observation, a process of observing, collecting, sorting, analyzing, postulating a theory and testing it, also leads to mountains of data that, more often than not, are hard to make sense of.
Getting data is easy, but selecting, storing, indexing, updating, and most importantly contextualizing the information is rather difficult.
To accurately form conjectures about possible interactions between insights obtained from brain physiology and human behavior, comparative and analytical thinking is critical. Observations need to be studied rigorously enough to form an adequate basis for solid reasoning.
But the benefits of churning through cognitive complexities far outweigh the costs. Mapping out common sequences of particular cognitive functions is a solid way of mediating and creating experiences, regardless of medium.
I am putting forward a series of scribblings, covering a good spectrum of ideas that fit the label of my own formalism, my personal approach to design as it directly relates to human experiences.
As I explore related disciplines and their tangents, I hope to articulate how human conceptual, living structures and patterns — letters, words, language, interaction — relate to the seams and scars, the nodes, edges, tangents, trajectories, bends, pipelines ( 2 ) they create in the systems we design.
Centering on the connection between the physical and the digital world, topics will include textual interaction, personal informatics, information interfaces, software architectures, and ubiquitous and pervasive technologies.