Here are the slides and a complete transcript of my 2017 Information Architecture Summit talk on language, cognition, and vocabulary design, delivered this March in Vancouver, BC. This is the latest iteration of a topic I first presented at Taxonomy Bootcamp in Washington, DC in November 2016.
Because we all use language proficiently every day, we tend to assume we know how it works. Sometimes we’re even right! When designing communication systems for others, however, we frequently run into wild discrepancies between what we expect our users to understand — and what our users actually understand.
One culprit of this understanding gap is the set of assumptions our always-on, automatic cognition systems make about what we see, experience, and read. By understanding how these systems work — and what sometimes makes them work against us — we can learn to make smarter recommendations for user-facing design vocabularies that not only technically “work,” but that also help us better facilitate user, experience, and business goals for our clients.
This talk will draw on recent cognitive sciences research in perception and decision making to unravel the common ways the words we design sometimes work against us. Using examples from my own work and from the web at large, we’ll look at:
- the brain’s two modes of interrogating the world around us
- the role of context in the meaning we assign to words
- the salient thresholds at which the message we’re trying to convey “clicks” … or crumbles
- ways you can leverage this knowledge to make better design decisions in the work you do
Slides & Transcript
Language Arts for the Lizard Brain: Vocabulary Design for the Predictably Irrational
Several years ago, as part of a small team at Deloitte Digital, I worked on a website redesign project for the Employment Development Department of the State of California. This is the site where unemployed Californians can apply for state benefits.
As it turns out, in California, you don’t ever have to have been employed to apply for benefits, so this site is really popular.
Unfortunately, it wasn’t very usable for most of its visitors — users had a notoriously hard time finding the information they needed. As a result, they quickly gave up on the site and called customer service, which was routinely overwhelmed. The Employment Development Department ended up with dropped calls left and right, which made for lots of angry Californians with extra time on their hands (and active social media accounts).
It was our job to alleviate that problem by fixing the website.
If you’ve ever worked on a project like this, you know that there are some pretty obvious hierarchy, layout, type, and color improvements that are pretty easy to spot. These are the kinds of things that partners and stakeholders can easily recognize, even if they might not have been able to suggest them themselves.
These first impression changes, of course, are important: Web pages that look better are perceived to work better.
Sometimes this is the affect heuristic at work, but sometimes it’s also the case that we’ve created more easily scannable hierarchies and information paths. We notice things like grid, hierarchy of elements, visual grouping, and color palette automatically. We can’t help but look at a group of clustered items and recognize them as a related set. This is thanks to Gestalt perception processes like similarity, symmetry, and proximity.
Even when we succeed in advocating for effective visual design, however, these elements alone are insufficient to help our users form an accurate impression of what a site is and what they can do with it.
This understanding comes from the words that we use on the page and the way that site visitors are able to assemble those words into a coherent, holistic model.
This assembly process happens quickly, and, in the first instance — much like our gestalt perception — it happens automatically. Jakob Nielsen writes that
“The first 10 seconds of the page visit are critical for users’ decision to stay or leave.”
It’s in this brief window that site visitors form crucial and durable impressions about how usable a site may be, as well as what a site does and what it can do for them. In addition to recognizing gestalt level groups and categories, visitors look for key elements like menus, navigation, titles, and headers to provide clues about what the site has to offer.
You might think — or hope — that visitors then form rational opinions based on a careful evaluation of your meticulously thought out Information Architecture. Unfortunately, in the same way that Gestalt elements are evaluated automatically as soon as we see them, the first impressions of the symbolic content of a page are usually the result of a quick, often irrational decision-making process.
In short, it’s our Lizard Brain at work.
In the first few seconds in a new symbolic space, our rational, figure-it-out symbolic reasoning mind isn’t yet involved. When our reasoning self does get involved, it’s these visual and linguistic first impressions we’ve already made that it takes as its primary inputs.
On the web, and in emerging media like chat-bots and speech interfaces, the linguistic component of these first impressions is increasingly important. If users don’t get a quick, accurate sense of what kind of site this is and what they can do there, they won’t stay.
It is equally important that we understand how the lizard brain encounters the knowledge spaces we build so that we can use our knowledge of that process to make better design decisions.
With that in mind, today I’d like to explore this brief space of first encounters with symbolic content across three topics pulled from recent cognitive science and linguistics research:
- the two cognitive systems we use to make sense of our wildly complex world
- the role sets of words play in the construction of meaning as activation patterns
- the process of constructing cognitive frames to create understanding
I’ll wrap all this up by giving some tips on how you can use this information and these concepts in your own design process. Let’s start by looking at the basic systems we use to perceive and understand the world around us.
In the book “Thinking, Fast and Slow,” psychologist Daniel Kahneman identifies two fundamental systems behind the way the brain operates. These are, creatively, called System 1 and System 2:

- System 1 “operates automatically and quickly with little or no effort or voluntary control”
- System 2 “allocates attention and effort and is associated with agency, choice, and concentration”
Let’s look at an example. We all recognize this as a math problem:
17 x 24 =
We see right away that this is simple arithmetic; we don’t even have to think about it. That’s System 1.
We also know that the solution probably isn’t 124, or 16,548. That’s also System 1.
To solve the problem, however, is effortful. It takes concentration. We’ll often go to great lengths not to have to expend this effort.
17 x 24 = 408
Did anyone figure it out before I revealed the answer? (crickets) No one ever does. Unless we have a reason to expend the effort, we generally don’t. Laziness is built into the operation of our cognitive systems.
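When System 2 does take the problem on, the effort usually takes the form of breaking the multiplication into easier steps you can hold in working memory, something like:

17 x 24 = (17 x 20) + (17 x 4) = 340 + 68 = 408

Notice that even reading through that decomposition takes deliberate attention — exactly the kind of effort System 1 lets us skip.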
The fact that System 1 is automatic is also built in: the process you use to recognize an arithmetic problem is the same process by which, barring brain injury, you can’t help but understand words and simple sentences in your own language.
I know, this is kind of a dirty trick. But it proves a point: we understand the words as soon as we see them. It doesn’t require any effort on our part.
Incidentally, you can apply effort to try not to read the next slide I show, but that’s hard to sustain over time (that’s System 2).
When we scan a new webpage and notice key elements, System 1 is at work. And this has implications! As Kahneman puts it:
“If System 1 is involved, the conclusion comes first and the arguments follow.”
We prefer not to have to engage System 2 to work through a statement — whether in math or language — and figure out what the author probably intended.
These operations of Systems 1 & 2 have some key implications for information architects:
- Visitors don’t read your website, they scan it
- A site’s key words are automatically parsed — and that parsing is done by the laziest part of our brain
- The first and easiest sensible solution to how those key words fit together most often wins
This “easiest sensible solution” becomes our baseline understanding of what a site is and what it has to offer.
So now that we have a (probably disappointing) basic understanding of how our cognitive systems perceive information, let’s look more closely at how these initial models are built.
In “The Way We Think,” cognitive sciences professor Gilles Fauconnier argues that:
“Linguistic expressions prompt for meanings rather than represent meanings”
Words don’t “mean” things to us on their own, but rather prompt us to construct meaning out of our education, experience, context, and, often, our desires.
There’s a good reason for this. The average educated adult has a vocabulary of about 30,000 words. Many if not most of these words have multiple meanings — sometimes many meanings. Our ability to assemble meaning on the fly is critical to our being able to make sense of language at all.
These constructions are what Fauconnier calls “activation patterns.” Let’s look at an example of how this works.
On the California Employment Development site we can pick out the terms that stand out on a scan:
- Home, Inbox, New Claim, Profile
- Personal Information, Message Center
- Help, Logout
There are a couple of quick, easy conclusions you can reach to answer the question “what kind of space is this?” Maybe it’s an administrative space: I have access to my profile, mailing address, and a system inbox. This could also be a space for accessing forms — that’s definitely what the State of California wants you to do here.
When my team and I did just a bit of light research into what users actually came to this site for, we found that neither of these was the reason people came here. People overwhelmingly visited this site to learn two things:
“When will I get paid?”
“How much will it be?”
Sadly, this information was usually located within this set of pages for most users. But they gave up on the site before they found it, because it didn’t look like the answers to these questions were there.
Our redesign did fix some of the visual problems with this site, but first we fixed the activation patterns that told users what kind of place the site is. We used all the same information, but instead of organizing it around forms, we organized it around claims — particularly active and pending claims.
Once we established that core pattern, we wove it into the site across the updated user interface:
We added a status label that essentially replaced “Home.” In this context, “home” didn’t work to create any useful meaning for users. It didn’t help them figure out what kind of place this site is.
We kept access to forms easy, but we made it secondary to better match users’ information needs.
We moved all of the administrative and account information out of the main view to keep this page’s purpose clear.
Most importantly, we brought the language of claim status front and center. We put it where visitors see it immediately and understand that they’re in the right place for what they want to know.
Though this initial interpretation process happens largely without our knowledge, the way that activation patterns prompt for meaning is, arguably, the primary way we differentiate between information spaces on the web. We can find a lot of examples of sites that have nearly identical layout and hierarchy but which we rarely confuse with one another.
For example, take the Starbucks marketing website:
We have a very standard horizontal menu header with a logo, nav, shopping cart, and store locator, all on top of a rotating hero image.
Compare this to the outdoor equipment outfitter REI, which has virtually all of the same elements in almost exactly the same place:
It does, of course, have a different logo and different labels in its navigation bar — and in these examples, logos do play an important brand role — but they’re not the key difference in how these sites create meaning for their users, especially for new visitors. Consider that even sites for new brands can successfully convey what they are with appropriate labels.
In each of these examples, it’s the salient text we see on screen that creates our sense of place. This attention to salience is important: on the California Employment Development site, “Home” wasn’t salient; it didn’t carry rich meaning. By the looks of it, it’s not salient on these sites either — neither one of them uses a “home” category.
When you create shared information spaces for others, an awareness of how activation patterns work gives a few insights you can bring into your own design process:
Words don’t “mean” on their own — we assign them meaning on the fly. This happens automatically and often without our conscious intervention or effort.
Vocabularies take on sense based on the activation patterns in which they’re presented. We use sets of words to build understanding in context.
Visual design and layout can influence activation patterns, but rarely create new meaning alone. It’s our words that differentiate information spaces and afford the creation of new meaning.
This process of creating new meaning is paramount to how we use language on the web. It happens in what Fauconnier calls “Cognitive Frames.”
Fauconnier writes that Cognitive Frames are assemblies of “mental spaces,” the “small, conceptual packets constructed as we think and talk, for purposes of local understanding and action.”
In order to make sense of the vast complexity of the world, we start by “framing in” new information based on what we already know. This happens with the help of System 1. Because System 1 is lazy (and because we eventually have to figure things out), once we’ve created a frame that seems to make sense, we are very reluctant to revise it.
Mental spaces and the cognitive frames they build allow us to do quite a bit of impressive mental work. Among other things, they help us create:
Global insight, which allows us to bring the entire history of our experience to bear on new situations
Human scale understanding, which is how we translate our experience into orders of magnitude we can relate to
New meaning, which, in Fauconnier’s terms, is the blending of individual cognitive frames to create novel insight
We’ve seen examples of this blending process to create global insight and new meaning already. The language I scan on the California Employment Development site tells me what I can learn there, and what I can do there.
Because that website’s context was relatively simple, my team and I didn’t have to go to any heroic measures to ensure that it operates at the human scale. With even just a little more complexity, however, we can run into problems and mismatches with the scale of the frames our visitors create. The Starbucks website provides an example of this.
In the Starbucks site top level navigation, we can see labels for Coffee, Tea, Menu, Coffeehouse, then a little further along: Blog and Shop. If we open the Shop menu, we also see Coffee and Tea. We then see Gifts, Equipment, and Drinkware — which, if you’re in a Starbucks Coffeehouse, are likewise on the shelves all around you.
When you click on any one of these links, you end up on a page with a different hero image, but with largely the same layout and many of the same navigation and wayfinding elements. And we still see Coffee, Tea, Drinkware, and Equipment in that menu bar.
But if I click on Coffee here I’m now in a completely different place. I can’t read about these coffees or the coffee shop experience any more. I can only buy coffee — and I can only buy it online.
If I click on the Starbucks logo, which I would expect to take me back to what I understand as my starting point, it actually just takes me back here, to this shopping page. The result is that this is a separate space. It is a locked “mode” in the Starbucks online experience. But it’s one that’s presented as continuous based on the consistent framing provided by the site’s navigation and labels.
The effect is that what is presented online as a single space is experienced at the human scale as multiple: it’s a marketing site, in-store experience, and an online shop. Shop, Coffeehouse, Store, Coffee, Tea, and product categories are used without discrimination across the experience.
In this online experience I exist in different places and different modes simultaneously. Digitally, there’s no problem with this. It probably also makes perfect sense from a product line perspective. Trying to make sense of it based on the experience of space we all have as embodied individuals, however, is a different story. It’s difficult and effortful for us to track.
We can think of this problem of scale as one of “seamfulness.” Starbucks has framed their site as a seamless experience, and in this case that actually makes it confusing.
REI offers a counter-example. REI uses language to expose the seams of its digital experience, allowing users to create effective cognitive frames as a result.
The REI main page uses navigation elements that prop up its brand: Camp & Hike, Climb, Cycle, Paddle, Run, Snow, etc. This is how REI messages where you are and what you can do here.
REI also has a second “bargains” eCommerce site called “REI Garage.” Though the layout style of the site is similar, and though they carry many of the same items in the same categories, they’ve used language to create an entirely different space.
Instead of Camp & Hike, Climb, Cycle, Paddle, etc, we see: Activities, Men, Women, and Kids. It’s clear that you’ve moved to a new space with a new set of meanings.
You’ll also notice, of course, that there is a distinct visual design change here: one page is predominantly black, the other predominantly white. And you might (very reasonably) think that this is how people know they’re in a new space. As before, visual design elements are important, but they’re not the primary differentiators in how we create cognitive frames.
We can test this by removing the images: Here’s the REI Co-Op site and REI Garage, side by side. Our automatic, several second scan of each of these information spaces, even without images, sends a clear message that they’re different:
They have similar content and similar things you can do, but the seams are clear. We’re able to operate within them as distinct places at the human scale, just as we would expect with physical spaces.
This insight about cognitive frames can help us create the meaning we intend in the digital spaces we design by keeping in mind a few things:
Frames are how we understand where we are and what we can do there. We use frames to create mental spaces to process the wide complexity of language.
Successful frames create human scale understanding, in order to better relate to human scale experience
Leveraging human scale experience helps us see and create new meaning
This is the case whether we’re helping a user understand what he or she can do in an entirely new space, like California’s Employment Development site, or helping them more easily and intuitively understand how to navigate a familiar digital space like Starbucks or REI.
Putting It All Together
At a high level, you can begin to design more effectively for even the most irrational user’s lizard brain by keeping in mind a few key points:
Select Language for the Easy Construction of Intended Meaning
Site visitors scan for sense-making language automatically and involuntarily. The words we choose need to help them easily construct the meaning we intend.
Create Intentional Activation Patterns
The words around the key terms you choose influence the patterns users see — even if you don’t “own” or control those terms. In the Starbucks example, we saw that Shop, Store, and Coffeehouse work on their own relative to Coffee and Tea, but assembled together they create activation pattern chaos.
Build the Information Spaces you Create to the Human Scale
Sometimes the easy things to do in the digital world don’t translate to human experience. As we saw with the REI example, by creating human scale cognitive frames, in this case a clear sense of seamfulness, REI was able to keep the digital spaces it builds clear for its human users.
And this, of course, is our ultimate goal: designing clear, easily understandable information spaces for our users. By understanding a bit more about how the Lizard Brain processes new information on its first encounter, you’ll be better equipped to reach this goal for your clients and for your users.
If you’d like to read more about the science behind these concepts, check out “A Cognitive Sciences Reading List for Designers.”