Here is a video and complete transcript of my Information Architecture Summit talk, “Prototyping Information Architecture,” on prototyping IA with dynamic content, delivered in March 2018 in Chicago. For a click-through of the full deck, check out the slides on Slideshare.

Talk Description

When designing information systems for human use, there’s no substitute for putting a contextually authentic experience in front of actual users and getting real, actionable feedback. This is especially true for the solutions we design as information architects: our categories, labels, and menus don’t “work” unless they work for our target audience. Common methods for prototyping and testing information architecture, however, tend to be either very abstract (like card sorting and tree testing), or mind-numbingly tedious (which you know if you’ve ever had to “find and replace” menu items in an InVision or Axure prototype).

In this talk I will show you a method for building lightweight, medium fidelity prototypes that start with content and structure and let you easily model and test information architecture solutions early in the design process. I’ll share the goals of this approach, and then walk through how to build your own prototypes using free, open source tools. I’ll also illustrate how IA prototyping works in the context of a project with a case study drawn from my own recent client work.

By the end of this talk, you’ll have seen:

  • why prototyping IA is important and how it differs from interaction and visual design prototyping
  • how to build structured content driven IA prototypes for your own projects and how to access a free, open source framework to get started
  • how to integrate an IA prototyping approach into your existing research, ideation, review, and iteration process

Full Talk with Audio and Slides

Prototyping Information Architecture

In December 1903, on a remote set of sand dunes called Kill Devil Hills, in the far reaches of the North Carolina Outer Banks, two unknown bicycle makers from Ohio ushered in the era of aviation by successfully completing the first piloted, powered flight of a fixed-wing aircraft. This first flight went only 120 feet and lasted only 12 seconds.

In addition to the Wright Brothers, there were only five other people there that day to witness the event: four men from the local weather station who were there to help with the machinery, and a local boy named Johnny Moore, who just happened to be passing by and thought he’d stop and see what all this hubbub was about. Of the seven present that day, 17-year-old Johnny Moore was the only one who could have lived long enough to witness the other bookend to the pioneer era of flight, the Apollo 11 moon landing 66 years later in 1969.

As a youngster myself, I was (of course) fascinated with airplanes and flight. I had the pictures; I built the scale models; I flew the gliders. I even had one incendiary misadventure with a model rocket engine. You can imagine my excitement, then, when an east coast relative offered to take me to the Smithsonian Air and Space Museum to see the Wright Flyer, the original airplane, the one that started everything.

I can still remember as a youngster of seven or eight, turning into the Wright exhibit, seeing the single speed Wright bicycle there to the left, encased in glass, patiently awaiting hipster revival twenty years later. And then turning the corner and laying my eyes for the first time on the first airplane: the Wright Flyer, the machine that started it all.

If I recall correctly, my exact words at that moment were: “What the hell is this?!”

It’s got no cockpit. The tail is on the wrong end. It looks like it’s built with sticks and held together with wire. And why is this guy lying in the middle of it? What is this, an airplane for narcoleptics?

Needless to say, young Andy was appalled. It wasn’t until much later that I developed a more nuanced and sophisticated appreciation of the importance of prototyping. And it was much later still that I realized that while prototyping is important for the validation and testing process, it is equally if not more important for the learning and creation process. This is particularly true when it comes to the kinds of complex problems that we face as information architects.

As unlikely as it may sound, the Wright Flyer is an excellent example of this process in action.

Modeling, Simulation, and Iteration

Let me tell you a story — a true story this time. Though the Wright Brothers are duly credited with the innovation of powered flight, before they solved the 1903 problem of power, they first solved the problem of how to control an aircraft in flight.

Lift and power were technical problems. These were problems the Wright Brothers knew would eventually be solved; there were lots of other people working on them at the time. The problem of pilot control, however, had no precedent. Aerospace research engineers Padfield and Lawrence write that

“the invention of the aeroplane in 1903 somewhat overshadows the earlier flight control developments.”

Padfield and Lawrence and many others — including the current curator of the Wright exhibit at the Smithsonian — attribute the “birth of the airplane” to the 1902 glider.

This problem of control, while also technical, was at its core a human problem. The Wright Brothers needed to solve for how humans behaved in the complex system of machine, environment, and human agent that they were working to construct. They used a process of modeling, simulation, and iteration to learn from each version of the gliders and flyers they built.

They also assiduously limited the variables in each version that they tested to ensure that they understood how the system worked together as a whole. We can see this process of limiting variables in their very first ideas about control.

Their competitors — and there were many — were looking to contemporary models of passive and stable control, like rudders on ships and tillers on cars (remember, cars still had tillers at the time — also a “rudder” model). The Wright Brothers, being bicycle makers, were convinced that the best way to steer, and, crucially, to correct unintended direction changes, was by banking the machine, leaning it to one side much the same way you would a bicycle, not trying to turn on a flat plane as though it were sitting on the ground like an automobile or floating in water like a ship.

To test this, they began by developing a system of wing warping. This was based on their observations of birds in flight and, as the story goes, a flash of inspiration Wilbur had while twisting an empty bicycle inner tube box. A series of guide wires connected the ends of the wings to a control harness fitted to the pilot’s waist. By leaning in one direction or the other, the pilot could change the shape of the tips of the flyer’s wings, creating a lift differential on one side of the craft that would bank the flyer in the opposite direction.

The Wright Brothers tested this concept with box kites, and then in 1901 with their first piloted glider. This version of the aircraft, in stark contrast to the machines their competitors were building, had no rudder — in fact, it didn’t have a tail section at all.

The concept of a rudder didn’t fit with the model the Wright Brothers were trying to test. So they left it out — even though it was an obvious piece to everyone else at the time. (And it certainly appears obvious to us now.)

The 1901 glider both did and didn’t work as they had hoped. Wing warping did produce differential lift, which did bank the aircraft. But by creating additional lift on one side of the wing, they also created additional drag on that side of the wing. This resulted in a condition called “adverse yaw.” What that means is that as the glider banks to the left, it also spins to the right. And vice versa. Not terribly ideal.

But: the Wright Brothers learned from this. And to correct it they introduced a fixed, vertical stabilizer in the 1902 version of the glider. Although this meant that the Wright glider began to formally resemble the flyers and gliders others were experimenting with, the purpose and function of this new addition was wholly different from what is intended by a rudder on a ship: they were after stabilization, not control.

And it did work — in fact it worked a little bit too well. The fixed vertical stabilizer did keep the 1902 glider from adverse yaw, spinning in the opposite direction of the turn, but it also created a feedback loop in the direction of the bank, making it difficult to right the glider once they had initiated a turn.

The effect this time was that the lower wing ended up dipping lower and lower until it finally caught the ground, and then, having anchored itself in the sand, it spun the glider around in a circle. The Wright Brothers called this “well digging.”

In response to this latest development, the brothers introduced a pilot-controlled rudder that could be used to counteract adverse yaw and to pull the glider out of a banking turn. This version flew later in 1902, and this was the set of flight controls built into the 1903 flyer — to which, of course, they also added power. And thus, we have the airplane.

This problem of flight control that the Wright Brothers faced over the course of these three years was one that in today’s language we refer to in systems design as “dynamically complex.” Dynamically complex situations are ones in which:

  • Seemingly obvious interventions produce non-obvious consequences
  • Connections between causes & effects are subtle
  • Effects of interventions over time are not immediately apparent

The rudder on the Wright flyer is a perfect example of a non-obvious intervention. In the early 1900s, everybody knew exactly what a rudder did, but it turns out that this knowledge was catastrophically wrong when it came to aviation. By ignoring the “obvious,” and, in the first versions, leaving the rudder out entirely, the Wright Brothers were able to learn about the dynamics of the system they were working in.

The second feature of dynamic complexity is that the connections between cause and effect are subtle. In the case of the Wright Flyer, the relationship between adverse yaw and well digging is pretty easy to spot in hindsight, and its consequences aren’t all that tricky to pull apart. But differential drag and reinforcing banking loops, though seemingly obvious to us now, had no precedent at the time. They had to be teased out and understood methodically.

Finally, in dynamic complexity the effects of interventions over time are not immediately apparent. In the case of the Wright Flyer, the intervention that changed over time was the skill of the pilot. As the Wright Brothers learned over the course of these three years, a skilled pilot was essential to a working aircraft.

Now, this of course seems obvious to us today (I hope!) but this was not the dominant thinking of the day. Most of the Wright Brothers’ competitors sought passive stability over active control, again like a ship or a car. At best these machines failed to fly; at worst they produced spectacular crashes.

In retrospect, we can see in this story that the Wright Brothers used a process which, though they may not have articulated it themselves at the time, serves as an archetype for solving dynamically complex problems: they modeled in order to explore ideas; they ran simulations on their models in order to learn about the parts of the system they couldn’t see, couldn’t articulate, and for which they had no precedent; and then they iterated on their model to incorporate what they had learned, and they started the whole process over again.

This process of modeling to learn is what I would like to discuss with you today in the context of information architecture: how to use modeling and simulation to more effectively solve the dynamically complex problems we increasingly face as practicing designers of information spaces. Over the next 30 or so minutes, I’ll discuss:

  • where we encounter dynamic complexity in IA
  • a modeling approach to facilitate discovery, alignment, and validation of IA solutions,
  • an example of a modeling to learn process drawn from my own recent client work, and, finally
  • an open source toolkit you can use to get started integrating this process into your own work

Let’s start by building some detail around this concept of “Dynamic Complexity.”

Dynamic Complexity

In the classic systems thinking book The Fifth Discipline, Peter Senge describes dynamic complexity as one of the core challenges we face in designing with, in, and around complex systems. We’ve already seen an initial definition of dynamic complexity in the context of the Wright Brothers. In order to give the concept more nuance, Senge additionally contrasts Dynamic Complexity with its counterpart, “Detail Complexity.”

Detail complexity arises in situations where you have:

  • lots of variables
  • incremental increases in power, speed, efficiency, or scope
  • predictable solutions to known outcomes

For the Wright Brothers, detail complexity was “wing and engine complexity.” These problems had to be solved, but they knew the technology was there and a solution was imminent. Progress could be made linearly based on what they already knew. For us, we encounter detail complexity in doing our taxes, or in entering our very complicated passwords when our social media accounts get hacked. There’s a process; we follow it (and we stop unintentionally spamming our friends and relations).

Dynamic complexity, on the other hand, is a little bit harder to spot. We often see dynamic complexity evoked in the literature when we’re talking about big social problems: things like chronic homelessness, the school-to-prison pipeline, and global warming. But dynamic complexity doesn’t only exist in large-scale systems. We’ve already seen how the question of pilot control is a dynamically complex problem. The pilot’s abilities, balance, and reaction time are all part of the forces at play.

Senge provides an example that hits a little bit closer to home. He asks: “Who’s part of a family?” He then asks,

“Have you ever seen people in families produce consequences that aren’t what anybody intends?”

That’s dynamic complexity at a local scale. We can adjust this a little bit for our context and ask, “who in the room works with clients, on product teams, or with partner agencies?”

Have you ever seen people on teams produce consequences that aren’t what anybody intends?

This is also dynamic complexity. As information architects, we’re in a unique position to affect these dynamically complex problems because a core element of our work is building bridges between different models of thinking, for example between the models our users understand from their own experience and the models that we define in the information systems that we’re asking them to use.

When I explain this to clients, I describe information architecture as a “shim,” or a fitting. It connects two fundamentally different systems.

One of these is a carbon-based, analog, human system. It’s heuristic, which means that it relies on rules of thumb and best practices; it’s associative in that it excels at pattern matching and categorization; and it’s approximate: it’s successful by getting close enough, most of the time. It satisfices.

The other system is a silicon-based, digital information system. This system is exhaustive: it always finishes what it is told to do, even if what it is told to do is to go until it finds an error and then report it back. It’s never just going to stop halfway through and go watch TV. It’s enumerative, in that it can account for and recall precise qualities every time, over and over again. And it’s exact: a value is either true or false, on or off. There are no “maybes” or “kind of”s, or multiple versions of what “is” is.

IA is the shim that matches what is modeled in carbon to what is modeled in silicon. It is the set of symbolic and conceptual controls that connects one incompatible endpoint to the other. Because these two systems are fundamentally different, there is always a translation.

People don’t have the capacity to think like machines, at least not beyond the most rudimentary scale — which is what we use machines for to begin with. Likewise, machines don’t have the capacity to think like people — not in the heuristic, associative, approximate way. (You know this if you’ve ever interacted with your phone’s “smart assistant.”)

IA is the set of symbolic and conceptual controls that creates an operable interface between these two systems. It’s how we enable each of them and encourage each to do what it’s best at. And it’s how we allow each system to benefit from instead of fight against the strengths of the other.

We saw with the Wright Brothers how flight controls — the harness and levers, discovered and refined through simulation — provide the interface between the balance, judgment, experience, and skill of the pilot, and the lift, thrust, and control surfaces of the machine. Likewise, the information architecture controls that we build with symbols and concepts provide the interface between the attention, potential for action, and patronage of our users and the information access and reach of the data to which we’re connecting them.

Like the Wright Flyer, these connections contain elements of both detail complexity and dynamic complexity throughout. It’s tempting to think that all of the detail complexity sits on the machine side, and all the dynamic complexity sits on the human side, but the nature of complex systems is that they’re hopelessly intertwined. It is a multiplicative relationship, not a summative one. Just as with the Wright Flyer, trying to pull them apart and deal with them separately is usually a failing proposition.

Some of the places that we see dynamic complexity arise in IA project work include:

  • user, stakeholder, and content discovery
  • client organization and workflow
  • stakeholder priorities and alignment
  • information model creation and validation
  • team alignment and coordination

You’ll notice that these situations all involve people. These are points in a project that are concerned with making IA work for users, and also for stakeholders and teams. An elegant IA solution that’s ignored or improperly implemented by the organization that commissions it doesn’t do anybody any good.

The good news is that by modeling, simulating, and iterating on potential solutions, we can untangle these systems to better understand the relationships between their parts and propose solutions that allow each element of the system to contribute according to its strengths. So, let’s return to this modeling to learn cycle that we first saw with the Wright Brothers and look at how this works for information architecture.

Modeling to Learn

For IA, as for aeronautics, experiential modeling processes like this serve to externalize a problem space, to expose a set of variables, and to make the relationships between those variables clear.

For the Wright Brothers, we saw that they created models that isolated and exposed the control problem in order to more clearly understand the relationship between banking, yaw, and the rudder. They ran simulations in a controlled environment — Kitty Hawk was chosen both for its steady winds and soft sands (think landing) — and they iterated their model based on what they learned at each step.

For us as IAs, we want to model the symbols and concepts that create connections between the user and digital systems, between carbon and silicon.

Most often, this means modeling a controlled use of language and vocabulary, such as labels and metadata; an orderly, deliberate categorization scheme; and a coherent set of navigation and findability metaphors, such as menus, page tables and page layouts, and contextual associations.

We then want to simulate these models, and yes, we simulate them with users — we do some usability tests, sure — but we also need to simulate them with stakeholders, with developers, with team members. Remember, our goal in this process isn’t to “vet” a concept, it’s to learn about the concept, much like in that opening example. To do this we need to learn about the use cases, and also about the organizational and technical context in which that use case exists. In dynamically complex structures, we’re looking for the subtle, non-obvious, time delayed relationships.

And then finally, of course, we iterate based on what we’ve learned, which means updating and experimenting with language, categorization, and layout patterns based on the simulations we’ve run.

We’ll get into these steps in detail in a case study that I’ll share with you in just a minute. But before we do, I want to talk about this first step: Modeling.

It turns out that this can be challenging, and not because we don’t know how to document or specify IA (we’re good at that), but because the tools most readily at our disposal tend not to lend themselves to the next two steps in this process: simulation and iteration. So let’s once again go back to our Wright Brothers example and break down what worked well about their model, what allowed them to solve the problems that stymied others for so long.

We can pick out a few things in particular that they did that helped them get to the core of the dynamically complex problems they were wrestling with. The first of these was to isolate variables. To test control they used kites (no pilot, no power) and then gliders (a pilot, but still no power) — and, as we saw, they eliminated the rudder entirely until they knew the purpose it served.

They also balanced complexity. I’ve already talked about Kitty Hawk’s steady wind and soft sand, but they did other things to pull complexity out of the model as well. For example, the 1903 flyer, the first flyer, took off from a launching rail that helped get it up to airspeed. They also, whenever possible, flew into the wind so that they could generate more lift. These were “detail complexity” problems that they would have to iron out later on, but they didn’t need to deal with them while they were working on the question of control.

Finally, they used their experiences to surface feedback loops. This is perhaps the most difficult element of modeling to grasp. As linguistic thinkers, we have a natural affinity for linear narrative; it’s built into the very grammar of our language. In order to understand dynamically complex systems, we need to be able to see the relationships between elements as causative feedback loops. These are situations where every element is both cause and effect, input and output, for multiple things. For the Wright Brothers, the incremental reintroduction of the rudder allowed them to understand the relationship between banking and yaw. Understanding this feedback cycle was crucial to their final design of a pilot-controlled rudder.

As information architects, we can get to the core of the dynamically complex problems we face by:

  • isolating variables to understand the impact of the relationship of language, categorization, and structure
  • balancing complexity in order to leave the detail complexity to the digital systems, which are really good at it, and focusing our attention on the relationships that machines don’t understand
  • surfacing feedback loops to uncover the ways that the human actor is part of the feedback process

So, what does this model look like for us? It’s obviously not sticks, wire, and fabric. In the sea of prototyping tools at our disposal, however, it can be challenging to pick out which approaches best afford us the opportunity to model with language, then to simulate across audiences and iterate easily.

This problem has bothered me literally for years. Not too surprisingly, Don Norman offers some insight. In Emotional Design, Norman identifies three levels of processing the information we find in our environment: visceral, behavioral, and reflective.

At the visceral level are the rapid judgments about good and bad, safe and dangerous. These are our gestalt perception, our snap judgments, our gut feelings.

The behavioral level is the level of well-learned routine operations. Do you remember learning to drive a stick shift? It was misery, right? It was terrible; you thought you’d never be able to do it. But now you can shift from second to third while eating a bagel, listening to a podcast, and shouting at someone in the back seat. No problem. That’s behavioral.

The reflective level is sensitive to experience, training, and education. These are our learned skills, such as close reading and detailed analysis of research results. And these are also the elements that reflect our world view, our values, and our assumptions about the world that make up our core beliefs.

Most prototyping tools and techniques target testing and learning between the visceral and the behavioral. What’s my impression? Do my well-learned routines work?

In order to model the dynamic complexity and the connections between language, categorization, and structure that we’re talking about today we need to model at the behavioral and, more importantly, at the reflective levels. To do this we need tools that allow us to simulate actual content, not filler or a random selection of words, and which allow us to iterate on the quality and structure of that content easily.

Now don’t get me wrong, I like InVision and Sketch as much as the next guy — I use them all the time. But like Axure, Principle, Proto.io, and others, these start from an image of the page, not actual content. Just like our carbon and silicon based systems, these two approaches are fundamentally different. Trying to get an image based prototype to behave like a semantic one will always be an uphill battle.

So what tools do we have to build semantic, content- and structure-driven models? The most obvious one has been right in front of us literally the whole time: HTML and CSS.

Now, before I go on and before you flee, let me tell you what I mean by HTML and CSS. HTML and CSS in production and at scale have gone bonkers. There are more build systems and frameworks than there are deep dish pizza joints in River North. That’s not what I’m talking about.

Basic markup and styling are inherently semantic. And they provide the simplest connection between content and the structure that allows users to find it, and the digital structures in which that content is accessed and stored.

Now, am I saying designers should learn to code? (Nervous laughter.) At a basic level, yes. (Louder nervous laughter.) As type designer Erik Spiekermann puts it,

“Code is today what nuts and bolts were 100 years ago.”

To fail to learn the rudiments of markup and styling means that we remain ignorant of the medium on which our work depends.

So what does this look like in practice? How do we model IA in code in a way that allows us to easily isolate variables, balance complexity, and surface feedback loops? Over the last couple of years I’ve put together a loose framework of open source tools that makes it easier to explore the dynamic complexity of representing information structures in digital spaces, but without getting bogged down in the detail complexity of databases or production code.

What I’m going to show you is only one of several ways that you might accomplish the goals that we’ve discussed so far. Other ways include in-house development environments or local builds that might let you get started in a lightweight way. I’ve used those on client projects as well and it’s another great way to begin. If you need to start from scratch, though, here’s one way to get going.

I call this a “content-first prototyping framework.” It uses a JavaScript task runner called Gulp to string together four tools that automate the repetition of creating individual pages from structured content, without databases and without server-side operations. It works like this (I’ll sketch a sample gulpfile after the list):

  • we create structured content in Excel (which is usually where it ends up anyway)
  • our Gulp plugin takes this structured content and converts it into an open, readable data format
  • Gulp then passes that content into the static site generator tool Jekyll, which is where we can define page structure with page templates, reusable content blocks, and global navigation
  • we then use the front end framework Foundation — or you can use Bootstrap, if you’re into that — to provide basic wireframe styles
  • finally, Gulp passes the whole works into Browsersync which serves it up to any browser connected to your local network. Any time you save a file anywhere along this path, Gulp updates the whole stack and serves up a fresh copy
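
To make the wiring concrete, here is a minimal gulpfile sketch in Gulp 4 syntax. The file paths, the sheet name, and the use of SheetJS (the xlsx package) for the Excel conversion are my illustrative assumptions here, not necessarily how the downloadable framework does it:

    // gulpfile.js: a sketch of the content-first pipeline (Gulp 4 syntax).
    // Paths and the SheetJS-based conversion step are illustrative assumptions.
    const gulp = require('gulp');
    const fs = require('fs');
    const { spawn } = require('child_process');
    const XLSX = require('xlsx');                      // SheetJS, reads .xlsx
    const browserSync = require('browser-sync').create();

    // 1. Convert the structured content in Excel to an open, readable format
    //    (a JSON file in Jekyll's _data directory).
    function content(done) {
      const workbook = XLSX.readFile('content/programs.xlsx');
      const rows = XLSX.utils.sheet_to_json(workbook.Sheets['programs']);
      fs.writeFileSync('_data/programs.json', JSON.stringify(rows, null, 2));
      done();
    }

    // 2. Let Jekyll build pages from templates, reusable includes, and that data.
    function jekyll(done) {
      spawn('jekyll', ['build'], { stdio: 'inherit' }).on('close', done);
    }

    // 3. Reload every connected browser after a rebuild.
    function reload(done) {
      browserSync.reload();
      done();
    }

    // 4. Serve the result to any browser on the local network, and rebuild the
    //    whole stack whenever a source file changes.
    function serve() {
      browserSync.init({ server: '_site' });
      gulp.watch(
        ['content/**/*.xlsx', '_layouts/**', '_includes/**', 'css/**'],
        gulp.series(content, jekyll, reload)
      );
    }

    exports.default = gulp.series(content, jekyll, serve);

The point of the chain is that everything downstream of the spreadsheet is disposable: change a row, save, and the whole prototype regenerates.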

So our first step is: don’t start with images. Start with structured content. If you have a content audit, an inventory, or a manifest, you likely have the beginning of structured content already. This could be a client roster, a course list, a product catalog, or a list of articles. In the example I’ll show you today, our structured content is about college programs.

Your list doesn’t have to be exhaustive. If you have 5,000 things, you probably don’t need every last one of them. But your content list should be representative — enough to communicate the breadth and range of the content in question. A plugin in our JavaScript task runner, Gulp, converts the discrete chunks of content in our structured content packages into page variables that we can use on individual screens.
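
As a hypothetical sketch, two rows of a program spreadsheet might convert into data like this (the field names and values are invented for illustration):

    [
      {
        "title": "Music Technology",
        "description": "Hands-on training in audio production and live sound.",
        "category": "Arts & Communication"
      },
      {
        "title": "Nursing",
        "description": "Prerequisites and clinical training for registered nursing.",
        "category": "Health Sciences"
      }
    ]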

In the curly brackets between the paragraph tags in this example, you can see the page variables. These read, “page.title,” “page.description,” “page.category.” They’re basically references to the structured content in our Excel spreadsheet.
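
If you’re reading along without the slides, the template markup looked roughly like this (Jekyll’s Liquid syntax; a reconstruction, not the exact slide):

    <p>{{ page.title }}</p>
    <p>{{ page.description }}</p>
    <p>{{ page.category }}</p>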

Now, you can put these in one by one — that would be crazy — or, even better, you can let Jekyll automate page generation across your content sets. Each row in your spreadsheet becomes its own page. Or you can work down a column and output a list of links and labels.
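
The “list of links and labels” case is just a Liquid loop over the converted data. A sketch, assuming the converted spreadsheet lives at _data/programs.json and each row carries url and title fields:

    <ul>
      {% for program in site.data.programs %}
        <li><a href="{{ program.url }}">{{ program.title }}</a></li>
      {% endfor %}
    </ul>

Generating a full page per row takes a bit more setup in stock Jekyll (a collection, or a small generator plugin), but the principle is the same: the spreadsheet stays the single source of truth.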

This gives us a basic, semantic HTML page to which we can add simple wireframe styles:
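
For instance, wrapping that same markup in Foundation’s grid classes is enough to get a recognizable wireframe layout (the class names below are Foundation 6’s XY grid; the structure is illustrative):

    <div class="grid-container">
      <div class="grid-x grid-margin-x">
        <main class="cell medium-8">
          <h1>{{ page.title }}</h1>
          <p>{{ page.description }}</p>
        </main>
        <aside class="cell medium-4">
          <p>{{ page.category }}</p>
        </aside>
      </div>
    </div>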

You’re starting from a set of structured content, which could be anything from dozens of pages to hundreds. From here, we can use reusable chunks of markup and insert global navigation and wayfinding elements, and then end up with a prototype with which we can move through actual content.

Because our model is built and continuously updated from the content out, we’re never in a position of having to hunt through pictures of screens to update our content, iterate our categories, or to revise our structure. By beginning with the content first, we can more easily simulate and iterate to learn how organizations create and maintain content, how users find and consume that content, and to learn what attributes are necessary to structure and order it in a useful way. We can also iterate on different organizational methods in order to better understand the relationships between each of the pieces in our dynamically complex system.

So, what have we done? We’ve isolated variables that we needed to test: language, categorization, and structure. We’ve balanced complexity by leaving the detail complexity to the machines (it’s what they’re good at). And we’ve allowed ourselves to focus on the interdependencies of the systems that we’ve modeled between user, client, and developer constituents. These feedback loops, remember, are where each element is both input and output.

This last point, I’ll admit, is the hardest one to grasp. These feedback loops are not things that our language sets us up to even begin to be able to see. This is, however, the crux of where dynamic complexity lives in IA. Let’s dig into the details here by looking at an example.

Case Study: Shoreline College

I recently completed a project for Shoreline Community College, a local campus of the Washington State College system near where I live in Seattle. Shoreline offers a range of options for students, and is particularly known for its nursing, automotive, and music technology programs.

Though advising is available, most students learn about the outcomes, completion requirements, and prerequisites of Shoreline’s programs through the “Program Pages” section of their website. Shoreline had recently undertaken a visual design refresh of their site, but they found that students still had a hard time finding and understanding the options available to them in the program pages section. They brought me in to help them untangle the structure in this content and rebuild it in a way that was both findable for students and accurate and maintainable for stakeholders and content creators.

Our first step, working hand-in-hand with Shoreline’s web team, was to interview students and build a clear picture of the decision process potential students use to choose a program. We also looked into where they potentially drop out of that process prior to registration. We identified three major steps and three potential hurdles that students had to negotiate to start the application process.

The first was “imagining a future”; the second was understanding the requirements and prerequisites of a program; and the third was understanding the enrollment process. We chose to focus on the first two of these steps — discovering programs and understanding requirements — because these were the places that students were having the most difficulty in the process.

Recognizing early on that this was a good candidate for IA prototyping, I started building a model of Shoreline’s current-state program pages and structure, using their program list, a collection of their PDF planning guides, and all of the official and unofficial topics associated with their transfer degree programs. From this I created an inventory of all their current programs, their categories, and the related metadata.

Even before a single one of these structured content elements was represented in clickable UI, modeling the content in this way allowed us to have a conversation with stakeholders that hadn’t previously been possible. By seeing a detailed view of the structures that Shoreline had put in place over the years, such as organizing topics by focus, by area, by concentration, and by pathway, we could start to take apart the range of interests that had shaped their information design to date.

What we found was that there were three fundamental forces shaping these categories: marketing, accreditation, and instruction. Each of them had a different set of motivations, and those motivations were sometimes at odds with each other. Their interests crossed into each other’s territory in ways that, while they weren’t breaking any rules, were historically very difficult to reconcile and agree on.

The tragedy, of course, is that all of these constituents want the same thing: a sustainable college that equips students for success. But each of them was (and is) beholden to a different set of pressures and viewpoints.

With this model, we were able to tease out these interests, and then by putting the content in the context in which it would be used, to simulate the effects of potential decisions. We did this by stepping through the ways that stakeholders knew students move through the process — stakeholders have years of internalized knowledge about how people get through this process. Our goal was to engage them in a dialog about user needs that would surface their assumptions, and help create alignment in areas where their responsibilities overlap.

To do this, we built an overlapping set of search and discovery tools with the framework we looked at a minute ago and stepped through the key points in the user journey that emerged from our initial research. The search and browse patterns that we used here were very basic and they were at a wireframe fidelity. This was important for what we were trying to learn: We weren’t trying to reinvent navigation. They didn’t need something “slicker” than what they had. We were looking for the right balance of specificity and coverage that worked for everyone.

Because all the data in this model came from a single source and updated automatically when we changed, added, or deleted categories, both team members and stakeholders could experience in real time the effects of different configurations. This meant that we could build an understanding of program pages categories and structures that the business could agree on, and it also meant that we could immediately get input from potential users about which of these structures best helped them meet their search, discovery, and information gathering needs.

The users in this case, of course, were the students. For this step, we took the same prototype that we had been building all along and ran a series of moderated and unmoderated wireframe usability sessions that tested the degree to which our information model matched potential students’ mental models.

Finally, since we had been working with actual content and metadata from the start, when the project moved to development we already had the data elements that Shoreline’s developers needed to implement the system as designed.

So, how did this perform according to the criteria we set out to achieve at the start?

  • We isolated variables to discover, align, and validate stakeholder and user needs.
  • We balanced complexity by relegating the content and metadata detail complexity to the prototyping framework.
  • And we surfaced feedback loops to understand the relationships between individuals and the parts of the system that emerged over the course of our simulations and iterations.

As I mentioned at the start of this example, this framework is only one way to prototype IA. I wanted to show you a specific series of steps that you could follow, but I also want to stress that this isn’t the only way to meet the goals that we identified at the start. By way of closing this talk and, hopefully, of opening discussion to the innovations that I hope our community continues to bring to this process, I’d like to make one final observation about the Wright Brothers.

Scholars of the original Wright flyer have noted that, with its anhedral, downward-sloping wings and its delicate controls, the 1903 flyer was so unstable as to be almost unmanageable by anyone but the Wrights, who had trained themselves on the previous gliders. After several years of working with the model I just showed you, I’ll be the first to recognize that some may find this to be the case with the framework I just demonstrated.

Notwithstanding, what I hope you will take away from this talk is a better understanding of

  • dynamic complexity
  • modeling to learn
  • the way that both of these can be applied to IA practice in order to address the complex interdependencies that are part and parcel of our craft

I also hope this inspires you to get your IA off the page and put it in motion early and often, in whichever way best matches your skill set. Finally, I hope these ideas taken together set you and your clients up to learn something new, to discover something surprising, and to, perhaps, create something revolutionary.

Thank you.

If you would like to give the framework I demonstrated a try, you can download it on GitHub and read a more in-depth and detailed tutorial on how to build your first prototype over on Smashing Magazine. I also recommend checking out the excellent documentation for Jekyll and Foundation, if those are new to you. Finally, if you’re still completely freaked out by the idea of prototyping in HTML and CSS and think this is something you could never pull off, I can’t recommend the excellent A Book Apart series highly enough. These brief books will easily equip you to handle this — and much more.