While it may seem like a bit of a stretch, product designers and developers can actually learn a lot from the disastrous implications of dinosaur cloning that play out in “Jurassic Park.” Hear me out on this one.
As one of the most successful blockbuster films of the 1990s, “Jurassic Park” doesn’t offer much in the way of hard scientific accuracy. It does, however, raise some pretty significant ethical questions, most notably around the morality of cloning dinosaurs—or any living creature for that matter.
This idea is best articulated in the film by mathematician and chaos theorist Dr. Ian Malcolm, played by Jeff Goldblum. When he’s brought in to consult on the park’s viability, Malcolm expresses disagreement with the scientific motivations for bringing dinosaurs back to life, stating, “Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Here, Malcolm is outlining some of the film’s key ethical questions: Just because we have the technology and expertise to pull something off, does that mean that we should? Is what we’re working to bring into the world responsible and designed to fulfill a specific need? Is our pursuit of scientific discovery rooted in a desire to make the world a better place?
Anyone who has seen the film knows that cloning dinosaurs has some pretty dire consequences. Chalk it up to human hubris, a lack of preparedness, or a combination of the two: the park scientists’ efforts ended in disaster.
These days, it seems like everyone has an idea for the next “killer app”: an application that will demand users’ attention, transform an industry, seize market share, and destroy the competition. Often, these ideas are spearheaded by enthusiastic, self-assured companies with immense confidence in their app’s potential. So much so, in fact, that they fail to fully understand whether their app will accomplish the one thing all apps should: making users’ lives easier.
Just like the Jurassic Park scientists failed to consider the moral and ethical implications of cloning dinosaurs, app designers and developers frequently fail to consider the perspectives of their most important stakeholders: their app’s end users. They neglect to ask nearly the same fundamental question the scientists did: Just because we can build this app, does that mean we should?
Designing, developing, and launching applications has never been easier. Thanks to significant advances in front-end and back-end technologies, turning your “big idea” into a reality is relatively simple. However, building apps, bringing them to the marketplace, and enticing users to adopt them is still an extremely costly endeavor. Before you spend all that money bringing your big idea to life, why not ensure you incorporate the user perspective into every aspect of the design and development process?
Creating apps that satisfy user wants and needs is all about challenging assumptions. It’s not enough to assume your big idea will translate into an application that makes users’ lives easier and then build the app as you see fit. You need to fully understand what your users are looking for, how they complete tasks, and what they are ultimately trying to achieve.
Evidence-based design is a decision-making process that steers design away from assumptions and toward research-based approaches to understanding users. When companies make assumptions about their users, they directly affect how those same users will perceive their app. Evidence-based design bridges the gap between app developers and users, creating a mutually beneficial relationship: developers build apps that improve users’ everyday lives, and users provide developers with invaluable insights that help them continue to deliver impactful applications.
In the context of evidence-based design, evidence is the opposite of assumption: the real-world facts gathered about how and why users behave the way they do. Several types of evidence are commonly used in the design process.
Perhaps the most common form of this research is usability testing. A crucial component of evidence-based, user-centric design, usability testing exists to elicit feedback from an app’s most important stakeholder: the user. During a usability test, users are asked to complete specific tasks in an app, often using clickable mockups or wireframes. Design teams then aggregate the insights gathered from these tests to inform design decisions and guide the overall design strategy, producing feedback that would be difficult to obtain any other way.
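For teams that also capture quantitative results from those test sessions, the aggregation step can be as simple as tallying completion rates and time on task per scenario. A minimal sketch in Python, using entirely made-up session records and illustrative task names:

```python
# Hypothetical usability-test results: each record is one participant's
# attempt at one task on a clickable prototype. All names and numbers
# are invented for illustration.
from collections import defaultdict

sessions = [
    {"task": "checkout", "completed": True,  "seconds": 42},
    {"task": "checkout", "completed": False, "seconds": 90},
    {"task": "search",   "completed": True,  "seconds": 15},
    {"task": "search",   "completed": True,  "seconds": 21},
]

def summarize(results):
    """Aggregate per-task completion rate and mean time on task."""
    by_task = defaultdict(list)
    for record in results:
        by_task[record["task"]].append(record)
    summary = {}
    for task, attempts in by_task.items():
        completed = [a for a in attempts if a["completed"]]
        summary[task] = {
            "completion_rate": len(completed) / len(attempts),
            "avg_seconds": sum(a["seconds"] for a in attempts) / len(attempts),
        }
    return summary

print(summarize(sessions))
# A checkout task completed by only half of participants, for example,
# is a concrete signal to revisit that flow before development begins.
```

Numbers like these never replace the qualitative insights from watching users struggle, but they make it easier to track whether a redesign actually moved the needle between test rounds.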
Conducting substantial usability testing on designs before development is complete helps you validate the app’s viability before investing significant money in its launch. Incorporating the insights gathered from usability tests helps ensure that your designs are intelligent, efficient, and create a positive experience for the user. By gathering invaluable feedback from users, usability testing helps you challenge assumptions, keep your app user-focused, and deliver a quality product to the marketplace.
Just because companies can build an app doesn’t mean they should. While the consequences of a failed app are nowhere near as violent as those of cloning dinosaurs in “Jurassic Park,” they can still be costly. It’s imperative that companies thoroughly investigate the viability of their app and not be blinded by their own assumptions.
Taking an evidence-based approach—complete with significant usability testing—can help companies make sure that their idea for a “killer app” is going to be viable. Want to learn more about how Codal uses evidence-based and user-centric design practices to deliver cutting-edge applications that satisfy user wants, needs, and desires? Get in touch today.