What exactly do we mean when we refer to User Experience (UX) Research, or UXR? There are many methods and experiments that qualify as UXR: contextual studies, usability testing, card sorting, A/B testing, and interviewing, to name a few. Distilled to its essence, UXR is any technique that allows a team to investigate the impact of design on an audience.
Through conducting UXR, a team can glean information that allows them to peer into the context, mindset, behaviors, and challenges of system users. UXR can fuel product strategy in a way that allows a team to connect the dots between user needs and shipped features. Outcomes from research should aim to guide what is prioritized in the development process and accelerate decision-making.
However, resources such as time, money, and actual users tend to be tight — especially for UX practitioners working in and around the federal government. We are left asking, how might teams deliver UXR value when resources are constrained and/or stakeholders object to the process?
Start With Design Advocacy
The first piece of advice primarily pertains to those who view themselves as UX advocates, whether they wear the title of “Designer” or not.
Your stakeholders might be omitting UXR, deeming it too expensive, too time-consuming, or simply unnecessary. After all, researching user needs is an abstract concept compared to polishing a User Interface (UI), which yields a tangible, visual result. Yet a lack of user empathy is likely to result in a product that does not align with user needs or expectations, and teams that are not engaging in research and testing are more likely to arrive at inaccurate conclusions. Resistance to investing in design-related research can indicate that an organization is in the initial stages of UX maturity. This mindset reveals an organization that is ripe for design advocacy.
When advocating for UXR, it is best to start small and persist. Instead of attempting to broadly educate stakeholders on the entire UX design process, educate your team and direct leadership on a particular research approach you want to employ and why that approach supports the business right now. UXR is commonly intended to uncover barriers to the use of a product, and uncovering areas of friction creates the opportunity to move the needle in a positive direction, illustrating how the goals of your research align with the product's potential. Successfully demonstrating value on a small scale opens the opportunity to expand into new and larger areas.
Research Without Actual Users
Now, you may be thinking all this talk about advocacy is great, but what if you simply do not have access to users to make your UXR dreams come true? A pragmatic rule to abide by is that some research is better than no research and some testing is better than no testing.
Below, we will unpack some techniques that you might consider when accessing actual users is off the table.
Stakeholders are certainly not equivalent to users, but we would be remiss not to recognize the treasure trove of knowledge that exists within a team responsible for a product's success. Stakeholder interviews can be used at any phase of a project to gather context from diverse points of view. Consider interviewing product team members, developers, decision-makers, and adjacent teams as a means of collecting data.
Stakeholders at every level have knowledge and experience that, when externalized, can be used to paint a picture of the current state and where a team thinks they are headed. As with any UXR interviews, stakeholder interviews should be carefully synthesized for themes. Keep an open mind as discoveries from stakeholder interviews are likely to reveal blind spots about system users, assumptions the team has made about user challenges, or even significant discrepancies in product vision.
Utilize stakeholder interviews to collect information as opposed to evaluating ideas. We will touch on how to evaluate ideas when users are not within reach a bit later.
Some questions that you might consider during a stakeholder interview:
- What is the vision for the product, website, app?
- Who are our users?
- What are our users trying to accomplish?
- What challenge(s) do our users face?
- What problem(s) are we solving for our users?
- What assumption(s) have we had to make about our users and their needs?
Lacking the time and money needed to conduct UXR with actual users is a reality many teams face. However, that should not prevent you from tapping into what actual users are saying in more creative ways. Peering into sentiment across a variety of channels can serve as an indicator of how users are feeling and what challenges they are struggling to overcome. Whether your team is responsible for developing a public-facing website or an enterprise product, there are avenues to collect the voice of your users quite literally.
Consider these channels when collecting user sentiment:
- Social Media (Reddit, Facebook, LinkedIn, Discord, etc.)
- Help Desk Tickets
- Feedback Surveys
- Product Inboxes (complaints, requests, questions)
- Blogs and Forums
Analyzing sentiment does not have to be an exhaustive data-analysis effort (or a pointless word-cloud exercise); instead, it may take the form of sentiment score charts that quantify positive and negative points of view.
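A lightweight sentiment score can be as simple as tallying positive and negative keyword hits per channel. The sketch below assumes you have already collected feedback text by channel; the keyword lists and sample comments are illustrative placeholders, not a real sentiment lexicon.

```python
# Minimal sketch: score each channel by the balance of positive vs.
# negative keyword hits, yielding a value between -1.0 and +1.0.
POSITIVE = {"love", "easy", "fast", "great", "helpful"}
NEGATIVE = {"confusing", "slow", "broken", "error", "frustrating"}

def sentiment_score(comments):
    """Return (positives - negatives) / total keyword hits, or 0.0 if none."""
    pos = neg = 0
    for comment in comments:
        words = {w.strip(".,!?").lower() for w in comment.split()}
        pos += len(words & POSITIVE)
        neg += len(words & NEGATIVE)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

# Hypothetical feedback gathered from two channels
feedback_by_channel = {
    "help_desk": ["The export keeps failing with an error", "Search is slow"],
    "survey": ["Love the new dashboard, so easy to use", "Great update"],
}

for channel, comments in feedback_by_channel.items():
    print(f"{channel}: {sentiment_score(comments):+.2f}")
```

Scores like these can feed a simple bar chart per channel; the point is a directional signal for prioritization, not statistical rigor.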
Analyzing sentiment can be useful when resources are particularly tight, and a team is struggling with prioritization. Peering into sentiment across channels may be just what your team needs to see where they may have veered off track and need to adjust upcoming plans. It also surfaces what is working well to reinforce confidence in a team. Reporting the results of sentiment analysis is an opportunity to advocate for design-led approaches. Communicate to stakeholders what you have done as an alternative to in-depth methods, why it was necessary, and what was revealed.
Whether you are constrained by access to users or not, competitive analysis should be an essential facet of your UXR practice. Ask yourself whether there are other solutions in the landscape that align with what you are offering. Sure enough, a shortlist of competitors will come to mind.
A competitive analysis exercise aims to catalog the features and functions of competitor products and compare them to your own offering. Following competitive analysis, your goal should not be to “copy & paste” competitor features into your solution, but instead consider which parts of a competitor solution fit your context, which can be improved upon, and which do not align altogether.
“Know when to perform a ‘comparative analysis’ [vs. competitive analysis]. Study solutions from products that are not direct competitors. For example, if you are designing a solution that includes a calendar scheduling feature, explore the best calendar scheduling solutions, regardless of the vertical.”
— Jill DaSilva, Adobe XD Ideas
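The cataloging step described above can be sketched as a simple feature matrix. All product and feature names below are hypothetical placeholders; the useful output is the list of gaps to evaluate for fit with your own context.

```python
# Minimal sketch of a feature-comparison matrix for competitive analysis.
# Products and features are made-up examples for illustration.
features = ["Bulk export", "Saved searches", "Mobile app", "SSO login"]
products = {
    "Our product":  {"Bulk export", "SSO login"},
    "Competitor A": {"Bulk export", "Saved searches", "Mobile app"},
    "Competitor B": {"Saved searches", "Mobile app", "SSO login"},
}

# Print a yes/- grid, one column per product
print(f"{'Feature':<16}" + "".join(f"{name:<14}" for name in products))
for feature in features:
    row = f"{feature:<16}"
    row += "".join(f"{'yes' if feature in has else '-':<14}"
                   for has in products.values())
    print(row)

# Features competitors offer that we lack: candidates to assess, not copy
gaps = set().union(*products.values()) - products["Our product"]
print("Gaps:", sorted(gaps))
```

Keeping the matrix in a shared document (or a script like this) makes it easy to revisit as competitors ship new features.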
It is possible to evaluate an idea or prototype without access to actual system users. Remember, some testing is better than no testing, and a popular, lightweight way to achieve some testing is Guerrilla testing. Guerrilla testing is simple and inexpensive, and you may even find it reasonable to forgo formal reporting by involving stakeholders and/or product team members in test planning and observation.
When you are evaluating an idea via Guerrilla testing, you are taking an idea to anyone who is familiar with navigating the web and asking them to complete a task by interacting with a prototype. Leverage your network of coworkers as test subjects, but be sure your test participants are not developers, designers, or directly part of the technical project you are supporting.
Guerrilla testing requires only basic preparation: create a prototype, write a discussion guide, and schedule participants, all of which can be achieved in a matter of hours rather than days or weeks. A few hours spent planning and conducting rapid Guerrilla tests could mean the difference between shipping a product with a confusing flow and unclear language and revealing an opportunity to make something better.
When users are not accessible prior to launching a feature, there is an opportunity to reach them in production via A/B testing. The purpose of A/B testing is to evaluate two different versions of an experience or user interface against one another and optimize accordingly. While A/B testing will not allow you to gain insight into a user’s challenges and frustrations, it will offer a chance for you to make data-driven improvements based on what produces a preferred result.
A/B testing is dependent on your ability to collect data and measure results. Your measurement criteria may come in the form of clicks, views, or conversions. Start by collecting baseline data to evaluate the current state. Then, determine what success would look like if you were able to influence a change over the current state. Next, design and review some options with your team that map to your preferred result. Lastly, deploy your A/B experiment and compare the results.
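The comparison step above can be made concrete with a two-proportion z-test, a common way to check whether the difference between variants is likely real rather than noise. The conversion counts below are made-up illustration data.

```python
import math

# Minimal sketch: compare conversion rates for variants A and B using a
# two-proportion z-test (pooled standard error).
def ab_compare(conv_a, n_a, conv_b, n_b):
    """Return (rate_a, rate_b, z), where z is the two-proportion z-score."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return rate_a, rate_b, (rate_b - rate_a) / se

# Baseline flow (A) vs. redesigned flow (B): conversions out of total views
rate_a, rate_b, z = ab_compare(120, 2400, 156, 2400)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}")
# |z| > 1.96 roughly corresponds to 95% confidence that B differs from A
```

In practice, an analytics or experimentation platform will usually compute this for you; the value of the sketch is understanding what "compare the results" means before trusting a dashboard.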
A/B testing is one tool among many and should not account for your entire research and test strategy. Remember, data will tell you quantitatively what is occurring, but on its own, fails to surface a qualitative reason for why.
Delivering UXR value is entirely possible even when resources are limited. UX professionals operate within the constraints of technology day-to-day, and the constraints of an organization are not so different.
When resources present a challenge, consider how you can investigate the impact of design on an audience as opposed to spending your dollars on a polished UI. While a polished UI is a tangible and desirable outcome for many organizations, what is to be gained from a product or website that looks great but fails to deliver value?