Phase 3: Design Value Propositions
Unit 1: Creating Lab Propositions
Assessing the desirability of the concepts
In this unit:
The aim was to test the value of the concepts, so the lab team could begin to assess which services could be engaging. At this stage, overall impact or efficacy of the proposals could not be verified, but it was possible to gauge desirability for users.
For the lab team, this involved bringing the concepts back from a more futuristic setting to a believable present scope, so that users were able to interact with the prototypes and the team could understand the value of each concept.
In parallel, the selected Studio teams (students) took their projects further, developing prototypes and testing them with users.
Another aim was to spark conversations with people around each topic, in order to collect deeper insights. These insights expand the understanding and perspective of both the organisation and the lab team. Ultimately, the approach puts the client in a position where they can answer questions like, ‘Is this service in line with our overall strategy?’ and ‘Should this service be built?’.
The process of turning the lab explorations into propositions required multiple stages. The main challenge was to transform those concepts, which had been created to fit a particular future context, into propositions that felt believable and tangible today, while still investigating the critical assumptions related to each future scenario or concept.
The lab team had to shift from exploring possible futures to creating more plausible present concepts to be able to prototype and test their critical assumptions with users.
The explorations of service visions were not suitable for desirability testing because the discussion would likely have been dominated by the provocative and controversial elements of the ideas. The objective, however, was to collect insight on how those concepts could impact people's lives if adopted. Should a natural path to these futures occur, the concepts might not seem provocative at all, so this provocative quality had to be dulled.
The intent of the propositions was to be vehicles to explore more strategic questions with the users and the public. This meant they had to be convincing and easy to engage with in a meaningful way.
The approach the lab team followed to bring those future concepts into the present is called 'backcasting', a popular methodology used in the context of scenario planning. It is adopted by businesses and organisations that create future scenarios in order to outline the strategies they should adopt in the present to make those outcomes possible.
For the lab team, backcasting simply meant bringing their ideas from the future back toward the present, reframing the service visions as propositions that might exist in the near future rather than the far future. These reframed versions still retained a traceable line of evolution to the future; they were, in effect, disguised, digestible intermediary stepping stones.
With the new value propositions, the lab team then built prototypes to be tested with different types of audiences, such as recruited users, targeted users on social media and the client’s core team.
Testing with multiple stakeholders gave the lab team quantitative and qualitative data, as well as strategic feedback to compare the concepts and conduct a final evaluation and selection of the propositions to prioritise.
Defining a design research strategy
The lab team went through a process of turning the explorations into propositions before prototyping and testing the concepts. Although the explorations were useful to extract insights on the themes and topics, the lab team now needed something more tangible that people could relate to and believe.
To achieve that, the lab team followed a series of activities that helped them reframe the value propositions and prepare a prototyping plan.
This stage is called 'Design research strategy', meaning they went through a design process to reframe the value propositions, create a strategy to conduct research, outline a plan to test critical assumptions, gauge the desirability of the new propositions, and confirm the strategic alignment with the client.
To reframe the value propositions, the lab team followed a methodology called backcasting, which helped them transform the explorations into more believable propositions. This process was also tied to the prototyping plan, as the propositions were reframed in a way that allowed them to be quickly and easily prototyped and tested.
The first activity was to review the assumptions linked to the explorations to understand which were safe assumptions and which required testing to be validated, i.e. which were ‘critical assumptions’.
For each of the selected explorations, the lab team went back to the original design challenges and scenarios and identified the initial key assumptions that led to the creation of the explorations. They then considered the original project goals and what the client wanted to explore based on their strategy. Then, based on all this information, they created a specific strategic question for each proposition.
These strategic questions helped the lab to maintain the right direction during the development of the prototypes, as well as further challenge their thinking. Each strategic question was a synthesis of the hypotheses made about the user, the problem and the solution’s mechanisms, combined with the client’s specific strategic interests in each area.
Options for prototyping
After the definition of the strategic questions, it was necessary to start thinking about how to prototype and test the critical assumptions.
The objective was to structure the hypotheses into something actionable, which could be tested and validated. So, the team explored different ways of prototyping and tried to get an initial understanding of the feasibility and timeframe to build the prototypes. To brainstorm those prototyping options, the lab team imagined what the concepts created for the explorations would look like in the present context, while maintaining the focus of the strategic question.
Thinking about ways to prototype at this stage made it easier for the lab team to narrow down the scope for the definition of the new value propositions, by considering what could be developed with limited time and resources.
Backcasting of value propositions
For users to engage with the concepts meaningfully, the lab team needed to reframe them to make them less provocative and less future-oriented, so that users would interact with and critique each concept, reflecting on its impact rather than spending time questioning the plausibility of the idea.
The way they did this was by following a methodology called ‘backcasting’, which is a practical approach that starts from the definition of desirable and optimal future scenarios and then works backwards to identify the main steps or mechanisms that can be applicable in the present to realise that future scenario. It is more common to see backcasting applied in the context of sustainability strategy; however, the lab team saw the opportunity to adapt the methodology to the reframing of the explorations in what they called ‘value propositions’.
For each selected exploration, the lab team had a strategic question and a series of critical assumptions. These were used as the starting point for the 'backcasting' process. While the importance of ethical exploration had always been present, this was a key moment to delve into it because the concepts were about to be taken to a more tangible level, and put in front of real users. It was the moment to examine more closely these concepts' impact on people and how the approaches aligned with the ethical principles of the client.
Considering all these elements, the lab team used the backcasting methodology to work backwards. From the exploration, which was an extreme idea set in the future, they started imagining how the users could genuinely adopt it. They started analysing how the mechanism they identified in the strategic question could be enabled through a service, what this service would look like if it existed today, what types of features it would have and whether the service could evolve into a future service that contained the critical elements of interest.
The resulting value propositions were not all entirely ethically sound. However, problematic value propositions and the responses they may elicit, still enrich our understanding of the plausibility of potential futures and inform current strategy.
The desirable future scenarios were those where a particular service provides users with more agency over their choices and an improvement to their health and happiness. The way the lab team framed each value proposition was to consider those scenarios as the outcome and then design propositions that could lead there, so they could understand if users also saw those scenarios as desirable.
In this way, each value proposition was simply a disguised, digestible intermediary stepping stone into the future. These experiments allowed the lab team to understand whether users would engage with that first intermediary step, which in turn signalled something about the potential of subsequent steps.
Planning of experiments
With the value propositions defined, the lab team started outlining a plan to prototype and test them with the users.
As mentioned earlier, the objectives of testing the propositions were to:
- validate the critical assumptions;
- measure desirability; and,
- evaluate the strategic alignment with the client.
For the lab team, it was essential that the testing results were comparable, so that they could compare the propositions and make an informed decision on which ones to bring forward.
Thus, they included three critical components in the planning:
- the research questions that needed to be answered through the testing;
- the methods they would use to conduct the testing; and,
- the type of audiences they would test with.
The research questions were based on the critical assumptions, and they were related to the user, the problem and the solution. In particular, the lab team wanted to understand if the needs and goals they identified for the user segments were accurate, if the problem they defined was the right one to be solved, and if their proposed solution would be desirable to the user segment.
To find the answers to these questions, the lab team decided to collect qualitative data and feedback about the user, the problem and the solution, by speaking directly to the users. They used an external agency to recruit people who fit the specific user segment of each proposition to participate in interviews and workshops. This segment was defined based on the extreme user profiles from the first phase.
For each prototype, the lab defined what type of people they were looking for and created detailed briefs for the agency. The agency then applied screening criteria for participant selection, based on the information shared by the lab team.
As the user segments and propositions were so different from each other, the lab had to 'standardise' the ways users would engage with the concepts, by creating interfaces as prototypes and using the same structure for all the interviews and workshops.
To test desirability, it was also necessary to think on a bigger scale. Therefore, the team used advertising on social media to present the propositions to the general public. They defined a strategy, which started by researching best practices for the different platforms, identifying which channels and types of media to use, and addressing all the technical aspects.
Finally, to test the strategic alignment with the client, the lab team proposed an in-depth 'Service Safari' as a method to present all the concepts together with the insights collected from user testing.
The objective of the experiment stage was to explore the validity of the core assumptions associated with each proposition. The lab team designed this process as an experiment, creating low-resolution prototypes to test key assumptions and hypotheses, such as desirability and strategic alignment, and to answer the critical research questions. With only a minimum investment of resources, these steps inform the selection of concepts to bring to the next level.
This practice made an important contribution in de-risking the development of the services in the next stages, by putting the concepts in front of people and getting feedback early on, before significant investments of money and time were made. The three primary audiences for experimentation were: recruited users, who engaged with the prototypes directly; the general public, who interacted with the social media ads; and finally the client, who tested the prototypes and reviewed the insights to give a more strategic assessment.
The model was based on the build-measure-learn feedback loop, from the lean startup methodology. It is a loop constructed to maximise learnings by using an incremental and iterative approach.
- It starts by turning ideas into something concrete and tangible;
- then measures how people respond to them;
- followed by a learning moment to process insights and make decisions about how to start the building stage again with new objectives.
What is interesting about the prototyping and testing of these propositions is how the future contexts were presented to the audiences. The new value propositions were reframed to be believable in the near future, but they still included some aspects of the far future scenarios. To the recruited users, the lab team presented the concepts as ideas that were in development. Therefore, they knew they were entering a research context, but reacted to the propositions as though they were a close reality, rather than hypothetical. With the advertising on social media, these fictional services were brought to an even more realistic level, by claiming they were in beta testing and about to be released in the market.
The use of these techniques to immerse people in a context where the propositions could conceivably exist, helped the lab team to validate the assumptions and get insights on the desirability of the concepts.
By rapidly testing multiple propositions, the lab team could assess the value of the concepts before investing significant time and resources into developing a smaller selection. The model had to produce prototype results that were comparable with each other and be flexible enough to allow quick and frequent iterations in order to tailor the testing for different audiences.
To test the propositions with recruited users and the general public on social media, the lab team developed some tangible artefacts to facilitate the interaction.
Starting from the core value proposition, they defined the main features of each concept in a way that would expose the critical assumptions. They then created a specific brand identity to convey the idea, its context, and who it was for. Then using these elements, the lab team proceeded to design the interactions and touchpoints, mostly by designing app mockups and clickable UIs (user interfaces) for each concept.
The features and interactions were summarised in a website landing page which was to be used as the destination for the adverts. And for the adverts, the lab team also prepared a series of images and copy. These included variations on the target audience and features in order to test multiple elements at the same time. All of these artefacts posed the proposition as a reality.
Discussion of implications
After creating the prototypes, the next activity was to show those prototypes to people in the field to collect feedback and observe reactions. In this first round, the lab team worked with two types of audiences: recruited users and the general public on social media.
The lab team recruited a group of users for each proposition with whom they either conducted a workshop or individual face-to-face interviews. Face-to-face interviews were sometimes more appropriate due to the sensitivity of the topic. For other propositions, the lab team wanted to observe how participants would respond to each other and what conversations would be sparked by the interaction, so they opted for a workshop.
For both the workshops and the face-to-face interviews, the lab tried to standardise the structure by using the same 3-stage process:
- understand the users;
- understand their view on the topic; and,
- understand their response to the proposition as a solution to the problem.
There was also an extra stage for filling in a standard scoresheet, which collected some quantitative data about the appeal, the impact, the influence on happiness, the significance, longevity, likeliness and trust associated with each proposition.
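As a rough illustration, scoresheet responses like these could be aggregated per proposition by averaging each dimension across participants. The dimension names below mirror those listed above; the scores, scale and participants are entirely hypothetical:

```python
# Hypothetical sketch: averaging scoresheet responses for one proposition.
# Dimensions mirror those in the text; all scores below are invented.
from statistics import mean

DIMENSIONS = ["appeal", "impact", "happiness", "significance",
              "longevity", "likeliness", "trust"]

def aggregate_scoresheets(responses):
    """Average each dimension's score (e.g. on a 1-5 scale) across participants."""
    return {dim: round(mean(r[dim] for r in responses), 2)
            for dim in DIMENSIONS}

responses = [
    {"appeal": 4, "impact": 3, "happiness": 4, "significance": 3,
     "longevity": 2, "likeliness": 3, "trust": 4},
    {"appeal": 5, "impact": 4, "happiness": 3, "significance": 4,
     "longevity": 3, "likeliness": 2, "trust": 3},
]
print(aggregate_scoresheets(responses))
```

Averages like these would make the qualitative sessions comparable across propositions, which is the property the lab team needed for the later evaluation.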
The lab team presented the context to the recruited users by introducing them to the central topic of health and happiness. They explained how the lab was conducting the research and that the objective was to understand how the users would feel about the service propositions. To limit any discussion about the legitimacy of the idea, the participants were told those concepts were in development and would potentially be released into the market soon.
In parallel to this recruited user testing, the lab team also conducted testing through advertising on social media. The objective of these ad campaigns was to generate comparative data on the performance of each advert to understand which concepts were more desirable. Therefore, each concept had a standard advert structure and received the same funds. At the same time, they ran experiments with different audiences and highlighted different aspects of the service in order to test their research questions: the relevance of the problem, whether their user hypothesis was correct, and the appeal of specific features, all simply by comparing the results of alternative adverts.
For each proposition, they had an image and text introducing the service, and a call-to-action to find out more on the website. On the landing page, users were prompted to leave their email addresses to get early access. The collection of emails was useful as quantitative data, but also as permission from interested users to be contacted again.
Synthesis of insights and discussion of learnings
At the end of this first round of testing, the lab team assembled the qualitative and quantitative data and feedback. From the recruited user interviews and workshops, they collected a series of comments and questions that they synthesised into insights. The insights were either about the user, about the problem or about the solution (following the same three categories used in the experiment).
For the more complex social media data, the lab team examined and compared metrics across all the propositions. They collected quantitative data from the performance of the ads and insights about demographics as well as some qualitative feedback through the users’ comments under the adverts. By analysing all the insights and data collected, they were able to validate or invalidate some of their critical assumptions and answer key research questions. This moment of synthesis was an advantageous point to make a strategic decision for each proposition, before entering the next prototyping round.
In the lean startup methodology, the options for this decision moment are called persevere, pivot or kill. For some of the propositions that reported quite successful results, the decision was to persevere, meaning they would enter the second round of testing without major changes in strategy. For others, the decision was instead to pivot, which for the lab team meant considering variations of the concept or hypotheses for further testing, such as a change in target audience or brand positioning, or even combining two propositions. This allowed them to explore and test other possibilities instead of killing the idea entirely because of less successful results. Only a few concepts were “killed” after a team assessment that determined they were not worth being brought forward, especially if they had not been as convincing from the start.
Refinement of prototypes
Based on the learnings of the first round of testing, the lab team assessed each proposition to see if there was a need for further testing with users. They took the insights and turned some into new research questions to test. The additional user testing was conducted through adverts on social media, which proved to be a quick iterative method to test alternatives. Some poor-performing propositions were excluded from this round. In contrast, for others, the lab team proposed different branding to test the appeal to different genders, or combined similar concepts to see if the proposition had stronger results. For these alternatives, they created new prototypes, such as interfaces and ads, and they modified the landing pages when necessary, which enabled them to experiment online with the public and collect fresh quantitative data.
Second round of experiments
The second round of experiments included further testing with users and with the client to assess the strategic value of the propositions. They prototyped and tested some alternatives of the propositions using social media advertising, as they had done in the previous round of experiments, by varying branding, features and positioning and then comparing success metrics to answer their questions. They tested only a few concepts, leaving out those that had already performed strongly and those that had performed poorly in the first round. This process was intended to decide on the ambiguous concepts.
The second type of testing in this second experimentation round was with the client. The lab team prepared for a workshop to present all the propositions and their main insights to allow the client’s team to become familiar with the concepts and discuss how strategically aligned they were with the organisation’s strategy and current objectives.
The type of workshop they conducted with the client is called a Service Safari, which allows the participants to take the point of view of the users by experiencing the services in first-person and by imagining the contexts in which those services would be used. For this, the lab team created a template to present all the elements of each proposition, such as links to UI prototypes, descriptions, features and insights. In this way, the client's team would have an instant overview of the entire service as well as information about the original problem and the strategic question.
The workshop had two parts associated with two objectives:
- to share with the client the progress made on the development of the ten selected concepts; and,
- to discuss the concepts together to better understand which propositions were more strategically valuable and to vote on which were priorities.
Each member of the client’s core team was asked to express a preference on the propositions by prioritising them using a scale from 1 to 10 and justifying their choices.
Synthesis and conclusion
The lab team then reviewed the results internally, collecting and analysing all the insights from both experimentation stages and comparing the data. The intent was to make sense of the qualitative and quantitative data and the strategic insights they got from the different audiences they had tested with, and therefore identify the potential opportunities for each proposition. They also internally reviewed everything they did during the prototyping stage, assessing what they did wrong and what could be improved, as well as the effectiveness of the testing methods they used.
Assessing and prioritising
The final assessment was conducted based on two main factors:
- desirability, which the lab team evaluated mainly through the ads on social media; and,
- strategic value, which they gauged from the interviews and workshops with recruited users and the workshop with the client.
This was to ensure that the subsequent research would address topics relevant to the future because people are interested in them, while at the same time bringing new information of strategic importance to the client's future.
Definition of metrics and KPIs
An essential activity for the assessment was the definition of the critical metrics and KPIs (Key Performance Indicators) to consider for the next development phase. During the review of the concepts, the client identified some parallels between the lab team’s propositions and the research done by their emotional AI team and they saw potential opportunities to support each other’s work. On the client’s suggestion, the lab team organised a workshop to engage with the AI department, to understand more of what type of data they were collecting, what kinds of technologies they had available and how they measured success.
This was an excellent opportunity to align with the client not only on a strategic level but also regarding their technological prospects, so that they could potentially offer even more value to the client with the project. It was also an opportunity to explore ways to pivot some of the concepts to maximise value for all the stakeholders.
This session gave the lab team some direction in terms of which metrics to prioritise and how to measure the success of the subsequent developments.
By working with the AI team in particular, they defined priority metrics and targets for three main variables:
- user engagement;
- data collection; and,
- technology potential.
This model was an adaptation of the desirability, viability and feasibility lenses from the design thinking approach to innovation. For engagement, they defined a specific metric to measure success; for the data collection, they looked at how to generate the type of data they needed to train the algorithms; and finally, they brainstormed ways to test the potential of the technology.
Comparison of results
The second activity of the assessment was the comparison of the results from the two experimentation rounds. The lab team had already compared the results of each proposition individually, but now it was about considering the propositions against each other in terms of desirability and strategic value.
The first set of data they examined was from the results of the social media ads testing to compare desirability. Among the metrics the lab team had tracked and measured during the experiment, the most relevant was the rate of sign-ups per landing-page view. This rate measured the number of people who left their email on the landing page to receive early access, divided by the number of people who got to the landing page from the ads. They primarily used this metric to rank the propositions' desirability (while still observing other results).
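The metric described above can be sketched in a few lines; the proposition names and figures here are invented for illustration:

```python
# Hypothetical sketch of the sign-up-rate metric: email sign-ups on the
# landing page divided by landing-page views. All figures are invented.

def signup_rate(signups: int, views: int) -> float:
    """Rate of sign-ups per landing-page view."""
    return signups / views if views else 0.0

ads = {
    "Proposition A": {"views": 1200, "signups": 96},
    "Proposition B": {"views": 1100, "signups": 44},
    "Proposition C": {"views": 900,  "signups": 63},
}

# Rank propositions by sign-up rate, most desirable first.
ranked = sorted(ads, key=lambda p: signup_rate(ads[p]["signups"],
                                               ads[p]["views"]),
                reverse=True)
print(ranked)
```

Using a rate rather than raw sign-up counts controls for differences in how many people each advert drove to its landing page.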
For the other key variable, strategic value, the lab team relied mainly on the prioritisation voting they had conducted with the client’s core team in the workshop. This voting had considered not just the importance of the concept itself, but also how relevant the entire topic was to the client. They visualised the two rankings with tables and identified which concepts resulted in being both desirable and strategically valuable.
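A minimal sketch of how the two rankings could be cross-referenced to surface propositions that are strong on both axes; the proposition names, sign-up rates, vote averages and cutoff are all invented assumptions:

```python
# Hypothetical sketch: combining the desirability ranking (ad sign-up rates)
# with the strategic ranking (client's 1-10 prioritisation votes) to find
# propositions that rank highly on both. All data below is invented.

def top_on_both(desirability, strategic_votes, cutoff=2):
    """Return propositions in the top `cutoff` of both rankings,
    ordered by desirability."""
    by_desirability = sorted(desirability, key=desirability.get, reverse=True)
    by_strategy = sorted(strategic_votes, key=strategic_votes.get, reverse=True)
    return [p for p in by_desirability[:cutoff] if p in by_strategy[:cutoff]]

desirability = {"A": 0.08, "B": 0.04, "C": 0.07}   # sign-up rates
strategic_votes = {"A": 8.5, "B": 7.0, "C": 6.2}   # mean client votes
print(top_on_both(desirability, strategic_votes))
```

This mirrors the lab team's table-based comparison: a concept only advances if it appears near the top of both lists, not just one.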
Once the lab team had identified the four most desirable and strategically valuable propositions, they conducted a strategic assessment to decide which concepts to prioritise for development.
They identified what outstanding questions remained unanswered from the prototyping and they outlined a strategy for further testing. They then met with the client once again and discussed the next steps. The ambition was to identify which proposition's development to prioritise in the next stage. The questions discussed with the client to support the selection were: What could fit with their current strategy and offer credible but innovative alternatives that challenge the market and the organisation? What could be the most engaging value proposition based on pursuing happiness? What could help them explore use cases for 'explainable' and 'emotional' AI? And, finally, what could help them test the front-facing value of the concepts, their assets and their ethical standpoints?
The final decision was to prioritise a concept called EQLS, which aimed to help people with anxiety through the use of artificial intelligence. The plan was to develop more sophisticated MVPs to test other elements such as feasibility, viability, engagement and impact.
Dimensions of change
Self-identity is challenged, open for exploration and the building of character.
Your relationship with your body may be pressurised, but you could have more capacity than ever to control it.