Phase 4: Develop Live Services

Unit 1: Developing services in-house

Measuring impact and engagement

In this unit:

The value propositions of the previous phase are transformed into live services that can be tested in the market. 

In this section, the process focuses on the development of services within the in-house team, which had been selected for investment by the client because they not only represent an opportunity for commercialisation but also offer strategic value beyond monetisation (for instance, for the collection of data or the testing of technologies).

This unit covers defining success criteria and redesigning the proposition to extract maximum strategic value for the client, before developing the services, releasing them into the market and analysing their value.

 

Overview

One of the studio teams was selected for investment, but instead of setting up independently or as part of Innovation RCA, this studio team joined the lab team for incubation. 

The in-house development unit covered the work of both the lab team and the studio team. In this unit, however, the process is explained by following only the lab team's work; references to the process apply equally to the studio team being incubated.

The development of services in the form of MVPs required a shift of focus from defining the value proposition to designing a user experience that reflected the value proposition. Most of the work done developing the MVPs was aimed at testing the user experience, in particular the impact and engagement for the users.

There were two main rounds of testing: 

  • The first round was to prove the concept and assess impact and engagement, and was conducted with recruited users; 
  • The second round was to look for market fit with the development of an MVP to be released on the market.

The approach they followed for testing the MVPs was again based on the build-measure-learn feedback loop from the lean startup methodology. However, for the development of the MVP, the lab team introduced the agile approach, which is an iterative way of working with the development team, based on incremental delivery and rapid iterations.

Overall, the testing of the MVPs was greatly influenced by the technological capabilities of the in-house teams. The application of artificial intelligence, the main component in the propositions, was not feasible at this point. Therefore, the options were either to create a concierge MVP, where the AI would simply be simulated, or to build a fully functioning MVP able to test the impact and engagement of the value proposition without AI and its associated features. Both teams opted for the latter.

The main questions to answer were whether the product could fulfil the value proposition (i.e. whether it could be of value for the client and the user), but also whether it could collect meaningful data for the client's AI team (to train the algorithms they were working on). Answering these questions meant assessing the success of the services in a live market environment.

 

01

Defining a design research strategy

During the previous phase a proposition called EQLS was selected as it proved to be engaging, aligned with the client’s strategy and could provide data for the client’s AI team. The EQLS proposition aimed to help people with anxiety through conversation and self-learning empowered by artificial intelligence.

The main challenge was to create a user experience that took into account the technological limitations of the MVPs, but still experimented with the experience and mechanisms of the value proposition. Because the application of AI was not feasible at this stage, the lab team took a step back and considered other options that could deliver the elements of the value proposition. In particular, they explored the structure, context and framing of conversations, and other internal or external processes that encourage self-reflection and could have a meaningful impact on people's anxiety.

Using this new lens, which focused the research on the theory of self-reflection instead of the technology, the lab team defined a new strategy for prototyping the service. This, coupled with an updated set of strategic questions, user segmentation and a technology assessment, enabled them to plan the next steps by defining the scope of the two MVPs.

Activities

Activity 01.
Update of strategic questions

Following the learnings from the testing rounds of the proposition, and the feedback from the client, the lab team started to refine the strategic questions for the selected concept. In this phase, they planned to build a working service that would be tested with the general public. It was therefore necessary to adapt the strategic questions to a more market-oriented approach. Consequently, they shifted their approach and strategic question on reducing anxiety from technologically driven mechanisms, such as AI characters and chatbots, to more personal and human mechanisms of self-reflection.

Activity 02.
Segmentation and recruitment strategy

In light of what was observed during the previous testing of the EQLS proposition, the lab team considered doing more research on potential user segments. The interviews with users for testing the proposition had disproved some critical assumptions and uncovered new possibilities for the user hypothesis.

A key finding was that users who experience high anxiety and frequently reach out to friends, family and professionals to reflect on their emotions, were more likely to have overcome stigma associated with self-reflection, and they were therefore less inclined to try new methodologies.

Consequently, the lab team expanded the research to users who had less experience in dealing with their anxiety and would benefit from learning how to practice self-reflection. To recruit these new users, they used an external recruitment agency, providing them with an updated brief and further screening questions to identify the right people to interview.

Activity 03.
Technology assessment

After the discussion with the client about the technical considerations for the creation of the MVPs, the lab team decided to limit the use of AI applications in this phase of the development. Instead, they experimented with other ways to simulate conversations within a digital device. The idea was to be able to test the mechanisms, and once validated, they would eventually build out a machine learning version over time.

The lab team therefore researched simple rule-based chatbots, exploring what options were available and how they could be integrated with online platforms and other channels. They also looked at Natural Language Processing and voice detection services that were available off the shelf. This technology assessment helped the team understand the limitations of the development and define the scope for the MVPs.
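To illustrate the kind of rule-based approach the team assessed, here is a minimal keyword-matching chatbot sketched in Python. It is purely illustrative and not the team's implementation; the keywords and prompts are invented for the example.

import re

# Illustrative rules: a keyword pattern mapped to a reflective follow-up prompt.
# The patterns and prompts are hypothetical, not the team's actual content.
RULES = [
    (re.compile(r"\b(anxious|anxiety|worried)\b", re.I),
     "What do you think is behind that feeling right now?"),
    (re.compile(r"\b(work|job|deadline)\b", re.I),
     "How has work been affecting the rest of your day?"),
]
FALLBACK = "Tell me a little more about that."

def reply(message: str) -> str:
    # Return the first matching rule's prompt, or a generic fallback.
    for pattern, prompt in RULES:
        if pattern.search(message):
            return prompt
    return FALLBACK

print(reply("I've been feeling anxious about a deadline"))
# -> "What do you think is behind that feeling right now?"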

Activity 04.
Scope of MVPs

Based on the technology assessment, the lab team defined the scope for the MVPs, considering their budget and reach. Since the next objective was to test impact and engagement, there was a clear need to create functional prototypes that users could really experience. This initial prototype could de-risk investment in the building of an actual product, which could then go on to find the correct fit in the market.

Based on these considerations, the lab team eventually decided to have two different MVPs: 

  • One to test with recruited users, based on a web app – something more flexible, fast and easy to develop
  • Then, depending on how successful the testing had been in terms of impact and engagement, they would invest more money and create a native app to launch for the general public on the app stores.
Activity 05.
Research and prototyping plan

Based on the decisions made regarding the scope of the MVPs, the lab team outlined a plan for the next stages of research and prototyping.

The plan detailed the research needed to adapt the value proposition to the new strategy. This involved more in-depth research into the theories and frameworks of self-reflection, more primary research with the newly defined user segment, expert advice in the field of psychology and behavioural science, and finally research on the competitive landscape to help decide where to position the service in the market.

Additionally, the lab team planned how to conduct the experiments for the two MVPs they were going to build. For the web app, they decided to conduct a week-long trial with the recruited users, to be able to observe their use of the app over an extended period of time. Using recruited users would allow the lab to measure the impact of the service on the users and to track changes in behaviour and patterns through more extensive interviews. Monitoring a small group of users for the entire duration of their interaction with the web app would allow the team to collect meaningful qualitative data.

For the native app, however, as they were planning to release it to the public, the focus was more on the collection of quantitative data and on tracking the frequency and types of usage. For this reason, they planned an experiment with detailed and robust data collection from the app, using an external service to track analytics in real time.

02

Reframing user and value hypotheses

The new objective for the lab team was to explore the potential of digital conversational tools to make self-reflection more accessible and impactful. Based on this new direction, they went through an additional discovery phase to improve their knowledge of the topic and market, conducting research through different means. This was to expand their understanding, but also to validate some of their critical assumptions.

They first conducted desk research on the theories and frameworks for mental health, to understand what the conditions for self-reflection are and how to create those conditions. They also delved into market research and competitor analysis to gain a better understanding of what was already available to users and where to find opportunity spaces in the current market landscape.

In parallel, the lab team conducted primary research with the new target users to understand more about them, their understanding of self-reflection and the issues they have with it. To explore these issues more, they also talked with experts and learnt more about the steps and triggers of self-reflection and the digital therapy services that were available.

The research uncovered some key findings, such as the benefits of creating calm and clarity through self-reflection, without the need to get into more therapeutic mechanisms, and how the problem of anxiety affects a much broader range of people than initially thought. All of that served as the basis to reframe with more detail the target user and the opportunity statement.

"This is the phase where it makes a lot of sense to get visual. Especially when running this long in longer form, sketching out ideas into flows or stories is quite helpful." John Makepeace, Manager at xploratory

Activities

Activity 1:
Desk research

Using desk research methods, the lab team explored other frameworks and theories for mental health and happiness. One of the main insights from the testing of the propositions was that there was an excellent opportunity for self-reflection as a way to deal with anxiety and improve wellbeing.

The desk research helped the team think beyond the technology when designing the solutions and focus more on the mechanisms that could trigger effective self-reflection. They consulted articles and resources to understand the relevant mental health issues and identify the main barriers to accessing mental health services in the UK.

They also explored psychological models for therapy and counselling, both traditional and digital, as well as explored the theory behind self-reflection and how it works. This activity is essential, as the theories behind the development of the concept up until this point were based on assumptions and research with users. The desk research helped the lab team prove or disprove their hypotheses.

Activity 02:
Market research

 In parallel, the lab team also conducted market research and a competitor analysis. This type of research was useful to get a good understanding of what was already in the market and what competitors were doing, identifying potential opportunity areas to differentiate the offer from what was available to the users.

The lab team focused their research on the main competitors that provide services around mindfulness and wellbeing, self-reflection and therapy support. They examined their strategies and features and how they positioned themselves in the market space. They compared the strengths and weaknesses of each service and the types of audiences they were targeting, which helped to uncover the spaces still not filled by the competition.

Another analysis concerned branding. It was important for the lab team to be aware of how their main competitors branded themselves, to understand trends and patterns around the same topic, but also to create something distinguishable and easily recognisable in the market.

Competitor analysis tools

Competitor analysis:

Use this template to analyse and break down key attributes of your competitors, including values, visual aspects and features. Use one template per competitor.

Download tool

Competitor analysis overview:

Use this template to compare key elements of competitors in your marketplace against your concept and against each other, and then categorise them.

Download tool

 

Activity 03.
Primary research

At this stage, the lab team conducted primary research based on their new user segmentation strategy, which had been defined through the value proposition experiments of the previous phase. During those previous experiments the original users, ‘the self-declared anxious’, revealed that, although self-reflection was useful for them, they did not see the value of an external conversation agent, because they were already able to practice it on their own. Based on these discoveries, the team decided to investigate opportunities for another user segment, which experiences lower levels of anxiety, but is less familiar with self-reflection techniques and would benefit from the service.

The lab team conducted face-to-face interviews with recruited participants. The sample scope was designed to be broader than the previous time, giving the lab the possibility to uncover multiple points of view and new user segments. This time the research focused more on the understanding of the user, and their opinions around self-reflection, giving much less importance to the current concept itself. This research phase was more about exploring the topic and less about validation.

The lab team also interviewed some experts in the field of psychotherapy and behavioural science to delve into the methodologies and tools used in cognitive behavioural therapy (CBT) and counselling. In addition to these interviews, they also conducted another round of guerrilla research (as in the first phase).

This type of research is conducted in public spaces and it aims to get quick feedback from people on the street. The lab team prepared a set of questions about how people feel, behave, reflect and express themselves and then went to specific locations where they might find their desired users and approached them for these on-the-spot interviews.

Through primary research, the team was able to build valuable insights about people’s barriers to, and methods for, self-reflection. This enabled them to form new hypotheses about who faces these problems most acutely, in what way, and how these challenges could be tackled.


User persona

This tool helps you form a persona from key insights collected during your research and highlight their problems, attributes and goals.

Download tool

User landscape

This tool is used to create an overview of users and define the attributes which might influence the users' relationship to the problem. Select an influencing attribute for the horizontal axis and place users horizontally on the map according to the high or low extremes of that attribute. Then either select another attribute for the vertical axis, to see what relationship it may have to the first, or use the vertical axis to examine the transformation you hope to see in your users, in order to directly show the impact of your first attribute on your objective. Creating different landscapes will allow you to explore the influence of different attributes, which can help determine which attributes are important or which users to focus on.

Download tool

03

Reframing the value proposition

After defining the new user and value hypotheses, the lab team proceeded to reframe the value proposition (the proposed service concept).
Based on the research conducted on self-reflection, the new user segment and the technology assessment, the lab team took a new approach, which was less focused on the use of artificial intelligence. The new objective was to help mildly anxious people who were not naturally reflective to reduce their anxiety by learning how to self-reflect, or simply by engaging in the practice more. That meant that the new value proposition had to meet new criteria and expectations. First, the lab team used their market research to define their market positioning; once this was assessed and agreed on, they focused on the mechanisms of impact that would allow users to self-reflect in an easy and accessible way.

Activities

Activity 01.
Competitor orientation

Following the market research and competitor analysis conducted by the lab team, the next step was to identify where they wanted to position themselves in the market.

Based on the market research, they decided where to position the service among the current offering of apps and services around mental health and happiness, choosing the digital therapy service category. They also identified the elements that would give them a competitive advantage, improve accessibility for users and distinguish them in terms of branding and interactions.

Through the competitor analysis, the lab team established that most of the apps and services available tend either to be explicit about the mental health aspect or to hide it behind gamification. Based on these insights, they decided to position themselves in a more neutral space, letting users choose how to interpret the service based on their individual needs, not hiding the mental health aspects but focusing on the cathartic or meaningful experiences the tool could provide.

Market landscape

This tool is used to have an overview of the market landscape by comparing the positioning of your potential competitors. This helps you to identify gaps in your market segment that aren’t currently filled so you can differentiate your service from the others.
Try different combinations of influential attributes as the axis variables, in order to compare your competitors. These attributes should be characteristics of features which consumers depend on to make decisions.

 

Download tool

Activity 02.
Mechanism of impact

After the technology assessment and the research conducted on the methods to trigger self-reflection, the lab team started to consider which mechanism of impact to use that could be prototyped in the new value proposition. This mechanism had to be implementable in a web app, which was the first MVP they decided to develop for testing with recruited users. Moreover, it had to allow the team to measure the impact on the users in both the short and long term. 

Based on the research, they found an opportunity space around the use of conversational interfaces. In particular, they discovered that engaging in conversations with an 'other' had tremendous value for users, even if that 'other' was not a real person.

Conversational interactions seemed to encourage users to reflect in order to express their thoughts in a way that could be communicated, which in turn allowed them to look at or hear those thoughts as though from an external point of view, triggering greater self-awareness and self-reflection. The team decided to test this mechanism through the development of the first MVP.

Use cases

This tool helps you describe how particular users would interact with the service to fulfil their individual objectives, and how the experience would fit into their daily lives. Use it to identify which features respond to the needs of the users and which could be excluded from the MVP.

Download tool

Design objectives

This tool helps you identify the main problems the users would experience in their current journey and structure objectives that could overcome these issues in your new service proposition.

Download tool

Activity 03.
Positioning options

Once the lab team had agreed on the mechanism of impact, there should have been an additional activity to define potential positioning options. Positioning options are alternative ways to design and deliver the value proposition, considering different narrative framings, different branding, and different ways of implementing the basic mechanisms of impact.  

For this project, the lab team did this activity after testing the first MVP. However, they later recognised that exploring the positioning options before the first MVP would have let them experiment with a broader range of interactions, instead of limiting themselves to testing only one in the form of the web app. In fact, the lab team chose to focus on a tool which encourages users to ask themselves questions in a conversational format, but they did not explore other options, giving more importance to testing the conversational interface.

For this reason, the activity is described at this stage. Carrying out this activity before the development of the first MVP would bring more value to the project.

The positioning options were based on an ideation of different ways to convey the value proposition through the user experience. They considered multiple aspects simultaneously, such as branding, interface, interaction and message, which represent different ways to present the service to the market. Different positioning options would have also meant slightly different user segments, by adapting the value proposition to their personal goals and preferences.

As examples, the lab team proposed self-conversation mechanisms in positioning options like a private version of a social media network or a playlist of thoughts.

Positioning

This tool should help you define and express the positioning of your brand and service so that the service is distinguished from competitors and conveys the correct values to the user. Use one set of positioning tools for each positioning option.

 

After you define the positioning, use this tool to delineate the unique way you aim to deliver your promise to your customers, specifying key features and the ways your users would interact with your product/service.

Download tool

04

Assessing the experiment strategy

This stage is designed to bring together all the research insights and assessments of the previous stage about the competitor orientation, the mechanisms of impact and the positioning of the concept. The goal is to define the overall value proposition of the concept's MVP, the technologies to be implemented and a strategy for the experiment, which was to be run with recruited users. As mentioned previously, the lab team defined the positioning options after the proof of concept, so the process has been reorganised in this stage to represent the preferred order.

Activities

Activity 01.
Synthesis of value proposition

During the synthesis of the value proposition, the lab team reviewed and agreed on the elements they were going to build for the first MVP.

In particular, they processed their research and decided on the mechanism of impact to test within the first MVP. They defined the theoretical framework for conversing with an 'other' to encourage self-reflection. They also chose to convey the conversational interface with two card types representing the two sides of a conversation, and they defined the minimum features to include in the web app, evaluating which ones were needed in order to understand their value proposition. After agreeing on the specifications for the first MVP, the lab team defined the plan for the trial of the web app. The idea was to recruit users to test the web app for a week-long period, recording their use and completing a short, voluntary survey every day.

Activity 02.
Trial set up

To test their user hypothesis, they decided to recruit three different groups of users:

  • ten people who belonged to the new user segment (less anxious / less reflective);
  • ten of the same segment but of an older age; and,
  • ten with similar profiles to the original target user.

These users were selected and recruited by an external recruitment agency through detailed screening interviews.

"Bear in mind that the data you collect should help you understand what's valuable. The more learnings you create, the more complex it becomes to decide what to take into consideration later, when it's time to decide." Mattia Gobbo, Service Designer at xploratory
05

Designing the MVP

The lab team designed the first MVP intending to test the new value proposition with recruited users during a week-long trial. The idea was to design the digital touchpoints and collect the feedback needed before investing more money into a second MVP to be launched in the market.
Given the limitations imposed by the technology and resources available, the lab team focused on simple features that encourage the practice of self-reflection through a conversational interface. The process at this stage was mainly a UX design process, which included activities such as wire-framing, creating blueprints, branding and interaction design. The objective was to create a fully designed prototype that could be easily implemented by an external team of developers.

Activities

Activity 01.
User journey

A user journey is a useful tool to represent all the phases of the experience from the point of view of the user. In order to create the user journey for the web app, the lab team first created use cases for each of the users they interviewed who were part of the user segment. A use case is a description of a particular user interacting with the service in order to fulfil their individual objectives, which also depicts how the experience of the service would fit into their daily life. Use cases usually describe the interaction with one or more features of a service that respond to the needs of the user.

The creation of use cases was an effective method to identify which features seemed to have more impact on the users, and which ones could be excluded from the MVP. After the use cases, the lab team designed the user journey, which is a tool to visualise the holistic experience of using the service, highlighting three phases in particular: before, during and after the interaction with the service. Unlike the use cases, the user journey describes a general experience, so instead of specific users it is based on the journey of the persona, and it is not limited to the user's objectives and interaction with the service, but includes all elements of their experience in the time frame.

The lab team divided the user journey into 5 phases, which they called attract, decide, use, retain and sustain. For each phase, they used storytelling techniques to describe the persona’s actions and emotions at that moment and the features in use. The user journey helped the lab team consider and validate the overall experience and interactions before using it as a framework to plan the service and experiment, and develop user interfaces.

User journey

This tool helps you identify how the users would experience your product.
Take into consideration key phases, including the moment they become aware of the service, the first and continuing use, and the moment they leave.
Start from a low-fidelity version; as you proceed, you will include more detailed touchpoints and actions.

Download tool

 

Activity 02.
UX Wireframe

With a defined user journey, the lab team designed the wireframes of the web app. A wireframe is a representation of the layout of a visual interface that, in this case, shows the main elements of the app, including things like the position of content and navigation. It visualises the basic structure, showing all the UX elements required for the app to work, before branding and content are added.

The wireframes are also useful for testing the main interactions at an early stage and getting stakeholder approval because, although unfinished, they show the full user experience in terms of functionality.

Activity 03.
Branding

Alongside the design of the wireframe, the lab team started to work on the branding for the web app. Using the research conducted on the market and the main competitors, they started analysing the most used colour schemes and visual languages to identify how to distinguish themselves and represent the positioning of the service.

Branding is an essential component of the web app: it influences users' first impressions and their expectations about what the service offers. It gives people a sense of what type of experience they will have by interacting with the app, and it also influences trust. To brand the service, the lab team conducted further research on trends in style and colour, and they set up mood boards with a variety of inspirational images and photos. In the end, they opted for a minimal visual language, to maintain their neutral positioning and bring attention to the content.

Experiment blueprint

This tool helps you delineate key actions and touchpoints of each stage in your experiment, so that you can foresee what infrastructure is required to deliver the experiment smoothly. Depending on the complexity of your experiment you may choose to develop a more nuanced version of this map detailing precise actions and technical integrations.

Download tool

Activity 04.
Developers Brief

The lab team hired an external agency for the development of the web app and created a detailed UX blueprint covering every component of the web app, including content, visual design specifications, interaction details and animations. A UX blueprint is a tool that captures all the elements of the app and that everyone involved in the project can use as a reference. It is a powerful and immediate way to communicate how the service works, and it ensures everyone is aligned and updated with the latest changes.

In the UX blueprint, the lab team detailed every screen and every interaction for each feature, from start to finish, and they provided information about the visual elements, the static content, the dynamic content and the type of data to collect at each stage. 

In addition to the UX blueprint, the lab team also made sure to provide all the necessary instructions for the development, the media and content. Moreover, they created videos to convey their expectations of animations and interactions, so that the developers would know exactly what to do.

06

Prototyping and testing the MVP

Once the design was complete, the lab team entered the prototyping phase, which, as with the propositions, followed the lean approach of build, measure, learn. The process consisted of building the MVP, followed by the trial with recruited users recording data, and then a synthesis of the insights and a discussion of learnings.
This stage was intended to test impact and engagement and collect feedback from users, which could inform the development of the second MVP to be launched in the market. It was also an opportunity to explore the key questions and assumptions of the value proposition over a more extended period of time.

Activities

Activity 01.
Building of the MVP

The MVP was built through an external agency in charge of the web app's programming. The mindset behind this development was flexibility: to test, learn and improve fast meant being able to amend and distribute quickly. The lab team adopted an agile approach with the developers, which included multiple revisions and feedback sessions. By using a Progressive Web App, they could push updates and efficiently collect data using an easily accessible analytics tool. The web app underwent a series of iterations, and within a timeframe of six weeks it was considered ready to be tested with the recruited users.

Activity 02.
Field research

The field research for the proof of concept consisted of a controlled trial with a group of 30 recruited users. As mentioned previously, the lab team recruited three different groups of users:

  • ten users with mild anxiety and no experience in self-reflection, aged 18-24 years old;
  • ten users with the same conditions but aged 30+; and,
  • ten users with more severe anxiety and previous experience in self-reflection.

These participants consented to the research process and completed the pre-trial survey. In this survey, they were asked about their current situation and experience with self-reflection, along with a series of questions from standardised questionnaires specifically created to diagnose anxiety and depression.

As explained previously, this acted as a safeguard to prevent anyone who might be experiencing severe levels of anxiety or depression from continuing the experiment in a potentially vulnerable position, where unintended consequences of the service could have exacerbated their situation.

The objective of the trial was to test engagement and impact. This is why the participants were explicitly given freedom to decide how frequently they wanted to use the app and when. Regardless of the use, each participant could complete a brief daily survey, logging their experience with the app.

Following the completion of the trial, each participant filled in a closing survey. In this way, the results could then be compared to the pre-trial survey to assess the impact the web app had on the mental state of the users, and whether it helped improve the way the users self-reflect.

All the answers from the surveys and the data stored by the web app were collected by the lab team, who analysed them to identify twelve participants of interest with particular patterns of use, demonstrating high and low frequency and positive and negative experiences. These twelve users were invited to participate in a follow-up phone interview to give more information about the contexts in which they used the app and how they felt about the features.
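As a rough illustration of this kind of selection (a sketch only: the column names, figures and thresholds below are invented, not the team's actual analysis), pre- and post-trial survey scores can be combined with logged usage to flag the extremes for follow-up interviews:

import pandas as pd

# Hypothetical trial data: survey totals before and after the trial, plus logged sessions.
trial = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03", "p04"],
    "pre_score":      [12, 15, 9, 18],
    "post_score":     [8, 16, 9, 11],
    "sessions":       [14, 2, 6, 21],
})

# Negative change = reported anxiety decreased over the week-long trial.
trial["score_change"] = trial["post_score"] - trial["pre_score"]

# Flag extremes of engagement and of reported change as candidates for phone interviews.
candidates = pd.concat([
    trial.nlargest(1, "sessions"),       # highest frequency of use
    trial.nsmallest(1, "sessions"),      # lowest frequency of use
    trial.nsmallest(1, "score_change"),  # largest reported improvement
    trial.nlargest(1, "score_change"),   # most negative experience
]).drop_duplicates("participant_id")

print(candidates[["participant_id", "sessions", "score_change"]])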

Experiment planning

This tool is designed to help you distinguish the key participants in your experiment and operationally plan its set-up, including any variation of process for each participant type.

Download tool

Activity 03.
Analysis and synthesis of insights

At the end of the trial, the lab team collected both quantitative and qualitative data to analyse and synthesise into insights, which could inform the next development phase.

Through the survey results, they got a better understanding of the users and their needs, and when and how they saw value using the web app. The lab team also used the data the users had input into the web app and extrapolated insights, such as the average length of the reflections, the most used features, and the issues encountered.

However, they received the most meaningful feedback through phone interviews. During the interviews, the lab team was able to dive into the behaviours and feelings of the participants. The interviewed users were asked about their experience with the web app, their concerns, and the value they saw for each feature. The lab team documented the profiles of these participants, combining the information received through the interview with the survey results. The trial helped the lab team distinguish how each user segment responded to the service, and what elements would provide long-term value for them.

Qual & Quant review

Qualitative review:

This tool is used to summarise the qualitative engagement of a single participant during the trial after your interviews.

Download tool

Quantitative experiment results:

This tool is used to give an overview of the quantitative results from the experiment.

Download tool
Activity 04.
Discussion of learnings

After the analysis of insights, the lab team discussed the learnings and identified the areas of improvement to understand what direction to take for the next MVP, which would be launched in the market. 

The process helped them narrow down the key features that bring the most value to the users. They also got useful suggestions from the users themselves during the phone interviews. In particular, they received feedback on the branding, which was considered too closely related to the world of mental health and made users uncomfortable. This prompted the lab team to explore new strategies and plan a complete redesign of the user interface. They also came to understand better the barriers users encountered while self-reflecting through the web app; for instance, the self-directed nature of the interface was sometimes quite intimidating. Finally, they identified the environments and contexts in which users were most comfortable using the app, such as in bed before going to sleep, which helped them understand how to position the service.

07

Assessing the results

After the trial of the web app, the learnings were used to assess impact and engagement, but also to answer critical questions relative to the new value proposition.
During this assessment, the lab team, together with the client, made critical decisions on how to structure the building of the second MVP and its launch in the market, with particular attention to which features to prioritise and the technologies to use. At this stage, it was necessary to align the MVP so it could complement the client's planned offering of products and services, using the same libraries and software in order to allow a straightforward handover of the final deliverable.

Activities

Activity 01.
Assessment of impact

The insights collected from the trial made it possible for the lab team to understand the actual impact of the web app on the users and what to do to improve it. In particular, they uncovered the need for building rituals to make self-reflection effective. They also evaluated the efficacy of the user experience in encouraging users to self-reflect. 

To make these assessments, the lab team analysed the frequency of use by the recruited users, who had freedom to decide if and when to use the app based on their personal needs. They reviewed any changes in anxiety before and after the trial. And finally, they used the answers given by the users during the phone interviews to understand to what extent the service helped them build emotional resilience and agency over their thoughts and to uncover the risks behind self-directed reflections.

Activity 02.
Prioritisation of features

Following the trial and the interviews with the participants, the lab team gained a good understanding of which features were best delivering the promise of the value proposition. They identified opportunities to improve some features, such as the reminders, and to add new ones that would support the implementation of the theoretical framework and offer a more effective and complete experience of self-reflection. The lab team also considered new strategies for the positioning in the communication strategy, and the look and feel of the app.

With more clarity on the features to prioritise, the lab team wanted to define a clear value statement that would drive the development of the second MVP. To create the statement, they went back to the trial participants’ feedback and analysed how the users described the service with their own words. This helped them understand how the users interpreted the app and the value it brought to them.

The lab team then synthesised everything into a short and clear value statement: “a personal space to talk, listen and learn about yourself — helping you take control of your life”.

Activity 03.
New metrics and KPIs

Before starting the design of the new MVP, it was essential to agree with the client on the Key Performance Indicators (KPIs) and their expectations. In particular, there were three things to assess with the second MVP: how the product fulfilled the value proposition, the value for the client, and the value for the users.

For the first, the lab team planned to assess it through phone interviews with app users, outlining a plan to use notifications to reach out to people and invite them to interviews.

To assess the value for the client, the lab team worked with them to define three KPIs: one for the product, one for the technology and one for the research. These were the three components of value for the client; therefore, they defined specific objectives to measure the MVP’s success.

For the product – the app – the client asked the lab team to measure the retention rate of users after one week. Retention rate is the percentage of users who come back to perform any action on the app at least once in the week after first use. They set the KPI at a 20% retention rate after a week for the product to be considered of value to the user segment.
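As an illustration of how such a figure can be computed from raw usage events (a minimal sketch; the field names and the exact window definition are assumptions, not the client's analytics set-up):

from datetime import timedelta

def week_one_retention(events):
    # events: iterable of (user_id, timestamp) pairs, one per action in the app.
    # Counts a user as retained if they perform any further action within the
    # seven days after their first use (one interpretation of the KPI above).
    first_seen = {}
    retained = set()
    for user, ts in sorted(events, key=lambda e: e[1]):
        if user not in first_seen:
            first_seen[user] = ts
        elif ts <= first_seen[user] + timedelta(days=7):
            retained.add(user)
    return len(retained) / len(first_seen) if first_seen else 0.0

# With a 20% target, 200 of every 1,000 new users would need to return within the week.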

For the technology, the client needed the MVP to include and test key technological components that were also in their project pipeline, such as voice capture and playback. The task was to test these components against a benchmark to assess their effectiveness.

Lastly, as the client had their own AI team developing algorithms, they hoped to design a service that could capture 1000 users producing 100 messages in order to get enough data to help train the algorithm.

In this phase, the lab team also defined other metrics to assess the value of the app for the users. Some of these metrics considered the number of tools used, the frequency of use, and the average time spent on the app.

Evaluation of KPIs

This tool will help you and your project partner agree on how to establish the success of your MVP by evaluating success metrics related to its engagement, the collection of data and the functionality of the technology employed.

Download tool

08

Designing the second MVP

For the design of the second MVP, the lab team went through multiple iterations, with quick testing sessions of the main elements of the app, before involving the developers. They wanted to get the design of the app interactions as strong as possible before it was implemented as a native app.
In particular, the lab team experimented with different branding options, conducted usability testing for the interface and carried out further research on the competitors to gather information on best practices for each feature.

Activities

Activity 01.
User Journey

Entering this new design stage, the lab team revised the user journey for the second MVP. The first trial had validated the user hypothesis, so they confirmed that the target audience should be people with mild anxiety who were generally not familiar with self-reflection. To define the new user journey for this user segment, the lab team first refined the theoretical framework for self-reflection.

They identified a sequence of activities that could encourage multiple layers of expression and thinking, from a simple conversation to a long term reflection of what has been said. This framework helped them define with clarity what the user journey in the app should be, identifying when each feature would play its role.

In order to quickly verify if the expected user experience would be appropriate for the different needs of users, the lab team created a series of user stories. They used what they learned about users during the first trial to create different personas and their use cases, to see if there was any incongruence in their hypothetical experience and if there was anything that could be improved to meet their needs.

User story

This tool will help you visualise the concept and identify possible faults and improvements in the user experience. Feel free to outline different users and develop related stories and the multiple paths through which they could interact with the service; this will help you uncover more divergent objectives and refine your service proposition to meet them.

Download tool

Activity 02.
UX Wireframe

After exploring the user journey, the lab team started working on the wireframe for the new MVP. After evaluating the performance and adjusting the objectives, they confirmed the need to switch to a native app environment, which allows more engaging interactions and complex functionalities, driven by a more robust UX and perfected UI. 

The challenge was that native apps require a longer time for development. For this reason, the lab team decided to speed up the process by using a design language system that can be easily implemented by the developers. The use of this design language influenced the design of the wireframe, which had to follow specific guidelines. Although limited in some cases, this approach helped the lab team create a more rigorous wireframe for the native app. 

Activity 03.
Content creation

In the first MVP, the content was minimal, as it was based mostly on self-directed reflection by the user. However, the participants in the trial found the lack of guidance intimidating. Therefore, for the native app, the lab team set up a content strategy. Working together with a psychologist, they defined the criteria for the language of the app. To give users more guidance, while still allowing them the freedom to express themselves as they preferred, they had to set the 'voice' of the app carefully, defining characteristics such as pragmatic, supportive, professional and courteous.

The psychologist also conducted extensive research into the psychological theories and methodologies related to self-reflection, to then apply them to the content strategy of the app. The content was then reviewed and tested by the lab team and refined to fit in the context of the app. 

Activity 04.
Prototyping and Usability tests

With the wireframe defined and some of the content ready, the lab team created clickable prototypes to be tested with recruited users. This activity was critical to test usability before the actual development of the app and led to multiple iterations of the user interfaces. 

Users were shown the principal elements and features of the app and were asked to interact with them and give feedback. These tests helped the lab team realise which functionalities needed more clarity and what elements could be improved for the user experience.

Activity 05.
Branding

The feedback from the trial of the first MVP made the lab team realise there was more work to be done on the branding. The name had to change, as users found it confusing, and the aesthetic was criticised for reminding them too much of a medical app, something the users did not feel comfortable with. The lab team conducted further research on the best practices adopted by the competitors in relation to the branding and UX/UI of each feature.

An external consultant helped the team define the positioning, creating a series of mood boards which could be applied to different styles and colour palettes in the wireframes. Then, they tested two variants, both with recruited users and through social media advertising. 

The objective of the testing was mainly to test the brand positioning and messaging of the app, with one option focusing on freedom of expression, and the other on connecting to agency and ownership of thoughts. They compared the results and defined which brand positioning resonated more with the audience.

Once the brand positioning was clear and defined, the lab team started to experiment with the design. They took some of the feedback from the trial to understand how to adapt the visuals to the way participants used the app. In particular, they noticed patterns of use during the night, especially in bed before going to sleep. Based on this insight, they decided to adopt a dark mode style for the app, to reduce digital eye strain. 

Activity 06.
Developers brief

The briefing to the developers for the native app required more detail compared to the web app. As mentioned earlier, the lab team used a design language system to design the interfaces, which follows a set of standard rules and guidelines and made it easier for the developers to understand the specific requirements. To represent the interactions in the app with as much detail as possible, the lab team created an extensive blueprint, which they updated at every iteration of the app. This blueprint became the main document of reference for the developers.

Another vital part of the development was creating a brief for the collection of data. The lab team used an external service to track analytics, so the set-up required specific instructions for the developers to add the appropriate code to the app. Using the KPIs and the prioritised metrics, the lab team defined the type of data they wanted to collect and specified a detailed implementation.
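A tracking brief of this kind can be expressed as an event dictionary mapping each event to the properties it carries and the metric it feeds. The sketch below is purely illustrative; the event names, properties and mappings are assumptions for the example, not the team's actual specification.

# Hypothetical event-tracking specification: each entry names an event the developers
# should fire, the properties to attach, and the KPI or metric it feeds.
TRACKING_SPEC = {
    "session_started": {
        "properties": ["user_id", "timestamp", "entry_point"],
        "feeds": ["retention rate", "frequency of use"],
    },
    "reflection_completed": {
        "properties": ["user_id", "timestamp", "duration_seconds", "tool_used"],
        "feeds": ["messages for AI training", "average time spent", "tools used"],
    },
    "voice_playback_used": {
        "properties": ["user_id", "timestamp", "clip_length_seconds"],
        "feeds": ["technology KPI (voice capture and playback)"],
    },
}

def validate_event(name: str, payload: dict) -> bool:
    # Check that an event about to be sent matches the agreed specification.
    spec = TRACKING_SPEC.get(name)
    return spec is not None and all(key in payload for key in spec["properties"])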

Activity 07.
Promotion strategy

In preparation for the launch of the app in the market, the lab team planned a strategy for the promotion of the app on the stores and social media. First, they redesigned the landing page website, and included more information about the service and the self-reflection practice, with a sign-up form for early access to the app. They then used the website to launch advertising campaigns on social media. 

Unlike the previous uses of this method, which focused on comparing concepts or brand variants, this time the focus was on optimising the cost per download by testing and iterating the ads.

The lab team launched a series of tests, varying images, formats and copy, as well as experimenting with different audience cohorts within the same user segment for more targeted advertising. These experiments helped the lab team cut down the cost of advertising before the actual launch and gain a large following.
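As a simple illustration of how such variants can be compared on cost per download (the variant names and figures are invented for the example):

# Hypothetical split-test results: spend and downloads attributed to each ad variant.
variants = {
    "image_A_copy_1": {"spend": 120.0, "downloads": 80},
    "image_A_copy_2": {"spend": 120.0, "downloads": 55},
    "image_B_copy_1": {"spend": 120.0, "downloads": 104},
}

def cost_per_download(v):
    return v["spend"] / v["downloads"] if v["downloads"] else float("inf")

# Rank variants from cheapest to most expensive; the winner informs the main campaign.
for name, data in sorted(variants.items(), key=lambda kv: cost_per_download(kv[1])):
    print(f"{name}: {cost_per_download(data):.2f} per download")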

Also during this phase, the lab team set up an email campaign to reach previous subscribers, and designed the content for the App Store and Play Store pages.

Online advert split testing

This tool helps you minimise the cost of your main advertising campaign when you launch the service by running preliminary advert testing. 

Design different adverts that test variations in your advertising strategy.

Create variables in things like the problem you present, the framing of the solution, the users you target, the image you display or the text and calls-to-action.

Download tool

09

Launching the MVP in the market

As with the first MVP, the process was not just about building, launching and assessing the service, which was now called Hold; the objective of the second MVP was to understand more about the public's response to the service and the patterns of behaviour the app had encouraged.
However, for this second MVP, the process was more complicated, as it involved the launch of a product in the market, with consequently less visibility of users' behaviour. To learn more about the users' actions in the app, the lab team used an external analytics service, which collected anonymised data for each user. Additionally, it enabled them to send notifications to specific cohorts of users and invite them to participate in phone interviews with the team.
This combination of quantitative and qualitative data helped the team assess the desirability, market fit, impact and engagement. It also made it possible for them to measure retention and define which KPIs the app had achieved.

Activities

Activity 01.
Building of the MVP

During the building of the second MVP – the native app – the agile approach became even more prominent than for the first MVP.

The way of working with the developers included frequent reviews and feedback sessions.

To improve the efficiency of communication with the development team, the lab team experimented with various tools to optimise and simplify the exchange of information and feedback, such as spreadsheets and kanban board apps. Since the lab team wanted to keep some flexibility to update the content and be able to make changes based on early results, the app was created with a CMS (content management system). The collection of data on the analytics service, by contrast, was much more rigid and difficult to manage, and the process of testing the data tracking required many iterations.

Activity 02.
Field research / Launch of the MVP

With the app finally available, the lab team started the promotion on social media and via email. As there were still some details to test, they promoted the app gradually. They started with a few hundred users downloading the app, while the lab team tracked usage data to check for any bugs or issues. After this testing phase, they released the final version of Hold and started the large-scale promotion.

The intention was to track and analyse everything that happened within the first month of release. The lab team used the analytics to observe the behaviour of users, even though, for privacy, they had no access to any personal data or inputs from the users.

The data uncovered unusual patterns of use, which were recorded by the team to inspire further developments. The lab team also used the data to test different notification strategies, to see which worked most effectively in prompting users to come back to the app.

They also used the analytics to identify particular cohorts of users who seemed to engage well with the app. They sent these cohorts notifications inviting them to participate in phone interviews to give feedback on the app. Through these interviews, the lab team collected invaluable information from real users about the experience of using the app. This activity helped them assess impact and engagement for users.

Activity 03.
Analysis and synthesis of insights

At the end of the fixed period for the experiment, the lab team collected the quantitative data from advertising and analytics and created an extensive report on the results. In addition to the data, they synthesised the qualitative feedback from the phone interviews into insights. In particular, they were interested in the effectiveness of the app in supporting and enhancing self-reflection, and the impact it had on users. 

By combining the quantitative and qualitative data, the lab team was able to paint a detailed picture of the app’s strengths and weaknesses. 

The lab team specifically analysed the drop-off rate of a user's first use. By creating a 'funnel', a trackable sequence of steps the user takes on the analytics platform, they were able to identify which elements of the user journey caused users to stop using the app and understand what changes had to be made to improve engagement.
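The sketch below shows the kind of calculation involved, using a hypothetical first-use funnel; the step names and counts are invented for the example:

# Hypothetical first-use funnel: ordered steps and the number of users who reached each one.
funnel = [
    ("app_opened", 1000),
    ("onboarding_completed", 720),
    ("first_reflection_started", 430),
    ("first_reflection_saved", 310),
]

# Drop-off between consecutive steps highlights where users abandon the journey.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    drop = 1 - n / prev_n
    print(f"{prev_step} -> {step}: {n}/{prev_n} continued ({drop:.0%} dropped off)")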

Retention monitoring and improvement

This tool is used to help you monitor and refine your attraction and retention efforts. As you try different promotion, onboarding and retention techniques in the early stages of launching the service, collect data about distinct periods where different approaches have been employed and use this tool to determine what was most successful.

Download tool

Activity 04.
Discussion of learnings

Using the insights collected, the lab team then proceeded with a discussion of the learnings from the experiment and the definition of potential next steps.

In particular, they considered how the results of the experiment responded to the three main questions of the research: if the product was of value to the client based on the desirability, engagement and retention; if the product was of value for the users and for which particular segment; and finally if the product fulfilled the value proposition.

10

Assessing the impact of the Project

During the final assessment stage, the lab team collected the work done and presented the results to the client, first to the core team and then to the extended organisation, with the intention of discussing the learnings that had emerged from the process and outputs.
The lab team highlighted how the qualitative and quantitative data collected showed an excellent opportunity for the service to fit into the client’s strategy and product pipeline. They also discussed if Hold met the KPIs and if the collection of data for the training of the algorithm proved to be effective and in line with the client’s needs.
The presentation to the client was a pivotal moment to assess potential business opportunities and the next iterations of the service. In addition to the presentation, the lab team planned the release of a series of deliverables to collect and summarise the project outcomes.
The deliverables included documentation of the process designed by xploratory and applied to the client’s brief; all the ideas created in the various phases, such as scenarios, explorations, propositions and live services; and the learnings gained through the experiments conducted by the lab.

Activities

Activity 01.
Presentation of the work to the client

After collecting the results from the experiment with the native app, the lab team prepared a presentation for the client to share the outcomes and learnings of the second MVP. The objective of the presentation was to share the progress made, and the results achieved, as well as to discuss opportunities for further development. 

The presentation included a recap of the new value proposition, Hold, and of the theoretical framework and science behind the app. It also highlighted how the final MVP responded to the initial brief and what value it would bring to the client, both as a product to be included in their product pipeline and as a research tool for the training of their AI algorithm.

The results collected from the MVP also uncovered some interesting insights from a psychological perspective, particularly around the significance of being able to facilitate self-reflection through an app. Another reflection shared with the client concerned the ethics behind the use of AI in the context of health and happiness.

The lab team shared the quantitative and qualitative results of their tests to give the client an overview of the desirability and impact of the MVP, and some estimates on the potential cost for the data collection for the AI training. They discussed with the client if the results met the KPIs and the potential opportunities for further development.

Activity 02.
Documentation of the project

At the end of the project, the lab team planned to produce a series of deliverables to organise and summarise all the work they had done. In particular, they focused on the three main outputs of the project: the process (methods) they followed, the ideas they created and the learnings they gained. The lab team designed these deliverables to be accessible in both digital and printed versions, complete with an overview of the project.

The process was reviewed and published as the Methods section, to which this writing belongs. The objective of the Methods section is to share the Design-driven Service Innovation framework created by xploratory, and how the lab team combined methodologies and tools from different practices to explore and uncover innovation opportunities for the client. 

The Ideas section consists of a collection of all the concepts and discussions produced by xploratory and the RCA Service Design students for the Koa Health project. 

Additional deliverables are currently in production concerning the learnings acquired by the lab team during the project. The current plan includes reports on the case studies from the lab team and the studio teams and research papers produced in collaboration with Imperial College London.
