This is the second article in a series on engagement. Read the first article here.
Engagement is arguably the single most important challenge in digital health today. Yet, while we recognize that engagement is important, there is a distinct lack of clarity regarding what exactly engagement is, and even more importantly, how to measure it.
Increasingly, digital health technologists and researchers agree that we need to find measures of meaningful engagement. That is, we must avoid the temptation to immediately use generic measures like number of sessions, weekly active usage, or program completion, and instead follow a data-driven approach to find the specific engagement metrics that uniquely predict long-term value in each intervention.
The quest for meaningful engagement metrics is not unique to digital health. In fact, many of the most successful technology companies have sought out such a metric, which, when found, is closely linked to their ‘aha moment’. For example, Facebook identified ‘number of friends’ as its key engagement metric, and found that long-term retention was greater when users added 7 friends in 10 days. Slack, Twitter, Zynga, and LinkedIn have all shared similar aha moments. A key principle is that meaningful engagement metrics are usually specific to the product. While it’s certainly possible that a generic measure of engagement could also be meaningful, it is rarely the case.
Finding ‘aha moments’ can be decomposed into two parts. First, you must find meaningful measures of engagement (e.g. friends), and second, you must identify critical levels of those engagement metrics that, once achieved, predict long-term user value (e.g. 7 friends in 10 days). In this article, I’ll cover the first step, finding meaningful measures of engagement. I hope to cover the second step in a future article.
Drawing from work in the consumer and SaaS industries, I’ve identified a three-step process for finding meaningful engagement metrics in digital health interventions. The first step is to identify your indicator of value.
Indicators of value are one area in which there is a fundamental difference between digital health interventions and most consumer or SaaS products. For example, in a consumer product like Facebook, users are looking to be entertained, and therefore retention (consistently coming back to the app) is an excellent proxy for that value. Similarly, in a SaaS product like Slack, users want to achieve a specific objective, such as communicating with their colleagues. Once again, retention (consistently coming back to communicate with colleagues) is an excellent proxy for that value.
However, for most digital health interventions, the primary indicator of value is clinical outcomes.¹ Users have health needs and utilize digital health interventions to address those needs. If users come back to the app every day for months (strong retention), but their clinical symptoms do not improve, then they haven’t received the intended value from the product.
Moreover, in most digital health products there are not one, but three key stakeholders — patients, providers, and payors — all of whom value improved clinical outcomes. In this regard, clinical outcomes are a unified indicator of value for all major stakeholders.
Let’s take the example of a digital health intervention for depression. Patients, providers, and payors all care about decreasing depressive symptoms. We’ll continue with this example below.
Now that you’ve identified a measure of value, it’s time to find leading indicators of it. Once found, these will become your metrics of ‘meaningful engagement’.
At this stage, it’s important to resist the temptation to immediately use a generic engagement metric. As mentioned earlier, it’s certainly possible that a generic measure of engagement will be a leading indicator of value, but by and large, meaningful engagement metrics are unique to your product.
To find and validate leading indicators of value, I recommend following the three-step subprocess outlined below.
If you’re building a new product from scratch, then you’ll need to rely on theory or external data to generate your initial hypotheses. Many digital health interventions are digitized versions of face-to-face interventions, and you can look at the predictors of clinical outcomes in those face-to-face interventions to generate hypotheses.²
Let’s turn back to our example. Let’s say your digital intervention for depression is based on Cognitive Behavioral Therapy (CBT). You investigate what predicts positive clinical outcomes in face-to-face CBT and identify multiple predictors, one of the best established being completion of homework between sessions.
Continuing with our example, your next step is to explore digital analogs for each of the traditional predictors. The natural digital analog of homework completion, for instance, is completing exercises within the app.
Once you’ve developed your hypotheses, it’s time to look at the correlation between the hypothesized leading indicators and clinical outcomes. Let’s say you find that completing in-app exercises shows the strongest correlation with improvement in depressive symptoms. Based on that finding, you choose completing in-app exercises as the most promising engagement metric and move on to step 3.
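To make this correlation step concrete, here’s a minimal sketch of what the analysis might look like in Python, assuming you can export one row per user containing candidate engagement metrics alongside each user’s change in PHQ-9 score over the program. The file name and column names are hypothetical.

```python
# Minimal sketch of the correlation screen (step 2). File and column
# names are hypothetical; adapt to your own analytics export.
import pandas as pd
from scipy.stats import spearmanr

# One row per user: candidate engagement metrics plus the change in
# PHQ-9 from baseline to end of program (negative = improvement).
df = pd.read_csv("user_engagement_outcomes.csv")

candidates = ["exercises_completed", "sessions_attended", "messages_sent"]

for metric in candidates:
    # Spearman is a reasonable default here: engagement counts are
    # rarely normally distributed, and we care about monotonic
    # association rather than a strictly linear one.
    rho, p = spearmanr(df[metric], df["phq9_change"])
    print(f"{metric}: rho = {rho:.2f}, p = {p:.3f}")
```

Since a more negative phq9_change means greater symptom reduction, promising metrics in this setup would show a negative correlation.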
At this point, you’re probably thinking “But wait, correlation doesn’t equal causation!”, and you’re absolutely right. That’s why in this next step you’ll determine whether the hypothesized leading indicator (e.g. completing in-app exercises) causes the desired outcome (improved depressive symptoms), or whether there is actually some third factor (e.g. users’ pre-existing motivation to change) that leads users to both complete the engagement metric and improve their clinical outcomes.
You can test for causality by shipping new product features that increase your target engagement metric (e.g. exercise completion rate), and then seeing whether that leads to a corresponding improvement in your clinical outcome metric (e.g. PHQ-9 score). For example, you might implement additional reminder notifications for exercise completion, or design a reward system that incentivizes completing exercises.
Once you ship the feature, you’ll first confirm that the engagement metric does in fact go up as a result of the feature. If the target engagement metric goes up, you will then check whether the correlation with clinical outcomes is maintained. If the correlation holds, then that’s great support for your choice of engagement metric. Congratulations, you’ve likely found a measure of meaningful engagement!³
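As a rough illustration of this causality check, here’s a hedged sketch that assumes the reminder feature was rolled out as a randomized experiment, with each user’s arm, exercise completions, and PHQ-9 change logged. All names and the file format are assumptions for the example.

```python
# Illustrative sketch of the causality check (step 3), assuming the
# reminder feature was shipped as a randomized experiment. All names
# are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("reminder_experiment.csv")
treat = df[df["arm"] == "reminders"]
control = df[df["arm"] == "control"]

# 1. Confirm the feature actually moved the engagement metric.
_, p_eng = mannwhitneyu(
    treat["exercises_completed"], control["exercises_completed"],
    alternative="greater",
)
print(f"Exercise completion lift: p = {p_eng:.3f}")

# 2. Check whether clinical outcomes moved with it. A more negative
# phq9_change means a larger symptom reduction, hence "less".
_, p_out = mannwhitneyu(
    treat["phq9_change"], control["phq9_change"],
    alternative="less",
)
print(f"PHQ-9 improvement: p = {p_out:.3f}")
```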
Finding meaningful measures of engagement is just the beginning. There are several important next steps that I will touch on briefly.
There’s rarely only one metric of meaningful engagement in an intervention. Instead, there are probably several, in which case you’ll be better off combining them into a hybrid measure of engagement. For example, you might categorize someone as meaningfully engaged if they complete any 2 out of a list of 5 leading indicators within the program in a given week. Or you might create a hybrid metric that blends various factors into a single weighted-average engagement value.
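As a simple illustration of the ‘2 out of 5’ idea, here’s a sketch assuming weekly per-user engagement data; the indicator names and thresholds are purely illustrative.

```python
# Illustrative sketch of a hybrid engagement flag: a user counts as
# meaningfully engaged in a given week if they hit any 2 of 5 leading
# indicators. Names and thresholds are hypothetical.
import pandas as pd

weekly = pd.read_csv("weekly_engagement.csv")  # one row per user-week

indicator_thresholds = {
    "exercises_completed": 3,
    "mood_logs": 4,
    "lessons_finished": 1,
    "messages_to_coach": 1,
    "minutes_in_app": 20,
}

# Count how many indicators each user-week clears, then apply the
# 2-of-5 rule.
hits = sum(
    (weekly[col] >= threshold).astype(int)
    for col, threshold in indicator_thresholds.items()
)
weekly["meaningfully_engaged"] = hits >= 2
```

A weighted-average variant would simply replace the 2-of-5 rule with a weighted sum of per-indicator scores.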
Measures of meaningful engagement are also likely to differ across users. E.g., users with more severe symptoms might benefit from a different style of engagement than users with mild-to-moderate symptoms. Optimal engagement style is also likely to depend on users’ pre-existing motivation levels, and may even change for the same user as they progress through the intervention. SilverCloud and Microsoft recently published an article that outlined their use of machine learning to identify different engagement styles. One of the major advantages of digital health interventions over traditional therapies is the ability to collect large amounts of data, which in turn, enable the personalization of interventions to match individual engagement characteristics.
As alluded to at the start of this article, finding meaningful engagement metrics is only half of the equation. You must also determine the minimum effective dose of the engagement metric that predicts long-term user value. E.g., if you determine that completion of in-app exercises leads to improved clinical outcomes, you’ll next want to determine how many exercises should be completed over what time period in order to produce the greatest likelihood of clinical improvement.
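Although minimum effective dose is a topic for a future article, a first pass might look like the sketch below: scan candidate weekly exercise thresholds and compare improvement rates above and below each one. The 5-point PHQ-9 drop used as the improvement cutoff, like all names here, is an illustrative assumption.

```python
# Illustrative sketch of a minimum-effective-dose scan. The 5-point
# PHQ-9 drop and all column names are assumptions for the example.
import pandas as pd

df = pd.read_csv("user_engagement_outcomes.csv")
df["improved"] = df["phq9_change"] <= -5  # clinically meaningful drop

for dose in range(1, 8):  # candidate thresholds: 1-7 exercises/week
    reached = df["exercises_per_week"] >= dose
    rate_hi = df.loc[reached, "improved"].mean()
    rate_lo = df.loc[~reached, "improved"].mean()
    print(
        f">= {dose}/week: {rate_hi:.0%} improved (n={reached.sum()}) "
        f"vs {rate_lo:.0%} below threshold (n={(~reached).sum()})"
    )
```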
Finally, once you’ve determined your meaningful engagement metrics and minimum effective dose, you should leverage your engagement toolkit to drive the specific engagement behaviors that lead to clinical outcomes.
If you’ve made it this far in the article, then there’s a good chance you’re as passionate about driving engagement in digital health as I am! I genuinely believe that it’s one of the most critical challenges facing our industry today (apart from perhaps commercialization). While there is no single, ‘magic’ engagement metric, the process outlined in this article will position you well to identify meaningful engagement metrics, an important milestone on the journey to building truly impactful interventions.
Thanks to: Product Manager Mel Goetz, Head of Content Elise Vierra, and Director of Science & Innovation Jessica Lake for their contributions to this article.
[1] One notable caveat is for wellness products, like meditation apps, for which retention is arguably the best indicator of value. For example, if a user downloads a meditation app and comes back consistently over time, then the product is probably giving them their desired value. Also, since wellness apps generally have direct-to-consumer business models, you don’t need to be as concerned about payors’ and providers’ emphases on health outcomes. In such cases, you may be better off using retention rather than clinical outcomes as your primary indicator of value.
[2] Leading indicators of clinical improvement are also closely related to the concept of active ingredients. Just as traditional pharmaceuticals have active ingredients surrounded by the remainder of the pill, which serves as a delivery mechanism, so too digital interventions have elements that are critical for clinical improvement, surrounded by the remainder of the software. In fact, there is a whole field of research into mechanisms of change within psychotherapies that seeks to identify active ingredients, and such research can provide a great starting point for identifying the likely leading indicators of clinical improvement within your digital intervention.
[3] An important caveat here is that many digital health interventions do not have sufficient data to test for causality. For example, if your product is a prescription digital therapeutic going through initial clinical trials, then you’re unlikely to have a large enough sample size to draw meaningful conclusions about the relationship between individual engagement metrics and clinical outcomes. If that’s the case, then you’ll have to stop at step 2. However, if you’ve identified a moderate-to-strong correlation that is supported by a robust theoretical rationale for causality, then you can have reasonable confidence that you’ve found an important relationship.