How Zocdoc Uses Data to Make Better Products for Patients

Zocdoc Engineering
9 min read · Aug 12, 2020

Note: Numbers have been modified throughout the document and are for illustrative purposes only

Like many technology companies, Zocdoc relies on data to make informed decisions about product changes. In fact, even the tiniest changes to our website require in-depth analysis before and after launch to understand their holistic impact on our users. Here’s a recent example of how Zocdoc used data-driven product analysis to incorporate a sticky Book Appointment button on provider profile pages and help more patients book an appointment for the care they sought.

Problem

Patients come to Zocdoc from many different sources. Many interact with our site through the provider profile page.

On provider profile pages, Zocdoc displays a wide variety of helpful information (such as the doctor’s location, availability, and rating) so patients can make informed decisions about their choice of care. Our user research showed patients appreciated the level of detail but sometimes missed that they could actually book an appointment with the provider.

Our product team had an intuition that by adding a sticky button with a Book Appointment prompt, we could drive more users to get the care they need from the profile page. A sticky button, also known as a fixed or frozen button, stays in place while users move around the page. It’s a common feature of many other websites.

Profile pages appear on Zocdoc across all platforms, but we decided to tackle the mobile web platform first. We hypothesized that because mobile devices have limited screen space, the proposed sticky button would have the strongest impact there.

Sizing

To evaluate whether to build any new feature, we pass the proposed experiment through a phase called sizing, which uses data to parameterize the problem space and estimate the feature’s impact. The sizing results inform the feature specifications (“specs”) that our product managers prepare before introducing new work to their engineering teams.

We began sizing for the sticky button in early November 2019 by first calculating what share of Zocdoc sessions take place on mobile web devices. At that time, our internal reporting showed mobile web accounted for about 50% of all user sessions.
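For reference, that share is a simple rollup of sessions by platform. Here’s a minimal sketch, assuming hypothetical session_id and platform columns on the page_visit table described in the next step:

    -- Share of sessions by platform (column names are illustrative)
    SELECT
        platform,
        COUNT(DISTINCT session_id) AS sessions,
        RATIO_TO_REPORT(COUNT(DISTINCT session_id)) OVER () AS share_of_sessions
    FROM page_visit
    GROUP BY platform
    ORDER BY sessions DESC;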

Next, we sized how many users hit an active Zocdoc provider profile in their session, where an active profile is one belonging to a bookable provider. This required a query to our analytics database in Amazon Redshift.

A sample query is below. The page_visit table has one record for each page visited in a session. We categorized sessions based on whether any page visit was on an active profile:
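Something along these lines would do the job. Treat it as a sketch: the page_type and provider_is_bookable columns are hypothetical stand-ins for our actual schema.

    -- Categorize mobile web sessions by whether any page visit in the
    -- session hit an active (bookable) provider profile.
    WITH session_flags AS (
        SELECT
            session_id,
            MAX(CASE WHEN page_type = 'provider_profile'
                      AND provider_is_bookable
                     THEN 1 ELSE 0 END) AS hit_active_profile
        FROM page_visit
        WHERE platform = 'mobile_web'
        GROUP BY session_id
    )
    SELECT
        AVG(hit_active_profile::float) AS pct_sessions_hitting_active_profile
    FROM session_flags;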

The results showed that about 45% of Zocdoc’s mobile web users hit an active profile at some point in their session.

Next, we wanted to understand how many users scroll so far down the profile page that the yellow timeslots to book an appointment leave the viewport. This is the point in the proposed feature where the sticky button would appear.

We can’t know this number with absolute certainty, but we do know when a user scrolls down to the first review on the profile page. Because the reviews section sits below the timeslots, reaching the first review gives us an approximate floor for how many users would see the button.

The query below introduces a new table, action, which uses a Twitter-style schema to describe user events:
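Here’s a sketch of what that might look like; the object/verb column pair and the event names are hypothetical, chosen to illustrate the event-style schema:

    -- Of mobile web sessions that reached a provider profile, what share
    -- scrolled far enough to trigger the first-review event?
    WITH profile_sessions AS (
        SELECT DISTINCT session_id
        FROM page_visit
        WHERE platform = 'mobile_web'
          AND page_type = 'provider_profile'
    ),
    review_scrollers AS (
        SELECT DISTINCT session_id
        FROM action
        WHERE object = 'profile_first_review'   -- hypothetical event name
          AND verb = 'impression'
    )
    SELECT
        COUNT(r.session_id)::float / COUNT(p.session_id) AS pct_scrolled_to_reviews
    FROM profile_sessions p
    LEFT JOIN review_scrollers r
           ON r.session_id = p.session_id;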

This showed that about 20% of sessions make it to the reviews section of the page, where the sticky button would start to show.

By chaining together the funnel from platform to profile to scroll (50% × 45% × 20% ≈ 4.5%), we calculated that about 5% of sessions would be exposed to the new feature. That number may sound small, but over time this change would affect a substantial number of patients.

To complete the sizing, we had to make an informed guess about the expected conversion lift we’d see from implementing the change.

For context, when a user clicks a yellow timeslot on the profile page, they enter the booking flow (Zocdoc’s version of the checkout process). If the user is not already signed in, they will be prompted to sign in or create an account. Afterwards, the user will be taken to the Review & Book page to confirm their appointment details. If everything checks out, the patient then clicks Book appointment, and the appointment will be officially in the books.

This is what the flow looks like:

For the conversion lift sizing, we calculated what share of users made it from the profile page to the Review & Book page and came up with about 15%.

Next, it’s important to estimate the improvement versus baseline we might expect from the test. Not every aspect of the sizing process can be entirely driven by data: we relied on our Product Manager’s knowledge of the page and intuition to estimate a conversion lift from 15% to 15.2% (+1.3% relative change). Our Product Manager also estimated a 0.3% conversion rate loss in the funnel from Review & Book to booking by assuming that the sticky button would push some low-intent users into the Review & Book page, where they would bounce. The 1.3% gain in conversion minus the 0.3% loss on the Review & Book page nets to a +1.0% estimated lift.

+1.0% is a relatively small improvement, but that number must be contextualized in a few ways. First, the change is being applied to a page that’s more than 10 years old; most conversion improvements on polished pages are measured in low single digits. Second, this change was one of dozens of experiments that ran on the page during the quarter. Third, a one-percent relative change on a large denominator of visits translates to a material impact for our patients, providers, and business.

Building the experiment

After we established this was a worthwhile opportunity to pursue, our design team mocked up the new button in a wireframe and handed it off to product engineering to implement code changes and put the button behind a feature flag in our experiment framework. To aid analysis, engineers added an event for when the sticky button div appears.

Here’s what the profile page looked like before:

And here is the designer’s mockup for the sticky button:

Nearly all Zocdoc product changes go through an A/B experiment framework where users are randomly assigned to receive a control or a test variant. For this experiment, the control is the existing profile page without a sticky button, and the test is the profile page with the sticky Book Appointment button.

Before starting the experiment, Zocdoc’s product and data science teams worked together to formalize a test hypothesis and establish the experiment duration. The duration calculation requires the baseline conversion rate, the treatment effect to be detected, the significance level of the test, and the weekly test observations (test traffic split × baseline traffic). Based on the results, we established a test duration of about 30 days.
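The arithmetic behind that calculation is the standard two-sample proportion power formula. Here’s a rough sketch; every input is an illustrative assumption (z-values hardcoded for a two-sided 5% significance level and 80% power, plus a made-up weekly traffic figure):

    -- Back-of-the-envelope test duration (illustrative numbers only)
    WITH params AS (
        SELECT
            0.15::float   AS p,               -- baseline conversion rate
            0.0015::float AS delta,           -- absolute lift to detect (~1% relative)
            1.96::float   AS z_alpha,         -- two-sided alpha = 0.05
            0.84::float   AS z_beta,          -- power = 0.80
            200000::float AS weekly_sessions  -- assumed per-variant traffic
    )
    SELECT
        CEILING(2 * POWER(z_alpha + z_beta, 2) * p * (1 - p)
                / POWER(delta, 2))             AS sessions_per_variant,
        sessions_per_variant / weekly_sessions AS weeks_needed  -- ~4.4 weeks
    FROM params;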

In November 2019, product engineering toggled on the sticky button feature flag for mobile web profile pages. The test was initially rolled out to 10% of traffic on a 50/50 split to guard against code or data issues. After an uneventful day at 10%, we ramped the feature up to 100% traffic on a 50/50 split.

The experiment ran through mid-December. During the experiment run, product managers could check on results to make sure nothing broke but not to draw conclusions about performance. (No peeking!)

Results

Below are the top-line results from the experiment:

The sample query below may be used to produce the results above. Note how Redshift lets an alias be defined and referenced later in the same projection.
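The query itself might look something like this; the experiment_assignment and booking tables are hypothetical names:

    -- Top-line conversion by variant. The conversions and sessions aliases
    -- are defined and then reused in the same SELECT list, which Redshift
    -- supports via lateral column references.
    SELECT
        a.variant,
        COUNT(DISTINCT a.session_id) AS sessions,
        COUNT(DISTINCT b.session_id) AS conversions,
        conversions::float / sessions AS conversion_rate
    FROM experiment_assignment a
    LEFT JOIN booking b
           ON b.session_id = a.session_id   -- same-session conversions only
    GROUP BY a.variant;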

The output shows a 2.50% relative increase in the conversion rate, which is statistically significant at the 1% level. This is a positive result, but we can’t celebrate yet. We must answer secondary questions like the following:

  1. Did we push unqualified users into checkout?
  2. Did we drive bookings with negative outcomes (e.g. cancellations, patient no-shows)?
  3. How did the experiment effect differ by user segment?
  4. Are there any other consequences not captured in the top-line results?

Did we push unqualified users into checkout?

When sizing, we predicted some degradation in the funnel step from Review & Book to booking, since the button would push more low-intent users into Review & Book. In reality, that didn’t happen:

The data shows no change in the conversion rate on Review & Book (+0.11%, not statistically significant).

Did we drive bookings with negative outcomes?

Some product changes may provide conversion gains by driving low-quality relationships between patient and provider. To assess whether this happened, we’ll compare the share of no-show bookings in the control versus test variants:
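A sketch of that comparison, reusing the hypothetical tables from above plus an assumed no_show flag on booking:

    -- Share of bookings that ended in a no-show, by variant
    SELECT
        a.variant,
        COUNT(b.booking_id) AS bookings,
        SUM(CASE WHEN b.no_show THEN 1 ELSE 0 END) AS no_shows,
        no_shows::float / bookings AS no_show_rate
    FROM experiment_assignment a
    JOIN booking b
      ON b.session_id = a.session_id
    GROUP BY a.variant;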

The data shows no-show appointments were down 6% in the test, but the result is not statistically significant. This outcome gives us higher confidence that the experiment did not create low-quality relationships between patients and providers.

How did the experiment effect differ by user segment?

As described above, the provider profile page is a common thread in most Zocdoc user experiences and is used by new and repeat visitors alike. Zocdoc visitors come from many sources: direct, from provider websites, from search engines, and so on. Each source has a different user profile. For example, traffic that comes directly to Zocdoc skews towards repeat users.

Below is a breakout of the experiment’s conversion impact by source. “Brand” means users who came to Zocdoc directly. “SEO” represents users who found Zocdoc through a search engine. “Doctor” encompasses doctor-sourced channels such as widgets on provider websites. “Marketing” users came from paid marketing campaigns.
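A breakout like this comes from joining experiment assignments to channel attribution; the session_channel table below is a hypothetical stand-in for that mapping:

    -- Conversion by acquisition channel and variant (illustrative schema)
    SELECT
        c.channel,   -- 'Brand', 'SEO', 'Doctor', or 'Marketing'
        a.variant,
        COUNT(DISTINCT a.session_id) AS sessions,
        COUNT(DISTINCT b.session_id) AS conversions,
        conversions::float / sessions AS conversion_rate
    FROM experiment_assignment a
    JOIN session_channel c
      ON c.session_id = a.session_id
    LEFT JOIN booking b
           ON b.session_id = a.session_id
    GROUP BY c.channel, a.variant
    ORDER BY c.channel, a.variant;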

Impact was strongest in the SEO and marketing channels. Across most web properties, users from these channels tend to be newer and less familiar with the site. It’s easy to imagine that a clearer, persistent call-to-action (CTA) button on the profile page would have more incremental impact on these users than on seasoned brand users who breeze through checkout.

Are there other consequences to the button?

Maybe a button is just a button, but it’s hard to measure every second-order consequence that even a small change might trigger. As they stand, the results above measure activity in a user’s initial experiment assignment session only.

To understand the feature’s impact on user retention, we may want to extend the conversion window to a few days. But this brings other concerns: how long should the window be, and why? Can we afford to wait longer for results in order to give users that window? Are we sure we’re not hacking the attribution window to show better results?

Our product team typically uses a 36-hour conversion window for similar experiments, but certain features may have compelling reasons to use a different window.

Here are the results with a 36-hour conversion window. To be precise, that means we’ll count any booking conversions that occurred within 36 hours of the user’s initial experiment assignment.
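In query terms, the change is to join on a time window rather than on the session. A sketch, assuming assigned_at and booked_at timestamps on the hypothetical tables used earlier:

    -- Conversion within 36 hours of experiment assignment (illustrative)
    SELECT
        a.variant,
        COUNT(DISTINCT a.user_id) AS assigned_users,
        COUNT(DISTINCT b.user_id) AS converted_users,
        converted_users::float / assigned_users AS conversion_rate
    FROM experiment_assignment a
    LEFT JOIN booking b
           ON b.user_id = a.user_id
          AND b.booked_at >= a.assigned_at
          AND b.booked_at <  a.assigned_at + INTERVAL '36 hours'
    GROUP BY a.variant;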

Nothing concerning emerges from the results; in fact, the effect seems to be stronger with a longer window. For the sake of this educational post, we added a conversion window ex post, but as a best practice the experiment owner should always define the methodology before viewing results.

From Experiment to Product

Once the experiment is complete and all stakeholders are satisfied with the product analysis, the product manager decides whether to keep the feature in place. Based on the results described here, the sticky button went live to all mobile visitors in late December 2019 as a permanent product feature, and its success spawned subsequent sticky button experiments on desktop and the Zocdoc app.

Don’t believe me? Go to a Zocdoc profile on your phone and scroll down the page.

About the author

Sam is a lead on the Business Intelligence team at Zocdoc. Previously he worked in various data roles across the real estate and financial industries.
