Validating an Analytics Start-up with Analytics (Part II)

This is a series of posts by Measureful founder, John Koenig. Read Part 1, Launching an Analytics Start-up.

 

Over the last year of building and iterating Measureful, we’ve done a lot of things right and a lot of things wrong. Iterating a start-up idea isn’t about getting it right 100 percent of the time; it’s about developing a system of learning through failure.

Below, I will share some key assumptions we developed and tested, which ultimately steered us toward the solution we offer today for agencies, marketers, and freelancers: automating the client reporting process. Read on to see how we tested each assumption to shift focus and shape our product.

Developing and Testing Assumption #1 – Marketers want their disparate data in a unified report or dashboard.

Most of us can remember a time when marketing data came from a single place: your web analytics software. A single data export would provide a complete view of your online marketing efforts. That reality changed quickly, in just a couple of years. On average, our clients (Measureful was ideated within an agency) had data residing in eight different marketing platforms.

In a way, we had already validated this assumption. We were manually building reports from disparate data sources including Google Analytics, Facebook, email platforms, Salesforce, etc. Aggregate reports gave our clients a unified view of their marketing efforts, and we were already selling these reports to clients. Did we even need to validate this assumption? Maybe we could just automate this process with software and scale.

Our first MVP (minimum viable product) was a data aggregator.

We tried to keep it lean and focused on learning what was most important at the core. In retrospect, we could have tested this assumption more simply. I think we put too much effort into pieces we weren’t testing directly, such as the design, layout, and back-end.

Regardless, we were still quite lean. We manually uploaded parts of the data, used screenshots to represent the reports, and had faux buttons like “export” and “email.” We pulled in data from Google Analytics, Twitter, and Facebook.

Then we embarked on a plan for information gathering and testing as follows:

1. Find prospects and set up meetings

2. Demo product

3. Ask questions

4. Take copious notes and score results 

Finding prospects was easy because we already had clients on the agency side. We signed up 10 clients as early users and hooked up our own analytics to track product usage and engagement.

At first, the engagement was good. After a while, it waned. Ultimately, it died.

Ten prospects wasn’t a large enough sample to draw many conclusions from, but the feedback was all the same: “It’s great to have all my data in one place, but I need direction, not more data.” Or as one user put it: “I don’t need more tools. I need to make heads and tails of what I already have.”

When we asked prospects if they would pay for this service, the answer was, “no.”

Keep in mind that our clients were mid-sized brands and retailers, so we really were testing a specific market. From our agency, clients were getting filtered and curated reports. The aggregator product automated the data collection but that was about it. The value wasn’t in the data. The data was just a commodity.

There are many ways to add value to data and we had to find one.

Conclusion on Assumption #1 – Unified data was nice to have, but it was not worth paying for by itself. 

Even the start-ups that are in an arms race to aggregate multiple data sources still have to focus on making this data more accessible and presentable. There is a market here (not the one we were testing), but the barriers to entry are extremely high.

Developing and Testing Assumption #2 – Marketers want “insights” pulled out of their data. 

This is almost a given. I mean, we were an analytics agency, and clients paid us to extract and present valuable insights from their data. If we couldn’t do this, we wouldn’t be in business. This assumption was less a question of whether it should be done and more a question of whether it could be done.

Launching a startup inside an agency can be done, but the successful examples are few and far between and most angels and VCs we spoke with urged us to break apart the businesses. After founding and running SwellPath for four years, I left to pursue assumption two. We split apart from the agency as a separate business, and I launched Measureful. My goal was to codify the analytics process we used to extract insights from data. 

We had gone back to the drawing board armed with lots of helpful feedback and data. I ended up spending a lot of time with our existing reports, dissecting what an “insight” really meant. I met with more customers and developed a plan.

We took a similar approach as before: developed an MVP, identified prospects, and tested.

The solution we came up with used a series of algorithms to pull anomalies out of the data, such as values that deviated from the historical average by multiple standard deviations.


Trying to filter signal from noise is difficult. We started with changes. What’s different and why? This is where we focused to find insights. 

It took a lot of tweaking to get it right. Essentially, our solution functioned similarly to Google Analytics Intelligence Events, except that it didn’t have to be configured, didn’t look at a frequency that became noisy (daily updates), and presented the data in a consumable way.
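To make that concrete, here is a minimal sketch of the kind of change detection described above: flag a week when a metric deviates from its historical average by more than a couple of standard deviations. The function name, the threshold, and the data shape are illustrative assumptions, not Measureful’s actual implementation.

    # Illustrative sketch of change/anomaly detection on a weekly metric series.
    # Names, thresholds, and data shapes are hypothetical, not Measureful's code.
    from statistics import mean, stdev

    def find_anomalies(weekly_values, z_threshold=2.0):
        """Flag weeks whose value deviates from the historical average
        by more than z_threshold standard deviations."""
        anomalies = []
        for i, value in enumerate(weekly_values):
            history = weekly_values[:i]
            if len(history) < 4:          # need a baseline before judging
                continue
            mu, sigma = mean(history), stdev(history)
            if sigma == 0:
                continue
            z = (value - mu) / sigma
            if abs(z) >= z_threshold:
                anomalies.append({"week": i, "value": value, "z_score": round(z, 2)})
        return anomalies

    # Example: weekly organic sessions with one obvious spike
    sessions = [1200, 1150, 1230, 1180, 1210, 2400, 1190]
    print(find_anomalies(sessions))  # flags week 5 (the 2400 spike)

The real work, as noted above, was in the tweaking: choosing look-back windows and thresholds so the output surfaced genuine changes rather than daily noise.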

We again asked users, “Would you pay for this?” The answer this time was, “Yes.”

Boom, we’re in. We got one early customer, then another. We began to validate the MVP. Before we knew it, we had a handful of customers. Most were brands, some were retailers or freelancers.

Something happened, though, over the next couple of months. As we began to build out more structure around the application, product engagement with paying customers waned. This wasn’t supposed to happen. They were giving us their precious dollars, and we expected them to be diving in weekly. Closer inspection revealed a different behavior: customers were still “using” the product, but only by opening our weekly digest emails.

We assumed (erroneously) that the value they were paying for was in the insights. What they were actually paying for was the easy accessibility of the filtered data or the slick presentation. Occasionally, they were paying for an insight our algorithms surfaced.

In conversations, our customers consistently rated insights as the most critical feature and the most worthy of their budget. In response, we dug in and built out our algorithms to go deeper into deviations and changes. We could successfully identify a change in a particular channel and then dive a layer deeper to explain why that change occurred.
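As a rough illustration of that “one layer deeper” step, the sketch below compares a channel’s sub-dimensions (campaigns, landing pages, and so on) across two periods and surfaces the biggest mover as the likely explanation. Again, the names and data shapes here are assumptions for illustration, not the production algorithm.

    # Hypothetical sketch: explain a channel-level change by finding the
    # sub-dimension (campaign, landing page, etc.) that moved the most.

    def explain_change(previous, current):
        """previous and current map a sub-dimension (e.g. campaign name)
        to a metric total for each period. Returns the biggest mover."""
        keys = set(previous) | set(current)
        deltas = {k: current.get(k, 0) - previous.get(k, 0) for k in keys}
        top = max(deltas, key=lambda k: abs(deltas[k]))
        return top, deltas[top]

    prev = {"brand campaign": 800, "retargeting": 300, "newsletter": 150}
    curr = {"brand campaign": 820, "retargeting": 900, "newsletter": 140}
    print(explain_change(prev, curr))  # ('retargeting', 600) is the driver of the change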


Engagement still remained low and we had trouble signing up new customers. If it wasn’t the insights, then what was it? It was clear that we had to shift focus again.

Conclusion on Assumption #2 – Customers would pay for insights from their data, but standardizing insights isn’t scalable with software…at least yet ;)

That’s kind of obvious as I write this, but we were able to provide enough value that a few customers would pay for it. It was also clear it would be very difficult to scale without a significant amount of capital to invest in our algorithms. We pivoted to find traction.

Developing and Testing Assumption #3 – Agencies and marketers will pay for beautifully presented and automated reporting.

As we tested the insights, we learned that users loved our design and our approach to presenting findings. So that’s where we placed our bet. Building nice-looking analytics reports really sucks. Most are derived from some combination of Excel, Google Analytics screenshots, and PDFs. We took our automated approach to insights and applied it to automated reporting, with a focus on the presentation.

We came full circle. The whole genesis of Measureful grew out of the frustration of trying to build beautiful, unique client reports. We made the shift to focus on insights and how to present the data. We were still focusing on marketers, but now adding agencies to the list.

The product could now take a source, aggregate the data, determine what to report, and present it in a clean, cohesive format. Totally automated. No reporting platform was able to accomplish this.

We shifted our messaging and started to see agencies and freelancers signing up for a trial. We even signed up some paying customers to the platform, further validating this shift in focus. However, our overall conversion rate remained low and traction was tough.

One interesting thing started to happen around this time; user feedback started coming in daily. This doesn’t sound like much, but feedback at this level is gold. Many trial users reported that the product just wasn’t there yet. Feedback consistently showed a need for greater customization in the reports. Without getting into all the details of the types of customization, we had validation and a clear plan of attack.

Conclusion on Assumption #3 – No matter the level of automation or how beautiful a report is, customers need a level of customization in their reports.

We had a system that was smart enough to determine whether any of the top 25 organic keywords changed over the last month. If there were no changes, the system could decide not to include that keyword information in the report. We didn’t want to waste precious report real estate with flat data. Then we realized that if our users were offering SEO services to clients, they would want to show the top 25 keywords regardless of any changes or insights. We had to allow for customization.
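A simplified version of that inclusion logic might look like the sketch below. The change threshold and the force_include override for SEO-focused users are assumptions added for illustration, not Measureful’s actual code.

    # Simplified sketch of the "should this section go in the report?" decision.
    # The threshold and the force_include override are illustrative assumptions.

    def include_keyword_section(last_month_ranks, this_month_ranks,
                                min_changes=1, force_include=False):
        """Include the top-keywords section if any ranking changed, or if the
        user (e.g. an SEO agency) has asked to always show it."""
        if force_include:
            return True
        changes = sum(
            1 for kw in set(last_month_ranks) | set(this_month_ranks)
            if last_month_ranks.get(kw) != this_month_ranks.get(kw)
        )
        return changes >= min_changes

    last = {"analytics reporting": 3, "client dashboards": 7}
    this = {"analytics reporting": 3, "client dashboards": 7}
    print(include_keyword_section(last, this))                      # False: flat data, skip it
    print(include_keyword_section(last, this, force_include=True))  # True: SEO client wants it anyway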

Developing and Testing Assumption #4 – Agencies and marketers will pay for beautifully presented, automated, and custom reporting.

I don’t believe in silver bullets in this business. I do believe in a focused path of iterating and working closely with customers to develop a viable solution. It wasn’t one thing that customers wanted; it was a collection of needs to solve their problems. Yes, some customer needs are more important than others, and if you can prioritize accordingly, you can get there more quickly.


Twelve months ago I had an idea. That idea was challenged. It changed and evolved. Now it has come full circle and to fruition.

We’re building a platform that can fully automate custom reporting.

By simply connecting to a Google Analytics account, Measureful can determine what’s important to report and how best to display that information. We dubbed this iteration the “Flipboard for Analytics” because the report design changes based on the content (we call our report content “stories”).

We’ve built for automation but accommodated customization. Users can select specific metrics to display and even choose the story designs. You can white-label the reports and add your own analysis. Check it out for yourself by signing up for a free trial here.

In hindsight, I feel like an idiot. It was all right there in front of us, but so were many other opportunities. It was only through the process of verbalizing our assumptions and being objective about the results that we narrowed our focus and homed in on an opportunity.

Conclusion on Assumption #4 – Keep grinding.

 

 
