Here’s the situation: you’re building a product, but it never gets done. Or it gets done, but users can’t figure out what to do with it. Maybe you overdesigned it and made it unappealing or overly complex. Or worse, you loaded it up with a nice rich feature set and are now wondering why there’s no clear pattern in user behaviour. It all seemed so simple on paper, right?
Coming from a product-building environment at Martian & Machine, we deal with situations like this early on. To avoid investing time into product features that make sense only in theory and would not pass an initial Proof of Concept checklist, make sure you focus on the USP and its feature set first. Read on to discover a few tips on how to avoid bloating your MVP.
It usually starts as a simple project. Let’s assume we want to build a mobile app to test out an idea. Since we’ve done some research, we think it clearly has a chance on the market (again, use our POC checklist), and to prove that before getting into a feature-rich product, we want to test it out with real data. And since we’re constrained by budget and timeframe, we want to ship the basics and figure out whether the concept stands a chance. Now we’re in MVP territory!
We start designing a simple UI and think of the core elements the app could not function without (keeping in mind that this UI needs to serve the sole function of the USP). Without wasting too much time on planning or thinking about how to scale it at some point, the concept is designed and developed. At this point the goal seems pretty clear and within reach: get user data in as soon as possible.
Here’s where the problems kick in. Since it’s just an MVP, it serves the sole purpose of testing. It may not impress you with a slick UI, frictionless transitions and many features. It’s just one simple yet important set of features that needs to be tested and converted into data. Nothing else.
Still, under the spell of ‘let’s get this tiny sub-feature in there’, we end up with a lot more than we originally planned, and often a lot later too. The truth is, it’s never just one additional feature, and it’s never finished.
Additional sub-features that aren’t really essential to our goals are, most of the time, distractions. To get them into the product, we need to start at the beginning of the process and go through every phase again: designers think about how to incorporate them, hand assets and flowcharts over to developers, and so on. Once developed, they need to be tested and polished up. It might sound like a small compromise, but wait until the list expands to a dozen tiny ‘just one more’ tasks.
To be honest, we have all made that mistake at some point, and I’m no exception. The thing is, with enough ideas and willpower, you will always end up in product improvement cycles instead of focusing on the data and planning improvements based on gathered insight.
First of all, let’s ask another question. Why are improvements needed? Is it because the user had problems using the main feature? Or do we think they will be so blown away that they’ll use it all day, so we need to give them a dozen more options to keep them entertained for a long, long time? Quite the opposite.
What’s sure is — if you didn’t test it, don’t improve it.
To improve something means that it did function, but not seamlessly. It means that we got insight on how to remove friction, reduce steps, or even completely change the process. Sometimes killing the feature and replacing it with another one is an improvement. It’s just a matter of perspective.
Chances are, you learned this the hard way and shipped a product that had very much to say, but no one listened to it. Here’s the good thing about MVPs: they are intended to be killed or pivoted. It’s an easy game. A minimal feature set, combined with easy-to-read metrics, leads to simple insight. The ultimate goal at the end of the day is to learn how users used your product and whether they liked it. On positive ground, you may rethink how to improve the product — but based on facts and user behaviour, not your personal opinion.
The simple trick is to build simple products. As much as we all love choices, having too many of them will not only ruin the experience, but also make your data harder to read.
Imagine the user needs to perform the simple task of turning a radio app on. There’s just an ON button, and that’s it: the music plays. Such data is easy to read. Either the user loved the experience and turned it on, or they didn’t. Either way, they got through the process as we intended and left some data points.
What we don’t want to do is make them think — for example, by putting choices in front of the main feature: tone balance, radio illumination, band selection and so on. Those choices will only make your data harder to read. Whether the user was confused by the options at some point, or simply didn’t want to listen to the radio, becomes a blurred line. Keeping it simple and reducing the number of events we measure helps us identify whether the product is used in the intended way.
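To make the point concrete, here is a minimal sketch of why fewer measured events means cleaner data. The event names and session log below are purely illustrative (not from any real analytics tool): with a single event tied to the core action, each session gives a binary, unambiguous answer.

```python
# Hypothetical event log from an MVP session tracker (names are illustrative).
# One measured event ("radio_on") means the question "did the user turn the
# radio on?" has a clear yes/no answer per session.
events = [
    {"session": "s1", "event": "radio_on"},
    {"session": "s2", "event": "radio_on"},
    {"session": "s3", "event": "app_open"},  # opened the app, never turned it on
]

sessions = {e["session"] for e in events}
converted = {e["session"] for e in events if e["event"] == "radio_on"}

conversion_rate = len(converted) / len(sessions)
print(f"{len(converted)}/{len(sessions)} sessions turned the radio on "
      f"({conversion_rate:.0%})")
```

The moment you add events for tone balance, illumination and band selection, every non-converting session has several competing explanations, and the single number above stops telling a clear story.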
Prepare yourself to hit some bumps along the way and remember that the beauty of the process is figuring out what needs to be improved and what needs to be removed. And even if the whole idea fails, at least you got away with just a scratch and are ready to try again.
The key is to keep up momentum and be ready to constantly assess, iterate and evolve, even if it means turning the project upside-down. Because, in the end, it’s really not about how many times you’ve redone it — it’s about getting it right.