Launching digital products. Making the most of your first 90 days.

From ascent to starting line.

You made it.

After months of planning, designing, and development (not to mention QA), you’re finally ready to release your new mobile or web application to the world. You’ve revisited your definition of “Minimum Viable” over and over, navigated tricky technical integrations, and refined your UI to death. You rallied the entire project team for a compressed QA/UAT effort in a fraction of the original timeline. (Speaking of that original timeline, you might want to dust it off and re-read it if you need a laugh.) Building software sometimes feels like climbing a mountain.

In the past few years, I’ve been fortunate enough to work closely with a number of great brands bringing new digital products to market. I’ve seen first-hand what’s in store for project teams when they reach the summit of that “product launch mountain.” The tricky thing about climbing a mountain is that you can’t see beyond the summit. I’m writing this to help you understand what you’ll find on that mountaintop.

Spoiler Alert: It’s another starting line.

If building a new digital product is like climbing a mountain, then the process that comes after is more like running a marathon. Once you get over your initial shock (you didn’t think you were done, right?), you’ll need to re-think a number of your core assumptions about how you and your team work. You’ll likely need to change your team’s skills, equipment, and mindset as you shift from mountain climbing to marathon running. You don’t run 26.2 in a climbing harness.

Here are the three areas to focus on when making the transition from mountain climbing to marathon running.

One of the first things you’ll need to have in place is a simple-to-understand, continually updated scorecard of how well you’re doing. What are the three(ish) key metrics you’ll look at on a weekly (daily?) basis to gauge how well your product is performing?

For many, this comes down to posing and answering a few simple questions. Suppose you’ve just launched a new mobile app that includes both a free and paid subscription tier. I might construct a scorecard that answers the three questions below:

How many new users are signing up each week?
How many of those users are coming back and actively using the app?
How many are converting from the free tier to the paid tier?

Defining your quantitative answers to each of these three questions, and then tracking their trends over time, will provide a critical benchmark for your team’s performance in the coming months. You’ll also want to correlate key marketing and product tactics to these metrics to understand how well each is paying off.
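To make that concrete, here’s a minimal sketch of what the weekly roll-up might look like, assuming a hypothetical event stream with signup, session_start, and upgrade_to_paid events; the record shape and event names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical event record; field and event names are illustrative.
@dataclass
class Event:
    user_id: str
    name: str          # "signup", "session_start", or "upgrade_to_paid"
    occurred_on: date

def weekly_scorecard(events: list[Event], week_start: date) -> dict:
    """Roll one week of raw events up into the three scorecard numbers."""
    week_end = week_start + timedelta(days=7)
    in_week = [e for e in events if week_start <= e.occurred_on < week_end]

    new_users = {e.user_id for e in in_week if e.name == "signup"}
    active_users = {e.user_id for e in in_week if e.name == "session_start"}
    upgrades = {e.user_id for e in in_week if e.name == "upgrade_to_paid"}

    return {
        "week_of": week_start.isoformat(),
        "new_users": len(new_users),
        "active_users": len(active_users),
        "free_to_paid_conversions": len(upgrades),
    }
```

Run against each week’s raw analytics export, every result becomes one point on the trend lines your team reviews.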

You’ll need to identify, and then programmatically define, your most important user types (or segments) and ensure that you can slice and dice your scorecard along these lines. One of my recent clients has three major user types, and they expect each of them to interact with the product very differently. By identifying and breaking out these audiences, they have a much better understanding of how well their product is serving each distinct set of needs.
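Building on the sketch above, slicing by segment can be as simple as mapping each user to a label before the roll-up; the segment labels here are placeholders.

```python
def scorecard_by_segment(
    events: list[Event],
    week_start: date,
    segment_of: dict[str, str],  # user_id -> segment label, e.g. "consumer" or "business"
) -> dict[str, dict]:
    """Compute the same weekly scorecard separately for each user segment."""
    by_segment: dict[str, dict] = {}
    for segment in set(segment_of.values()):
        segment_events = [e for e in events if segment_of.get(e.user_id) == segment]
        by_segment[segment] = weekly_scorecard(segment_events, week_start)
    return by_segment
```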

Your scorecard will help you quantify how well you’re doing, but it won’t go far in helping you understand WHY you’re seeing a specific trend. You’ll need to identify and implement a strategy for “getting to why.” It sounds like common sense, but you need to plan how you’ll periodically TALK TO YOUR USERS and understand the motivations and needs that drive the behaviors you’re measuring. While quantitative measurement helps you understand what’s happening, it’s your qualitative research strategy* that helps you understand what to do next.

*If you’re looking for more information on how you can plan and flight this type of research yourself (or are looking to contract out for support), reach out to me and I can direct you to some resources.

Once you’ve established a set of norms and a toolkit for periodic measurement of your success, let’s talk about an important mental shift when it comes to prioritizing development work.

Another titanic shift that comes with hitting the launch button is that you’re going to want to take a breather from major feature development and focus your development team on the smaller items that can directly drive adoption and engagement.

During this critical “just launched” period you’re looking to learn how well your product has achieved product-market fit. There are two barriers to effectively measuring this:

Barrier 1 — You don’t have enough users

The “market” in product-market fit means people. You need real customers using and interacting with your product long enough and frequently enough to “get it.” You’ll need to work closely with your marketing team to ensure that your product’s first-time user experience, onboarding, and help content are hacking it. At this point, optimizing for adoption and engagement trumps almost everything else.

Look for “one and done” users who never use your product after their first encounter. How long were their sessions? On what screens or pages did they exit the experience? What can you do to drive them back into the app for a second try? Build out or enhance features designed to onboard and orient new users. Look for patterns and relationships between marketing activities and first-time user engagement to understand which marketing messages are most effective at creating not just users, but active users.
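As a rough illustration, a “one and done” report can come straight out of session-level analytics data; the row shape below is an assumption, not any particular tool’s export format.

```python
from collections import Counter

def one_and_done_report(sessions: list[dict]) -> tuple[set, Counter]:
    """
    sessions: hypothetical per-session rows such as
        {"user_id": "u42", "started_at": "2024-03-01", "last_screen": "Onboarding-3"}
    Returns the users who only ever had one session, and a count of the
    screens where those single sessions ended.
    """
    session_counts = Counter(s["user_id"] for s in sessions)
    one_and_done = {uid for uid, n in session_counts.items() if n == 1}
    exit_screens = Counter(
        s["last_screen"] for s in sessions if s["user_id"] in one_and_done
    )
    return one_and_done, exit_screens
```

The exit-screen counts are often the fastest way to see which step of onboarding is losing people.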

Barrier 2 — You have critical usability or engineering fails

Next, you’ll want to look for places where the product itself is getting in the way. Are there user experience fails that prevent your users from getting to the functionality or content they were promised? Look for opportunities to improve (or perhaps A/B test) navigational icons or labels. Are there opportunities to optimize interaction design to make it easier for these early customers to use the product?
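If you don’t already have an experimentation tool in place, even a deterministic, hash-based bucket assignment is enough to compare two navigation labels. A minimal sketch follows; the variant names are placeholders.

```python
import hashlib

def nav_label_variant(user_id: str, variants: tuple = ("Browse", "Explore")) -> str:
    """Deterministically bucket a user into one label variant,
    so they see the same label on every visit."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def tap_through_rate(taps: int, impressions: int) -> float:
    """Share of users who tapped the label after seeing it."""
    return taps / impressions if impressions else 0.0
```

Compare the tap-through rate per variant, and make sure the difference isn’t just noise, before rolling the winner out to everyone.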

At the same time, look for bugs or technical issues that are preventing or slowing down the use of your product. Implement a tool that tracks both app crashes and latency so you can identify whether application performance is negatively impacting your users. No matter how well you QA your application pre-launch, issues will pop up once you begin to grow an active user base. Your team needs to be ready and willing to quickly triage, prioritize, and fix these technical problems.
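Off-the-shelf crash reporters and APM tools handle most of this for you, but the instrumentation underneath is roughly the shape of this sketch; the operation names and logger configuration are illustrative.

```python
import logging
import time
from functools import wraps

logger = logging.getLogger("app.performance")

def instrumented(operation: str):
    """Record how long an operation takes and log any crash it raises."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            except Exception:
                logger.exception("crash in %s", operation)
                raise
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                logger.info("%s took %.1f ms", operation, elapsed_ms)
        return wrapper
    return decorator

@instrumented("load_feed")
def load_feed(user_id: str) -> list:
    ...  # fetch and render the user's feed
```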

If you’re focused on removing the two barriers above and still find yourself with additional development team bandwidth, focus it on non-feature work. This is a great time to pay down some of the technical debt you accrued as you worked to get your MVP to market. This topic alone is worth its own blog post, and one best written by a much more technical professional than me. As a starting point, I recommend that you sit down with your technical lead and really, really understand the compromises that were made during your rush to launch.

A final area of focus should be the various processes and operations that structure the work your team is performing. For one, you now have real customers with expectations around application performance, uptime, and a lack of showstopping bugs. Additionally, with a shift from feature development to smaller tweaks and optimizations, you’ll want to explore an accelerated release cadence. This will allow you to better understand the relationship between the changes you’re making to your product and the metrics you’re measuring.
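One lightweight way to connect the two is to tag each scorecard row with the release that was live that week. Here is a sketch that builds on the weekly scorecard above; the version strings are made up.

```python
def annotate_with_release(
    scorecard_rows: list[dict],
    releases: list[tuple[str, str]],  # (version, ISO date shipped), sorted by date
) -> list[dict]:
    """Tag each weekly scorecard row with the most recent release shipped
    on or before that week, so metric movements can be read against what changed."""
    annotated = []
    for row in scorecard_rows:
        live = [version for version, shipped in releases if shipped <= row["week_of"]]
        annotated.append({**row, "release": live[-1] if live else "pre-launch"})
    return annotated
```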

Depending on the complexity of your technical architecture and the teams working on your product, you may want to add more structure to the way that changes are planned and understood across all aspects of the team. If you have multiple scrum teams working on distinct parts of the product at once, this becomes especially important. How are you aligning sprint plans and discussing cross-team dependencies in advance of making changes?

At the same time, with your product in the hands of real customers, you’re going to see an influx of feedback and new feature ideas. Even if you’re not tackling new feature requests right now, it’s important to capture, think through, and start to prioritize them for when you have a clear understanding of how well your MVP is performing. Giving your UX thinkers free rein to explore these concepts sometimes helps keep creative energy flowing during a time when you’ve constrained actual development to very tactical updates.

If building your MVP and getting it to market felt like climbing a mountain, you’re going to have to re-think a number of critical assumptions. The tools, processes, and mindset that got you to the summit are likely not ideal when it comes to the marathon of continual optimization.

Many organizations fail to plan for this shift in the scramble of the final ascent, but those that do tend to hit their post-launch goals faster. It’s never too early to start thinking about what’s going to be different when user #1 is on board.
