Martin Gontovnikas, who goes almost exclusively by "Gonto", became a systems engineer after learning to code at the age of 12. Over the years, in various engineering roles, Gonto built open-source libraries until realizing that he preferred promoting these repositories to building them himself. He went on to scale Auth0's marketing team through its acquisition by Okta, and recently left to co-found a growth startup. Unlike other advisors, Gonto's Hypergrowth Partners takes equity in exchange for its deep advisory role. Their goal? To keep bringing engineering to marketing, helping companies achieve "hypergrowth."
Bringing engineering to marketing
Gonto explains that the marketing of the past is gone; we now have access to incredible amounts of data:
"We can react and learn from the data, and make better decisions by using those data engineering principles and applying the scientific method. You can improve and get to success faster by looking at data and using that to learn and become better."
Learning faster means better ideation, improving the chances that your ideas will work. Jane argues that controlled failing (failing while protecting the downside of a risk) is a key aspect of professional development. Gonto agrees, considering it even more important than controlled learning:
"A lot of people take failure as something wrong, but failure is not wrong if you learn something. A lot of companies just break this experimentation or learning culture by making failure a bad thing."
One issue with this engineering-driven, data-focused approach to marketing is that we won't always have enough quantitative data to reach statistical significance. Gonto believes that marketing is about stepping into people's habits. In circumstances with little data, he instead establishes a baseline understanding of target customers from qualitative data. Then, as quantitative data accumulates but before it reaches significance, he suggests making gut calls informed by that data as a way to train the muscle for making data-driven decisions.
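To make "before you reach significance" concrete, here is a minimal sketch (not from the episode; the function name and the 95% threshold are illustrative assumptions) of a two-proportion z-test for checking whether accumulated conversion data is significant yet:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates; returns (z, significant at ~95%)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, abs(z) >= 1.96  # 1.96 is the two-sided 95% threshold

# Early data is often inconclusive -- exactly the gut-call territory.
z, significant = z_test_two_proportions(12, 200, 18, 200)
```

Early in an experiment this kind of test usually comes back inconclusive, which is exactly where Gonto suggests practicing gut calls while the data keeps accumulating.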
Another challenge, besides having little quantitative data in the beginning, is the effort required to stick to a scientific framework. Gonto offers a few practical tips to make it easier:
- Keep the goal in mind as motivation. By adhering to a framework, you will improve your ideation.
- Have leaders set the tone for teams to follow.
- Have patience. Even while iterating from the start of the implementation, it may take 3-6 months to see the results.
- Cross-functional teams share goals and thus have fewer external dependencies on others, which makes it easier to adhere to a framework.
The step-by-step framework
After giving some tips on adhering to a framework, Gonto describes the four steps of a typical framework that he may use:
- Have some quantitative or qualitative research backing up what you would like to test. This research becomes the basis of a hypothesis.
- Arrive at a KPI or main performance indicator that you are trying to move the needle on. Related to the main KPI, think about what secondary KPIs will help you learn about what is happening during experimentation.
- Establish a goal for that main KPI — how do you want it to change?
- Define the time frame within which the experiment will happen. This could be an actual time frame, or it could be based on a number of incidents or events.
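The four steps above can be sketched as a simple record that each experiment fills in before it starts. The field names and example values here are illustrative assumptions, not Gonto's terminology:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str            # step 1: backed by quant/qual research
    main_kpi: str              # step 2: the needle you want to move
    secondary_kpis: list[str]  # step 2: what you learn along the way
    goal: str                  # step 3: how the main KPI should change
    time_frame: str            # step 4: a duration or an event count

exp = Experiment(
    hypothesis="Long-tail SEO pages for framework errors drive signups",
    main_kpi="organic signups",
    secondary_kpis=["impressions", "click-through rate"],
    goal="+20% organic signups",
    time_frame="3 months",
)
```

Writing the four fields down before launch forces the team to commit to a hypothesis and a success criterion, rather than rationalizing results after the fact.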
As an example, improving SEO can take time, perhaps more than you might allot for most experiments in this framework. Gonto also offers a few specific tips on experimenting with SEO. Ideally, target long-tail keywords in spaces where the content isn't over-saturated:
"I think it's hard to get to a first position if it's one or two words. In our case, we had very specific errors that people were having when they were implementing on React or Angular JS and there wasn't much content about that."
Frequency, types, and email automation experiments
When it comes to data about the experiments themselves, Gonto believes that how long experiments run matters less than how many run in parallel:
"I think each experiment has its own length depending on what it is. But at least in the beginning, I think that to start driving more experimentation, you should have a metric of how many experiments you want to be running at the same time, or how many experiments you want to be starting in a given week. So for us, I would see something like between 15 and 20 per month."
The size of the experiments should be varied as well. Gonto thinks of experiments as bets: you might take 10 small bets a month, where a small bet is changing a color or some copy. Larger bets may involve process changes to onboarding; these may take much longer to implement and/or measure, so you may only start 2 of them in any month. As for where to experiment, Gonto walks through a flow, rooted in the customer lifecycle, of what a company might consider:
"Usually the path that startups take is to first focus on acquisition because you have nobody coming to the website. So you need them to come in. Then you focus on activation — once they come to the website and they register to sign up, 'How can I get them to get to the aha moment?' Then it's retention. 'How can I make sure that once they get into the product and understand it that they still use the product month over month?' And then I think pricing. Once they are using, 'How can we optimize pricing to get more money out of the customers based on what they are doing?' "
Jane points out that email automation is an area where many companies neglect to run marketing tests. For many emails, Gonto likes to send changes to around 90% of the population while sending nothing to the remaining 10% as a control. He has found that this leads to good insights on whether to apply changes long-term or abandon them. For products with long sales cycles like Userlist, Gonto also warns that attributing whether or not a change works might have to be done manually, regardless of a team's resourcing. He recalls setting aside time every week to track those things down in the early days at Auth0.
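Gonto's 90/10 approach can be sketched with a deterministic, hash-based assignment so that each user always lands in the same bucket. The helper names, the hashing choice, and the exact 10% threshold are assumptions for illustration:

```python
import hashlib

HOLDOUT_PCT = 10  # roughly 10% of users receive no email as a control

def bucket(user_id: str) -> str:
    """Deterministically assign a user to 'control' or 'variant'."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "control" if int(digest, 16) % 100 < HOLDOUT_PCT else "variant"

def send_campaign(user_ids):
    """Send the changed email to ~90% of users; hold out ~10%."""
    assignments = {uid: bucket(uid) for uid in user_ids}
    to_send = [uid for uid, b in assignments.items() if b == "variant"]
    return assignments, to_send
```

Hashing the user ID, rather than assigning buckets randomly at send time, keeps a user in the same bucket across every send of the campaign, which is what makes the 10% holdout a usable long-term control.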
Top experiments for B2B SaaS
Wrapping up the conversation, Jane and Gonto conclude with highly practical areas of experimentation for B2B SaaS businesses.
- Form fields. How many fields do you have? Which ones can you remove? Are you using the data or not?
- Navigation. What changes can make your site's navigation stickier?
- Onboarding. Most people drop off during onboarding, so if you can improve at this stage, your retention curve will flatten out as you keep more customers.
- Email collection. What are the logistics of collecting emails at signup? Is there a button? How does it work?
- Ad space. Whether in your navigation or in gated content.
Final advice
Do publish results of experiments internally.
"Publish, internally, the experiments you've done, what you've learned and what you failed at because if you start showing it to everybody on the team, you start creating a culture of experimentation."
Don't write off those who present a failure. Instead, ask if they've learned something.
"Ask them if they learned something new. If they learned something new, then it's okay."
Thanks for listening! If you found the episode useful, please spread the word on Twitter mentioning @userlist, or leave us a review on iTunes.