Revenue Operations

Why averages are bullsh*t, and other nuances of productivity analysis

Matt (Funl): Let's get started with an introduction - Ross, could you please tell us about yourself, your role at Toast and overall what your career in RevOps has been like up to this point?

Ross Nibur (Toast): 

You're starting with a real softball here. I work as part of Toast's Centralized Operations team, helping to bring together our Strategic Initiatives and Tactical Execution. When we want to launch a new product, reorganize the team to drive efficiency, or implement new technology, my team partners cross-functionally to ensure that we are moving the entire business in the right direction.

I'm also an avid cook, a beard grower, and the holder of a lifetime achievement award in sarcasm.

For RevOps specifically, my passion for this field grew from watching different teams struggle to align the Data, Technology and Process it takes to drive a modern team.

Matt (Funl): It's also clear you have some * very * strong feelings about averages, so maybe we start there. Why do you hate averages so much? What did they ever do to you?

Ross Nibur (Toast):

My disdain for averages really stems from how I've seen them lead executives down the wrong path before. To take a step back, I think of data and productivity analysis as using 3 main tools -- raw data, summaries, and analysis.

To use an example, raw data is your call or activity data. A summary is asking the question "how many calls did my team make today?" Analysis is asking "on average, how many calls does my team make each day?"

At each step you get further and further away from real-world activities, and it's failing to bridge that statistical gap that can lead to real harm for our tactical teams. Averages are a place where people think they'll find an objective truth about their team, when in reality an average is an inferred value. It's the statistical concepts around sets of numbers that screw teams up.

Again, to go to a specific example: say you have 2 deals, where the sales cycle for one was 1 day and for the other was 10 days. Your average sales cycle is 5.5 days, but in reality neither of your deals actually took 5.5 days to close. So if you told your team (as I have heard many leaders do) "close out any deal older than 6 days," you just gave up half of your deal volume. @Matt (Funl) do you agree and see where I'm going here?
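Ross's two-deal example can be sketched in a few lines (the numbers are straight from his example; the cutoff policy is the hypothetical "6 day" rule he mentions):

```python
# Two deals: one closed in 1 day, one in 10 days.
cycle_days = [1, 10]

# The average lands at 5.5 days -- a value no real deal ever took.
average = sum(cycle_days) / len(cycle_days)
print(average)  # 5.5

# A policy of "close out any deal older than 6 days" would have
# killed the 10-day deal: half of the deal volume is gone.
survivors = [d for d in cycle_days if d <= 6]
print(len(survivors) / len(cycle_days))  # 0.5
```

The point isn't the arithmetic; it's that the 5.5-day figure sits between two real outcomes and describes neither.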

Matt (Funl): How should one approach the above example then? How should we communicate the critical insight in that example?

Ross Nibur (Toast):

Great question. I would always recommend looking at this using tools like elapsed time, or rate-of-change analysis. What I mean by that is to take your entire pipeline and graph it based on the number of days it took to close vs. the number of deals closed in that group. That will show you the distribution of deals across real time and let you see the % of total volume in each group. You can also use more sophisticated tools like standard deviations to normalize your data sets, but I think that can be hard for our executives to follow. Rate of change is a very easy visual to follow, even for people without a statistical background.

Do you have the mock-up of the MQL Conversion by Elapsed Days we can pin here to show people an example?


Perfect, these visuals show exactly what I mean. In the first example of the MQL decay rate you can see that the fictional team does almost all of their conversion in the first 5 days. The average will be somewhere out in the middle of the range, telling you nothing about that steep early drop-off.
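The elapsed-days view Ross describes can be sketched as a simple bucketed distribution. The deal data below is made up purely for illustration; the mechanics (count per elapsed-day bucket, share of total volume, mean pulled into the tail) are the point:

```python
from collections import Counter

# Hypothetical days-to-close for a batch of closed deals (illustrative only).
days_to_close = [1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 8, 12, 20]

counts = Counter(days_to_close)
total = len(days_to_close)

# Deals closed per elapsed-day bucket, plus the % of total volume in each.
for day in sorted(counts):
    share = counts[day] / total
    print(f"day {day:>2}: {counts[day]} deals ({share:.0%} of volume)")

# Most volume sits in the first few days, but the long tail drags
# the mean well past where the bulk of deals actually close.
mean = sum(days_to_close) / total
print(f"mean days to close: {mean:.1f}")
```

Graphing those bucket counts (or the running cumulative share) gives exactly the decay-curve visual described above, with no statistics background required to read it.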

Matt (Funl): Completely agree - and I think what you're describing sums up the job of RevOps perfectly. In this example you aren't saying "double the marketing budget and you'll get 2x the leads" or "hire 2x the AEs I have now and I'll double new ARR"...you're talking all about productivity, efficiency and effectiveness, and by placing your energy and effort in the right places, you squeeze out incremental revenue growth WITHOUT spending a $ more.

Ross Nibur (Toast):

Bingo, and it's also a tool that you can share across many teams. This is a methodology that is applicable to basically any funnel stage.

Trevor Greyson (Miro):

Interesting you mention that, Ross. I had just thought about all of this from an internal point of view and our funnel of closing new business once it's entered legal review. You have me looking at all of our reports and wondering where we are capturing averages!!


Matt (Funl): So I assume you've done this sort of stuff in the past at places like Toast and elsewhere - what results have you seen from looking at things this way?

Ross Nibur (Toast):

Yes I have. The best example I have from the recent past was doubling our Inbound conversion rate. That's not hyperbole.

Basically, our team had way more MQLs than they could handle and were only working the newest ones. We had to fundamentally reset expectations on how long to work a lead to drive that efficiency gain, and change the underlying automation to proactively manage the data from our IBDRs --> Outbound teams.

Trevor Greyson (Miro):

Interesting thoughts, as we have seen similar. Attention to detail on MQL/SQL reporting quality has driven us to make fundamental changes to our SDR team philosophy and goals.

Matt (Funl): That's incredible....What are other kinds of analysis you've seen teams struggle with regularly?

Ross Nibur (Toast):

To your point before, we were able to dramatically increase the MQL:Opp conversion while holding steady on downstream metrics like Opp:Win.

One of my pet-peeve projects is territory analysis. It's a place where teams generally underinvest in thinking about the mix of demographic and geographic data they could bring in to design territories that are fair for their reps and for other teams that are tied to geography.


Matt (Funl): You mean it's not as simple as saying "you get the west coast" and "you get the midwest"?

Ross Nibur (Toast):

Sadly, yes, it's not that simple, and you're getting at one of the reasons I find this so frustrating. We as leaders do this quickly, and then it leads to real morale challenges and frustrated teams. It's never fun to feel like the top rep is the best only because they happened to be given a better patch, and as an operator I see that as a wholly preventable problem.


Matt (Funl): You can probably predict my next question then - what's the right approach, without getting too complicated? How would you go about visualizing this for leadership, or even your GTM teams?

Ross Nibur (Toast):

The biggest thing I'd say is to think through as many variables as possible that could impact your team. For Toast, this can be additional data around restaurant spend, the number of restaurants, as well as demographic data like income.

The best way I've seen this done is by variance from a central value. So if you have a table where one column is the territory, you then show how far above or below the median that territory is for the metrics I described above.

One pro tip we've seen is the correlation between customer density and win rate in our market. That might not hold true for you, but it's one of the lenses we've used in the past to understand what a rep needs to be successful before we set them loose.

I've used data from the Census Bureau and other US agencies, as quite a bit of this data is publicly available (in the US) - like the number of incorporated businesses by employee size by zip code. If you can get multiple lenses on the picture, you can triangulate a more accurate territory model and assignment. I then usually share it with the business leaders who know the local markets better for a sniff test - plus it helps with buy-in on the plan.
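The "variance from the median" table Ross describes can be sketched like this. The territory names, metrics, and numbers below are invented for illustration; real inputs might be restaurant counts, spend, or census business counts by zip code:

```python
from statistics import median

# Hypothetical territory metrics (illustrative only).
territories = {
    "Northeast": {"accounts": 420, "avg_spend": 52_000},
    "Midwest":   {"accounts": 310, "avg_spend": 47_000},
    "West":      {"accounts": 560, "avg_spend": 61_000},
    "South":     {"accounts": 380, "avg_spend": 44_000},
}

metrics = ["accounts", "avg_spend"]
medians = {m: median(t[m] for t in territories.values()) for m in metrics}

# For each territory, show how far above or below the median it sits
# on every metric -- the fairness view Ross describes.
for name, vals in territories.items():
    deltas = {m: (vals[m] - medians[m]) / medians[m] for m in metrics}
    print(name, {m: f"{d:+.0%}" for m, d in deltas.items()})
```

A territory that sits far above the median on several metrics at once is the "better patch" problem made visible before anyone is assigned to it.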


Matt (Funl): With all this talk about executives, what are some of your favorite ways to communicate data and analysis like this to executive teams and GTM leaders?

Ross Nibur (Toast):

With as much context and as many visuals as I can! The first step in any data-driven conversation is to frame what they want to know, where the data is coming from, and what it means. So to our example of averages vs. elapsed days: it's easy to fall into a tactical approach where your team says "I want to see the average number of days to convert an MQL" and then you go and get them that thing. Refocus the conversation from the tactic (show me an average) to the strategy (what is the decision you are trying to make?).

That will help you then understand what they are trying to study and let you show them the right way to visualize or interpret data. The goal is to build a strategic partnership between your executives and your operators to enable them to make the best call.

My entire career is trying to get more people to see operations as a strategic asset and not a tactical group.


Matt (Funl): Actually, since we talked about territory planning and very top-of-funnel activity, let's talk about what happens once a sale is made... COMMISSIONS. I'm sure you've got some ideas on: what people are doing wrong today with commissions and incentive comp, some alternative ways to look at this, managing Rep/Mgr feedback on commissions, and the best distribution of ownership of commissions planning and execution you've seen.

Ross Nibur (Toast):

I want to be very practical in my advice on commissions, and clear that I don't think my team does the best job today. The cardinal sin of commission planning is over-complication. At the end of the day, a commission plan is meant to drive behavior, and that means, fundamentally, the harder it is to understand, the less effective it is. If you can't figure out how much a rep will be paid for a deal without a PhD in data science, you probably need to simplify.

The questions I always want feedback from a team on: "Will this motivate you?" and "Could you look back at your deals last month and know how much you would have been paid?" The best team to plan this is always a mix of executives who know the topline goal you want to align towards (like increasing ARR per unit) and front-line teams who have to do that math.
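The "simple enough to do in your head" test can be made concrete. The plan below is entirely hypothetical (the rates, quota, and accelerator are invented), but it's the kind of structure a rep can verify against last month's deals without a spreadsheet:

```python
# Illustrative two-tier plan: a base rate on ARR, an accelerator after quota.
BASE_RATE = 0.10      # 10% of ARR on every closed deal (hypothetical)
ACCELERATOR = 0.15    # rate once quota is hit (hypothetical)
QUOTA = 500_000       # annual quota in ARR (hypothetical)

def commission(deal_arr, arr_closed_so_far):
    """Pay BASE_RATE below quota, ACCELERATOR above it,
    splitting any deal that crosses the quota line."""
    if arr_closed_so_far >= QUOTA:
        return deal_arr * ACCELERATOR
    over = max(0, arr_closed_so_far + deal_arr - QUOTA)
    under = deal_arr - over
    return under * BASE_RATE + over * ACCELERATOR

print(commission(100_000, 0))        # 10000.0 -- fully below quota
print(commission(100_000, 450_000))  # 12500.0 -- deal straddles quota
```

If a rep can run that math on a napkin for their own pipeline, the plan passes the "will this motivate you?" test; if the function needs ten more branches, it's probably time to simplify.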

