“Per-seat + fair-usage” 🥢🍱 — balancing exploration w/ margins
It’s “everyone vs everyone” in SaaS, why you can’t do unlimited AI for $10/m, and going multi-product with Super’s (and Slite’s) Christophe Pasquier.
“Notion launched competitors to Granola, Glean, and ChatGPT use cases
Figma launched competitors to Lovable, Framer, Canva, and Illustrator
Atlassian has launched competitors to Granola, Glean, Claude Integrations and more
Google just launched Prototyping and Codex, competitors
The list goes on…”
Brian Balfour, founder of Reforge and a prolific writer on the business of tech, recently made a stirring observation: we are officially in the era of everyone competing with everyone.
Point solutions, he believes, are under imminent threat. Bundling is happening everywhere. And the PMF expansion (or second acts :)) blitz is accelerating as we speak.
Christophe Pasquier, co-founder of Slite (YC W16), has meanwhile launched a new product that looks a lot like a point solution — BUT with a critical overlap with Brian’s description of winning startups: they too have bet on the AI flywheel of more data, albeit in an unconventional way.
Chris’ team turned what started as a feature into an independent product called Super, sold both with Slite and on its own. It does AI search for company data and extends Slite’s mission to make work simpler and easier for teams.
We’ll dedicate this edition of PMF /evals to the key moments punctuating this decision, primarily:
the pricing challenge, market opportunity, and packaging confusion that became the bedrock of Super’s independence,
why they chose a seat-based + fair-usage pricing philosophy against a pure-play usage-based model, and more.
Note: If you’re thinking through an AI-adjacent bet or a transition to multi-product, this one’s worth a close read.
It all started when Chris saw ChatGPT and felt pulled to build it for private knowledge. This birthed the first AI feature within Slite called “Ask”.
Users loved it, but Chris had a hunch that they might need to expand and connect with more tools, including competitors like Notion and Confluence, but that was for later. They revisited the hunch once Ask had matured. “It was something that we wanted to pursue and put heavy investment into, if nothing else, just to see if it can work or not,” Chris said.
Very wary of diluting the team’s focus, they created a commando team of Chris and two others, who enhanced Ask and built AskX (the new version). By December 2024, AskX was ready to be sold.
This is when they started running into a set of “pricing and real estate” challenges that convinced them that AskX warranted an independent existence — as Super. Chris goes deeper into these challenges below. 🖌️
#1 ➤
Three factors that shaped Super’s independence: Packaging, market opportunity, and pricing
“When you try to package too many things in the buyer’s journey it becomes messy and counterproductive.”
Even though both Slite and Super were technically solving the same problem, users approached them in markedly different ways.
Slite fell into the category of a knowledge base, but Super was a universal search or AI assistant. So as much as the products aligned, the buyer’s intent didn’t.
“We had already tried packaging features that were not really connected.”
Then came the too-big-to-ignore market opportunity.
If they supported non-Slite customers, the market would be massive. And they could: “We saw that we could deliver incredible value even if people didn’t use Slite”.
This unlocked a whole new user base for them. “Most of our customers now don’t have Slite.”
It wasn’t a part of their original plan, but it made sense. The reason is captured further in this post that Chris recently shared while launching support for Notion, a Slite competitor:
“You’d have told me we’d build a tight Notion integration 18 months ago, I’d have burst laughing… Slite is a direct Notion competitor…
but I respect Notion’s craft a lot, they built something special for a part of the market, and now a lot of knowledge is stored in their tools. Our entire purpose is to break team silos and solve access to team knowledge. With Super we could help Notion users, so letting them use Super felt like a no brainer.”
Now pricing…
Slite had always delivered enviable bang for your buck, and Chris firmly believed that search capability was table stakes to that promise. So, even with the upfront AI costs, they initially offered it at their regular price as part of the deal.
Soon the costs started spiking and they knew they had to find a middle ground, especially for the better models, which cost them too much.
“We had a pricing model for the knowledge base and it was quite hard to include good LLMs in that price. It just cost too much.”
The quintessential AI pricing problem had hit them.
They decided on a win-win. All Slite customers get the base AI Search at no added expense. But teams that want advanced features, like external sources, customer assistance, automations, and more expensive models, have the option to buy Super as an add-on.
Slite customers can get Super by paying $25/m/seat, which comes with a fair-usage limit on the number of queries they can run.
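The per-seat + fair-usage model described above can be sketched in a few lines. In this minimal illustration, only the $25/seat/month price comes from the article; the `Team` class, the pooled per-seat query cap, and every other number are hypothetical placeholders, not Super's actual plan parameters:

```python
from dataclasses import dataclass

# Toy model of "per-seat + fair-usage" pricing. The $25/seat/month figure
# is from the article; the per-seat query cap is a made-up placeholder.
PRICE_PER_SEAT = 25                # $/seat/month
FAIR_USE_QUERIES_PER_SEAT = 1_000  # hypothetical monthly cap per seat

@dataclass
class Team:
    seats: int
    queries_this_month: int

def monthly_bill(team: Team) -> int:
    # Seat-based: the bill depends only on headcount, never on usage,
    # so individual users don't hesitate before running a query.
    return team.seats * PRICE_PER_SEAT

def within_fair_use(team: Team) -> bool:
    # Fair usage: a pooled soft cap that protects margins and guards
    # against one customer disrupting the platform, without metering
    # every query back to the user.
    return team.queries_this_month <= team.seats * FAIR_USE_QUERIES_PER_SEAT

team = Team(seats=10, queries_this_month=8_000)
print(monthly_bill(team), within_fair_use(team))  # 250 True
```

The design point is that usage never appears in the bill, only in the soft cap, which is exactly the separation of "exploration" from "cost protection" that Chris describes in the next section.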
#2 ➤
Pricing mindfully for their stage: Going seat-based with fair pricing to encourage exploration
All AI tools are figuring this one out. Usage-based, outcome-based, seat-based… the frenzy is understandable, and the stakes may even be underrated.
There is no “right” model and no playbook, but this choice affects your PMF journey more than ever: your margins, your adoption, your chances of standing out amid the current rush.
Chris has been clear on this: it’s important for them to enable exploration. “We really want your team to not even have to think before using Super.” They want a pricing model that lets users explore freely (something that Wispr Flow’s Tanay Kothari makes a case for as well), find use cases, and form stickiness — all of this while ensuring the business is protected against bad margins.
So they are keeping it simple. Their current pricing model is seat-based with fair usage. Products like Fin by Intercom have popularized outcome-based pricing, and some others, like coding agents, have leaned into usage-based models.
Chris thinks those are good strategies for those orgs, but Super wouldn’t use them at this stage. He believes usage-based pricing, especially, can heavily limit exploration and create a barrier to usage, and that’s the last thing they want.
He draws an internal use vs. external use comparison here. When external users interface with a product, they use it as they’d like, no holds barred. But internal users hesitate. When interacting with a usage-based product, they become extremely conscious of tokens/credits spent and start curbing their exploration.
This is bad for a product trying to build a habit.
“If ChatGPT cost $1 per request, we would not have discovered 90% of the use cases it could solve,” he adds. Interestingly enough, encouraging this kind of education and exploration of use cases was a big focus even for ChatGPT, as Krithika S, former VP of Marketing at OpenAI, shared in a Lenny’s Podcast episode.
“When you think about all of the different stages of the funnel, awareness was clearly not the problem that ChatGPT or OpenAI had. Everyone knew of ChatGPT, but when you clicked one zoom level further, the thing that came up was, ‘I don’t know what to use it for. I don’t know what it replaces. Should I be using search for this? Should I be using ChatGPT for this? How can it even help me?’ And so the work of marketing ended up becoming, creating this sort of use case epiphany where people could say, ‘I had no idea ChatGPT could do that. And yeah, maybe I should be using it for X, Y, Z reason in my own life.’”
For Super too, this strategy did what was intended. In one recent instance of unprompted exploration, a customer used Super for performance reviews: not their primary use case at all, but one with a massive market.
AI inference costs are real, though, and that is exactly what the fair-usage element seeks to solve.
The “limit” component helps them with two things. One, it protects their margins. And two, it safeguards against platform-disrupting behavior by any one user. “We want to make sure indexing performance (how fast your sources are added) is never broken because 1 customer decides to index millions of docs every day.”
There is a fixed number behind these limits under different plans. We asked Chris how they arrived at it, and he shared:
“We compute the cost of storage and the costs of our queries with the best models out there (for general, non-reasoning models, we assume the cost will remain similar while quality rises), and look at usage to make sure we have something non-blocking for 99% of cases.”
That said, once Super has become sticky enough, with a more exhaustive set of tools and use cases, they may move to credits-based pricing. That remains to be seen.
#3 ➤
To close, here are some notes from Christophe’s LinkedIn expanding on the ideas we’ve discussed:
1️⃣ ☇ Dissecting the “it’s a wrapper” dismissal:
2️⃣ ☇ “You can’t do unlimited AI for $10/m” — and how a thorough pricing model can protect your margins from evaporating:
A big thanks to Chris and Ishaan for taking the time and collaborating on this story!
PMF /evals ◎ is just getting started. Tell us about how you’re approaching AI-native building in the wild! Or what you’d like us to cover. Hit reply.
Brought to you by Chargebee. Chargebee helps AI-native, recurring revenue businesses scale with billing and monetization infrastructure built for speed, flexibility, and rapid iteration.









