Something is moddable when it can be modified by the end user rather than exclusively by the original designer. While the original designer plays a foundational role in laying the initial groundwork, they are only one link in a chain of creators that produce the final product. Markets that enable the transacting of moddable assets are exceptionally predisposed to emergent behavior, significantly more so than their non-moddable counterparts. This emergence is key because it always appears on the supply side. In other words, moddable markets will always have greater supply levers.
A lot of these moddable markets don’t look much like marketplaces today, but I anticipate they will in the future. Most moddable markets will start as creation tools because it is the act of creating that generates emergent supply. But like any creation tool, they can be copied by a competitor. The next step for most successful creation tools that face competition is to move into marketplace territory.
TikTok, Instagram, Spotify, and YouTube are obvious winners of this approach, but are probably not traditionally considered marketplaces. Yet their entire competitive advantage rests on the way they have generated superior supply. There are small rumblings of this approach happening amongst creation tools already. I anticipate that nascent creation tools that offer some level of transparency into the usage of their product will be strong future moddable-market contenders and obtain outsized returns as a result.
What’s a marketplace exactly?
A marketplace is any platform that facilitates a two-sided transaction. As long as any asset is transacted between a supplier and a buyer (attention, likes, cars, houses, etc. all count), the platform obeys the common dynamics of marketplaces. The core value proposition of a marketplace is ultimately distribution: suppliers will pick the marketplace with superior demand distribution just as buyers will pick the marketplace with superior supply distribution.
Marketplaces win on supply levers
Marketplaces are unlocked by using a common set of supply levers. A new marketplace typically wins by doing something unique with supply, adjusting one or more of the following levers:
- Quality of supply
- Unit of supply
- Market awareness of supply (i.e. price, reviews, social, etc.)
- Supply diversity
- Supply discoverability
Let’s consider the case of Substack as an example. Substack started out by building a membership capability on top of traditional email services. Since its paid newsletter launch, Twitter via Revue, Memberful, Ghost and a host of other competitors have moved in with feature parity. Substack has since moved into a marketplace play by curating winners (discoverability) and financing prolific writers (quality of supply).
It’s debatable whether Substack will win, but the marketplace angle is a strong contender. To win, Substack will need to pull more supply levers, like unit of supply (smaller pieces provide higher liquidity), market awareness of supply (increased social proof, sales), and diversity of supply (growing content types beyond newsletters, expanding into more subjects).
But what about demand?
Supply is king, demand is queen. Supply and demand go hand in hand; there is no world in which you can win without both. However, demand is largely straightforward, and its levers can only be pulled once supply exists. The major demand levers are:
- Buyer’s ability to determine quality of supply
- Simplicity of acquiring a unit of supply
- Forgiveness for when the buyer changes their mind
Easy Science ML
Easy science means no model selection, no training, no trade-off evaluations: everything is pre-made, and only output customization is given to the end user. This is essentially the equivalent of no-code tools. You can’t choose your stack, your language, or how it’ll deploy, but you’ll have plenty of tooling to make sure the output is what you want within the design space.
What falls on the end user is then only the onus of data selection, a significantly more approachable art. This gives the superpowers of ML to those who want them without having to know the technical details. I also believe that while data clearly varies, most use cases tend to be similar enough to generalize within a reasonable space. For example, in e-commerce you’ll always want product recommendations and search. On social platforms, you’ll always want search, a feed, and connection recommendations.
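To make this concrete, here is a toy sketch of what an easy-science surface could look like. The `EasyRecommender` class is invented for illustration (it is not any real product's API), and a simple co-occurrence count stands in for the hidden model: the end user supplies only data and output customization, while model choice stays inside the tool.

```python
from collections import defaultdict

class EasyRecommender:
    """Hypothetical 'easy science' surface: the user supplies only data
    (orders); the model choice is fixed internally. A co-occurrence
    count stands in for the hidden model in this sketch."""

    def __init__(self):
        self._cooccur = defaultdict(lambda: defaultdict(int))

    def fit(self, orders):
        # orders: list of lists of product IDs bought together
        for order in orders:
            for a in order:
                for b in order:
                    if a != b:
                        self._cooccur[a][b] += 1
        return self

    def recommend(self, product, k=3):
        # Only output customization (how many results) is exposed;
        # the scoring method is not.
        scores = self._cooccur.get(product, {})
        return [p for p, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

orders = [["shoes", "socks"], ["shoes", "socks", "laces"], ["laces", "polish"]]
rec = EasyRecommender().fit(orders)
print(rec.recommend("shoes"))  # → ['socks', 'laces']
```

The point of the sketch is the shape of the interface: `fit` takes raw data, `recommend` takes output preferences, and nothing in between is the user's problem.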
Successful companies will need to understand how non-technical users want to use machine learning. The user interface, the controls, the methods will matter greatly. Further, the explanatory nature and ability to error correct will be essential. Knowing why a result happened and then being able to correct it accordingly will provide a level of control to the end user that will allow them to build and trust easy science ML tools with confidence.
Machine learning isn’t like software development
Software development is ultimately an engineering problem while machine learning is ultimately a science problem. As long as you can engineer something, you can turn it into a product and sell it; as long as it’s science, you’ll be stuck at the services layer. This is perhaps the largest fundamental misunderstanding of machine learning. We have seen dramatic increases in the number of data scientists (even the term is indicative) and their impact inside companies yet little success in the extraction of those outputs as a product that can be scaled and sold like traditional software.
This is because machine learning has two parts that cannot be abstracted away: (1) the results are highly dependent on the data at hand; (2) the results are highly dependent on the judgment of the modeler. Contrast this with software, where the results are exactly the same as long as you can run the code. One is highly portable in a modular fashion while the other is not.
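A toy illustration of the second point: the exact same fitting code, applied with two different modeler judgments about an outlier, yields materially different answers — something with no analogue in running ordinary software. The numbers here are invented for illustration.

```python
def fit_slope(xs, ys):
    # Ordinary least squares through the origin: slope = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 40]  # the last point is a suspected outlier

# Same code, same dataset, two modeler judgments:
slope_all = fit_slope(xs, ys)              # keep the outlier
slope_trimmed = fit_slope(xs[:3], ys[:3])  # drop it

print(slope_all, slope_trimmed)  # roughly 6.27 vs exactly 2.0
```

Neither answer is "the" output of the code; which one ships depends on a human call about the data, which is exactly the part that resists productization.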
Pick strategies that work for machine learning
Instead of fighting this dynamic, successful companies will need to embrace it. The path to success will require picking one of the following key strategies.
|Strategy|Target User|Value Proposition|
|--|--|--|
|Sell pre-trained models|Non-data scientist (PM, dev, analyst)|Models and model interface|
|Sell ML-powered insights|Non-data scientist (PM, dev, analyst)|Models and data ingestion|
|Sell an ML-based product to handle physical tasks at scale|End user in target industry|Cost reduction or capability enablement|
|Sell tools along the model development chain|Data scientists|Workflow integration|
Companies that sell pre-trained models to non-data scientists are not very common today because most take a very general approach. Unfortunately, a general approach is roughly akin to selling generic software to companies. Winners in this category will need to specialize because both the model performance and the model interface will matter greatly. The strongest use cases are likely to be around data generation, image detection, entity extraction, and entity classification. These are areas where the manual effort is extremely high but the desired outcome can be easily analyzed.
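As a sketch of why the model interface matters, here is a hypothetical shape for a specialized entity-extraction product. The `EntityExtractor` class and its patterns are invented for illustration; a real vendor would ship a trained model, while regexes stand in here to keep the example self-contained.

```python
import re
from typing import NamedTuple

class Entity(NamedTuple):
    text: str
    label: str
    start: int

class EntityExtractor:
    """Hypothetical vendor interface: a real product would load a
    pre-trained model; regexes stand in for it in this sketch."""

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.]+@[\w.]+\.\w+"),
        "MONEY": re.compile(r"\$\d+(?:\.\d{2})?"),
    }

    def extract(self, text):
        # Return typed entities in document order -- the interface the
        # non-data-scientist buyer actually interacts with.
        found = []
        for label, pattern in self.PATTERNS.items():
            for m in pattern.finditer(text):
                found.append(Entity(m.group(), label, m.start()))
        return sorted(found, key=lambda e: e.start)

ents = EntityExtractor().extract("Refund $19.99 to ana@example.com")
print([(e.text, e.label) for e in ents])
```

The buyer never sees training or evaluation; they see `extract` and a typed result, which is why the interface is as much of the product as the model.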
Companies that sell ML-powered insights to non-data scientists will accelerate any data-oriented decision-making space. Both the model performance and the data ingestion system will matter greatly. For the most part, insights that offer optimization, detection, and explanation are likely to be the most useful: most business analytics can be done without machine learning by using basic statistics, but these three tend to be significantly harder and outside the toolkit of most business analysts. Such use cases are difficult to accomplish, but the desired outcome is often interpretable enough to make a business decision.
Companies that sell ML-based products for physical tasks to end users are probably the most difficult to start because they tend to require a heavy full-stack approach across software, hardware, and ML. Most physical tasks worth automating with machine learning tend to require some kind of hardware specification. Physical tasks are a specific area where machine learning is more performant than software alone. For most digital tasks, a series of if/then statements is often more than enough to handle automation, and if the task is more complicated than if/then statements, a company will typically see value in developing the ML-based automation in-house. Physical tasks, on the other hand, tend to require some capture that is not easily computer-readable (images, video) or some actual movement (drones, robots). A task this difficult is unlikely to be accomplished by the target company on its own. Despite the full-stack difficulty of such companies, once a superior product is created it can be very hard to compete with because of the aggressive cost reduction or new capability enablement it creates.
Companies that sell tools to data scientists along the model development chain are more traditional software infrastructure or developer tooling companies. Just as software (previously outsourced under “IT”) has become a core part of nearly every company, there is a reasonable future in which data scientists become a standard part of operations, whether in an analytics or development capacity. Like any software that serves a role in a toolchain, workflow integration will be essential to success.
Minimum Knowledge DevTools
Enabling developers to accomplish more with minimal knowledge of the underlying mechanics is a superior dynamic. Reducing a high-knowledge area into a minimal-knowledge one generates extremely high leverage for a broader group of developers.
Target less knowledgeable developers
Most software development requires some level of knowledge to meet demands at varying scales. Skills like maintenance, scaling, automation, load balancing, security, and deployment are often learned through the experience of building systems that demand these outcomes at high scale. Tools that dramatically reduce the required knowledge by handling best-in-class engineering practices out of the box will enable more developers to engage deeper down the stack (e.g. a front-end developer setting up data APIs).
PlanetScale is a great example of a company that takes the lessons developed in high-scale environments (in this case, at YouTube) and builds a beautiful UI that makes them simple for developers to use.
Developer tools that take advantage of this dynamic by actively targeting less knowledgeable developers can be extremely successful. The less a developer knows about something, the more willing they are to buy rather than build. For example, back-end developers who can’t write CSS classes will install heavy CSS libraries that front-end developers would ignore in favor of crafting custom CSS. Likewise, front-end developers with no back-end experience will gladly take an out-of-the-box API tool, while a back-end developer would prefer to design the APIs from scratch for flexibility. By going after the knowledge gap, developer tools can enable a new subset of developers to accomplish something that would previously have required developing new skills.
Hasura is a great example of a company that actively does this. It intentionally focuses on developers who lack back-end skills. By generating GraphQL APIs out of the box, Hasura enables front-end developers to quickly build APIs and manage a Postgres database.
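For a sense of what this looks like in practice: Hasura exposes each tracked Postgres table as a GraphQL root field with generated arguments such as `where`, `order_by`, and `limit`. Assuming a hypothetical table named `articles`, a front-end developer gets a query like the following without writing any back-end code.

```python
import json

# Hasura turns each tracked table into a GraphQL root field with
# generated filter/sort/paging arguments. `articles` is a hypothetical
# table name used for this sketch.
query = """
query RecentArticles {
  articles(where: {published: {_eq: true}}, order_by: {created_at: desc}, limit: 5) {
    id
    title
  }
}
"""

# The request itself is just a JSON POST to Hasura's /v1/graphql endpoint.
payload = json.dumps({"query": query})
print(payload[:40])
```

The key point is that none of this query surface was written by a back-end developer; it was generated from the table definition.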
Developer experience is a competitive advantage
Better developer experience is itself a way to dramatically reduce the amount of knowledge needed. Developer experience largely shows up as superior documentation, education, feedback, and examples. The more of these there are, the less direct knowledge you need, since copying and running the appropriate code snippets lets you learn as you go.
Tools with better developer experiences win more developers. Functional is no longer enough. Superior developer experience means well-written docs, a wide array of examples, an active community, educational resources, extensible toolchains and integrations, and dedicated developer relations engineers. Consider the dramatic difference between Stripe and PayPal developer experiences:
|Dimension|Stripe|PayPal|
|--|--|--|
|Examples|42 GitHub repos of completed examples|--|
|Education|YouTube channel with a wide array of videos|--|
|Documentation|Full end-to-end tutorials for common use cases with detailed code snippets, searchable API docs attached to your account|Basic API reference|
|Tools|VS Code extension, CLI for webhooks|--|
From a pure feature standpoint, PayPal is not dramatically different from Stripe; both essentially offer payment collection. But developers by far prefer Stripe because the experience is dramatically better. Even when Stripe first started out with a simple payments API far below PayPal’s feature set, developers still chose the solution that made their lives significantly easier, as long as their main use case was covered.
Value capture at the right layers
Most tools require a level of usage and integration before the developer believes there is legitimate value behind the tool. If a tool is priced in a way that prevents appropriate testing, a developer is very unlikely to use it. For this reason, it’s essential that software tools only begin to capture value from users at certain layers. Gating access is unlikely to develop the kind of developer ecosystem needed to make tooling work, so pricing and value capture need to be thought through strategically.
For the most part, the following models capture value and set pricing in ways that align with the developer community:
- Volume-based for transactions (ex. MailGun, Stripe)
- Value capture on hosting while leaving the software itself free (ex. Next.js on Vercel)
- Value capture on enterprise-grade features (ex. Hasura Cloud, Auth0)
- Services to support complex integrations
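The first model is easy to sketch: a percentage plus a fixed per-transaction fee means a developer pays nothing while exploring and the platform captures upside as volume grows. The 2.9% + 30¢ defaults below are illustrative, not any vendor's actual rate card.

```python
def transaction_fee(amount_cents, pct=0.029, fixed_cents=30):
    """Volume-based pricing in the style of payment APIs: a percentage
    plus a fixed per-transaction fee. The default rates are illustrative
    placeholders, not a real vendor's published pricing."""
    return round(amount_cents * pct) + fixed_cents

# Exploration is free: the developer pays nothing until real volume flows.
fees = [transaction_fee(a) for a in (1000, 5000, 100_000)]
print(fees)  # → [59, 175, 2930]
```

Because the cost scales strictly with usage, the developer's testing phase costs the platform little while the eventual production volume is where value is captured.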
While pricing plays an important role in any product, it is extremely relevant here. The way successful companies price allows for strong exploration at no cost to the developer while capturing eventual upside. Failing to do this appropriately can dramatically reduce the potential of a software tool.