The Data Science Behind Testing Estimated Valuations

by Nathan Eyre, PitchBook

At PitchBook, we pride ourselves on having the most accurate and expansive data on the private markets. One question that recently sparked some debate was whether it is possible to estimate private company valuations. The question became even more pertinent as some industry players began incorporating estimated valuations into their datasets.

We decided to put the question to the test by using algorithms from the data science and machine learning community to estimate private company valuations. In this post, I will review in depth the process we took to run the test. For a high-level look at the results, I encourage you to visit our previous post.

The Data We Used & Our Benchmark

The most important step in developing a good predictive model is understanding your data. With the largest set of private company valuations available (97,247, to be exact), we were confident we could build a good training set for a model.

Since this was our first attempt at estimating company valuations, we wanted to limit our scope. We have more valuations for venture rounds than for any other dataset in the PitchBook Platform, so it made sense to start there. Finally, we restricted the dataset to deals from the past five years, which left a final set of 12,876 valuations. To avoid overfitting, this set was split, with 66% used to train the model and 34% held out to test it.
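
For the curious, here is a minimal sketch of that kind of split, assuming the rounds live in a pandas DataFrame with a `valuation` column (the file and column names are hypothetical, not our production pipeline):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical input: one row per VC round, `valuation` as the target.
deals = pd.read_csv("vc_rounds_last_5_years.csv")  # illustrative file name

X = deals.drop(columns=["valuation"])
y = deals["valuation"]

# 66% of rounds train the model; the remaining 34% are held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.34, random_state=42
)
```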

The major assumption used during this project was that private companies with similar characteristics would have similar valuations. To define this similarity, we started out by hypothesizing a few factors that we felt affected the valuation of a company. On our first go-round, we decided that data on the financing round, data on previous rounds, investor-level data, location of HQ, and industry data would have the highest predictive power.

Before developing a model, we needed a benchmark to measure our success against. We used three factors (series, location and industry): if more than 30 training cases shared the same values for all three, we used their median valuation as the prediction. Using this simple method, 16% of our estimates fell within 15% of the actual valuation. Obviously, there was room for improvement.
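
In code, the benchmark might look something like the sketch below, continuing from the split above and assuming `series`, `location` and `industry` columns (the column names are illustrative):

```python
import numpy as np

# Group training rounds by (series, location, industry) and predict the
# group's median valuation, but only for groups with more than 30 cases.
group_cols = ["series", "location", "industry"]  # assumed column names
groups = X_train.join(y_train).groupby(group_cols)["valuation"]
medians = groups.median()[groups.count() > 30]

def benchmark_predict(row):
    """Return the group median if the group is large enough, else NaN."""
    key = (row["series"], row["location"], row["industry"])
    return medians.get(key, np.nan)

preds = X_test.apply(benchmark_predict, axis=1)
within_15 = (np.abs(preds - y_test) / y_test) <= 0.15
print(f"{within_15.mean():.1%} of estimates within 15% of actual")
```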

Model Selection

One of the factors we figured would be key in predicting valuations was which investors participated in each round. With 9,593 investors participating across the 12,876 rounds, encoding them directly could have added 9,593 binary variables to the model. To reduce this dimensionality problem, we leaned on the heavily used Principal Component Analysis (PCA) to extract the top ‘N’ components that explain the most variation in the investor data. We varied ‘N’ throughout our experiments and found that roughly 10 components worked well.
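
As a sketch of the idea (the `investors` column holding each round’s investor list is an assumption on our part):

```python
import pandas as pd
from sklearn.decomposition import PCA

# One binary column per investor: explode each round's investor list,
# one-hot encode, then collapse back to one row per round.
investor_dummies = (
    pd.get_dummies(deals["investors"].explode())  # hypothetical list column
    .groupby(level=0)
    .max()
)

# Reduce the ~9,593 indicator columns to the top 10 principal components.
pca = PCA(n_components=10)
investor_components = pca.fit_transform(investor_dummies)
print(pca.explained_variance_ratio_)  # share of variance per component
```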

We tried lots of models, but the most accurate one we found was a random forest regressor, which builds a collection of decision trees from the training data and averages their outputs to produce a prediction. The random forest performed well because it can capture non-linear relationships and requires fewer assumptions about the data than an ordinary least-squares regressor; in exploring the data, we found nothing to suggest that the relationship between valuations and the predictor variables is linear. Random forests also tend to resist overfitting, since the influence of spurious relationships found in the data is averaged down across trees.
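
A minimal sketch of such a model with scikit-learn (the hyperparameters here are illustrative, not the ones we tuned):

```python
from sklearn.ensemble import RandomForestRegressor

# Fit an ensemble of decision trees whose averaged output is the prediction.
forest = RandomForestRegressor(n_estimators=500, random_state=42)
forest.fit(X_train, y_train)

predictions = forest.predict(X_test)
```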

The initial random forest model beat our benchmark, with about 30% of estimates falling within 15% of the actual valuation. However, in 6.5% of test cases the prediction was off by more than 100%. We knew we could do better.

Since this whole project rests on the assumption that similar companies are valued similarly, the next step was to incorporate PitchBook’s company similarity index, which uses the company descriptions in our platform to find similar companies. For each company, we took the top ‘N’ most similar companies, trained a model on their data, and then used the original company as a single test case. This produced no measurable change in overall accuracy, but it did reduce outlandish predictions: only 3.5% of test cases were off by more than 100%.
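
That per-company loop could look like the following sketch; `most_similar` stands in for a lookup against the similarity index and is purely hypothetical, as are the column names:

```python
from sklearn.ensemble import RandomForestRegressor

N = 200  # illustrative neighborhood size

def predict_one(company_id):
    """Train on a company's top-N most similar peers, predict just for it."""
    neighbors = most_similar(company_id, n=N)  # assumed similarity lookup
    train = deals[deals["company_id"].isin(neighbors)]
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(train.drop(columns=["valuation", "company_id"]),
              train["valuation"])
    row = deals.loc[deals["company_id"] == company_id]
    return model.predict(row.drop(columns=["valuation", "company_id"]))[0]
```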

We suspect that whatever information was gained by training only on companies flagged by the similarity index was offset by excluding other companies that might be more similar on the remaining variables.

In our last-ditch effort, we came up with some additional variables. While we had already included each round’s investors, we suspected there was additional predictive power in extra information about those investors: whether they were following on from the last round, how many were participating and how many were new. After including these features, we saw a marked increase in accuracy, with 40% of test cases within 15% of the actual valuation, and a further decrease in wild predictions, with only 2% of test cases off by more than 100%.
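
These features amount to simple set operations on each round’s investor list against the previous round’s. A self-contained sketch (the firm names are just example strings):

```python
def investor_features(current_investors, previous_investors):
    """Count participants, follow-ons from the prior round, and newcomers."""
    current, previous = set(current_investors), set(previous_investors)
    return {
        "num_investors": len(current),
        "num_follow_on": len(current & previous),
        "num_new": len(current - previous),
    }

# Example: two of three investors followed on from the prior round.
print(investor_features(["a16z", "Sequoia", "Accel"], ["a16z", "Sequoia"]))
# {'num_investors': 3, 'num_follow_on': 2, 'num_new': 1}
```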

The Results

The following data visualization shows the results of the top three methods we used to estimate company valuations. The X-axis is the accuracy threshold, and the Y-axis shows the percent of companies whose estimates fall within that threshold. For example, using the “Random Forest – New Features” model (blue line), 39.67% of test cases fell within 15% of the company’s actual valuation.
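
The curve itself is straightforward to compute from the held-out predictions, as in this sketch (reusing `predictions` and `y_test` from the earlier snippets):

```python
import numpy as np

# For each error threshold, the share of test cases whose relative error
# falls within it.
errors = np.abs(predictions - y_test) / y_test
thresholds = np.linspace(0.05, 1.0, 20)
for t in thresholds:
    print(f"within {t:.0%} of actual: {(errors <= t).mean():.2%} of test cases")
```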

The end goal of this project was to determine whether a model built using algorithms from the data science and machine learning community could accurately estimate company valuations. For most professionals, a 15% error threshold is likely the loosest that could be used to conduct business with any degree of confidence. But with only about 40% of test cases falling within that range, the other 60% of estimates miss by more than 15%.

Those are not the kind of results that we are willing to give the PitchBook stamp of approval.

Additional Insights

Even though estimated valuations turned out to be a data point that cannot meet our standard of precision, there is still plenty to learn from the process. For instance, with the model trained, we can ask which variables carried the most predictive power. Take a look at the chart below, which shows some of the most important features used in the final random forest:

Variable | Feature Importance
Investor Component 1 | 0.582
Round Size | 0.072
Investor Component 2 | 0.051
Valuation of Previous Round of Financing | 0.050
Investor Component 10 | 0.030
Total Amount of VC Funding Raised Prior to this Round | 0.027
Months Since Last Financing | 0.016

The feature importances sum to one, so knowing that Investor Component 1 accounts for 58% of the total is a nice bit of information. That component is the first principal component of the participating-investor data, the projection that explains the most variance. It confirms our inkling that the investors participating in a round have a large impact on valuations.
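
Pulling these numbers out of a trained scikit-learn forest is a one-liner, as in this sketch (reusing `forest` and `X_train` from the snippets above):

```python
import pandas as pd

# Per-feature importances sum to one, so each value is a share of the
# model's total split importance.
importances = pd.Series(forest.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(7))
```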

Future Work

Our final model achieved a jump in performance when we added more predictive variables. Often the features that affect a company’s valuation are subjective and difficult to quantify, but if you have ideas for more variables to include, please reach out to [email protected] and maybe we can include them in our research down the road.

All of this work was done on data from VC financing rounds in the last five years. However, we track financing rounds for Private Equity and M&A as well, so future work could apply the same approach to those datasets to estimate their valuations. Finally, we could expand the data to include rounds from more than five years ago, which may hold additional signal. We will keep you posted on any additional work we do in this space.
