
RRE Blog

Our Investment in Pebble Post

Jason Black

Today we are excited to announce our lead investment in PebblePost’s $15M Series B.

Over the last decade, advertising and marketing technology has gone through a massive transformation. Digital retargeting, social media channels, personalization, and - most recently - programmatic have quickly become vital to the success of the modern CMO. To date though, one marketing channel has been left behind: direct mail.

Understandably so. Bits and bytes are the low hanging fruit. Our own portfolio reflects some of the players in that first wave of revolution. Sailthru leverages this flexibility to build email software that individually personalizes emails based on hundreds of data points. Beeswax enables companies to programmatically buy ads across the web based on their own proprietary data sets at scale, and to tweak and tune that optimization in real time.

But as digital channels start to saturate, marketers are looking for new, differentiated, and highly engaging ways to reconnect with their existing customers, and intelligently reach out to new ones. Despite the disadvantages of printing and delivering a piece of snail mail, direct mail is a massive industry (~$50B/yr) for one simple reason — it works. It works despite the fact that the most sophisticated providers rely solely on demographic and geo-targeting data. The lack of modern tools and capabilities, however, has kept marketing professionals from wading into this opaque, slow-moving industry.

Enter PebblePost. To call PebblePost a direct mail company is like calling Amazon a bookstore. By marrying the best of traditional direct mail with the best of digital advertising, the company empowers marketers to transform real-time interest and intent data into high quality, personalized postcards and catalogs. This enables PebblePost to re-engage existing customers and, critically, target website prospects who have yet to convert. Because it begins online, the PebblePost platform can match a return visit and provide real-time analytics on response, conversion and ROAS, allowing digital marketers to optimize their campaigns just as they would on other programmatic platforms. The advantages can’t be overstated.

The results don’t disappoint. Taken together, the PebblePost platform has enabled clients to perform 10x better than traditional direct mail campaigns and a full 100x better than digital retargeting campaigns.

As venture investors, we invest in companies that use data and technology as a leverage point to create new business opportunities and build in step-function increases in performance and usability. PebblePost does both in a big way.

We couldn’t be more excited to be a partner along their journey in transforming what direct mail can do for brands around the globe.

Our Investment in Knock: Home Selling Finally Enters The Age Of The Internet

Raju Rishi

RRE Ventures is excited to announce our Series A investment in Knock, the modern way to sell your home. Knock — co-founded by former Trulia founding team members Sean Black and Jamie Glenn, along with co-founder and Chief Architect Karan Sakhuja — utilizes historical market and transaction data, machine learning, and intelligent pricing algorithms to predict home prices with unparalleled accuracy and to alleviate the pain of selling (and buying) a home. By guaranteeing individuals the ability to sell their home within 6 weeks, Knock reduces the hassle, uncertainty, and inefficiency of selling a home. And by offering to buy homes outright from homeowners, Knock solves a key pain point for the 47% of US homebuyers who must sell their existing home in order to afford a new one.

The proliferation of tech-enabled marketplaces represents disruption at its best. In aggregating a near limitless number of potential buyers and sellers, these marketplaces have the potential to increase price transparency and drive transactional efficiency. This not only creates value for businesses, but drives significant value to consumers, saving people money and time. And nowhere are these savings more important than in the residential real estate market, given that buying a home is often the most significant financial decision an individual will make.

Over the past several years, the promise of fully efficient marketplaces has been realized even in historically slow-moving industries like commercial real estate. Today, startups like 42Floors and The Square Foot (an RRE portfolio company) are bringing commercial listings online, while Hightower (another RRE portfolio company, which recently merged with VTS) is driving efficiency by arming brokers with intelligent tools for managing leasing workflows.

Yet despite this proliferation of new marketplace businesses, one space has been left behind. Until now, there has been no true tech-enabled marketplace for selling residential real estate. We believe that this market is finally ripe for disruption, and there exists no better team to execute against Knock’s vision.

By bringing listings online in a searchable way, companies like Zillow create transparency in the residential real estate market, enabling consumers to search directly for properties. However, the actual transaction process remains time consuming, inefficient, and expensive for sellers and buyers alike. While Zillow renders properties searchable, potential buyers and sellers are still forced to contact a broker, hire a series of professionals to assist with the closing process, and wait months to complete a transaction. And the stakes are high; if a sale falls through, a family may no longer be able to afford the new house they were planning to buy.

Knock solves this problem in an elegant and thoughtful way. The immediate resonance of Knock’s value proposition with consumers serves as further evidence that despite the prevalence of online real estate listings, actually buying and selling a home remains a legacy experience. Fundamentally, Knock is a transaction-oriented marketplace for real estate. The deep domain expertise of two of Knock’s co-founders, both founding team members at Trulia, made Knock an especially compelling investment for us. Moreover, the rest of the Knock team is uniquely positioned to solve this problem. The promise of Knock is that a homeowner can sell his or her house within six weeks, or that Knock will buy it directly. This is the promise of the truly efficient marketplace. We couldn’t be more excited to work with Sean, Jamie and the rest of the team at Knock.

(Thank you to my RRE colleague Cooper Zelnick for helping with this post)

Building Bridges, and the Future of Healthcare

Raju Rishi

RRE Ventures is excited to announce our Series B investment in Redox, the modern API for healthcare. Redox is building a highly scalable network which integrates healthcare applications and health systems. Today, Redox is the leading healthcare integration platform with the largest ecosystem of enterprise applications. And tomorrow, Redox will become a powerful force for the achievement of better data, better patient care, and better outcomes across our nation’s entire healthcare system.

"The thesis which led to our investment in Redox is simple," said Raju Rishi and Cooper Zelnick, who have joined Redox's board. "The proliferation of consumer healthcare devices, the explosion of healthcare applications, and the trend towards data-driven healthcare creates an enormous opportunity for patients, providers, and payers alike." However, the complexity of legacy electronic health records and other healthcare technologies has made it increasingly difficult for healthcare system CIOs to keep up with the demands of both doctors and patients for application integrations. As a result, healthcare feels like an industry left behind by the data revolution. We believe that the true promise of value-based care and dramatically improved outcomes will not be realized until new technologies can rapidly penetrate existing healthcare systems.

After meeting Luke, Niko, and James, Redox’s three co-founders, we knew we had found the team to solve this problem. And since launching in 2014, they’ve done just that. Their extensive backgrounds in healthcare interoperability (all three worked for Epic Systems in Madison, Wisconsin) make them uniquely positioned to understand and solve this key problem in healthcare. Our partnership with Redox represents not only an investment in a painful problem within a massive space, but also an investment in an unparalleled team of passionate domain experts.

Redox is the first and only company developing a networked approach to healthcare interoperability. After evaluating the landscape of interoperability solution providers, we came to believe that businesses enabling one-off, point-to-point healthcare connections fall short of achieving the ultimate goal of data liquidity in healthcare. Redox, with its scalable, reusable infrastructure and standardized data models, has created a unique, elegant, and exceedingly valuable solution to this problem.

Coupled with the Redox team’s deep technical expertise, the company’s location in Madison, Wisconsin gives Redox several key competitive advantages beyond unparalleled access to craft beer and cheese curds. Luke, Niko, James, and the rest of the Redox team are recognized as thought leaders within Madison’s health tech ecosystem, enabling the company to recruit talent, operate efficiently, and secure key partnerships in a large and growing market. We look forward to working closely with the Redox team as they help payers, providers, and patients work together to realize the promise of better, cheaper, smarter healthcare for everyone.

Becoming more human through mass automation

Steven Schlafman

Earlier this week, Amazon announced their latest innovation, Go. Think of Go as a futuristic grocery store. Using sensors, artificial intelligence and computer vision, Amazon is reinventing the shopping experience that we’ve all grown accustomed to for the last seventy years. That’s right. No more check out lines, registers or cashiers. If you want to buy an item, just grab it from the shelf, and then Amazon will automatically add the item to your virtual shopping cart. When you walk out of the store, Amazon will magically charge you for that item. Amazing, right? Yup. It’s also potentially scary when you think of the implications that this, and other forms of automation, could have on our society.

Many industries are facing unprecedented changes largely driven by increasing wages and advancements in robotics and artificial intelligence. This trend isn’t limited to retail, as in the Amazon example, but extends to transportation, food service, manufacturing, and administrative work, to name a few. The number of jobs on the line is potentially massive. There are 3.4M cashiers nationwide according to the Bureau of Labor Statistics (BLS). There are 3.5M professional truck drivers in the U.S. according to the American Trucking Association. There are 4.7M food service workers in the U.S. (BLS). These are just a few examples. I don’t even need to dig up all the numbers to conclude that tens of millions of American jobs are at risk due to rising labor costs and automation.

All that said, I’m not here to paint a doomsday picture like many before me have. Hundreds if not thousands of articles have been written about our robot overlords and how we’ll eventually become slaves to them. I’m also not here to look at what we stand to lose. Instead, I’m here to look at what we all stand to gain in a world of mass automation. I believe that if managed properly, this massive shift could unlock enormous long term opportunities for our society and increase our overall quality of life. While there’s no doubt some pain will be felt in the short to mid-term, humanity has faced several major technological upheavals over the last thousand years and we’ve walked away every time with higher productivity, more time to focus on new activities and a higher quality of life. The mass automation era will be no different.

But first, how do we get there? Implicit within the concept of mass automation is the reality of significant structural unemployment. People will lose jobs, and those people will need to find new ways to support themselves and their families. This means several things, not all of which are bad. First, there’s a huge opportunity that exists around education and retraining. Retraining programs — if executed effectively — will yield not only a growth in talent available for existing American industries, but also an enormous increase in human capacity to tackle new or unsolved problems. As mass automation sets people free from menial work, socially, economically, technologically, and globally meaningful issues will become practically relevant in ways they’ve never been before.

Of course, government and private retraining programs will hardly be enough to convert the millions of displaced laborers into newly productive workers in emerging industries, but they are a good start. Businesses, governments, and non-profits alike are already thinking about how to solve this issue. They’ll continue to do so. And I expect they’ll be successful. But for now, let’s move on. Assuming a large portion of the population no longer needs to — or is able to — work in “traditional” industries, what will they do?

That brings us to the most interesting ramification of mass automation. How will we fill our time? Maybe some portion of the population will sit on the couch, drink beer, and watch reruns of Seinfeld ad infinitum. But I have more faith in us than that. I believe that we will begin, evermore rapidly, to solve the problems which have long perplexed humanity. More minds will be put to work against the problems of climate change, for example. Hopefully we’ll be able to invent and implement new responses to large societal issues like poverty, crime, sickness, pollution, the list goes on. But the true promise of increased human capacity goes beyond any one problem. By freeing our time and resources and redirecting them towards our largest problems, we’ll be able to focus on helping one another. Building and rebuilding communities. Engaging with each other emotionally and spiritually. Being of service to our fellows. Ironically enough, I believe that mass automation will give us the capacity to be more human.

On top of all that, we get to reimagine the concept of work. What if we didn’t get up each day — 5 days a week — and sit in an office from 9 to 5? What if we engaged with the projects, the people, and the pursuits about which we’re most passionate? What if we did that always? And what if we were compensated not for our hours, but for our impact? What if everyone was guaranteed a universal basic income so that they could focus on these things? Making this shift will be difficult for many of us, but with strong, affordable retraining programs, millions of Americans will be granted opportunities that most of us can’t imagine today.

In the world of mass automation we will have more time than ever before. I don’t believe that this time will be wasted. I believe it will be invested. In self-expression. In art. In education. In service to one another. I believe that our newfound freedom will lead not to the destruction of our society, but to its elevation. So when I hear about a fully automated supermarket, I think not about our robot overlords, but about our potential as humans, and about achieving that potential.

(Thank you to my trusted RRE colleague Cooper Zelnick for editing this post)

Cutting Through the Machine Learning Hype

Jason Black

The tech ecosystem is well acquainted with buzzwords. From “Web 2.0” to “cloud computing” to “mobile first” to “on-demand,” it seems as though each passing year heralds the advent and popularization of new catchphrases to which fledgling companies attach themselves. But while the trends these phrases represent are real, and category-defining companies will inevitably give weight to newly coined buzzwords, so too will derivative startups seek to take advantage of concepts that remain ill-defined by experts and little-understood by everyone else.

In a June post, CB Insights encapsulated the frenzy (and absurdity) of the moment:

It’s clear that 9 of 10 investors have very little idea what AI is so if you’re a founder raising money, you should sprinkle some AI into your pitch deck. Use of ‘artificial intelligence,’ ‘AI,’ ‘chatbot,’ or ‘bot’ are winners right now and might get you a little valuation bump or get the process to move quicker.

If you want to drive home that you’re all about that AI, use terms like machine learning, neural networks, image recognition, deep learning, and NLP. Then sit back and watch the funding roll in.

Pitch decks and headlines today are lousy with references to “artificial intelligence” and “machine learning”. But what do those terms really mean? And how can you separate empty claims from real value creation when evaluating businesses and the technologies which underpin them? Having at least a passing knowledge of what you’re talking about is a good first step, so let’s start with the basics.

Definitions

Artificial Intelligence

The terms “artificial intelligence” and “machine learning” are frequently used interchangeably, but doing so introduces imprecision and ambiguity. Artificial intelligence, a term coined in 1956 at a summer research workshop at Dartmouth College, refers to a line of research that seeks to recreate the characteristics possessed by human intelligence.

At the time, “General AI” was thought to be within reach. People believed that specific advancements (like teaching a computer to master checkers or chess) would allow us to learn how machines learn, and ultimately program computers to learn like we do. If we could use machines to mimic the rudimentary way that babies learn about the world, the reasoning went, soon we would have a fully functioning “grown up” artificial intelligence that could master new tasks at a similar or faster rate.

In hindsight, this was a bit too optimistic.

While the end goal of AI was — and still is — the creation of a sentient machine consciousness, we haven’t yet achieved generalized artificial intelligence. Moreover, barring a major breakthrough in methodology, we don’t have a reasonable timeline for doing so. As a result, research (especially the types of research relevant to the VC and startup world) now focuses on a sub-field of AI known as machine learning, aimed at solving individual tasks which can increase productivity and benefit businesses today.

Machine Learning

In contrast with AI’s stated goal of recreating human intelligence, machine learning tools seek to create predictive models around specific tasks. Simply put, machine learning is all about utility. Nothing too flashy, just supercharged statistics.

While there are plenty of good definitions for machine learning floating around, my favorite is Tom M. Mitchell’s 1997 definition:

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

Rather formal, but this definition is buzzword-free and gets straight to the elegance and simplicity of machine learning. Simply put, a machine is said to learn if its performance at a set of tasks improves as it’s given more data.

Need an example? How about one from your Statistics 101 course: simple linear regression. The goal (or Task) is to draw a “line of best fit” given some initial set of observed data. Through an iterative process that seeks to minimize the average distance between the regression line and the observed data points (its Performance measure), linear regression improves its predictive “line of best fit” with each additional data point (Experience).

Red dots represent scatter plot of all data. The blue line minimizes average distance from the regression line (represented here by grey lines).

Boom. Machine learning.
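To see those three ingredients in code, here’s a minimal sketch (Python and numpy are my choices of convenience here, not anything the example above prescribes). The Task is fitting the line, the Performance measure is error on held-out data, and Experience is the number of training points observed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Task (T): fit a line y = w*x + b to noisy observations.
# Performance (P): mean absolute error on a held-out test set.
# Experience (E): the number of training points seen so far.

def true_signal(x):
    return 3.0 * x + 2.0

x_test = rng.uniform(0, 10, 200)
y_test = true_signal(x_test) + rng.normal(0, 1.0, 200)

for n in (5, 50, 500):  # growing Experience
    x_train = rng.uniform(0, 10, n)
    y_train = true_signal(x_train) + rng.normal(0, 1.0, n)
    w, b = np.polyfit(x_train, y_train, deg=1)  # least-squares fit
    mae = np.mean(np.abs((w * x_test + b) - y_test))
    print(f"n={n:3d}  learned y = {w:.2f}x + {b:.2f}  test MAE = {mae:.3f}")
```

As Experience grows from 5 to 500 points, the learned coefficients settle toward the true ones and the held-out error tends to fall: performance at the Task improving with Experience, exactly as Mitchell’s definition requires.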

Given that relatively low bar, nearly any tech company can claim to be “leveraging machine learning.” So where do we go from here? To further demystify the topic, it’s also useful to understand how machine learning algorithms are developed. With linear regression, the algorithm in question simply draws a line which gets as close to as many individual data points as possible. But how about a real world example?

While the math behind more sophisticated machine learning models quickly becomes incredibly complex, the underlying concepts are often very intuitive.

Developing a Machine Learning Model

Say you wanted to predict what new songs a particular Spotify user would enjoy. Follow your intuition.

You’d probably start with his or her existing library and expect that other users who have a large number of songs in common would be likely to enjoy the complement set of the songs in the other user’s library (a process called collaborative filtering). You might also analyze the acoustic elements in the user’s library to look for common traits such as an upbeat tempo or use of electric guitar (Spotify uses neural networks to do this, for example). Finally you might assign an appropriate weight to the tracks a user has listened to repeatedly, starred, or marked with a thumbs up/down.

Check out this visualization of the filters learned in the first convolutional layer of Spotify’s deep learning algorithm. The time axis is horizontal, the frequency axis is vertical (frequency increases from top to bottom).

All that’s left is to translate these intuitions into a mathematical representation that ingests the requisite data sources and outputs a ranked list of songs to present to the user. As the user listens, likes, and dislikes new music, these new data points (or Experience in our earlier terminology) can be fed back into the same models to update, and thus improve, that prediction list.
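As a toy illustration of the collaborative-filtering intuition above, a few lines of Python capture the core mechanic. Every user and song here is invented, and Spotify’s production models are far more sophisticated; this is a sketch of the concept, not their system:

```python
# Toy collaborative filtering: recommend songs from the libraries of
# similar users. All users and songs are invented for illustration.
libraries = {
    "alice": {"song_a", "song_b", "song_c", "song_d"},
    "bob":   {"song_a", "song_b", "song_c", "song_e"},
    "carol": {"song_x", "song_y"},
}

def jaccard(s1, s2):
    """Similarity = size of the overlap / size of the union."""
    return len(s1 & s2) / len(s1 | s2)

def recommend(user, k=5):
    mine = libraries[user]
    scores = {}
    for other, theirs in libraries.items():
        if other == user:
            continue
        sim = jaccard(mine, theirs)
        if sim == 0:
            continue
        # Candidate songs are the complement set (theirs minus mine),
        # weighted by how similar the two libraries are.
        for song in theirs - mine:
            scores[song] = scores.get(song, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['song_e'], recommended via bob's similar library
```

Alice and bob share three of four songs, so bob’s remaining track is a strong candidate for alice, while carol’s disjoint library contributes nothing.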

If you want to learn more about more complex machine learning algorithms, there are ample resources across the web that do a great job of explaining neural networks, deep learning, Bayesian networks, hidden Markov models, and many more modeling systems. But for our purposes, technical implementation is less relevant than understanding how startups create value by harnessing that technology. So let’s keep moving.

Where’s the value?

Now that we have covered what machine learning is, for what should savvy investors and skeptical readers be on the lookout? In my experience, the initial litmus test is to walk through the three fundamental building blocks of a machine learning model (task T, performance measure P, and experience E) and look for new or interesting approaches. It is these novelties which form the basis of differentiated products and successful startups.

Experience | Unique Data Sets

Without data, you can’t train a machine learning model. Full stop.

With a publicly available training set, you can train a machine learning model to do specified tasks, which is great, but then you are relying on tuning and tweaking the performance of your algorithm to outperform others. If everyone is building machine learning models with the same sets of training data, competitive advantages (at least at the outset) are all but non-existent.

By contrast, a unique and proprietary data set confers an unfair advantage. Only Facebook has access to its Social Graph. Only Uber has access to the pickup/dropoff points of every rider in its network. These are data sets that only one company can use to train their machine learning models. The value of that is obvious. It’s basic scarcity of a private resource. And it can create an enormous moat.

Take *Digital Genius as an example. The Company offers customer service automation tools and counts numerous Fortune 500 companies as clients. These relationships offer Digital Genius exclusive access to millions of historical customer service chat logs, which represent millions of appropriate responses to a wide swath of customer queries. Using this data, Digital Genius trains its Natural Language Processing (NLP) algorithms before beginning to interact with new, live customers.

In order to attain the same level of performance, a competitor would have to amass a similar number of chat logs from scratch. Practically speaking, this would require performing millions of live customer interactions, many of which would likely be frustrating and useless for the customers themselves. While the algorithm would eventually learn and improve, the model’s day one performance would be lackluster at best, and the company itself would be unlikely to gain traction in the market. Thus, having the proprietary data sets from their largest clients gives Digital Genius a real, differentiated value proposition in the chat automation space.

Of course, another way to go about gaining access to a unique data set is to capture one that has never existed. The coming wave of IoT and the proliferation of sensors promise to unlock troves of new data sets that have never before been analyzed. Companies which get proprietary access to new data sets, or those which create proprietary data sets themselves, can thus outperform the competition.

*OTTO Motors (a division of Clearpath Robotics) has captured one of the most robust data sets of indoor industrial environments on the planet from their network of autonomous materials transport robots (pictured below). Every time an OTTO robot makes its way around the factory floor, information about its environment — moving forklifts, walking workers, path obstructions — can be sent back to a centralized database. If the company then develops a more robust model to navigate around forklifts, for example, the OTTO Motors team can backtest and debug their improvements against real-world, historical environment data without needing to actually test their robots or even use physical environments.

An OTTO 1500 robot autonomously navigates around a warehouse.

This same data race is even more competitive on the road. The Google Self-Driving Car, Tesla Autopilot, and Uber self-driving teams all tout (or forecast) the number of autonomous miles driven because each additional mile captures valuable data about changing environments that engineers can then use to test against as they improve their autonomous navigation algorithms. But relative to the total number of miles driven per year (an estimated 3.15 trillion miles in 2015 in the US alone), only a de minimis number of those are being captured by the three projects mentioned above, leaving greenfield opportunity for startups like Cruise Automation, nuTonomy, and Zoox.

The final, and most experimental, approach to leveraging unique data sets is to programmatically generate data which is then used to train machine learning algorithms. This technique is best suited for creating data sets that are difficult or impossible to collect.

Here’s an example. In order to create a machine learning algorithm to predict the direction a person is looking in a real world environment, you first have to train on sample data that has gaze direction correctly labeled. Given the literal billions of images that we have of people looking, with their eyes open, in different directions in every conceivable environment, you’d think this would be a trivial task. The data set—it would seem—already exists.

The problem is that the data isn’t labeled, and manually labeling, let alone determining, a person’s exact gaze direction based on a photograph is far too hard for a human to do with any degree of accuracy or in a reasonable length of time. Despite possessing a vast repository of images, we can’t even create good enough approximations of gaze direction for a machine to train on. We don’t have a complete, labeled set of data.

Programmatically generated eyes used to train machine learning algorithms to determine gaze direction.

In order to tackle this problem, a set of researchers at the University of Cambridge programmatically generated renderings of an artificial eye and coupled each image with its corresponding gaze direction. By generating over 10,000 images in a variety of different lighting conditions, the researchers produced enough labeled data to train a machine learning algorithm (in this case, a neural network) to predict gaze direction in photos of people the machine had not previously encountered. By programmatically generating a labeled data set, the researchers sidestepped the problems inherent to our existing repository of real-world data.
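The Cambridge team rendered photorealistic 3D eyes; the sketch below is a radically simplified stand-in for that pattern, written in Python with numpy. The "renderer", the feature mapping, and the noise levels are all invented for illustration. The point is only that when you generate each example yourself, its label comes for free:

```python
import numpy as np

rng = np.random.default_rng(1)

def render_eye(gaze_angle):
    """Stand-in 'renderer': map a known gaze angle to noisy features
    (think pupil offsets measured in a rendered image). Illustrative only."""
    dx = np.cos(gaze_angle) + rng.normal(0, 0.05)
    dy = np.sin(gaze_angle) + rng.normal(0, 0.05)
    return np.array([dx, dy])

# Programmatically generate a perfectly labeled training set:
# the label (the angle) is known because we chose it ourselves.
angles = rng.uniform(-0.6, 0.6, 10_000)
features = np.array([render_eye(a) for a in angles])

# Fit a least-squares model: angle ~ features @ w.
w, *_ = np.linalg.lstsq(features, angles, rcond=None)

# The model now predicts gaze for an input it has never seen.
print(f"true angle: 0.300, predicted: {render_eye(0.3) @ w:.3f}")
```

The real work trained a neural network on rendered images rather than a two-feature linear model, but the economics are identical: labels that would be prohibitively expensive to collect by hand cost nothing when the data is generated programmatically.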

While the means of finding, collecting, or generating data on which to train machine learning models are varied, evaluating the sources of data a company has access to (especially those which competitors can’t access) is a great starting point when evaluating a startup or its technology. But there’s more to machine learning than just Experience.

Task | Differentiated Approaches

Just as access to a unique data set is inherently valuable, developing a new approach to a machine learning Task (T), or starting work on a new or neglected Task, provides an alternative path to creating value.

DeepMind, a company Google acquired for over $500M in 2014, developed a model generation approach that enabled them to pull ahead of the pack in a branch of machine learning known as deep learning (hence the name). While their acquisition went largely unnoticed by the mainstream press, it was difficult to miss the headlines as their machine learning algorithm dubbed “AlphaGo” squared off against the world champion of Go in early 2016.

The rules of the game of Go are relatively simple, yet the number of possible board positions in the game outnumber the atoms in the universe. Traditional machine learning techniques by themselves simply could not produce an effective strategy given the number of possible outcomes. However, DeepMind’s differentiated approach to these existing techniques enabled the team not only to best the current world champion of the game, Lee Sedol, but do so in such a way that spectators described the machine’s performance as “genius” and “beautiful.”

However, sophisticated performance on one Task does not translate well to other domains. Use the same code from the AlphaGo project to respond to customer service inquiries or navigate around a factory floor, and the performance would likely be abysmal. Practically, the approximately 1:1 ratio between Task and machine learning model means that for the short and medium term there are innumerable Tasks for which no machine learning model has yet been trained.

For this reason, identifying neglected Tasks can be quite lucrative, and easier than one might expect. One might assume, for example, that since a significant amount of time, effort, and money has been spent on improving photo analysis, video analysis has enjoyed the same performance gains. Not so. While some of the models from static image analysis have carried over, the complexity associated with moving images and audio has discouraged development, especially as plenty of low hanging fruit in the photo identification space still remains.

*Dextro’s Stream API annotating live Periscope videos in real time.

This created a great opportunity for *Dextro and Clarifai to quickly pull ahead in applying machine learning to understand the content of videos. These advancements in video analysis now enable video distributors to make videos searchable based not just on the metadata manually submitted by the users who upload them, but also on the content contained within the video itself: the transcript, the category, and even individual objects or concepts that appear throughout.

Performance | Step Function Improvement

The final major value driver for startups seeking to harness machine learning technology is meaningfully outperforming the competition at a known Task.

One great example is Prosper, which makes loans to individuals and SMBs. Their Task is the same as that of any other lender on the market — to accurately evaluate the risk of lending money to a particular individual or business. Given that Prosper and their peers in both the alternative and the traditional lending world live or die by their ability to predict creditworthiness, Performance (P) is absolutely critical to the success of their business. So how do relatively young alternative lenders outperform even the largest financial institutions out there?

Instead of taking in tens of data points about a particular borrower, Prosper draws on an order of magnitude more. In addition to using a larger and differentiated data set, the new wave of alternative lenders like Prosper have been rigorously scouring research papers and doing their own internal development in order to apply bleeding-edge machine learning algorithms to their data sets. Together, the Performance characteristics of the resulting machine learning models represent a unique and differentiated ability to issue profitable loans to a whole group of consumers and businesses who have historically been turned away by legacy institutions.
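Here is a hypothetical sketch of why feature breadth lifts Performance, using scikit-learn with a synthetic dataset standing in for real credit files (nothing about Prosper’s actual features or models is public in this post). The same model is trained twice, once on ten features per "borrower" and once on an order of magnitude more:

```python
# Illustration only: synthetic "borrowers" stand in for real credit files,
# and logistic regression stands in for whatever models a lender runs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 20,000 synthetic borrowers, 100 features, 20 of which carry real signal.
X, y = make_classification(n_samples=20_000, n_features=100,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Tens of data points" per borrower vs. an order of magnitude more.
for n_feats in (10, 100):
    model = LogisticRegression(max_iter=1000).fit(X_tr[:, :n_feats], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, :n_feats])[:, 1])
    print(f"{n_feats:3d} features per borrower -> test AUC {auc:.3f}")
```

On synthetic data like this, the feature-rich model ranks risk markedly better (a higher AUC), which is precisely the Performance edge the lenders above are chasing.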

Being able to judge the performance of a startup’s machine learning models against that of the competition is another great way to cull the most innovative companies and separate out the mere peddlers of hype and buzz.

Back to Business

To be clear, there’s much more to machine learning than hyped-up pitch decks and empty promises. The trick is separating the wheat from the chaff. Armed with clear definitions and a working knowledge of the simple concepts underlying the buzzwords and headlines, go forth and pick through presentations with confidence!

But remember this caveat.

Yes, machine learning — when harnessed appropriately — is both real and powerful. But the ultimate success or failure of any business hinges much more on the market opportunity, productization, and the team’s ability to sell than it does on specific implementations of machine learning algorithms. Just as compelling tech is a necessary but insufficient condition to create a successful tech company, great tech in the absence of a viable business is unlikely to become anything more than a science project.


Big thanks to Cooper Zelnick for being a sounding board and an editor on this one. Shoutout to Ryan Atallah and Sven Kreiss for proofing for technical errors as well.

* Denotes an RRE portfolio company.

Talent Playbook

Maria Palma

Our founding teams always tell us that Talent is one of their biggest challenges. While it's gotten easier and less costly to start companies in the last few decades, building the right team is still a major challenge, and yet it is a core differentiator for your company.

Given that Hiring and Talent can often seem unapproachable and complicated, we put this Talent Playbook together as a starting point to cover some of the fundamentals. We interviewed Heads of Talent in our network to highlight a few best practices. The playbook also includes an Excel template to get you started on hiring before you're ready to invest in more sophisticated systems.