Nathan Eagle did a complete bit flip on AI

The founder of Jana and Granted AI—and a pioneer of using cell phone data to better understand human behavior—shares what he’s seen from his front row seat in the field of artificial intelligence

October 27, 2023

As a PhD student at MIT in the early 2000s, Nathan Eagle pioneered the use of machine learning to predict human behavior from mobile phone data. That project later became a book, Reality Mining, and in 2008 the new field of study was declared one of the “10 technologies most likely to change the way we live” by the MIT Technology Review. Now, as the founder and CEO of Granted AI, Eagle brings the power of today’s large language models to the problem of funding science and scientists.

During his 12 years as the CEO of Jana, a mobile phone service that compensated users for their participation in marketing surveys and advertising, the company became the largest provider of free internet in emerging markets: serving 75 million people in over 100 countries. Named one of the “50 People who Will Change the World” by Wired (just one of many similarly themed lists that have included him), Eagle was awarded the Kiel Global Economy Prize in 2012 alongside Nobel laureates Daniel Kahneman and Martti Ahtisaari.

This interview has been edited for length and clarity.

What are you working on right now?

Right now I’m trying to figure out how these recent advances in large language models (LLMs) can help individuals and organizations—scientists—more efficiently apply for funding. Writing grant proposals takes a tremendous amount of time, and time spent fundraising is time you’re pulled away from the important work: running a lab, or doing cancer research.

My sister-in-law runs a STEM program teaching kids science and math in rural Arizona. The work is really rewarding but she spends a substantial fraction of her time not with the kids or developing curriculum but rather seeking funding to support the program for next year. There’s so much overhead, and it’s so bureaucratic. In many instances, people just don’t do [this kind of work] because it seems overwhelming, or increasingly incomprehensible (all the different types of forms you have to fill out for federal funding, for instance).

It’s not like people in these funding agencies aren’t cognizant of the difficulties of these funding processes—but trying to instigate change at the government level takes so long. If you really want to make an impact in the shorter term, you basically have to deal with the current infrastructure as it is. So the question I’m trying to answer is: How can we free up people’s time by using LLMs in a way that enables them to focus on the most important parts of their work?

What does work mean to you?

Ever since I was an undergrad, I’ve wanted to make an impact. I’ve wanted to make sure that, when I’m doing work, I’m doing something that matters—that improves the lives of people around me. That’s been true whether I’ve been doing work within academia, or starting companies, or investing.

Now, when you try to talk about the word impact, it’s striking [how its meaning can change]. When you’re playing the academic game, an impact factor is actually a metric of your publications, the importance of a particular journal—say Nature or Science—where you’re being published. That was the impact I was trying to maximize at the time.

In hindsight, that’s a pretty warped view of the word impact. Even if you get your papers into these highly visible journals, more often than not those papers are not really making an impact on the world or on people’s lives. I really only realized that after stepping back from academia and into entrepreneurship. Instead of writing papers, you’re actively trying to increase the number of users of your service. It was eye-opening: it made me realize that I had been thinking about impact in the wrong way.

I’m still trying to figure out the best way I can leverage my time to make the largest positive impact on the world. That’s why I founded Jana, and that’s why I founded Granted. Here are opportunities to make a very positive impact if we can execute efficiently and well.

You’ve moved between academic and business spaces throughout your career. What do you miss when you are operating in one field versus the other?

I’ve been an academic and I’ve been a CEO and founder, and what they have in common is that you are essentially working for yourself. You’re pulling crazy hours, not because you’re told to but because you want to. There’s a lot of agency for entrepreneurs and academics, and I really like that.

I’ve been an academic and I’ve been a CEO and founder, and what they have in common is that you are essentially working for yourself.

One of the things that’s great as an academic is that you can follow your curiosity independent of whether or not it’s something that can make money. You can just keep pulling that thread. What’s not so great is that, while you can obsessively scratch your curiosity itch, it’s much harder to make an impact on the world. On the entrepreneurship side of things, the fundraising is different. If you’re successful, you’re able to raise a lot of funding relatively easily compared to academics. You don’t have to be as constrained financially, you can make an impact on a broad space, but that impact has to fall within this well-defined set of criteria—it has to be either economically viable or you have to tell a story about why it’s potentially economically viable. That really constrains the state space.

What made you first decide to start a company?

I got into business before academia. I did my undergrad at Stanford in the heart of Silicon Valley between 1995 and 1999, right when that first dot-com bubble was starting to grow. I was part of the Mayfield Fellows Program, where 12 students are paired up with CEOs and venture capitalists and get a real hands-on introduction to what it means to be an entrepreneur. That’s what gave me the bug to go out and start my own companies: it seemed like a really exciting and empowering thing to do. After a handful of failed startups, I went to Nepal for a year as a Fulbright Scholar, where I applied to MIT from Kathmandu. I managed to get in. By then the dot-com bubble had burst, so a PhD seemed like a pretty good idea. 

Jana was originally called txteagle, and the idea came from my PhD work and postdoc work. At MIT, I was trying to build models of human behavior from cell phone data.  I started conversations with a lot of different mobile operators and ended up helping them better understand their own customer behavior. These relationships, coupled with the notion that you could start compensating individuals [for their data] with prepaid mobile credit, led to txteagle. Let’s get people to do tasks on their phone—translating words into a local language, telling us how much a two-liter bottle of Coca-Cola costs at your local market—and pay them for their work. What we found was that it was really easy to acquire users who are eager to earn money on their phones!

The challenge of that business is that we’d run out of work for people to do quite quickly. Before, if Procter & Gamble wanted to hear what rural Filipino women thought about laundry detergent, they’d fly someone out from Cincinnati to Manila and rent a car and drive out into the field. Being able to save that trip was a game changer, but once 2,000 people fill out your survey about laundry detergent, you don’t really need more people to fill it out. The incremental value of that 2,001st additional survey is pretty negligible.

We kept running into this problem, of not having enough work for people to do. That’s when the advertising folks at these different companies approached us. The market research people only needed 2,000 people to complete a survey, but the advertising people want everyone. So txteagle became Jana, and Jana became, to some degree, an advertising company. 

I do want to touch briefly on your work with cell phone data and human behavior. The original reality mining dataset was collected almost two decades ago, and your book Reality Mining was published almost a decade ago. That work continues to be incredibly prescient: Does it inform what you do now? How has your relationship to it changed over the years?

I’ve always been an obsessive nerd when it comes to mobile phone technology. I was the guy at Stanford with a Palm Pilot that had this tumor—a cellular modem—attached to it. I was so excited to show people how I could check my Stanford email via this Palm Pilot with a black-and-white screen and the big antenna. In 1997, it was weird. But that kind of technology, for whatever reason, has always been my thing. I can’t not gravitate towards it. At MIT, I joined the Wearable Computing group. At that point, wearable computing meant actual people strapping desktop computers to their backs and setting up head-mounted displays. It didn’t help that they called themselves the Borg. There was an expectation that I would assimilate and spend my career as a graduate student at MIT dressing up like a computer. But I was able to convince my advisor, Sandy Pentland, that instead of having to wear this computer to collect data, I should be able to program a mobile phone to collect similar types of behavioral data. He graciously agreed. Then I started becoming one of the first cell phone programmers, back when cell phone programming was a group of guys—mainly in Scandinavia—who were trying to build mobile apps on top of an operating system called Symbian. It was not user friendly at all: the process was both interesting and also kind of painful.

At that point, wearable computing meant actual people strapping desktop computers to their backs and setting up head-mounted displays.

What was quite eye opening was what could be possible with these devices. People didn’t have to grudgingly strap themselves into [a bulky wearable device]—they’d be volunteering [to participate in the study]: “Yeah, I definitely want a cell phone.” And so the type of data you could capture by piggybacking on top of this very desirable device, especially from a computational social science perspective, was really intriguing. Suddenly you could start capturing data that otherwise required people to fill out self-reported surveys and participate in a lot of other burdensome data collection. It was low-hanging fruit.

Ultimately my PhD tried to quantify routines and patterns in everyday social behavior. You can understand what an individual’s context is just from the data coming from their cellular phone. Creating what was called at that point in time a generative model, predicting where people would be at what time and with whom. You don’t have to have a really sophisticated machine-learning AI system to be able to start quantifying the routines of most people. 
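The kind of generative model Eagle describes can indeed be very simple. Here is a minimal sketch, assuming only a sequence of coarse location labels (the labels and the toy trace below are invented for illustration, not from the actual reality mining dataset): a first-order Markov chain that predicts a person's most likely next location from transition counts.

```python
from collections import Counter, defaultdict

def train_markov(locations):
    """Count first-order transitions between consecutive location labels."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(locations, locations[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Return the most frequently observed next location, or None if unseen."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# A toy trace of location labels, as might be inferred from cell tower data.
trace = ["home", "work", "work", "work", "home", "home", "work", "work", "home"]
model = train_markov(trace)
```

Even this crude model captures the point: most people's routines are regular enough that simple transition counts predict them well.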

Does your previous work on this kind of generative model affect how you think about AI today?

The complete opposite. I did get to work with Marvin Minsky and specifically one of his students on common sense reasoning. And quite a few of my colleagues at MIT were convinced that artificial general intelligence was right around the corner, that everything was going to be amazing and the world was on the brink of massive, AI-enabled change. But for me at the time, after seeing how the faculty had been predicting this change for decades without gaining substantial traction on the underlying problems, I became pretty jaded about the space. While I kind of did my PhD in AI, I left MIT feeling like AI wasn’t going to live up to its promise. Now I’ve completely done a bit flip. Eighteen months ago, I would have bet really heavily against an AI being able to help me write substantially better code, or do any of the things that we’re now taking for granted. People seem to think it’s now normal that AI can help explain superconductivity to your 11-year-old. Or write an empathic email to a colleague about an emotionally charged issue. Or summarize a long meeting. But as someone who has been an ardent skeptic of AI hype, I’m truly gobsmacked. These recent advances in AI are fucking incredible!

How is AI powering Granted? 

At Granted, we’ve built a platform that allows you to upload your previous grants and other background information, which gets vectorized into a database that can be used for retrieval-augmented generation, or RAG. We also have set best practices for what a winning grant should look like. With the user-input information and Granted’s own guidelines, we use AI to write a first draft. 
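The retrieval step Eagle describes can be sketched in a few lines. This is a toy illustration, not Granted's actual stack: the "embedding" here is just a bag-of-words vector standing in for a real embedding model, and the documents are invented. The shape of RAG is the same, though: embed the query, rank stored passages by similarity, and feed the top matches to the model as context.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k stored passages most similar to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Invented examples of previously uploaded grant material.
past_grants = [
    "Our STEM outreach program serves rural students in Arizona.",
    "We propose a clinical trial of a novel cancer immunotherapy.",
    "Prior funding supported hands-on science curriculum for middle schoolers.",
]
context = retrieve("funding for rural science education", past_grants)
prompt = "Draft a grant proposal. Relevant background:\n" + "\n".join(context)
```

A production system would swap the bag-of-words vectors for learned embeddings in a vector database, but the retrieve-then-generate structure is what makes the first draft grounded in the user's own material rather than generic.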

While having AI generate a first draft is magical, what’s equally magical is how LLMs can not just generate content, but also evaluate content. This is particularly important given that—generally—the first draft of something written by an LLM is pretty crappy. So instead of showing it to the user, we send the first draft to another set of AIs for evaluation—essentially an AI version of a National Science Foundation grant evaluation committee, made up of six independent instances of GPT-4, each programmed with its own backstory and biases based on my own experiences sitting on grant proposal evaluation panels. (“You are a 63-year-old white male tenured professor in the computer science department, and you think your research is underappreciated but groundbreaking, and you think your colleagues’ research is overrated and not particularly interesting. That’s how you see the world. Now review this particular grant and give us all the reasons not to fund it.”) These AI evaluators independently reflect on that crappy first draft and then come together in a quorum to identify reasons why it sucks. This feedback gets incorporated into a second draft and then sent back out to the panel of AI evaluators. After several of these recurrent loops of unsupervised reflection and evaluation, ultimately the system converges on a pretty solid grant proposal—all in a matter of minutes!
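The draft-critique-revise loop has a simple control structure. Below is a hedged sketch of that loop, not Granted's actual code: the personas are abbreviated examples, and `critique` and `revise` are trivial stubs standing in for the real model calls, so the convergence behavior is illustrative only.

```python
# Invented example personas; the real panel uses richer backstories.
PERSONAS = [
    "skeptical 63-year-old tenured CS professor",
    "statistician who distrusts vague evaluation plans",
]

def critique(draft, persona):
    """One panelist reviews the draft in character. Stub for a real LLM call."""
    if "specific aims" not in draft.lower():
        return f"[{persona}] No clear statement of specific aims."
    return ""  # no objection

def revise(draft, critiques):
    """Fold the panel's objections into a new draft. Stub for a real LLM call."""
    return draft + " Specific aims: measurably reduce time spent grant-writing."

def refine(draft, max_rounds=5):
    """Loop: collect independent critiques, revise, repeat until none remain."""
    for _ in range(max_rounds):
        critiques = [c for p in PERSONAS if (c := critique(draft, p))]
        if not critiques:
            break  # converged: every reviewer is satisfied
        draft = revise(draft, critiques)
    return draft

final = refine("We will use AI to help scientists write grants.")
```

The key design choice is that each reviewer critiques independently before the feedback is pooled, which is what lets adversarial personas surface different weaknesses rather than echoing one another.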

What’s equally magical is how LLMs can not just generate content, but also evaluate content.

How will AI’s rapidly changing capabilities affect what you do at Granted? 

Right now Granted is making a real and positive impact: it’s not a bad use of my time. However, because of how fast AI is changing, I’m not sure if I as an investor would want to invest. While we’ve built some clever technology, there’s nothing to make me believe that GPT-5 won’t have this already out of the box. The rate of change we’re seeing now, you can build a really clever engineering solution but it might be completely obsolete when the next model comes out.

As someone who has been a part of the artificial intelligence field since your days at MIT, how do you anticipate it will continue to change?

In the longer term, I can see AI being used to automate the scientific process itself. Not just robots with pipettes in the labs—but robots who are then analyzing that data and closing the loop: generating new sets of hypotheses, then running more experiments to test them. We are a ways away from this—there are only toy examples that exist now. But I also thought we were years away from computers helping write code. I’ve been really humbled. Who knows what’s going to happen?

What is the toughest challenge this field currently faces? 

No one can give you a real explanation of how current AI technology works. We can tell you how we put a man on the moon—every part of that process we understood. But when we ask GPT to write a sonnet, we don’t know how it actually generates it. This is one of the first times we’ve built something like this: instead of encountering technical debt or management debt—this feels like intellectual debt, or knowledge debt. Jonathan Zittrain at Harvard wrote a great article about this in 2019. There’s magic happening right now, and that magic is increasing at what seems like an exponential rate. But our understanding of how it works is not increasing nearly as fast.

