Context
I attended a VC Spotlight with the general partners of Juniper Ventures, which invests in AI safety startups. Notably for me, they funded Goodfire, a mechanistic interpretability start-up that I first heard about via the Cognitive Revolution podcast.
I attended the spotlight because I was curious to learn about their thinking and strategy. I currently have no plan to found an AI safety start-up, but who knows! Nevertheless, I find the start-up space interesting. Furthermore, there is a huge amount of talent eager to contribute to AI safety and not yet enough jobs for them, so it is useful to see what is happening in this space.
The spotlight was organised by AIS Founders, a new organisation created by Finn Metz and Esben Kran to encourage the creation of AI safety startups. The session was led by the general partners of Juniper, Nick Fitz (NF) and Griffin Bohm (GB). Esben Kran (EK) was also on the call and has a leading role in Juniper.
The following notes are from the spotlight, and are presented chronologically. Standard disclaimer that I may have made a mistake in my note-taking, so be appropriately critical.
About Juniper
Juniper is based in the Bay Area. They recently wrote a report called ‘AI Assurance Technology’, which maps the current landscape of startups in the AI safety and assurance space, as well as projections into the future.
Juniper narrowly funds startups that explicitly reduce AI risk. This includes cyber-security, evals, and mechanistic interpretability, amongst others. They will soon have a second fund, which will be bigger than their first.
Juniper offers startup founders their first check to get things going, access to their network, access to talent as first employees, and help and advice with many aspects of founding a startup.
Juniper’s thesis is that AI safety will be a big market, but it is not yet big.
Main advice to start-up founders
Be professional and do the standard things well: have your slide deck, your blurb, your bio, your team’s bios, some kind of demo, and your 'data room'. Demos are particularly powerful, even if they are basic.
A great sign that we should invest is if you already have one or two paying customers. Alternatively, a video recording of detailed feedback from a potential user is valuable (at least for GB).
Make sure your deck shows that you have a big market (hint: copy stats from their report) and that you have an LoI (letter of intent).
Look at NFX Signal. It is a good place to start when prospecting for potential funders.
You should have a medium- to long-term ‘relationship plan’. How do you plan to keep your funders informed? At what frequency? With what information? At which milestones? Most VCs will just say ‘keep us posted’ without any further instruction.
EK: In your deck, you need to be particularly ambitious with your vision for the Bay Area VCs.
What role has VC played in AI Safety already?
A lot of research has come not from academia but from privately funded work. E.g. all the research from Anthropic can in some sense be attributed to VC funding.
They do not think all research should be privately funded, due to bad incentives.
EK. An example of what VC funding enables: Goodfire has already been able to train large SAEs (sparse autoencoders).
How to balance safety vs capabilities?
We try to fund people who push for safety. We are less interested in nudging accelerationists.
Important advice for founders: be open and honest about who you are, your values, and your vision. (But don't be super weird.) Be genuine. People want to fund people who really care. Also, many startups fail because of founder issues and a lack of mission alignment.
YC has an article on startups they want to see. Do you have a version of this?
EK. One thought process is to find things that AI labs won’t do but users/governments would like to exist.
GB. Insurance, security, and risk management. There are examples in history of innovation percolating throughout society only once insurance and financing were in place (e.g. cars). Boring stuff, but useful. E.g. what is the equivalent of the fixed-rate mortgage in the AI space? It is not about finding a technological breakthrough, just applying best practice that already exists to the AI space.
Other VC funds doing something similar to Juniper?
Not a big field right now. Lionheart and Mythos are two examples. We have slightly different theses: we are more tightly tied to AIS, less broad. Meteor is getting into it, but at a later stage.
There is not much thought leadership on the funding side right now.
EK. Sequoia: not really in this space.
EK. Apart Research is the top result when you search for this! It took relatively small effort to reach this spot.
NF. The Juniper website is not great [it really is not]. We do not do much blogging/writing in public.
Founder J of kairos.fm asked for advice
J. [summarized what kairos.fm is] Offered to help Juniper create a high-quality podcast or two. It would give them a more professional place to point to, better than a LessWrong post, for example.
NF. What is your current bottleneck?
J. Need to think more about this question, actually. Time is the gut reaction; maybe that just means funding, or having additional people to help scale. Need to think more about how to secure it.
EK. Create a pitch deck. And be ambitious: explain how you will become a media empire! That is the for-profit perspective on this.
Founder JT, working on a startup for automating AI safety research, asked for advice
JT: What is a good org structure? How should the dual-use aspect of automating research be managed? What profit channels should be assessed? How can the startup get more focus?
NF. Sounds like you need a co-founder. It sounds like you have already thought through a lot of it. The role of the CEO is often just about making sure there is money and good people. Obviously you need a good product, but the fundamental input is good people, and for that you need money to pay them!
NF. When picking a co-founder, it helps to have a particular persona in mind. Be honest about your own skills and preferences and what complements you. Treat it like dating! I find potential collaborators by just stalking people on the internet and reaching out.
Someone shared foundrs.com.
NF. [I missed this, but it was something about how finding people is different before and after getting the first round; the difference of actually already having the money.]
JT: What is a normal equity split for co-founders?
(GB: NF is forward-thinking in this area!)
NF. Unlikely that 25% vs 30% is a big deal; it is more about fairness overall. Also prefers an 8-year over a 4-year vesting schedule.
GB. You have to keep the long game in mind. Presumably you want multiple funding rounds, in which case a 5-point difference now will become negligible by the time you want to exit. And if you don't reach that stage, it means you weren't successful, so who cares about a 5-point difference? Do not lose a great co-founder over these details.
NF. And you can always change things later if, somehow, the initial split was wrong.
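As a sanity check on GB's point (my own sketch, not from the session): if each funding round sells a fixed slice of the company to new investors, every existing holder is diluted by the same factor, so the absolute gap between two founders' stakes shrinks with each round. A minimal illustration with made-up numbers:

```python
# Hypothetical numbers, purely illustrative: founders start at 30% and 25%
# (a 5-point gap), and each round sells 15% of the company to new investors,
# diluting every existing holder by a factor of 0.85.
founder_a, founder_b = 0.30, 0.25
for round_no in range(1, 5):
    founder_a *= 0.85
    founder_b *= 0.85
    print(f"After round {round_no}: A={founder_a:.1%}, B={founder_b:.1%}, "
          f"gap={founder_a - founder_b:.1%}")
# After round 4: A=15.7%, B=13.1%, gap=2.6% (the 5-point gap has roughly halved)
```

The same arithmetic underlies NF's later remark about early rounds: selling 15% of the company still leaves the original founders firmly in control.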
JT: How much time should be spent building up the product, to help attract early employees?
NF. The best signal that you are serious is that you are talking to people about money.
EK. Regarding dual use, it would be worth asking whether the org should be non-profit or for-profit. How does this affect Juniper's thinking around the different incentives?
NF. Basically everything is dual use. Knowledge is dual use. Electricity is dual use. So you just have to go into the details. In the early stages, founders can maintain control of direction. These org questions become more important in later rounds. Structures can change over time, e.g. Sam Altman changing OpenAI from non-profit to for-profit. The main thing I want is to see new products being tried, and during this trial period you will be able to maintain control. I don’t want this dual-use question to stop innovation in the AI safety space.
EK. It is impressive how much can be done if people are aligned within themselves. [The CEO of Anthropic is a big believer in this, and believes it more and more each day.] Also, there is huge flexibility. Things change all the time. Your original product will likely fail and you will pivot. Do not think that being for-profit locks you into a plan.
NF. In early rounds you sell 15%, say, so the original founders maintain control. The investor does not get involved and just wants quarterly updates. Funders are not buying the particular vision, but the people and their ability to ask and answer questions.
What do governments demand in the AI Safety space?
NF. There is already demand for evals and misinformation products, e.g. from the Singapore AISI, the UK AISI, and Taiwan.
GB. Most of the time, governments care about the bureaucratic system; they want to preserve the structure of government. Totally different to defence contracts [or something along those lines].
NF. Keep in mind that the government has an obligation to public safety.
How to contact them
NF. nick AT juniperventures.xyz
GB. griff AT …
EK. esben AT …
Ricardo. ricardo AT …