Viewpoint / AI

Fund/Build/Scale: Understanding Privacy and Compliance (Transcript)

Here is the transcript (edited for space and clarity) of the conversation between Fund/Build/Scale podcast host Walter Thompson and Laura Bisesto, global head of policy and privacy at Nextdoor.

Laura Bisesto  00:00

I think the person who’s developing the product should have a conversation with their colleagues about the different issues and risks the product may raise. And that, you know, is in itself AI ethics. You don’t have to be an ethicist to start an AI ethics program. I think anyone who’s developing a product in this space should consider a way to measure its impact, or at least talk about it.

Walter Thompson  00:30

That’s Laura Bisesto, global head of policy and privacy at Nextdoor. I interviewed her in January 2024 at her company’s San Francisco headquarters to get her thoughts about the regulatory landscape facing AI startups. We talked about compliance, how small teams can build frameworks for managing data and privacy, how to recognize when you need to hire a lawyer, and the overall importance of planning for worst-case scenarios. She also had some advice for rolling out new AI-powered features and navigating a patchwork of state, federal and international laws. I’m Walter Thompson. Welcome to Fund/Build/Scale.

Laura, thanks very much for being here today. I appreciate it.

Laura Bisesto  01:32

Thanks so much, Walter, for having me. I’m really happy to be here.

Walter Thompson  01:35

So, what exactly is your title?

Laura Bisesto  01:39

I am the global head of policy and privacy and regulatory compliance at Nextdoor. I am a former prosecutor, and I came here to start our policy program, which includes working with governments around the world, developing our own internal content policies, and managing our privacy and regulatory legal work. All of that intersects with AI; there’s a big through line that covers all of those surfaces. So recently, I started our AI ethics committee as well.

Walter Thompson  02:04

I wanted to talk about a lot of very basic issues that early-stage AI founders are going to face, but let’s start with building an AI ethics framework. My first question: if we’re talking about a small team of people, largely technologists, whose job is it to get the ball rolling on creating this ethics framework? Who does that? Who owns it?

Laura Bisesto  02:27

I don’t think it necessarily needs to be a lawyer’s or policymaker’s job. I think the person who’s developing the product should have a conversation with their colleagues about the different issues and risks the product may raise. And that, in itself, is AI ethics. You don’t have to be an ethicist to start an AI ethics program. I think anyone who’s developing a product in this space should consider a way to measure its impact, or at least talk about it. You could call it a red team; some places just sit in a room and talk about some of the bad things that could go wrong and how to mitigate against them, and maybe write them down, or write down potential solutions as you think through them.
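
There’s no required format for writing those risks down. Here’s a minimal sketch of what a team’s written risk log might look like; the fields and example entries are illustrative, not anything Nextdoor or regulators prescribe:

```python
# A minimal, illustrative risk log for a pre-launch "red team" conversation.
# The fields and example entries are hypothetical, not a required format.

from dataclasses import dataclass

@dataclass
class Risk:
    scenario: str      # what could go wrong
    likelihood: str    # rough guess: "low" / "medium" / "high"
    impact: str        # who gets hurt, and how badly
    mitigation: str    # what you would do about it

risk_log = [
    Risk("Model exposes personal data in generated text",
         "medium", "privacy harm to users; regulatory exposure",
         "filter personal data from outputs; add red-team tests"),
    Risk("Generated content misleads users about who wrote it",
         "high", "erodes user trust",
         "label AI-generated content; collect user feedback"),
]

for risk in risk_log:
    print(f"[{risk.likelihood}] {risk.scenario} -> {risk.mitigation}")
```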

Walter Thompson  03:08

So I’ve worked in a number of startups myself, on things like community management and content, where it was my job to come up with a Terms of Use document. Is an ethics framework something you can templatize? Can I take somebody else’s AI framework and adapt it to make it my own? Or do I need to start from scratch to really reflect my own problems and challenges?

Laura Bisesto  03:27

You can definitely start with other people’s. At Nextdoor, we developed an AI framework based on the White House’s blueprint as well as the UK’s white paper. As we were looking to develop it, we took inspiration from other sources; there’s no reason to reinvent the wheel here. There are some different options in terms of how you want to protect users and consumers, how your product will work, and how to think about the different risks. There’s a whole suite of different principles that could work for your business, for sure.

Walter Thompson  04:03

Actually, going back to that: recognizing when you actually need a lawyer. I know a lot of startups get by early on with a kind of fractional counsel approach, but for a four-person team that’s just getting started at the seed stage, when should they start thinking about, “when do we need a lawyer?”

Laura Bisesto  04:19

Yeah, and they don’t have to hire a general counsel, per se. If you’re thinking about complying with regulations, or just getting ahead of policy or ethics, there are a lot of people who can do that work; they could be on your board, for example, and give advice from there. Traditionally, you might have waited until employee 100 to make that move, but nowadays, given the regulatory frameworks you’re seeing around the world in terms of AI, it’s no longer an environment where you can just launch and catch up with the rules and laws eventually. It really will pay off to get ahead of it in advance. So when there are 10 people, or when you reach a certain threshold of user data, it might be a good idea to start thinking about bringing someone on, at least on your board, or asking your investors or outside counsel for advice. And frankly, if you’ve got an incredible product, you may want to really be on top of the policy and the regulations, because there may even be a story to tell, and you want to make sure you don’t end up at a competitive disadvantage in the rulemaking we’re seeing around the world.

Walter Thompson  05:33

Are there situations where the thing you’re working on raises or reduces your need for legal assistance in-house? For example, if you’re working on something health- or fintech-related, or you’re collecting a lot of personal information or data on people?

Laura Bisesto  05:48

Yeah, absolutely. If you’re collecting personal data, there are tons of new privacy laws passing around the US; we’re talking in the tens to twenties of states, and at some point that pace will slow because there are only so many states, but over the next few years there’s a real patchwork of regulations. When it comes to consumer data, there’s the General Data Protection Regulation (GDPR) in Europe, the UK has its own version, and other countries do as well. If you’re creating a product for employment purposes, that’s a pretty sensitive area, as is fintech, as you mentioned. The same goes for biometrics: if you’re working with that type of user or personal data, or healthcare data for that matter, those things can be pretty risky.

Walter Thompson  06:34

So let’s say a small team is spinning up today. They’re not in the market yet, but they’re doing customer discovery and they’re talking to people. Say I’m trying to spin up a veterinary services startup, so I’m talking to vets and people who work in pet health care. How do I come up with a checklist? Am I talking to industry experts?

Laura Bisesto  06:59

Yeah, I think it’s a good idea to reach out to experts if you can. But there are also a number of public resources. As I said, the White House’s blueprint for AI is really helpful; the Blueprint for an AI Bill of Rights is what it’s called. Also, the UK government’s pro-innovation approach to AI regulation has a number of ideas that could help you come up with different approaches. Even on Nextdoor, and I’ll give a plug for Nextdoor here, we have publicly listed our principles for deployment of generative AI. Those are just different resources available online to give you ideas about what might matter to regulators, but also to your customers, and of course to investors, who want to know that you’re thinking about this and that you’re not going to have huge costs down the line.

Walter Thompson  07:43

I know you’re a very skilled attorney, but you didn’t create your generative AI principles out of whole cloth. You had a team of people, I presume?

Laura Bisesto  07:51

Yes, I had some folks. Most importantly, you have technologists and you have product people. You don’t want a room full of lawyers creating these policies; you really do want them to come from the people who are building the product. So you want to bring everyone together and talk through what you need, what you can commit to, how you can manage privacy and ensure transparency, and how all of that gets built into the product. We absolutely used the White House’s Blueprint for an AI Bill of Rights as inspiration, as well as the UK’s work, to help us come up with ours, and we made sure it’s something we can commit to as well.

Walter Thompson  08:27

Have you rolled out these principles to Nextdoor’s users?

Laura Bisesto  08:33

Yes, we absolutely did. At Nextdoor, one of our main values is making sure we have users’ trust. We’ve launched several generative AI features: you can have your content rewritten, if you’d like, by what we call an assistant, and you can also have your content edited. When we rolled that out, we also launched our principles at the same time, our public commitment to making sure users understand how their data is being used with generative AI, and that there’s transparency so they know they’re interacting with generative AI. Those are some strong examples. Also, before we launched the product, we went through a series of tests to make sure it wasn’t going to give a suboptimal experience to users, and we talked through the risks. At the size of our company, we also have an advisory board we can talk to about these issues to make sure the experience is good for users. So we’re really building that out.

Walter Thompson

A user advisory board?

Laura Bisesto

This one is an advisory board of social scientists and academics who help us think through some of the issues. But we also created a way for users to give feedback. For example, on Nextdoor, when neighbors are providing comments or feedback, there are different languages and tones in different communities, and we wanted to make sure it sounded like the neighbors who were speaking: not too formal, very conversational, that sort of thing. So we actually created a way to get feedback from users as well.

Walter Thompson  10:03

Who looks it over before it rolls out? And who decides this is safe to roll out?

Laura Bisesto  10:08

The product manager should really oversee a lot of that risk management. And I think the most important thing is that, before you launch a product, you get in a room and you talk about the bad things that could happen with the product. Just talk about it. It doesn’t have to be a perfectly organized, lawyer-led meeting; it can just be, “let’s talk about worst-case scenarios here.” Because nobody’s thinking about that when they’re building a product. They’re thinking about all the incredible things it will do, and I’m sure it will. But talk about the risks and the things that could go wrong; just doing that is a good step. Also be aware if you’re launching a product that takes a bunch of user data and does something with it that users might not expect, and be aware of the regulations and rules around using private user data and the risks you’re taking on. It depends on what you’re doing, but at the very least, for anything, you should get in the room and talk about where things could go wrong.

____________

Walter Thompson

How would you describe today the regulatory environment for AI startups operating in the US and in Europe?

Laura Bisesto  12:25

The regulatory environment is evolving, absolutely. I think AI will remain a top priority for governments around the world for the foreseeable future, at least the next five years, and I think there’s an urgency from policymakers to get it right. There’s a sense that they haven’t been able to regulate as well around competition, content moderation and content liability, and there have been concerns about that, so there’s a real desire to get it right. There’s real attention and priority. The problem is that you’re going to see a patchwork of regulations, which is really not the reality of how your business will operate; you don’t intend to have a different operating model in different US states, for example. So I think what you’re going to see is a ton of action, but it remains to be seen where the actual legislation will come into play. You’ve recently seen the European Union pass its AI Act in December; the language won’t even be out for another month or so, and it will take a risk-based approach to how it goes into effect, starting with the riskiest uses of AI as the law classifies them. Then you’re probably going to see a ton of US states introducing legislation, and it remains to be seen what’s going to pass in Congress: tons of hearings, tons of talk. Being aware of the politics at play is important, so you can make an educated guess as to whether things will move forward.

Walter Thompson  14:02

I’m guessing most of these companies are going to be based in California, at least to start. Similar to the way manufacturers sell cars all over the country but make sure they meet California’s emissions standards, is it going to be the same thing for AI startups, where you’re in Columbus, Ohio, but you have to be compliant with California and the EU?

Laura Bisesto  14:23

California definitely wants it to be that way. I’ve been to EU events around the AI Act here in the San Francisco Bay Area, and I’ve seen California legislators there seeking inspiration. There are a number of pieces of legislation that are pre-filed, so you can already see California’s leadership in action in that space. So I think you can definitely expect to have some laws to follow. At the very least, you’ve got California’s data privacy law; if you use personal data, that will come into play pretty quickly. A number of other states have followed suit, but California led the way there, and I think we can expect that with AI as well.

Walter Thompson  15:04

Let’s get into the Biden administration’s executive order. That was issued in October of 2023, is that correct?

Laura Bisesto  15:10

Yes, I believe it was the very end of October, maybe October 30, if I recall correctly.

Walter Thompson  15:18

Back of the napkin, what are the big points here, basically?

Laura Bisesto  15:22

Yeah. The Biden executive order really deals with eight core issue areas, and the goal is to be a world leader in AI governance. This was announced before the EU passed its AI Act and before the UK’s AI Safety Summit; it was leading the way. It covers eight issues, including testing and evaluation, competition, workforce impacts, equity and civil rights, consumer protection, privacy, strengthening AI expertise in government, and global leadership. And a lot of what it seeks is for the federal agencies to put forth rules and regulations around some of these issues.

Walter Thompson  16:08

But Washington doesn’t move quickly.

Laura Bisesto  16:12

No, it doesn’t. Some of these things, like the AI frameworks NIST has been tasked with, for example, would deal with risk management issues.

Walter Thompson

Sorry, NIST is?

Laura Bisesto

NIST is the National Institute of Standards and Technology, which works on cybersecurity standards and is part of the Department of Commerce. They’ve been instructed to develop regulations and resources within 270 days, including a new secure software development framework, and they have to create guidance and benchmarks for auditing and evaluation. So they’ve been told they have 270 days. But there are other issues, like privacy. The White House urges Congress to pass bipartisan privacy legislation, given how AI can link together different forms of personal data, and I think it’s less realistic that a bipartisan privacy agreement will pass in Congress at this point. So there are different parts to the executive order, but none of it has an immediate impact, I’d say.

Walter Thompson  17:27

So if I asked you how this will be monitored and enforced, there’s no real answer at this point.

Laura Bisesto  17:33

No, it’s totally spread out among different federal agencies. They did invoke the Defense Production Act to compel certain AI companies to share the results of red-team exercises with the government ahead of launch, depending on what you’re developing and how sensitive it is. The Commerce Department is supposed to help stand up standards for red teaming and inform what has to be shared. So there’s a lot: they want to get into gene synthesis screening standards and cyber issues, and AI in critical infrastructure is a huge concern. There’s just a variety of issues, including labor and how AI will impact the workforce, but it’s not clear-cut by any means at this point. It’s really a general statement of leadership.

Walter Thompson  18:18

So this new regulatory environment is still murky; that’s the best word I can use at this point. What do early-stage AI founders need to do today to set themselves up for future compliance?

Laura Bisesto  18:32

Right. I think the best thing to do is be mindful, first of all, of the consumer data privacy laws out there; that’s number one. Those laws deal with a lot of the issues people are concerned about with AI when it comes to user data. If your product uses consumer data, that’s definitely something concrete you can look to. Beyond that, you really need to follow along as best you can. See if there’s someone you can get on your board who follows politics; it doesn’t necessarily need to be your general counsel, as I said before, or outside counsel, but someone who has worked in emerging technology or worked in DC, in the political space, and can really read the tea leaves and understand where laws might or might not pass. But I’d say, if you’re working with consumer data, figuring out the privacy laws is a great first step, and build that into your product as you design it rather than waiting until later. Also, understand the risk: as I said, do an impact assessment and understand where the risk areas are, so you at least have a concrete understanding of the issues. When the regulations come up, you’ll say, “Hey, that one probably applies to us because we’ve got that risk,” and you’ll understand what’s going on.
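
To illustrate what “building privacy into your product as you design it” can mean in practice, one common pattern is purpose limitation: tag user data with the purposes it was collected for, and check that tag before any downstream use. Here’s a minimal sketch under that assumption; the names and categories are hypothetical, not Nextdoor’s implementation or a legal standard:

```python
# A minimal, illustrative "privacy by design" pattern: data carries the
# purposes it was collected for, and any use is checked against them.
# Field names and purpose strings here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    value: str
    allowed_purposes: set[str] = field(default_factory=set)

class PurposeError(Exception):
    """Raised when data is used for a purpose the user never agreed to."""

def use_data(record: UserRecord, purpose: str) -> str:
    # Purpose limitation: refuse any use not covered at collection time.
    if purpose not in record.allowed_purposes:
        raise PurposeError(f"{purpose!r} not permitted for user {record.user_id}")
    return record.value

email = UserRecord("u123", "pat@example.com", allowed_purposes={"service"})
use_data(email, "service")           # OK: matches what the user agreed to
# use_data(email, "model_training")  # would raise PurposeError
```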

Walter Thompson  19:49

For example, the government’s plans for model evaluation tools and testbeds. What even is that?

Laura Bisesto  19:58

It remains to be seen; it’ll probably start as voluntary. You’ve seen companies voluntarily signing on to some of the commitments the White House has asked them to make around AI, but it’ll probably work in partnership with some of these federal agencies. The US wants to be a leader here, so they’re going to want partners to help facilitate that leadership and make sure the technology stays onshore, or advances onshore, for sure. So there’s definitely an opportunity to work with the government, which is more foreign in the US; it’s not as typical as in other places where you might have a closer relationship with regulators, like the EU or UK. But there’s definitely opportunity, especially if you have a unique product that implicates national security or could be useful. That’s another reason to pay attention to the regulations and what’s happening: you may have something interesting to say that could earn you a competitive advantage in regulation, which is equally exciting and good for your business, and I think it would definitely appeal to investors.

Walter Thompson  21:05

Based on what I’ve been reading, the Commerce Department is basically expected to weigh in on things like foundation models, and I’ve watched some of these congressional hearings where senators and congresspersons ask questions that betray their lack of understanding of this technology. That’s different from the Commerce Department, but from your perspective, is there a deep enough bench of talent in the federal government to actually regulate this meaningfully and effectively?

Laura Bisesto  21:31

I think there needs to be a solid partnership with the private sector here in the US. The Department of Commerce has had great success advocating for US interests abroad, even in the tech space, and I know they’re now tasked with AI labeling and ultimately issuing guidance on identifying AI-generated synthetic content, including authenticating official US government digital content. There are some real national security issues there, and I have no doubt they’ll find the best path forward. There’s definitely opportunity for private partnerships, particularly if you have a product you think would be helpful in this space. And if you want to see a certain type of regulation because that’s the way you do business, you’d also potentially benefit from sharing that as well. So there are opportunities, and you shouldn’t shy away from sharing them if you think you have a solution for what the government is trying to achieve. I think people are scared of working with the government, but there’s definitely a lot of opportunity there. Especially if you have a way of doing things and you want to preserve that way, it’s not a bad idea to share how you do it.

Walter Thompson  22:55

So something just occurred to me: a lot of teams are distributed. What does that mean as far as compliance and liability? You’re sharing customer data with someone who’s in Bratislava, and they may not be under the same regulations you’re under. How do you keep everything tight and zipped up?

Laura Bisesto  23:13

Depending on the type of data you’re sharing across countries, different rules apply. Of course, if you have an employment agreement with someone, you want to make sure it’s sufficient and that you’re willing and able to enforce it. But if you’re sharing consumer data across countries, you want to make sure you’re following the different rules. For example, the European Union has very strict rules for transferring its users’ data to the US: you have to have a certification from the Department of Commerce, a new one they’ve just come up with, or, as before, you can do it contractually. There are different rules for sharing user data across countries, and it’s useful to be aware of them, depending on what your objectives are. Even if you’re just sharing the data with employees across continents, it’s really important to be aware of those issues, too.
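
To make that concrete, here’s a minimal sketch of a gate a team might put in front of cross-border sharing of EU user data. The safeguard names loosely follow Laura’s description (a Department of Commerce certification, or a contractual mechanism); the function and flags are hypothetical, and none of this is legal advice:

```python
# A minimal, illustrative gate for cross-border sharing of EU user data.
# Safeguard names are assumptions based on the conversation above, not a
# complete or authoritative list; consult counsel for real transfers.

RECOGNIZED_EU_TRANSFER_SAFEGUARDS = {
    "commerce_dept_certification",   # the new certification she mentions
    "standard_contractual_clauses",  # the older contractual route
}

def may_transfer(data_region: str, destination: str, safeguards: set[str]) -> bool:
    """Allow EU personal data to leave the EU only with a recognized safeguard."""
    if data_region != "EU" or destination == "EU":
        return True  # sketch: other regions' rules are out of scope here
    return bool(safeguards & RECOGNIZED_EU_TRANSFER_SAFEGUARDS)

# Usage: block a transfer to a US contractor unless a safeguard is in place.
assert may_transfer("EU", "US", {"commerce_dept_certification"})
assert not may_transfer("EU", "US", set())
```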

Walter Thompson  24:09

I’ve kept the conversation largely positive and somewhat aspirational, so let’s talk about where things go wrong in this last bit. I know you’re here at Nextdoor, so things are pretty steady. No surprises, I’m sure. But in the world of smaller-scale startups, what are some of the most common mistakes you see AI founders making with regard to privacy, compliance, and trust and safety?

Laura Bisesto  24:33

Yeah, I think it’s the “it can’t happen to me” attitude, or just refusing to look at the risks and the bad things that could happen. Cyber is an obvious one that I think everyone can conceptualize: not securing users’ personal data. Or just thinking, “hey, the user told us we could use the data for this reason, but we’ll use it for another reason,” and getting hooked on that kind of lack of transparency. Trickery is a strong word, but it may be perceived as trickery. So be pretty careful and cautious when it comes to personal user data, for sure. And really think about the risks and how things could go wrong. We have an election coming up here in the US, and there are elections all over the world this year; I think it’s one of the biggest election years ever. So there’s a lot at play and a lot at stake, depending on what your product can do, and it’s good to think about that. Just think of the bad things along with the good things, even if that’s not the way you often think; it’s important that you do so here. These mistakes can be expensive, and you can get hooked on them too, because cutting corners is tempting, but in the long run it’ll be really costly, and it’ll be such a pain to go back and make huge changes.

Walter Thompson  25:53

It’s always good to be virtuous. But is it fair to say that mitigating your risk early makes you a better bet for investors?

Laura Bisesto  25:59

Absolutely. You can say that you’ve thought about these things. You could even say you took a look at these AI regulations, you believe the tea leaves are going in a certain direction, you think it’s really just going to be a privacy thing, maybe, and you’re going to focus on that. That all shows you’re looking ahead. You can even say, “we won’t have to spend money in the future to go back and re-engineer.” Customers like this as well; a lot of customers, especially big B2B customers, want to make sure you’re following the different rules, and a lot of contractual terms ask for that too. So it’s important to be able to say that.

Walter Thompson  26:38

Last question: this might be crazy, but if you have a 10-slide pitch deck, should one of the slides be about privacy and compliance?

Laura Bisesto  26:45

Absolutely. I think that would be a brilliant idea. You can talk about how trust is something you’ve built in since the beginning: being trustworthy, thinking about risk, and, with everything going on with regulations, staying on top of them and what’s happening. I don’t think that would be unreasonable at all. And you can say that customers want that, especially your big target enterprise customers, and it’ll pay off in the long run.

Walter Thompson  27:18

Awesome, Laura. Thanks very much for the time, it’s been a great conversation. I appreciate it.

Laura Bisesto  27:22

Thanks so much, Walter. I appreciate it.

Walter Thompson  27:27

Thanks again to Laura Bisesto of Nextdoor for today’s conversation. On the next episode: coming up with an idea for an AI-powered startup is actually pretty easy, but how do you translate your personal vision into something tangible enough to attract a co-founder, investors and, eventually, paying customers? To find out, I interviewed May Habib, co-founder and CEO of Writer, and Gaurav Mitra, co-founder and CEO of Captions. We talked about building a founding team that aligns with your values, how they validated their ideas before bringing them to market, and how to successfully pivot with the support of your investors and your team if or when you need to. May and Gaurav also spoke frankly about the challenges they each had to overcome after stepping into leadership roles. I really think this episode is going to be useful for anyone who’s trying to hone their storytelling skills, connect with investors, or even become a better marketer. If you’ve listened this far, I hope you got something out of the conversation. Subscribe to Fund/Build/Scale so you’ll automatically get future episodes, and consider leaving a review. For now, you can find the FBS newsletter on Substack, and check out the new YouTube channel with audio of every episode. The show theme was written and performed by Michael Tritter and Carlos Chairez. Michael also edited the podcast and provided additional music. Thanks very much for listening.
