
The State of User Verification for Peer-to-peer Marketplaces

For online marketplaces, efficient user verification signals are key to maximizing the effectiveness of risk management efforts.

Risk is a cost of doing business. Whether it’s the risk of diminishing returns, lost investments, fraud, or dissolution, it’s the job of risk management professionals to keep stakeholders informed about the most pertinent risks facing the business and what steps can be taken to mitigate them.


For online marketplaces and social networking platforms, the risks are manifold, but so are the rewards. These spaces facilitate connections between peers, buyers and sellers, people looking for love or rentals, and working professionals. The tradeoff is that forming connections between real people online can have far-reaching risk implications. Risk management, Trust & Safety, and fraud prevention teams all share responsibility for keeping these platforms safe and secure for their user bases.

Key Takeaways

  • User verification is a valuable tool to mitigate many downstream risks on online platforms by weeding out bad actors and responding to attacks.
  • A shared internal understanding of the organization's highest-priority risks can make risk management more efficient.
  • Effective signals for user verification provide more information about real people and real ties to the platform. Combining signals can make platform defenses more robust, but teams have to choose signals carefully to avoid adding user frustration and company expense.
  • The future of user verification lies in refining Trust & Safety operational activities and slowing down attacks by fraudsters who may use AI technology to speed up their operations.

User verification is one way teams can mitigate risk on their platforms. Like any other part of the onboarding process, user verification has changed over time. In episode ten of the Trust & Safety Mavericks podcast, and as part of a webinar with Marketplace Risk, Jennifer Kelly of Etsy and Justyn Harriman of Nextdoor talk with Joe Midtlyng of Incognia about the current state of user verification, its evolution over time, and its future.

How user verification supports risk management

Verifying users at onboarding might add friction, but it can also be helpful in weeding out bad actors, depending on the verification signals used. As Justyn Harriman commented in the webinar, “User verification is a key tool that helps you prevent really bad activity. It doesn't prevent everything. Real people produce misinformation all the time, they produce fraud all the time. It is still a problem, but it gives you a tool to help respond and adjust to the various attacks that you're seeing.”

All webinar participants agreed that one of the best ways to maximize user verification’s effect on risk is to have a shared understanding of which risks should take the highest priority. Jennifer Kelly explains, “Starting early with a risk taxonomy…[and] a common understanding of the definitions of the risks that you're naming will go a long way in helping you prioritize them and stay organized as you execute and try to mitigate them.”

Justyn Harriman agreed, saying, “...Early on it's about establishing the goal: of what you're seeing right now, what is the highest level risk that you're seeing? How do you respond to that? How do you build structures so that you can advance, so you can see going forward, how can I see the path [for] how I'm going to improve?”

Jennifer Kelly also provided insight about the global importance of risk management and Trust & Safety initiatives for enabling company growth. While every business has to weigh its Trust & Safety initiatives against other campaigns to actively grow its customer base, “you can't grow those things if you’re completely ignoring Trust and Safety. If you're not instilling trust in your platform, you won't be able to achieve those other broader business goals.”

Effective verification signals

When it comes to effective, economical user verification, not all signals are created equal. For example, credentials like full names and dates of birth are easily purchased from the dark web. Using a combination of signals makes the system more robust, but it can also add frustration for the user and expense for the company.

In terms of making the verification process more efficient, Justyn Harriman returned to the baseline of defining risks and goals. “In general, we think about what we decided our goals were. It's real people, real ties to the neighborhood, and signals that provide more information about that are very important.”

Jennifer Kelly agreed, noting that platforms with a payments component, like Etsy, are required to have more rigorous KYC processes in place than other types of platforms.

Content moderation is also an evergreen concern for any platform that supports user-generated content, marketplaces included. Part of a verification signal's efficiency is how effectively it reduces negative downstream outcomes, including abusive or illegal content. Kelly and Harriman both agreed that handling content moderation for an entire platform can feel like playing Whack-a-Mole, and that anything that can boost the efficiency of the process, including user verification, is always welcome.

The future of user verification 

In terms of user verification’s continuing evolution, Jennifer Kelly believes it has the power to improve downstream outcomes far beyond its chief contemporary use as an anti-fraud measure. “I'm still in a space of defining the full universe of goals for our seller verification program…I am interested to see how we might lean into seller verification as a way of refining our content moderation and some of our other downstream Trust and Safety operational activities.”

The importance of verification is also gaining policymaking attention. In the EU, the Digital Services Act aims to require verification for all traders and incorporated business sellers, regardless of whether the platform has a payments component like Etsy’s. Similarly, the United States’ Inform Act seeks to make it a regulatory requirement that platforms verify their sellers’ identities. As Jennifer Kelly summarizes, “[It’s] interesting to see the value of [user verification] really start to become more common knowledge.”

Readers interested in learning more about user verification and its use in the industry can listen to the Trust and Safety Mavericks podcast on Apple or Spotify or read the full transcript below.


This is an audio transcript of the webinar ‘Predictions for Digital Identity in 2023’.

You can watch the full webinar here, or listen to the podcast on Apple and Spotify.

 

Joe Midtlyng 

Before we get into it, it would be great to do a quick round of introductions. I’m Joe Midtlyng, director of solution engineering at Incognia. We're a location behavioral identity solution and a key member of Marketplace Risk. Looking forward to this conversation, and I'm joined by Justyn and Jennifer, a few veterans in the Trust and Safety space at some well-respected brands with Nextdoor and Etsy. Justyn, maybe you could introduce yourself first—role, how you describe your company or platform to the audience, and then your overall mission at Nextdoor. And then Jennifer, you next.

Justyn Harriman

Sure. Thanks, Joe. So, I'm Justyn, I'm an engineering lead and founder of the Trust and Safety team at Nextdoor. How do I describe Nextdoor? Nextdoor is a neighborhood network where you can connect to neighborhoods that matter to you. When people use Nextdoor, they can use it to receive information, give and get help, get things done like finding service providers, local businesses they want to patronize, all with the goal of building real world connections for those that are near you, including other neighbors, local businesses, and public services.

As for the mission of the Trust and Safety team at Nextdoor, our mission really is to establish that trusted environment for neighbors to be able to interact with each other, with people that they know are real people and have a real tie to their place. That's it for me. Hand it over to Jennifer.

Jennifer Kelly

Thanks. I'm Jennifer Kelly. I lead seller verification and financial crime operations at Etsy. I've been with Etsy for about four years, and the best way to describe etsy.com is a marketplace for unique and handmade goods. We are a marketplace primarily of small businesses or businesses of one selling from outside their homes or from within their homes. And the Etsy Inc. umbrella includes what we call a house of brands.

There are several other marketplaces within that umbrella, including Reverb, which is a marketplace focused on musical instrument and equipment resale; Depop, which is very cool amongst the Gen Z set for resale and upcycling, primarily of clothing; and Elo7, which is a domestic Brazilian marketplace that's very similar to Etsy. So happy to be here today.

Joe Midtlyng 

Thanks, Jennifer. Perfect. Let's go ahead and dive in, and we just wanted to go ahead and set the agenda here off the top for everyone in the audience so you can have an expectation of what we're going to cover in the session. There's a lot of interesting topics that Marketplace Risk tackles with our members across the board here in terms of Trust and Safety topics.

What we're going to focus on today, the primary topic, is user verification: more specifically, how these respective organizations, Nextdoor and Etsy, are tackling the challenge of user verification and how it fits into their overall Trust and Safety strategy. Very much user verification focused, but we're also going to widen the lens a little bit on Trust and Safety overall.

You can see here we'll first focus on goals with user verification. We'll then kind of go to day zero as we've talked about it in preparing for this webinar and hear from Justyn and Jennifer on early days in terms of establishing Trust and Safety strategies and controls.

Then we'll go deep, dive into user verification and a lot of interesting topics there and then wrap up with a look ahead and recommendations. That is the overall goal.

On the first topic, just to kick it off and establish a good understanding for the audience, it would be great to get a high level sense of the role of user verification at Etsy and Nextdoor, and more specifically, what were your company's goals [to start]? Or what are your company's goals with user verification? And then we can take it from there.

Jennifer Kelly

Seller verification at Etsy historically was driven by the existence of Etsy payments. Etsy has a payments platform, and so, as a payments provider, you have certain regulatory obligations to verify or know your sellers—KYC.  KYC is sort of the common vernacular for describing those processes. They're the same processes that you would see at a PayPal or even a bank, but that has evolved into something that's much greater than just satisfying a regulatory obligation.

Verifying our sellers—our customers at Etsy—in this context is central to supporting a trusted marketplace. It is one of the earliest interventions that you can make on a user-generated content platform—in my case, a marketplace—to understand who will be posting content and what risks that content might pose, whether that be fraud or other bad downstream outcomes.

What started as largely a regulatory-driven exercise has really evolved into something much more strategically valuable.

Justyn Harriman

I can take over there. I think it's interesting because in the prep for this, we kind of talked about how these two kinds of things compare. From the Nextdoor side, a lot of what we focus on and have focused on has not been compliance driven. It's been about creating that trusted environment for neighbors to interact.  As a platform, we bring together people who are neighbors who don't necessarily know each other, and we want them to be able to build connections with each other. In order to do that, they have to be trusted, they have to have that trust with them.

A lot of our goals are around establishing that trust, creating a baseline where neighbors can know that when they interact with someone, that person is a real person and has a real tie to the place they're talking about, they're discussing, they're interacting with. It's interesting to compare because we kind of start there thinking about how we even use that to establish the neighborhood and then think about how it moves towards some of these other compliance-driven goals that we eventually will have and meet in the middle there between our two companies.

Joe Midtlyng

That makes sense, and I think evolutions of platforms influence those changes as well, and you have to adapt. I know I'm a power Nextdoor user, and I've used it for selling: when my kids outgrew their cribs, I sold them on Nextdoor, that sort of thing. And so there's something interesting in how, as platforms evolve, what's handled on the platform or maybe off the platform influences those processes.

The next question is how you handle user verification. I would guess it is very much dynamic just based on the current state of the business and where it's going or the current state of the platform.

This is kind of a two-parter. How have these goals shaped your user verification process as it stands today? And then maybe you could just more kind of tactically describe what your user verification process looks like today at a high level, so we have that baseline on two platforms that are very different in many ways.

Justyn, maybe you can kick it off first, and then we'll go back to Jennifer.

Justyn Harriman 

Sure. Like you were saying, with the interaction that you want to have with sellers in our marketplace product, you want to establish that trust so that when you find someone on Nextdoor to sell something to or buy something from, you know they are a real person that's nearby. That goal, for us, because it's a baseline, really does shape the process. Every new name that signs up has to go through the verification process.

The process can be completely invisible to someone signing up, but they always go through it. When you sign up, we know your name, we know your address, we know the email address you signed up with, and we know general device details. If possible, hopefully that's enough for us to make a determination that, yeah, you're probably going to fit that baseline of a real person with a real tie to the neighborhood that we're looking for.

And then if you don't, or if we feel that we're still not comfortable with the risk, we add more checks on top of that: asking for a phone number and comparing it to a suite of vendors, asking for geolocation and comparing that to a suite of vendors. And then finally, one of the things that we [have done since] early on is leverage postcards. We'll send you a postcard to see if you actually do live at that address.

How we've used them changes over time, but we look at that whole scale of different techniques in order to be able to judge the risk of saying, “This is a real person with a real tie to that neighborhood.” If we establish that, then we're able to accomplish that goal of building a trusted environment.
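To make that waterfall concrete, here is a minimal Python sketch of the kind of step-up flow described above: invisible baseline checks first, then escalating asks, ending with the postcard. The `Signup` fields, the `check_*` placeholders, and the tier order are illustrative assumptions, not Nextdoor's actual implementation; a geolocation tier compared against vendors would slot into the same loop.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ACCEPT = auto()   # real person with a real tie to the neighborhood
    STEP_UP = auto()  # inconclusive; ask the user for more
    REJECT = auto()   # confident this is a bad actor

@dataclass
class Signup:
    name: str
    address: str
    email: str
    device_id: str
    phone: str | None = None

def check_baseline(s: Signup) -> Verdict:
    # Placeholder: a real system would compare name, address, email, and
    # device details against internal history and vendor data. This tier
    # is invisible to the person signing up.
    return Verdict.ACCEPT if s.device_id.startswith("known-") else Verdict.STEP_UP

def check_phone(s: Signup) -> Verdict:
    # Placeholder: request a phone number and compare it to a suite of vendors.
    return Verdict.ACCEPT if s.phone else Verdict.STEP_UP

def verify_new_user(s: Signup) -> Verdict:
    """Run cheap, invisible checks first; add friction only when a tier is
    inconclusive. The final tier mails a code by postcard, the slowest
    check but the strongest evidence of a real tie to the address."""
    for tier in (check_baseline, check_phone):
        verdict = tier(s)
        if verdict != Verdict.STEP_UP:
            return verdict
    return Verdict.STEP_UP  # pending until the postcard code is entered

print(verify_new_user(Signup("A. Neighbor", "12 Elm St",
                             "a@example.com", "known-device-1")))  # ACCEPT
```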

Joe Midtlyng 

Makes sense. In terms of geography, I would imagine there's flexibility that you have to build in there: different regions, different requirements, coverage, and then the regulations within some of these regions. I can see the complexity that comes with that and the adaptation that you have to make sure you've built into the waterfall or into your flows.

Justyn Harriman 

You have to think about what the baseline is or what baseline you're trying to establish. And that baseline really does depend on what you're able to find. Every market is different, every area is different, but also you have to really interrogate what you value. Every verification process has some amount of loss and you need to balance that against what you're trying to accomplish.

Are we accomplishing that baseline of a trusted local community, or are we going too far and excluding people that should be in that bucket? Being able to balance those things is really important, [along with] thinking about why you're making the trade-off.

Jennifer Kelly 

Very similar on my side. I think the integrity of the data points that you're relying on can vary a lot from market to market. You're really looking jurisdiction by jurisdiction. The more countries you operate in, the more complex this gets, because the quality of the underlying data sets that you're accessing can really vary. That will drive failures, and failures translate into friction and also opex.

My team is an operational team comprised of people who review identity-related documentation to manually approve sellers and get them activated on the platform. Often this happens at onboarding, but there are also verification touch points throughout the life cycle of a seller. Getting someone back up and running quickly is really important, both to Etsy's bottom line and to that seller.

The question of friction looms really large here, because you want enough confidence that this person is who they say they are, but sometimes to get that level of confidence, you're incrementally adding verification steps that get increasingly onerous. The framing of that is really important, and you have to make sure it's worth it to you, that the risk is worth the amount of friction that you're introducing.

Because we do come at this from a regulatory standpoint in many cases, sometimes a regulator will be pretty prescriptive on what data points or what sources they will consider adequate. There's a lot of factors that go into that.

Joe Midtlyng

A lot of good points you made there that I know we want to touch on when we dive deeper into user verification. We'll circle back on those. Some of the things you mentioned are really important to go a little bit further on for the audience, so thanks for the intro there.

Going back to early Trust and Safety and that topic, obviously Nextdoor and Etsy are quite mature platforms with established Trust and Safety teams and controls, and it's really fascinating to be able to understand all the complexities that you have to navigate. Before we get back into user verification in more detail, I think it would be great for the audience if we take a step back and learn about how your organizations approach Trust and Safety in the early days. I know you have some interesting perspectives on that.

What was day zero like in terms of Trust and Safety at your respective organizations?

The methodology that you approached building out Trust and Safety and all the nuances around that, I think that's a great place to start. Jennifer, [we’ll] start with you.

Jennifer Kelly

Yes, I was not at Etsy at day zero of their Trust and Safety program, but the experience I do have with this comes more from the acquisitions that we've made in recent years. Our subsidiary marketplaces are younger than Etsy and with all Trust and Safety activities at any stage of a company, it's about naming your risks and then ranking them, because you'll never have the resources you need to fully mitigate every single risk that you're confronting. What are your important ones and how are you going to address them?

I think that the scarcity of resources, and the priority of diverting resources to Trust and Safety-related product, engineering, or operations builds at a startup at day zero, has to be really carefully weighed against how you would use those resources to grow your user base or drive more revenue or do things that will create a virtuous cycle that will later lead to more investment in the trust space.

But you can't grow those things if you're completely ignoring Trust and Safety. If you're not instilling trust in your platform, you won't be able to achieve those other broader business goals. It's a balancing act, and as you get bigger, your pile of resources also gets bigger. But that prioritization conversation never stops.

Joe Midtlyng 

Justyn, before we go over to Nextdoor, just a follow up there, Jennifer, so just in terms of establishing those goals, in terms of what you want to prioritize from a Trust and Safety perspective, what were the biggest challenges you faced in doing that? Is it internal alignment across the many teams? I know what's fascinating to me is how collaborative you have to be from a Trust and Safety perspective, outside looking in, because you're at a touch point with so many different groups, from engineering to product to legal across the board.

[Were the] biggest challenges you faced external, in figuring out the right controls and the right methodology to establish what you need to with your users, or was it more internal? Was it a mix of both?

I think that would be just a focus there before we go over to the Nextdoor side.

Jennifer Kelly 

Yeah, I think the biggest problem is a lack of a shared understanding of what, specifically, the risks are.

Starting really early with a risk taxonomy where you can ground across all of those different functions, and a common understanding of the definitions of the risks that you're naming, will go a long way in helping you prioritize them and stay organized as you execute and try to mitigate them. It's worth pausing to create a taxonomy that's a source of truth across the company and that will continue to evolve, and also recognizing that risk analysis is always a point in time; it needs to be regularly revisited. Remain really aware of emerging risk, of how risk appetite will shift, and of how the risk environment is shifting around you. It's a dynamic, moving target.

Joe Midtlyng 

Great point. And Justyn, on your end.

Justyn Harriman 

I think when I'm thinking about day zero, I think a lot of what Jennifer just said rings true. I think the journey of it is several inflection points that all themselves are like day zero. The world is what it is until it isn't. And that's kind of what I think about when I think about Nextdoor. Verification was always a part of Nextdoor from its founding, even before I was here, but that wasn't itself a Trust and Safety team.

It was a notion of how we set the baseline, but it works for so long until it doesn’t, and you have to adapt and create new approaches. When I think about Trust and Safety, it's recognizing that, hey, we saw verification protected us for a long time. And then we start to see a different type of bad actor arise. Credential stuffing attacks become more common, account takeovers become more of a thing you have to think about. That focuses your efforts in a different area.

As you are making that risk analysis day by day, your bad actors are changing too. They are becoming more complex because hopefully, you're growing your platform. It's getting more valuable. You have more value to bring to a bad actor. And so you need to adapt different techniques as they get smarter.

I think early on, like Jennifer said, it's about establishing the goal: of what you're seeing right now, what is the highest level risk? How do you respond to that? How do you build structures so that you can advance, so that going forward you can see the path for how you're going to improve, even though you're not going to build it all right now? Eventually I will have a sophisticated bad actor that requires that system, but today I don't.

I think for us, when I think back on it, one of the things I learned was that there was no shortcut. There were things that, yeah, ML is great, but you actually have to have a lot of the rest of your system in place in order to really leverage it effectively. And that was one of the challenges as we responded to a threat, like, “Hey, we need to actually follow the path of building up our system in order to be able to respond to that threat.”

Joe Midtlyng

I think you touched on that just on user verification, more specifically as part of your early Trust and Safety strategy. It sounds like day one that was something you were thinking about. Anything else you want to add on to specifically around user verification and how you thought about that from the inception versus how you think about it now on the Nextdoor side?

Justyn Harriman 

I think it's something where our understanding of what we were doing and why we were doing it really evolved over time. Early days, founding days, a lot of the focus was: okay, we have a sense that this is important. We have a sense that knowing they're real people that have a real tie to your neighborhood is an important thing. But we didn't fully understand what the risk curve looked like. In reality, if you take a step back, and this is Today Justyn talking, not the Justyn of those early days, accounts with bad activity form a very small proportion of your overall activity.

It depends on the use case, what specific product you're providing, how it can be exploited, but you can say it generally follows a power distribution. Bad actors are really intense, but they're a very small slice of the overall activity. For us, evolving verification meant moving from “okay, this seems like a great idea” and the binary thinking of “you're verified or you're not” to asking: how can we better quantify that risk? How can we think about it more from a risk standpoint, the risk that someone is a bad actor trying to deceive us versus not, so that we can move along that power distribution and capture more of the group that are good actors but are hard to detect?

For lots of people, what data is available, what type of information we can gather, is limited by their role in the world. If someone moves around a bunch and doesn't have a stable place to stay, that shows up as not being present in the data, even though those are perfectly good actors. When I think about the evolution over time, I think about moving from simple notions of how to establish that trust to notions that are a little more focused, with a little more understanding that it's really a risk calculation you're doing.

You're trying to capture as much good activity as you can, trading off against that bad activity. It's a balancing act, and you have to think about how to draw that line and why you're drawing it where you are.
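As a toy illustration of that shift, from a binary verified-or-not decision to a quantified risk calculation over weak signals, here is a short sketch. The signal names, weights, and threshold are invented for illustration, not Nextdoor's model:

```python
# Illustrative weights over weak "real person, real tie" signals; a real
# system would fit these to observed outcomes rather than hand-pick them.
SIGNAL_WEIGHTS = {
    "device_seen_before": 0.30,  # returning, generally trusted device
    "phone_verified":     0.25,
    "geolocation_match":  0.30,  # geosignals tying the account to the place
    "address_confirmed":  0.15,  # e.g. a postcard code was entered
}

def trust_score(signals: dict[str, bool]) -> float:
    """Score in [0, 1]; higher means more evidence of a real, local person."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def decide(signals: dict[str, bool], threshold: float = 0.5) -> str:
    # Moving the threshold moves you along the power distribution: lowering
    # it admits more hard-to-verify good actors (sparse data coverage,
    # frequent moves) but also more of the small, intense slice of bad actors.
    return "accept" if trust_score(signals) >= threshold else "step_up"

print(decide({"device_seen_before": True, "phone_verified": True}))  # accept
print(decide({"geolocation_match": True}))                           # step_up
```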

Joe Midtlyng

That makes sense. And just on that topic, for both of you, one thing you just got me thinking about was early days. I work with companies kind of across the board, startups to very advanced companies. And early days, you're really trying to make your mark. You're trying to acquire users. There's not a lot of eyes on your platform, and likely there's not a lot of bad actors thinking about how they can exploit your platform yet. I would imagine another challenge is that you want to grow. Certainly that's the goal. And in the early days, you're not going to have as many sophisticated attacks or strategies trying to scam or commit fraud, etc.

But as your platform grows, visibility grows, and that's where the fraudsters will go. That's another angle to this is that the sophistication increases as well.

Justyn Harriman

Yeah, 100%. As you become more valuable, bad actors come and try to use that value. If they can make $100,000 a day, then yeah, they will. It's because you become more valuable. They'll try to find a way to do it, and your response has to be right-sized to that risk. If you're really small and really focused on growth, there are risks you should account for, but you need to make sure your response is right-sized to where you are as a platform, to what value you bring to a bad actor trying to attack you.

If you're bringing a bad actor about $10, they're probably not going to attack you if you raise the cost of attacking you to $10. I made that sound like an easier calculation than it is. It's really hard to quantify exactly how many dollars they can pull out, especially if you're not working in a transaction space, but that's generally how to think about it: you want to make it costly enough that you're not really a target anymore, and that becomes increasingly harder.

If you're very valuable, if you're bringing a lot of users to your platform, then yeah, you're a big target, you're someone they want to go after, and you have to think about how you scale your response to them.
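That back-of-the-envelope framing reduces to a simple inequality: an attack keeps running while its expected payoff exceeds the cost the platform's defenses impose. A worked example with made-up numbers:

```python
def expected_attack_profit(payoff: float, success_rate: float,
                           cost_per_attempt: float) -> float:
    """Toy model: attacks keep coming while this stays positive."""
    return payoff * success_rate - cost_per_attempt

# Hypothetical numbers: a fake account is worth about $10 to the attacker.
# Verification raises cost_per_attempt and lowers success_rate; the attack
# stops being worth running once the expected profit goes negative.
print(expected_attack_profit(10.0, 0.9, 2.0))  # 7.0  -> still profitable
print(expected_attack_profit(10.0, 0.3, 5.0))  # -2.0 -> deterred
```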

Jennifer Kelly

I would just add that I think anticipating that vulnerability is really hard to do, and so often in Trust and Safety, you're winning if no one hears about the thing. It can be hard to request resources preemptively before the risk has materialized. It's really hard to predict, but it is almost universally true that as you succeed and you're generating more activity on the platform, more buyers, more users, whatever, you're similarly attracting fraud. So anticipating that and investing in mitigants is really important, but also really challenging, or can be really challenging to advocate for.

And then again, if you succeed, it was the absence of something happening, which can be a difficult message depending on the level of exposure of your executive team to this kind of dynamic. If you've lived through it, I think you'll never forget it, but if you haven't, it's kind of hard to conceptualize.

Justyn Harriman 

This is always a challenge for any Trust and Safety team: if you're doing your job, no one knows you're there. And you're right, it's about communicating what your goals are, establishing metrics that you can talk about, and convincing leadership that those metrics are important. Those metrics have a direct impact, even if the metric you're measuring is hard to tie to a specific number. Generating the narrative of why that metric matters, and why it will keep mattering if you don't do anything, helps you drive those decisions around investing in Trust and Safety, investing in things that help improve the integrity of the platform.

If you're doing everything right, nothing happens. But then the moment that someone breaks through, like I said, a series of inflection points, someone will break through and then suddenly everything's very bad because you didn't invest right. As that value increases, those inflection points become more probable.

Joe Midtlyng 

Connecting a few things you both said about being proactive and trying to look out a little bit beyond the headlights: there are a lot of challenges there, and I want to get to that in a moment. But connected to what you mentioned, Justyn, in terms of establishing Trust and Safety goals, one of the elements here, diving a little more into Trust and Safety, is how do you look out beyond the headlights? How do you dive deeper into user verification? How do you internally educate and get alignment around what the next goals are and how you continue to strengthen Trust and Safety, and user verification more specifically?

And Jennifer, you shared a really interesting story with us as we were preparing for this on some other industries you look at as some models for user verification. I think it would be fascinating for the audience to be able to hear how you approach that from an Etsy perspective.

Jennifer Kelly

Yeah, absolutely. This is sort of my “not safe for work” topic, yet very much we're at work. I think when it comes to the Internet generally, looking at the porn industry is fascinating and very important to do. It is sort of the foundational industry of the Internet. It has an enormous user base. It has huge risks associated with it, real-world harm; very severe problems can occur on these platforms. Because of those factors, they're kind of a breeding ground for innovation, and they can be really good indicators of the way that the whole industry may head and the way the Internet is functioning.

Seller verification, or user verification specifically, has seen some really interesting developments in the porn industry. Pornhub, one of the larger platforms, decided—after a very damning New York Times exposé about the number of videos on the platform that either involved underage users or were non-consensual, really, really bad things—that they had no choice but to completely turn their business model upside down and only allow verified users to post content.

It had been more of a free-for-all, which was great for business, I imagine, but bad for risk. They completely did a 180 and removed millions of videos. I don't know what the revenue impact of that was, but it must have been enormous. And that was a self-directed strategic decision to save the platform, maintain trust, and start reducing some of these really harmful downstream outcomes.

One thing that they saw happen as a result of this involved their DMCA takedown notices: the notice and action framework where someone who owns intellectual property says, “Hey, you're using my property. You gotta take it down.” The platform removes it, contacts the person that posted it, and those two deal with it off platform. That's a really expensive operational process for platforms to maintain, and we all have to do it to maintain our liability protections.

They saw a 98% decrease in the number of those notices coming through after they introduced this verification requirement. That is astounding. It really demonstrates the value of what this verification can provide as this very early intervention that can save you a lot of money but also reduce a lot of risk. I just have found that really fascinating to follow, especially because it wasn't a regulator telling them they had to do it. It was a strategic business decision that they just chose to pursue themselves.

Joe Midtlyng 

I think the other piece of that connected to what you shared, Justyn, is making those types of changes—especially something as dramatic as that—that is a huge amount of coordination and buy-in internally to make those types of changes. And that stuff doesn't happen overnight. There's a lot of factors at play here, but really interesting insights, Jennifer.

Anything on your side as we dive deeper into user verification? Just factors that you have considered over time or are looking at now that influence your evolution around user verification?

Justyn Harriman 

Yeah, I was talking earlier about how you find the right-sized solution. I think some of what Jennifer is talking about is what happens when you add an intervention like user verification; for us, we've had it the whole time, we've just thought about it slightly differently through that time period. When you add it, you're creating a new environment where different rules apply. At a naive level, you can have a free-for-all where everyone can access everything, which you think really drives growth, but it doesn't necessarily drive good types of growth. It drives a lot of fraud, a lot of different bad things that can happen.

When I think about where we started and how we kind of thought about it, it's like, yeah, having that verification process does have some sort of drag, but it does also bring you that real sense of connection and interactivity that people can have on the platform.

Joe Midtlyng

Just on that point, I think another interesting question to discuss would be what types of signals you've found most effective in the verification process at Nextdoor.

And then, Jennifer, we can go over to your side on Etsy.

Justyn Harriman

In general, we think about what we decided our goals were. It's real people, real ties to the neighborhood, and signals that provide more information about that are very important. We want to know, does this person, does this identity have a relationship? Are they using a device that we generally trust? Have we seen that device before?

Whether it's someone returning or even a bad actor that's using the same device for everything, being able to establish that there's a real tie to a place also tells you something about the identity. That's something that we learned here: if you focus on just the identity side, you can learn a lot about an identity in the abstract, but it's actually really hard to build up a very solid sense of someone's identity, even if you know something about where they are in place.

You think about geolocation, you think about other geosignals, you think about ties to areas. Those are all indicators of real personhood, even if they aren't indicators of “Yes, this is Justyn at a real place. It's a person at a real place.” And those things add up. They add up to being able to tell you a lot about “real person, real tie,” even if you don't know specifically, “Oh, yes, this is Justyn, and this is Jennifer. This is Joe.”

Joe Midtlyng

That's fascinating. And I imagine part of that is the context of your business, how the platform operates, what the goals of the platform are. That, of course, influences the user verification balance that you're trying to strike. Jennifer, on your side with Etsy.

Jennifer Kelly 

Yeah, absolutely. At Etsy, because we're ecommerce, one of the outsized risks is fraud: sellers that would onboard and try to commit fraud, not ship the items, or what have you. Coming at it from that lens, it's back to this concept of increased levels of verification meaning increased friction, and trying to do that in a targeted way, so that you're using signals to only introduce that friction where you think you need to. But it can be as simple as thinking of where a criminal is going to buy information to pass an automated check.

If the least friction that you have is something where you give your name and your date of birth, and that's automated and verified and the user continues on without interruption, it's pretty easy to buy those types of credentials on the dark web or wherever. It doesn't take a lot for a criminal to get past that. Each additional data point and verification step you introduce incrementally makes it more expensive and more challenging for a bad actor to get all of the data points that they need to pass.

But it adds expense for you because you're using more vendors, more sources, and you're introducing more friction. Not easy, but that's what comes to mind for me.

Joe Midtlyng 

Context is important. The context of a progressive sign-up or verification process within the onboarding flow, that's something that I certainly see working with customers, and being contextual about what you're asking a user to do and when is really important.

The other angle to that is, as a company running Trust and Safety, how do you strike that balance on “This is enough to onboard a user onto our platform,” versus “We need to do ongoing user verification at different moments.” That balance seems hard to strike but really important.

Jennifer, on that side of it, how do you think about verifying new users versus an ongoing verification process through the lifecycle?

Jennifer Kelly 

Some of that is going to be driven by regulatory requirements. On the payment side, it is a normal baseline requirement that you would verify at onboarding, but that you would also risk rate your users to have a point of view about their relative risk level throughout their lifecycle, where a higher risk seller is subjected to more frequent verification or even, in some cases, a more heightened manual due diligence review.

Part of it is driven by that, and then beyond that, you go back to the goals. What are you trying to do by knowing your participants, and what periodic or ad hoc or trigger-based checks would support you in achieving that goal? If it's fraud reduction, how can you partner with your fraud team to identify the signals they see that don't tell them for sure that they're dealing with a bad actor, but overlap with trends they've seen, or something really early stage where introducing a verification step could be a good intervention?

If you're thinking about something like ATO (account takeover), if someone makes a change in the account, maybe that triggers an automatic reverification. To go back to what you were saying, I think that the framing of the why to the user is so important, because this stuff is annoying. We've all done it ourselves on platforms, and it kind of sucks. And so it's hard. But try to communicate transparently why you're asking for it, and try to convey that providing it supports the health of the overall platform and will continue to bring buyers to them.

There's a reason behind this. Connected to that is consolidating wherever you can. If we have a tax-driven verification requirement and a strategic verification goal and a couple of other things, can we combine them? Can we use things that we already have? How can we make this a more streamlined experience for the seller?
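A minimal sketch of the risk-rated and trigger-based reverification described here might look like the following. The risk tiers, cadences, and event names are hypothetical; real ones would be tuned per platform and per regulator:

```python
from datetime import datetime, timedelta

# Hypothetical risk tiers mapped to a periodic reverification cadence; a
# payments platform would tune these to its regulatory obligations, and a
# high-risk tier might also route to manual due diligence.
REVIEW_INTERVAL = {
    "low": timedelta(days=365),
    "medium": timedelta(days=180),
    "high": timedelta(days=30),
}

# Hypothetical sensitive account changes that trigger ad hoc reverification,
# the kind of ATO-style signal mentioned above.
TRIGGER_EVENTS = {"bank_account_changed", "email_changed", "password_reset"}

def needs_reverification(risk_tier: str, last_verified: datetime,
                         event: str | None = None) -> bool:
    """True when a seller is due, either on the risk-rated schedule or
    because a sensitive account change just happened."""
    if event in TRIGGER_EVENTS:
        return True
    return datetime.now() - last_verified > REVIEW_INTERVAL[risk_tier]

# A high-risk seller verified 45 days ago is due; a low-risk seller who
# just changed payout details is due immediately.
print(needs_reverification("high", datetime.now() - timedelta(days=45)))   # True
print(needs_reverification("low", datetime.now(), "bank_account_changed")) # True
```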

Joe Midtlyng

One area there—and I always struggle with this as I think about it, because I'm in the industry, in cybersecurity and identity, so I'm not sure if every user thinks about this—but I think there is a broader awareness around “What's my data being used for?” Just the privacy side of it. And so I think there's a general consciousness around those topics now that the average user has.

I don't think that's just us in the industry. And so transparency around how you're using data, what the benefits are for the users themselves, but also for the security and integrity of the platform they're signing on to, those are factors that seem increasingly important to companies and to the users of those companies.

Jennifer Kelly

That's a really great point. Another dimension to this conversation is the infosec implications of collecting and storing this information. It's really sensitive identity information, and you may have an obligation to store it for a fairly long period, depending on what regulatory drivers of the collection and verification exist. It can also be expensive to build the infrastructure needed to store this responsibly and to protect the information that you've leveraged to verify. Yeah, that's a huge, kind of invisible factor of this work.

Joe Midtlyng 

Yeah, very interesting. And Justyn, on your side, how do you think about new user verification versus ongoing verification for active users?

Justyn Harriman 

Sure, and I think a ton of what Jennifer said echoes here. A lot of what I think about is the velocity of a bad actor, how quickly they can gain access, and trying to balance those things. Basically all of our verification, at least at this point, is not regulatory driven. It's protecting the platform; it's integrity driven. We do have a lot of flexibility in how we think about choosing which side of the wall to do the analysis on and trading those things off.

If you're looking at that power distribution of bad actors and trying to draw a line, you want to make sure you get enough people through that you're accomplishing growth goals, and that whatever baseline you set, whatever line it ends up being, you're able to use that to feed into the next stage. We detect like, “Oh hey, someone signed up and then did something bad.”

Because ultimately, new user verification, at least at the first stage, is a very small window of time where you have to make a quick decision on whether someone clears the bar or not. And at that point in time, you only have so much information to work with. You need to step up your information, ask for more so you can make a decent risk evaluation, and then use that in the next phase, once someone actually has access to the platform, to decide: did that risk analysis actually help me weed out most of the bad actors, and can I actually see the group of people who are acting badly?

Trading those things off is key. A bad actor is trying to create a lot of things at scale, trying to find a way through, and that step-up process helps you make sure you have enough information now, gain more later, and use that to make your risk analysis.

Joe Midtlyng

That makes sense. This is one level deeper, but I think a really interesting topic for the audience: mobile versus web in user verification. At Incognia, we work with mobile-first companies. They typically all have a web component; a lot of that is region specific, and of the markets we're working in, the US is much more web-heavy than many other regions in the world. But specifically on the topic of user verification, how do your companies approach some of the nuances and challenges of mobile versus web?

And there are a few factors: what type of signals and data are available, but then also what the friction is in that verification process on a laptop versus a mobile device. Anything you can share on how you think about that angle of the user verification challenge on mobile versus web?

Justyn Harriman 

I think I can talk a little bit about this, at least on the technical side. I think it is challenging. Web platforms are generally stronger protectors of their users’ data. They generally are more protective, they hide more details. Geolocation is necessarily very hard to do on web versus on mobile, where you have more access to the device, where geolocation is actually an important aspect of providing a good experience on a mobile device. That presents challenges.

I think the key is to think about how you balance risk on those different platforms: where you see most of your risk coming from, how you adjust your verification process to understand how risky a given platform is, and how you step up, asking users for more information so you can de-risk it.

And then how do I adjust the order, the process for asking users questions, to gain more information that I can use in my risk analysis? At the end of the day, they are different platforms; there's different data available, more or less depending on which one it is. Web does remain challenging, and it is something we have to think about a lot, especially because a lot of users still use the web.

You look at the US. It's not as much desktop web, but mobile web is still very popular in the US, despite the fact that we have apps available, and that's something that we have to think about. How do I focus on thinking about this specific platform in order to improve our risk analysis and get more folks through?

Joe Midtlyng  

Yeah, fascinating. Jennifer?

Jennifer Kelly

Very much the same take. Really, you just need to closely align with the business side, whoever is evaluating user preferences: do they prefer to use the app to sell, or are they gravitating toward the web? Buying and selling can have different user behaviors for us; we're mostly focused on sellers. But then you're also really working closely with the fraud teams to refine your goal, knowing that there are going to be different signals given on mobile versus web, or app versus web. It is another way that this is a dimensional issue, so it does add complexity.

When you think about experience, that's also a factor. Does the user think that it was easier to go through this if they were just doing it on their phone versus needing to go sit down at their computer to execute it? A lot of considerations, and building for both is resource heavy.

Joe Midtlyng 

Perfect. We have about ten minutes left, so maybe we can move to the last piece here on looking ahead and recommendations, and then we'll leave some time for Q&A at the end.

I think this is always a nice way to wrap up a discussion. How do you see user verification evolving over the next few years? What new challenges are emerging, or what new approaches or technologies are you seeing that are helping drive user verification forward?

Feel free to take it however you'd like, but I think that would be a nice way to start this. Jennifer, do you want to start?

Jennifer Kelly 

Sure. I'm still in a space of defining the full universe of goals for our seller verification program. We know fraud is one of them, but I think there are more, the DMCA Pornhub example being an interesting one for me. I don't think that maps exactly; because of the nature of our platform, our DMCA violations are different. But I am interested to see how we might lean into seller verification as a way of refining our content moderation and some of our other downstream Trust and Safety operational activities.

Beyond that, there's been a lot of movement in policymaking as it relates to requiring seller verification. The Digital Services Act coming out of the EU has introduced a requirement to verify all of your traders or your incorporated business sellers, so that's a new payments-agnostic requirement to verify. It's going to be one of the more prescriptive inputs to your overall program.

And the Inform Act in the United States is a similar driver of this new regulatory obligation to verify seller identity, regardless of what payments platform you're using. [We’re] starting to see this gain some traction in terms of regulation, and it's interesting to see the value of this really start to become more common knowledge.

Justyn Harriman 

I think on my side, it's interesting to see, beyond regulation, even thinking about social media, that there are so many more ways for bad actors to develop very fast attacks. I don't want to overhype GPT, but it's hard not to look at it and see how it can be used for disinformation campaigns, fraud, and combinations of the two, like running disinformation campaigns to sell T-shirts. The ways that people can use that to increase velocity are things that I think are really concerning when you look at the future.

And one of the things people talk about is using user verification to say: yes, this human could have used GPT to produce a piece of content, but they still had to be a human to do it. And that introduces a little bit more friction, gives bad actors less velocity in the system. User verification is definitely becoming more important in applications beyond regulatory arenas, even if there are regulatory conversations going on in the social media space.

But even beyond that, it's going to be important to do because it's a key tool for slowing down bad activity and preventing abuses from systems that now can make bad activity much faster.

Regardless of how well OpenAI and the other folks are training their AI to avoid doing bad activity, it will be used for this. Verification is one part of that component that can help us at least slow down the attack and gain more time to respond to it.

Joe Midtlyng 

Yeah, that's fascinating. One of the things that I've been thinking about, working with Trust and Safety teams, is how content moderation and something like user verification play together. We've all been educated over the last ten years on a lot of new platforms, social media, how these platforms work and the consequences around them. How much more would you say user verification is a priority now, versus some of the initial ways that Trust and Safety, from the outside at least, thought about the core strategies to ensure trust and prevent fraud, scams, et cetera, on the platform?

Is it fair to say that user verification is now a higher priority in terms of building out that process on your platforms? To put a finer point on it: is it a fair assessment that user verification is one of those top-line priorities in your Trust and Safety approach?

Justyn Harriman 

Yeah, I'd say it's a key tool. It's a tool you can use to slow down an attack, think about how to introduce important friction. So much of what we do as good tech people, we like to reduce friction, reduce friction, reduce friction. Sometimes you need to think about how you inject it intentionally and in a targeted way so that you gain yourself more time to respond to other concerns like fraud, misinformation, all these different things that can happen quickly if you have that very open platform.

I think you can see that especially in social media: when you remove those barriers, when you don't have those things, you have so many problems that you're just playing whack-a-mole on the content moderation side. It's going to take you a month to get through your queue, and by that time the damage is already done.

User verification is a key tool that helps you prevent really bad activity. It doesn't prevent everything. Real people produce misinformation all the time, they produce fraud all the time. It is still a problem, but it gives you a tool to help respond and adjust to the various attacks that you're seeing.

Joe Midtlyng 

Makes sense. Jennifer, anything to add on your side as we wrap up?

Jennifer Kelly 

Yeah, content moderation is always whack-a-mole, but anything you can do to reduce that is welcome. Fraud is an obvious way that seller verification can help reduce downstream bad outcomes. We know that content moderation isn't the way to attack fraud either. That's way too late, you need to be intervening sooner. Seller verification is one of those earlier interventions, but I think that there's opportunities beyond that. We need to know certain things about our sellers to know what policy violations on the content side they may be more or less likely to make.

And then we can tune our detective controls accordingly to maybe pay more attention to a seller that demonstrates certain attributes that for whatever reason correlate more strongly with IP violations, or who knows? I just think there's a lot more to learn about the connection between verification and content moderation.

Joe Midtlyng 

Great, that's really interesting.

Jennifer Kelly 

Great.

Joe Midtlyng 

Well, I know I could go on here all day, but this has been a great conversation. I know we've got about three minutes left, so Andrea, do we have any Q&A questions we want to use the remaining few minutes for?

Andrea at Marketplace Risk

Yeah. Thank you, everyone. We did have a bunch come in. Hopefully we can get one in real quick. First one: does verification happen at a family or address level, or at an individual level?

Justyn Harriman 

Yeah, I'm going to assume that's directed at me. For us, it's every new neighbor that joins; verification happens for the individual person who joins, and address is one component of verification. Like I said, a real person with a real tie is what we're looking for.

It is both the address component and the real person component.

Andrea at Marketplace Risk 

Awesome. I think maybe we will do one more. At what stage of a platform's growth would you say ID verification should be introduced?

Justyn Harriman 

I'll let Jennifer go if she has anything, but I have some thoughts.

Jennifer Kelly 

It depends on the goal of your verification process and how important identity specifically is to that. If you have a payments platform, you'll definitely need to do this, but you don't necessarily need to collect the ID. That is a more escalated or heightened level of proof. You can do less than asking for the ID and still consider someone's identity verified.

Justyn Harriman 

From my side, I would think about what level of ID verification do you need? What are we using it to accomplish? If you're a payments platform, you may need this in order to even be a platform legally. If you're a social platform, you may want to think about what environment are you trying to create, what level of verification do you need?

Because ID verification isn't binary. Even if you pull out a driver's license and have someone verify it, even that's not perfect; there are ways to spoof it, and you have to think about that.

Think about what your risks are, what environment you're trying to build, and then scale the verification you're doing to them. Do you really need an ID? Can you go down to just understanding whether this is a real person, or is even that not required? It just depends on what your application is trying to do and what specific level of verification it needs.

Andrea at Marketplace Risk 

Awesome. Well, thank you, everyone. Any final remarks before I close this out?

Joe Midtlyng

Just a thank you to Jennifer and Justyn. Really appreciative of your time and insights, the experience and the knowledge. Thanks for all the prep and all the great insights today. This was great.

Jennifer Kelly 

Yeah, my pleasure. It's always fun to be reminded of how similar the challenges we confront across Trust and Safety teams are. It can feel like that's not true, so it's refreshing to hear the overlap with Justyn.

Justyn Harriman 

Yeah, same here. At the end of the day, you have bad actors. You're trying to figure out what their motivations are and how to prevent the abuse that they cause from happening, whether it's payments or it's misinformation, romance scams, et cetera, things that happen on the social side. They're very similar, and you can think about them very similarly, even if they have nuances that distinguish them.

Andrea at Marketplace Risk

Awesome. Well, thank you, everyone.