• last month
Michael Sayman, the creator of a viral new app called SocialAI, joins the show to discuss why he built a social network where you're the only human around. He tells us how he thinks about AI interfaces, what's next for ChatGPT and other chatbots, and why posting to a language model might be better than posting on a social network.
Transcript
00:00Welcome to The Vergecast, the flagship podcast of Bot Farms. I'm your friend David Pierce,
00:08and this is the second episode in our miniseries all about AI in the real world. AI is so abstract,
00:16and it's a term that we now use in so many ways that honestly, it can feel sort of meaningless.
00:21So we've been on a quest to find actual examples of actual AI showing up and being useful or
00:28at the very least interesting in our actual lives. For this episode, the last one in our
00:34little series for now, I have a feeling we'll come back to this subject, but last one for now,
00:38I'm talking to Michael Sayman, who recently launched an app called SocialAI that has
00:43become kind of a viral phenomenon on the internet. We'll get into what it is and how it works in
00:48pretty serious detail here, but basically I'd explain SocialAI this way. Imagine a social
00:55network, Twitter or Threads or whatever, but every user other than you, every single one
01:00other than you is a bot. Does that sound interesting? Pointless? Terrible? Dystopian?
01:06Amazing? Maybe all of those things? I wasn't sure where I fell on that line when I started
01:13talking to Michael, but we ended up having frankly one of the most fun conversations I've
01:18had in a while, all about how AI works and how we're actually supposed to use it. Spoiler
01:25alert, he thinks about this less as a network and more as an interface, and I find that
01:30fascinating. We happen to agree, actually, Michael and I, that a chatbot cannot possibly be the
01:37future of everything in technology, and Michael has some big ideas about what else we might be
01:43able to do. All that is coming up in just a second, but first I have to tell my bots what's
01:48going on. They worry when I'm gone for too long. This is The Vergecast, we'll be right back.
01:55Welcome back. Let's get into my conversation with Michael Sayman from SocialAI. Michael's had kind
02:01of a fascinating history in the tech industry, by the way. He got a job at Facebook when he was 17
02:06after Mark Zuckerberg discovered an app he'd written and just offered him a job. I think it
02:11was an internship, but he ended up working there for a while. After that, he went to Google to work
02:15on Google Assistant, then he went to Roblox, and then Twitter. He's been through a surprising
02:20number of the biggest and most interesting companies in tech, and in particular, he's seen a
02:26huge part of the evolution of social media and social networks. He worked on Stories at Instagram,
02:31he worked on Status at WhatsApp, he worked on Shorts at YouTube, like I said he worked at Twitter,
02:36and a whole bunch of other things. And now he's on his own. He's building apps through his one-man
02:41startup that he calls Friendly Apps. At the beginning of our conversation, Michael told me he
02:46had been thinking about building an AI social network through much of that time. The idea for
02:52what would become SocialAI has been in his head for a really long time. It's just that until now,
02:58he couldn't actually pull it off. I actually tried building a version of SocialAI like five years
03:03ago, and the tech just wasn't there, and it was really bad. What was it five years ago? It was,
03:08I mean, I called it Influencer, and the idea was that anyone could be an influencer. I've been
03:14trying to do this for a while, but it just wasn't quite there. Originally, because we didn't have
03:22the language models, we tried to build it, and by we, I mean just me, but trying to build this to
03:30give people the feeling of a social media app, but not really having to deal with all of it.
03:36The idea was like, okay, if someone's addicted to cigarettes, how do you get them off of it? Well,
03:41you can't just tell them to stop. Maybe it's giving them something, maybe like a nicotine
03:46patch or something. What is a way that you get somebody to be able to get that experience,
03:53but maybe not harm themselves or feel bad? Anyway, so I built that. It didn't really
03:59look quite right, and it didn't work well, so I didn't ship it.
04:05What did it feel like? Actually, before we get to the ChatGPT thing of it all,
04:08because there's an interesting history of tech story there.
04:13When you built the thing in the before times, what didn't work? What didn't feel right? What
04:22wasn't ready?
04:22I just could not simulate the entire social network in a way that felt interesting, even
04:29mildly. I did the approach that Google did. I used to work at Google, I used to work at Facebook,
04:38and I took a similar approach to all of the assistants that there were at the time,
04:42giant if-else statements, just massive. Okay, it kind of worked, but like everything else before
04:50these language models took off, it was very robotic and very conditional, depending on what
04:58you wrote. It just didn't let you forget about the technology. You were reminded in the app
05:09that you had to do certain things to get certain comments. So at that point,
05:16it was really more of a game. I designed it more like a game, because the technology just wasn't
05:22there to make a simulated social network not feel like a game. So I had to go that route.
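[Editor's note: the "giant if-else statements" approach Michael describes might look, in hypothetical miniature, something like this sketch. It is purely illustrative, not his actual code.]

```python
# Hypothetical sketch of the pre-LLM, rule-based approach described above:
# hand-written conditionals mapping post patterns to canned replies.
def canned_reply(post: str) -> str:
    text = post.lower()
    if "vacation" in text or "trip" in text:
        return "Wow, have an amazing time!"
    elif "new job" in text:
        return "Congrats on the new gig!"
    elif text.endswith("?"):
        return "Great question, following!"
    else:
        # Fallback: generic engagement, which is exactly why it felt robotic.
        return "Love this!"

print(canned_reply("Just booked a trip to Japan"))  # -> "Wow, have an amazing time!"
```

Every reply path has to be hand-written, so any post that doesn't match a known pattern falls through to a canned response, which is why the result felt, as he says, "very robotic and very conditional."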
05:32But even then, it just did not feel right. Users, I imagined, if they were to try it,
05:40would have felt like this app was more like a Farmville game and less like a social network.
05:49Honestly, that's not what I was trying to build. I was trying to build something that felt like
05:53a social network, and so I had to wait. Then once the early versions of GPT-3.5 came out,
06:03I thought, okay, let me give it another shot and tried to design a version of it.
06:09The model would sometimes say random stuff. It was extremely expensive to run all of the
06:17different prompts and things that I needed for it to work. I told myself, there is no way
06:24I'm going to be able to run this at this cost, and it's completely fuzzy and the responses are no
06:31good. So I said, okay, well, I have to wait. I have to wait until it gets cheap, and I have to
06:37wait until it gets more accurate. So every month, for like two years, I was building this startup.
06:43I would just wait. I would just look at the latest model, try it out with some of my tests,
06:50and from there, just keep going. I would look at the outputs that they could give
06:59and how much I could tune them, and then I'd look at the cost. When Gemini released theirs
07:05and lowered their cost, I said, okay, we're getting closer. As soon as OpenAI had their
07:10models dropping in prices, I said, okay, I think it's time. So about a month ago,
07:15I went and built the app, and I just told myself, look, this is the last attempt that I'm going to
07:21do at building this app. I've done this too many times, and I'm just going to go with it.
07:29Okay, and now here we are.
07:31Yeah, and here we are. I launched it, and of course, the one time that you don't think
07:37it's going to go a certain way is when it does. I mean, it's great.
07:42So I'm so curious why this idea has been so sticky in your head. It's clearly been sitting
07:48around as a thing you have wanted to build for a very long time. What is it about this thing that
07:53is so sticky and enticing to you?
07:57So social networks are not what they used to be. And I think fundamentally,
08:00the internet has changed. The internet used to be a tool of communication between people.
08:05And frankly, I love that. There's a part of me. So I was born in Miami, but at 16, I flew out to
08:12California with my mom because Mark Zuckerberg had emailed me when I was in high school,
08:17asking if I wanted to meet about working there and stuff like that. And I remember flying out
08:24there with my mom, not really knowing anything about Zuckerberg, and my mom knowing even less.
08:30And just thinking like, okay, this is such a different world from where I come
08:35from, but I'm excited. And I had built apps before, social apps. And so I was quite excited.
08:41And I spent a lot of time working at Facebook from that point on. I didn't go to college. I
08:45spent four years there helping them build out Instagram stories and a few other features.
08:50And it was just such a fun time. It was like 2013, 2014. The company was in a different era,
08:57social media as a whole was in a different era, and people were having a lot of fun.
09:01And I think over the past couple of years, I think we've just seen social media has changed.
09:06And it's changed because the internet's changed and the technology's changed.
09:10And so where the internet used to be a place where you could connect between massive amounts
09:15of people, the internet as a communication tool in that sense is kind of falling apart.
09:24The internet now has technology that allows itself as a data set to simulate a human connection.
09:34You communicate with the internet rather than through the internet. And I think that change
09:40that's happened really kind of puts question marks around how social media should work,
09:45because the whole premise of social media is that you're using the internet to communicate
09:49through it to other people.
09:52Yeah, fair. I just want to say, by the way, that idea of communicating with the internet and not
09:58through the internet is about as succinctly and well as I have heard anybody put this moment
10:03that we're in. I am going to steal that from you, and that is very good. So thank you for that.
10:08No worries.
10:10That makes it sound like SocialAI has always been in your head,
10:16kind of part product, part social-commentary art project. Is that fair?
10:24Yeah, I always like to poke. I've always liked to poke. I mean, does it kind of poke fun at the
10:32facade that a lot of companies are trying to put up? Of course it does, right?
10:37But I think it's also, weirdly enough, my attempt at trying to solve some of these problems,
10:44the problems, for example, where you can't discern the difference between a human and
10:48an AI on a social platform. So like, yeah, one way is to try and invent a detector for humans,
10:54but that hasn't gone very well. I'll say instead, well, how about we just come out with a product
11:03that tells people how the internet works now and says, hey, look, this is the reality.
11:09Kind of sucks in some ways. It's kind of great in others. And we have to, you know,
11:16we have to embrace it. Like, let's embrace it. And let's do that so that we don't harm ourselves.
11:23Because going on social media sometimes and seeing comments that you think are from people that
11:28aren't can be harmful. So I'm curious kind of what it was that clicked in your brain that went
11:35from this technology is not ready to this technology is ready. I think the moment that I
11:40got in a fight with my boyfriend and I decided to open up my app to see if there were any ideas for
11:46how I could resolve the problem. That I think was the moment. That's a very good answer. You know,
11:55that was the moment, the moment I got in a fight and I decided I'm going to use this app to try
11:59and vent about my problem, because if I go on actual social media, I'm doing some harm.
12:04Right? Like, and so and so I think that's just goes to show like, the product I'm building is
12:09not to give people an illusion of people, right? I know they're all AI. It's so that I don't go on
12:14social media and use it in a harmful way. Like, my ideal, my ideal scenario is one where people
12:23have people around them to listen to hear them and to help them when they need it. Right? People
12:29to people communication is number one for humans. And I and I don't think we should forget that,
12:35right? But there are a lot of people I've noticed since COVID who don't have those people around
12:41them. And so if they don't have those people around them, and they need that conversation,
12:47what are they doing? And if what they're doing is going on public social media,
12:51and talking about what's going on in their life and getting advice from AI bots without knowing
12:56what they're getting advice from, without any understanding of the dynamics in these algorithms
13:02that encourage certain types of content in different spaces, then they're harming themselves.
13:08So I said, Okay, well, you know, I'm not trying to replace the human to human connection.
13:13I'm trying to help people find a way to have a secondary option when that human isn't around for
13:20them, so that they don't have to rush to social media and make a mistake. Right? And so when I
13:25got in a fight, and I didn't go on social media, and instead I went on this app,
13:29I said, Okay, it's ready. Yeah, I mean, that I can imagine that being a very telling moment.
13:35But I think that distinction is really interesting, where what you're saying is kind of
13:42one is not a replacement for the other, that they're actually they're designed to be and best
13:47as separate things, right? Like I have, I have my AI tools, and I have my people. And like,
13:53actually, the problem is, on the platforms that we currently have, those things are being
13:58sort of smushed together in a way that makes it hard to tell what is what, and makes everything
14:05messy and complicated. And yes, and that what we actually need is to pull those things apart,
14:10not just try and have one or the other, but just separate them.
14:14Yes. And I think the key distinction here is what I mentioned earlier, which is,
14:19I think there's confusion about what the internet is for people. I think the internet being a place
14:26that things can be communicated through is no longer really the case, or it's kind of trending
14:32away from that. Now it's becoming something you communicate with. And I think more important to me
14:38is the fact that the language model technology is not a certainty, meaning it does not tell you
14:47yes or no, it gives you a fuzzy intelligence. And chat GPT was built as a fuzzy intelligence,
14:57right? Like these models were built as fuzzy intelligence. And so what I mean by that is,
15:01they're kind of like humans. They're not like the computers of the past where they give you a
15:05definitive answer for something. And so when I saw that, I asked myself, why does OpenAI choose
15:15a chat interface as the one and only interface for communicating with their language model?
15:23Because that chat interface only gives you one answer and the model does not have one answer,
15:31right? And so I find it interesting that all of these AI companies are trying to squeeze
15:39these fuzzy human-like data structures. They're trying to squeeze them into an old format
15:45computer that gives you one answer. And the truth is that they're not that great at it because we
15:53still go back to Google because we want multiple answers. We want multiple responses. And so what
15:58I've built with SocialAI is not so much a social network, but a new way to interact with the
16:04language model where you don't get one response, but you get multiple. And being able to drill
16:10down in a thread-like interface, in a social interface with the language model, it just
16:16feels more natural. When I use the app, for example, I was running late to a flight. I got
16:23delayed. My first flight got delayed. My next flight was in 45 minutes. I was in Dallas and I
16:28didn't know if I was going to make it. The flight had just landed. So I opened up SocialAI and I
16:33just kind of panic ranted about what happened, right? I didn't have to think about like, oh,
16:40I need to instruct it to tell me because I need the right answer. And what if it's not the right
16:44answer? And maybe I need to go to Google and maybe I need to go to Reddit. No, no, no. I simply
16:50ranted about what happened. I said, my flight got delayed. I just landed. I have 45 minutes to
16:55make it to my next flight. I don't know if I'm going to make it. I'm at Dallas. They're telling
17:01me I got to go to terminal D. I'm in terminal C. I don't know. And I just posted that. And
17:06immediately I got dozens of responses and replies on this social interface that gave me all sorts of
17:15various replies. Some of them would tell me you're not going to make it, you know,
17:20go to the front desk, just figure out if you can get another flight. Another one said,
17:24you'll make it if you run quickly, you just need to look up, see if you can find the sky train.
17:29And if you go down the sky train, you should be able to get there in time. Just make sure you're
17:34running quickly, though. Another person said, are you in the front or the back of the airplane?
17:38Like, you know, like different questions. And so what's interesting is for a human,
17:42it's natural to see that I'm going to go and look through all the responses and dig in on the one
17:47that I'm more interested in. So someone kind of gave me a hint about the sky train. I replied
17:51and said, OK, I'm going to try that. How long do you think it'll take me to get there?
17:55And then immediately that AI persona replies back and says, you know, if you go quickly,
18:01it'll take you about 10 to 15 minutes to get there, you know. And so I said, OK, cool. Another
18:06one replied right underneath and said, wait, wait, wait, hold up. It'll be more like 15 to 20. Don't
18:11get them too excited now. You know, and so like and so that kind of interaction just feels natural
18:18to us. All right. We got to take a break and then we will be back with more from my conversation
18:24with Michael Sayman. We'll be right back. This episode is brought to you by AWS. With the power
18:33of AWS generative AI, teams can get relevant, fast answers to pressing questions and use data
18:40to drive real results, power your business and generate real impact with the most experienced cloud.
18:50All right, we're back. Let's get back to Michael Sayman. But before we do,
18:53I just want to go back to something he said right before we went to break.
18:57He talks about SocialAI as not a social network, but as an interface,
19:01a new way of interacting with a language model. We're going to get way into that.
19:07But before we do, I think it might actually help if I just explain how SocialAI
19:11works so you can picture it as we go on. So when you first get into the app, after you sign up and
19:16create a profile and whatnot, the first thing you have to do is decide what kinds of followers you
19:22want. Remember, it's all the bots. So essentially what you're deciding is which kind of bots are
19:27going to reply to your posts. You can pick supporters or skeptics or pessimists or optimists
19:33or alarmists or realists, stuff like that. Just general sort of personality types.
19:38And if you recommend the app to some friends, you unlock a bunch more options like nerds and
19:43oddballs and drama queens and contrarians. And once you do that, you just start posting.
19:48I'll type like I'm suddenly tired of all the food that I like. Anybody have any ideas
20:00about how to spice things up? Literally. And then you press post and a few seconds later,
20:09responses start appearing. Let me just read you a few. The first one is from Sunny Ray,
20:14who is at Sunshine Fellow, presumably an optimist one. It says, try adding some new
20:19spices to your meals or explore international cuisines for fresh flavors. Sure. There are a
20:24bunch here that say try new cuisines or flavors. Advice Wizard 2023 literally just says try new
20:30cuisines or flavors. I have Fan Zone Hero, who says try some bold spices like za'atar or sumac.
20:38It's magic. I like that. Fanatic Follower says, whoa, David, that sounds like a challenge. How
20:44about some trying some bold exotic spices? Anxious Nina, presumably an anxious one,
20:50says what if you accidentally make something that sours your taste forever? That's terrifying.
20:55Downcast Greg says, spicing things up sounds like a recipe for disappointment.
20:59So you get the idea, right? Different bots, different vibes, different kinds of responses.
21:04These are all my reply guys now. And I can respond to one and go down a rabbit hole with that
21:10particular bot tuned to that particular mood. I can also favorite bots. I can favorite replies.
21:16And Michael says that all of that goes back into the algorithm and into kind of the instructions
21:23being given to JadGBT every time I try to use it. On the surface, it all feels and sounds like
21:30normal human social media, except that they're all bots, but me. And I think I'm not the only
21:36one who felt kind of strange about that fact at first. It looks like Twitter and feels like
21:42Twitter and it super isn't Twitter. It just felt odd. And honestly, the reaction to SocialAI
21:49was really fascinating. So that is what Michael and I talked about next.
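[Editor's note: mechanically, the behavior described above, one post fanned out to several persona-conditioned model calls, could be sketched roughly like this. The persona instructions and the stubbed call_model function are assumptions for illustration, not SocialAI's actual implementation.]

```python
# Hypothetical sketch of fanning one post out to several persona-conditioned
# language-model calls, matching the app behavior described above.
PERSONAS = {
    "optimist": "Reply warmly and encouragingly.",
    "skeptic": "Reply with polite doubt and probing questions.",
    "alarmist": "Reply by worrying about what could go wrong.",
}

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call; echoes the prompt so the
    # fan-out logic can be demonstrated without network access.
    return f"[model output for: {prompt[:40]}...]"

def replies_for_post(post: str) -> dict:
    """Build one persona-flavored prompt per follower type and collect replies."""
    return {
        name: call_model(f"{instructions} The user posted: {post!r}")
        for name, instructions in PERSONAS.items()
    }

for persona, reply in replies_for_post("My flight got delayed, will I make it?").items():
    print(persona, "->", reply)
```

The key design point is that the same post yields many answers in parallel, one per persona, rather than a single chat response.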
21:54I do think the reaction to this app has been in many ways, just as fascinating as the app itself.
22:03My read of it when it came out was there were basically three responses. One was like,
22:10this is cool and interesting and kind of fun, both social commentary and an interesting idea
22:17about the future. One was, this is stupid and dystopian. And then one was like, this is a joke,
22:25right? This has to be a bit and an art project and not a real product. Is that a fair representation
22:30of the reaction? Am I missing anything? I think what's interesting is the loud
22:35reactions on social media. There's two things that were interesting to me, or let's say three.
22:41First, the loudest reactions from people on social media were from those who thought it
22:46was either a joke or that it was dystopian or that they're like, oh my God, the end of the
22:52world. That kind of reaction, those are the loudest. The quietest were actually spending 10
22:59minutes per session on the app. The second thing I noticed was that the reaction from
23:06people was one thing, but there were bots on social media reacting too, yeah, because half of
23:16social media has bots now. So I found it quite ironic that there were bots reacting to an app
23:25of bots telling humans that an app with bots is like so terrible. And I found it interesting
23:35that that was happening. I was like, huh, it seems like some of these bots don't want bots around,
23:42or maybe they don't want people to know that they're bots. I don't know. And so there was
23:48a portion of bots on social media that were reacting negatively to bots. And I just thought
23:53that was ironic. Well, it kind of proves your whole point, right? If it's just a bunch of bots
23:57yelling at bots about the social network, that's all bots. I imagine you're sitting there looking
24:03at that being like exactly. Yes. I think the other issue that I found was just how many
24:10people don't realize that a lot of these platforms are filled with bots. And that kind of was
24:16alarming to me. But ultimately, I think that the last bit here of feedback that I've gotten is
24:22people feel a little liberated. They feel a little bit liberated. They don't feel the
24:26pressure of going on social media to share some thought that they might feel embarrassed about.
24:31But they also feel like they're able to hear other perspectives that they otherwise wouldn't
24:37feel comfortable admitting to want to hear. And so they don't let their guard down in public
24:44conversation online. People keep their guard up. And I think that keeps an echo chamber.
24:50It's interesting because people said, oh, echo chamber, echo chamber. But the number one,
24:55number two, number three most selected follower types on SocialAI are
25:02contrarians, debaters, problem solvers, thinkers, critics. Right. And so people are selecting
25:12followers on SocialAI that challenge them. And I think there's something interesting about that.
25:17Why would someone go out of their way to be challenged on an app like this? Can they not
25:21be challenged on real social media? Is there a reason why not? And how does this address that,
25:28right? I wonder if that goes back to what you were saying about how it feels when you perceive it to
25:35be real people on social media, because I think to some extent that fact doesn't surprise me,
25:41because one thing you hear from people who use AI a lot is that it is especially useful if what
25:47you really want to do is beat up an idea and brainstorm and get new perspectives on things.
25:53And I think to some extent, what you've built is just an endless feedback mechanism,
25:59but with no stakes because no one else sees what's happening. No one else is human on there.
26:06So even the part of it that feels sort of real, it feels like there's still something in your brain
26:11that is like, this is a safe space. I can see a world in which I mean, and I've even found this
26:18in using it. There is something very powerful in the interface is the same, but the stakes are so
26:24much lower. Yes. And I think it helps put people's guards down. I think it helps people. Like you
26:30said, people have been using ChatGPT for a lot of this, but how many times have people gone on
26:35ChatGPT and said, Hey, can you help me think through this? And it gives you one answer.
26:39And you're like, Hmm, I don't know about that. And then you go, well, what other ideas do you
26:42have? And then it goes and gives you something else. And then you're like, well, what else,
26:45you know, and then it gives you something else that's kind of similar, but you're like, I don't
26:49know. And then you keep going, well, what else? And by the time you keep asking what else,
26:51it's forgotten the context of the thing you were talking about in the beginning and just starts
26:55saying random stuff. And so like the interface just feels wrong for the use case. But look,
27:01I don't blame OpenAI. I don't think that it's like, oh, they just weren't capable. Like who
27:06the hell was going to know, right? If anything, I think they built out a chat interface because it
27:11just felt like the obvious testing ground to prove a product. And it became a product that they didn't
27:18think was going to resonate as quickly. So, you know, of course we started at chat because of
27:22that. And I don't think that it's bad. I just think we haven't seen the best of it yet.
27:27Yeah, I think that I think that's totally fair. So speaking of that, actually, the
27:32the edges of this technology, I'm very curious about, and I suspect you've seen a lot of it as
27:38people are starting to really use and try new stuff with SocialAI. Obviously this stuff has
27:45gotten a lot better to the point where I think it feels the way you wanted it to. Where is it
27:51broken? What still doesn't work? So there's a few things that still need to be improved.
27:57So we just, you know, we, I mean me, I just launched notifications on SocialAI. So one of
28:02the things that was interesting was, you know, I could play around with it, but I wasn't getting
28:06any feedback later about what I posted. And I think that's quite important. Another aspect of
28:14it is proactive. So like right now, if I post, I get content. I originally built a feed into it.
28:22I was going to ask you whether you were going to build a feed.
28:24Yeah, well I did, but I took it out just for the launch. And just because I wanted to make sure
28:30that, you know, I didn't have anything too buggy in there, but it's kind of interesting because
28:34I'm not trying to simulate a world where these things have their own lives and telling you about
28:37their lives. No, no, no, no, no. Like that's not the purpose. We're trying to be honest about what
28:41this is. It's right in the middle. You know, we're not character AI and we're not Twitter. We're like
28:46in the middle. Right. And so the, the feed, what it does is every persona, every character that
28:53you interact with, as you give it likes and replies and stuff, it kind of shapes these
28:58personas and understands which ones you interact with the most. And each of those personas have
29:03their unique, like weights towards different topics of interest and personalities and the
29:10way in which it tries to answer questions. And all of that shapes the diversity of responses that you
29:15get for anything you ask. And what they also have is a set of like interests that they use to then
29:24search the internet for the latest news and the latest things going on. And so each of them has
29:29their own kind of like interests on the web that it goes and searches for. So when you go on the
29:35newsfeed on SocialAI and you pull to refresh, there's always more content for you as well. But
29:42every piece of content is like this AI having gone through, looked at the internet, found
29:47something interesting that it liked and it giving a little commentary on it with a link to access it.
29:51Which I thought was, you know, it's pretty cool. Does that not break the whole structure of it
29:56being kind of a you centric experience? No, because all of the bots are there
30:02talking about things that you're interested in. The only ones that show up on your feed
30:06are the bots that you've interacted with that you're interested in or adjacent.
30:11Right. And you select in your feed what kind of stuff you want and what you don't. And they're
30:15there to inform you. Right. They're there to give you what what you want to hear. You know,
30:20it's interesting. I also built a trending topics page and the trending topics are not trending
30:27topics of the news. They're trending topics of you. So as you post and you interact with things,
30:34there's trending topics. It's like these are the top of mind things for you.
30:38Interesting. You can tap on any of those topics and see hundreds of AIs
30:43debating and discussing it. Wow. That's kind of wild, actually. Yeah. So how
30:50are you thinking about, or I guess what are you seeing, in terms of how people are
30:56using it? Because that idea of like, I'm going to use this as a sort of proactive,
31:01you're almost describing like a personalized news app or feed reader or whatever. Right.
31:07That instead of having my Facebook feed curated by my Facebook friends, it's going to be curated by
31:13AI. I think we're going to see a lot of that. Right. That's right. That's right.
31:17But fundamentally, it's a new way to interact with language models. It's building a new
31:23interface for interacting with them and all of the applications that allow
31:28people to get more value out of the language model through those interactions go into that.
31:34Right. And SocialAI's premise is not social for the sake of the social network,
31:39but social for the sake of the social interface. Right. And so that's really what it is.
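[Editor's note: the per-persona interest weights Michael described earlier, shaped by likes and replies and used to pick what each persona surfaces in the feed, could be sketched roughly like this. Field names and logic are hypothetical, not SocialAI's actual code.]

```python
# Hypothetical sketch of persona interest weights: likes and replies nudge
# a persona's topic weights, which then shape what it surfaces in the feed.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    # Topic -> interest weight, shaped by the user's interactions.
    topic_weights: dict = field(default_factory=dict)

    def register_interaction(self, topic: str, strength: float = 1.0) -> None:
        # Each like or reply on a post about `topic` strengthens that interest.
        self.topic_weights[topic] = self.topic_weights.get(topic, 0.0) + strength

    def top_interests(self, n: int = 3) -> list:
        # The interests this persona would search the web for when
        # populating the feed, strongest first.
        return sorted(self.topic_weights, key=self.topic_weights.get, reverse=True)[:n]

critic = Persona("Cynic Gina")
critic.register_interaction("movies", 2.0)
critic.register_interaction("ai", 1.0)
critic.register_interaction("movies", 1.0)
print(critic.top_interests())  # -> ['movies', 'ai']
```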
31:46All right. We got to take one more break and then we'll be back with the rest of my conversation
31:50with Michael Sayman. We'll be right back.
32:15We're back. Let's get to the end of my conversation with Michael Sayman.
32:19At one point in our conversation, I asked him about one thing that I always do with new AI
32:23tools, which is ask them what movie I should watch tonight. I like that question because there are
32:28lots of answers, all of them real, or at least in theory, real and fact checkable.
32:33But there's no one pat answer. There is not like a movie I should watch tonight.
32:37So when you ask, you immediately get a sense of how the tool works and how it works with
32:42you and just kind of how it's gonna make sense of things for you. And let me just try it with
32:49social A.I. So I go to social A.I. I type in what movie should I watch tonight? Then I hit post.
33:01Wait a few seconds. And OK, I have a bunch of replies from Sonny Jane,
33:06Brightside Jane, again, probably an optimist. How about a feel good comedy for some laughs?
33:11Sure. Pondering Fiona says, what mood are you in, David? Perhaps a whimsical fantasy like the
33:18Secret of Roan Inish to transport you to a world of magic and discovery? Or maybe something
33:23thought-provoking like Interstellar for those cosmic ponderings? Positivity Eddie says,
33:28how about a feel good adventure, uplifting and fun? @FeedbackFountain says, consider going
33:33for something classic like The Shawshank Redemption if you want a deep story. Or if
33:37you're in the mood for laughter, Step Brothers is a solid choice. What genre are you feeling
33:41tonight? Let's see. There's a whole bunch more of these. Inception comes up a lot.
33:46Interstellar comes up a lot. The bots apparently really love Christopher Nolan movies.
33:51There's one from Anxious Ali 77 that just says, what if there's a terrible plot twist with the
33:57thinky emoji? It's good stuff. More Interstellar. Cynic Gina says, well, since you're asking for
34:03cinematic advice, how about watching something truly enlightening like a documentary on existential
34:07dread? Or maybe a thriller that reminds you how not all choices lead to happiness. The options
34:12are as bright as the dark abyss of reality. So, once again, it's all over the place. This is
34:17how this system is designed to work, right? And I can go into one that has a recommendation that I
34:22like. There's one here that recommends Eternal Sunshine of the Spotless Mind. Great movie.
34:26I can reply to that and say, oh, love that. Saw it recently. What else do you recommend?
34:31That's the idea about how all of this is supposed to work.
34:34But frankly, I'm looking at these replies, and it's mostly just people saying,
34:39did you pick a movie yet? How did it go? Did you find a movie that speaks to you yet?
34:43Did you choose a movie? Why that one? Did you narrow it down? Did you settle on a movie choice?
34:49Is that helpful? I don't really think so. So, I told Michael that this didn't feel like it had
34:55been tuned to be particularly good or helpful or designed to actually give me information
35:01in the way that he's been talking about. And he said, well, yeah, he agreed with that.
35:07Yeah, and the reason is because it's an alpha that I built myself in four weeks, right? So,
35:11we can't expect, you know, like...
35:13Right. Well, this is kind of what I'm wondering is, like, what do you hope for? What should that do?
35:19Yes. So, of course, you know, of course it shouldn't work that way. I think,
35:23you know, the intention here, and of course it's on its way, and I think it's quite surprising that
35:29it even, you know, works at all. But I think it's a testament to the interface that it does.
35:34But I would say that, you know, the ultimate vision here is to make that question better
35:41answered in social AI than in ChatGPT. And I think the interface will allow it. If you have
35:49comparable models on both platforms, and one of them gives you multiple responses from different
35:54points of view, and the other one just gives you one answer, and you're working with a technology
35:59that's probabilistic, like, who has the upper hand? You know, like, I can give 10 answers,
36:05and if one of them is good, you're happy. But if ChatGPT gives you one answer, and it's not right,
36:11you're frustrated.
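The arithmetic behind that trade-off is worth spelling out: if each generated answer independently has some probability p of being good, then showing n answers gives a 1 − (1 − p)^n chance of at least one hit. A quick sketch, where the 30% per-answer hit rate is an assumed figure, not one from the conversation:

```python
# Chance that at least one of n independent samples is a "good" answer,
# given each individual sample is good with probability p.
def hit_rate(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Hypothetical 30% per-answer quality: one ChatGPT-style answer vs. ten
# sampled feed replies.
print(round(hit_rate(0.30, 1), 3))   # → 0.3
print(round(hit_rate(0.30, 10), 3))  # → 0.972
```

Under that (assumed) rate, one answer disappoints most of the time while ten replies almost always contain something usable, which is the asymmetry Sayman is pointing at.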
36:12That makes me think of the thing people always say about the TikTok algorithm, which is that
36:15the reason it feels like magic is because you don't get annoyed when it's wrong, because you
36:19just keep swiping. And I think social has, like you're saying, very much the same thing. The
36:23signal-to-noise ratio is actually, like, horrendous on social media, but we're also sort of used to
36:28it now. And if you just scroll past it and move on, and we all kind of understand how to find
36:34needles in haystacks in a way that when ChatGPT recommends a movie I don't want to watch,
36:39it feels bad because it gave me an answer.
36:42Exactly. And it's part of the reason why people still go to Google,
36:45right? Because Google doesn't have any more accurate stuff these days, you know,
36:50compared to what it was because of all the AI that's in there too. So, you know, it's not like
36:55Google is any more accurate, but it's interesting because Google gives you this ChatGPT-style answer
37:02at the top, right? And then you have, like, all of these various links that give you different
37:07perspectives. And, you know, let's be honest, like, most of these links are now run with,
37:12like, so many paywalls and things that, like, you can't even get to the answer for any of these
37:16links. But the original intent behind Google and why it worked was it gave you options to look
37:23through. And so it allowed it to be wrong and it increased its chance of being right at least once
37:30or twice. And we're used to using the internet this way. We go through the internet looking for
37:36information, trying to find which thing is helpful to us. Right. And so I just I think
37:42it's interesting that maybe we got a little distracted by the movie Her a little too much.
37:47Just a little.
37:49No, just a little, you know, and we kind of leaned in a little harder than we were supposed to.
37:53But, you know, that's kind of where I see it being more valuable is can we give people
38:00the option to peek behind the curtain of this AI and see what are the various answers it can think
38:09of rather than here's my one response? And and ultimately, I think this is going to be how it
38:15works. If you have multiple language models, if you have Claude, OpenAI, Llama, and so on,
38:21are you going to go from one to the other and be like, which one gave me the best answer? Or are
38:26you going to use an interface that's going to pull from all of them, generate a bunch of different
38:31responses and curate those responses based on the reactions and responses that you've done in the
38:37past and full control for the user to get as many responses as they can? You want to index the
38:45language model. You want to create a Google for the language model outputs. And that's kind of
38:50what this is. Mm hmm. Yeah, no, I agree. That's super interesting. So tell me about the roadmap
38:56a little bit. You know, you mentioned this is an alpha you built in a few weeks. It is becoming
39:01increasingly clear to me that you have much bigger visions for where you're headed. Tell me a little
39:07bit about it. My vision is to build out the whole funnel of interaction with the language models
39:14from the top of the stack all the way down. So people are going to start yelling at you to build
39:18DMs and you're going to build DMs and you're just going to speed-run all the pain of developing a
39:23social network. I love it. All of it, all the way down. They're going to be like, when do I get
39:28lists? And you're going to have to do lists. And then they're going to be like, when do we get
39:30bookmarks? But, you know, it'll be interesting. The conversation will go down the funnel.
39:34Naturally, I expect the first thing people want, which they already say is they want DMs, right?
39:38So, OK, you introduce DMs, right? And it makes sense as you interact with the models and you
39:42are more interested in one or another. You want to talk a little more deeply with that one. Cool.
39:46DMs. So wait, hang on. Let me just press on the idea there, because I think that's
39:50another. How do you do that without just becoming the ChatGPT chatbot again?
39:56Well, because what I'm building includes ChatGPT within it, but I'm building one level
40:03up in the communication stack, which is how do you discover who you want to talk to?
40:10Right. It's like asking Google, well, you're providing all these links to the websites.
40:15Why aren't you just another one of those links? It's like, well,
40:18because we're providing an index that lets people figure out which one they want to go into.
40:22Right. And so SocialAI provides that layer that helps people discover
40:26who they want to talk to. The end goal is not to talk to an individual. The end goal is the
40:33whole picture. It's the experience of being able to go up in breadth like a Twitter interface,
40:40dive down to a FaceTime call with an AI model, go up to an audio call with another,
40:45go up to a chat, go back up to the surface, go back down, go back up.
40:49That experience, the entire funnel that people go through to find information and communicate online,
40:54that whole thing needs to happen. Right. And so the vision is we start at the top and work our
41:00way down and then we offer users the ability to go through all of it. Okay. So the idea is then
41:05by the time you're at that one on one conversation, you've essentially filtered your way down such
41:11that you're talking to, instead of going through like the GPT store or picking plugins or whatever,
41:16I've now socially found the conversation I want to have with the right source.
41:21And even if now all I'm doing is essentially having a chatbot conversation with a chatbot.
41:26That's right. I've filtered it in the way that is useful.
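The discovery-and-filtering layer described here, pull candidate replies from several sources and rank them by the user's past reactions, could look roughly like the sketch below. The source names, the stub generator, and the scoring rule are all assumptions for illustration, not SocialAI's actual implementation:

```python
# Hypothetical sketch of an "index over language model outputs": fan a
# prompt out to several reply sources, then rank the candidates by how
# often the user has engaged with each source before. stub_reply stands
# in for real model/persona calls (Claude, OpenAI, Llama, ...).

def stub_reply(source: str, prompt: str) -> str:
    return f"[{source}] reply to: {prompt}"

def curate(prompt: str, sources: list[str], reaction_counts: dict[str, int],
           top_k: int = 3) -> list[str]:
    # Generate one candidate reply per source.
    candidates = [(source, stub_reply(source, prompt)) for source in sources]
    # Rank by past engagement; sorted() is stable, so unseen sources keep
    # their original order at the bottom.
    ranked = sorted(candidates, key=lambda c: reaction_counts.get(c[0], 0),
                    reverse=True)
    return [reply for _, reply in ranked[:top_k]]

feed = curate(
    "what movie should I watch tonight?",
    sources=["optimist-bot", "cynic-bot", "film-buff-bot", "anxious-bot"],
    reaction_counts={"film-buff-bot": 9, "cynic-bot": 2},
)
print(feed[0])  # the source the user engages with most ranks first
```

The point of the sketch is the shape, not the scoring: the user never picks a model up front; the interface generates breadth and lets engagement do the filtering.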
41:29Yes. I think in the future people are going to ask, they're going to look at the first
41:34versions of ChatGPT and they're going to see when ChatGPT announced features like,
41:38customize your GPT and tell it how you want it to talk to you. And they're going to think,
41:43I can't believe we had to tell it, you know, like, what? You know,
41:50it doesn't make sense to have to do that.
41:53I think that's my bet. I might be wrong. Look, I don't think the product hit the nail on
41:58the head. I think we're definitely not on the center. We're a little bit off,
42:03and I have no doubt in my mind that there's a lot of work that needs to be done to
42:07get it in the right place. It's an alpha, right? But I think that's the general direction.
42:13And my bet is, you know, telling ChatGPT before you talk to it, everything you want to, you know,
42:18do and how you want to talk to it in this particular case and why and whatever,
42:22just feels like we're missing a step, you know? Yeah. I buy that as it happens. I think
42:31ChatGPT's whole custom instructions thing is just as ridiculous as you do. So I'm,
42:36I'm very much with you on that one. What have you seen so far in terms of how people are using it?
42:41Are people doing surprising things with it that have sort of changed your mind about what this
42:46thing is or can be? You know, people are favoriting a lot of users, which I think
42:52is interesting. So you can favorite like any particular follower and then that pins their
42:56replies to the top. I think that's kind of interesting. Well, that seems like it sort
43:00of supports your thesis that, that what people want is to find their kind of chosen favorites
43:07in here. Right. And it's, it's also interesting that you didn't call it follow, which is a
43:12departure from social networks, but also makes more sense in the kind of scheme you're talking
43:16about. In the context. Yes. I check out some of these apps and like, what I've seen is
43:21it's like, they'll grab the social network, they'll grab the AI and they'll just mash them
43:27and just, okay, let's see how it goes. Right. And in this case,
43:33it's much harder to try and like get this right. And I don't think I've got it right.
43:38I think I'm closer, but I don't think I've got it right. But what I don't want
43:43to do is just mash them together without consideration. And so, you know, I didn't say,
43:49oh, you can follow a particular AI because in this case, you could just favorite it.
43:54And to your point, right? Like it's more in line with what the product is. For example,
44:00the app opens up to the composer. It does not open up to your profile. It opens up to the
44:04composer now. So you go into it and it's kind of more like ChatGPT in that sense.
44:09The bots don't tell you that they're human, right? They tell you that they're AI.
44:13They're there to help you. That model of like design is me kind of saying, I'm going to pick
44:19and choose what I think is useful from a social network's interface. And I'm going to integrate it,
44:25but I'm not going to like, forget that this is not that. Right. And because if I forget that
44:32this is not that, then it kind of feels gimmicky, you know,
44:38and I don't want to build a gimmicky product,
44:43you know? So, and that's something I've learned too, is like, I went full on with the skeuomorphism,
44:49right? I went full on trying to build the product that made people most familiar with it. And as
44:56I've been iterating, I've scaled back a lot of the skeuomorphic design that really, you know,
45:01kind of leaned into the social stuff because I realized, you know, okay, users don't need to like
45:07see the home feed in the way that they do on Twitter, because it's not that necessary. You
45:11know, let's lean into the utility a little more. And I've tweaked it a bit to make it a lot more
45:17of the function and leave the, you know, the skeuomorphism of the interface to just the
45:24pieces that are required to make it easy to interact with. And so that, that I think will
45:30continue. I think the product will continue to distance itself from the skeuomorphic design
45:38that it starts with and turn into its own interface. Do you have a, do you have a clear
45:43sense of what that might look like over time? If I told you I did, I would be incredibly
45:50delusional. I know I've got my delusions for sure, but, you know,
45:57more so than that, I would be at a ridiculous level of delusion.
46:02I don't know what it's going to end up looking like. I don't know what that final form is,
46:07but I have a mission in mind, right? And I'm determined and excited about it.
46:13And my hope is that as we get more information and data around like how people are using it,
46:21that we're able to further go that direction. And in my view,
46:26there should not be a limit to what angle we approach if the users and the people are asking
46:33for that, you know, interaction. There's a lot of people, and I think a lot of product companies out
46:39there when, when they launch something, the thing that they started with that got them the attention
46:45they believe is like sacred. Right. And so you'll see like BeReal launches and it's like, oh,
46:50our feature is the feature. Snapchat launches and they're like, oh, you know, our
46:55ephemeral snaps are like the feature. Right. And then it's like, okay, but you can expand
47:01and do this and do that. And it'd be cool if you had this and they're like, no, because this is
47:05our feature, you know? And it's like, okay, cool feature. But like a lot of times, and this is
47:11something I learned when I used to work at Facebook, it's like a lot of times the thing,
47:16the product or feature that gets you from stage one to stage two is not going to be the same
47:22method approach or product feature that's going to help you get from stage two to stage three.
47:27You have to be comfortable with your product evolving over time. And so, so for me, it's
47:31maximum flexibility on that front. I want to help solve these problems for people. I don't care what
47:37it looks like. Is there a business here? I feel like you've gone from like social
47:45commentary, fun experiment to like kind of hit app to at some point, somebody is going to want
47:54to give you a bunch of money to make like a social AI for teams and sell it to IT departments.
48:01Do you have a sense of what the business looks like here? How are you going to make money?
48:05People have already asked. Yes. You see, you're right to predict that,
48:10you know, people asking that. It's just what happens. They're like,
48:12how, how do we sell this to IBM? And you're like, okay.
48:18You know, look, I'm not interested in thinking about, you know, how to monetize the
48:23product right now. I think I'm sure there are ways to like help people find, you know,
48:28the right shampoo bottle for them or whatever, but I'm not like, that's not my primary thing
48:35right now. I'm trying to solve a problem I've dealt with in my life, right? Where I've sometimes
48:40felt, you know, especially during COVID, like it was really hard to get my thoughts out and to
48:49be able to just think through things a little bit without, you know, jumping to conclusions.
48:56And I struggled with that, you know, and I wanted to solve that for me. I wanted to solve
49:02that for my friends. I, you know, I wanted to build something that could help. And that's the
49:09thing that keeps me going. That's the thing that makes me excited and the rest of it, you know,
49:14we'll figure it out. But my hope is that I'm able to help a little bit, you know, other people out
49:21there like me who maybe need a little space to think about things and don't want to rush to
49:28social media and post all their thoughts. I don't have a great read on what success looks like for
49:35this, because it's like, what you're saying is sort of, if I post less dumb stuff
49:42on Twitter, because I'm posting dumb stuff on social AI instead, that's a certain kind of
49:47victory, uh, I think for you and for the world, but also you're in a position where what you're
49:53saying is like, you need, people need a place to put this stuff and to talk about this stuff and
49:59to get feedback on this stuff and actually doing it in a safer place. Yeah. Like ultimately we
50:04want, we want to talk to people, right? Like that's our goal. But if you don't have someone
50:07around, right. I think it's certainly better than going on social media, right? Like that's kind of
50:11how I see that. And actually to your point that you made earlier, a lot of people have been
50:16saying, uh, that they think Elon Musk should use social AI, uh, before he starts tweeting random
50:22stuff, you know, and just A/B test his own thoughts so that he could stop saying things
50:26that like piss people off. Um, I thought that was an interesting point. What's the wildest thing
50:32you've seen somebody do in social AI? What is like either in the logs or screenshots or
50:37so what is the one thing that you've seen that you were like never in a million years, what I've
50:42Okay. So maybe this isn't too wild, but okay. So my mom works at a furniture store in Miami.
50:47Um, and she, uh, she's from Peru, uh, speaks mostly Spanish and she never uses my apps,
50:52right? Cause like, you know, she's nice, she supports it, but like, she knows that like, you know,
50:55whatever, uh, I keep trying and they're all, you know, not that great. So I was surprised when she
51:01messaged me out of random. Okay. And she sends me a screenshot of my app, cause it works in all
51:06these languages, and she starts using it and ranting in Spanish
51:12about work. She's like writing about work, you know, um, on this app. And it's so
51:19funny to me. Like I was like, okay, well, you know what? Like maybe product market fit. I don't
51:26know. Um, but you know, I love her and I think it's sweet
51:35to see that there's, you know, appeal in unique ways. I got it. You know,
51:41if I ever go work at some company, maybe I'll start using this to rant, you know?
51:46And no one will know. And that's the beauty. There you go. Awesome. All right. Well,
51:51I should leave you alone, but thank you so much for doing this. This is really fun.
51:53No, thank you so much.
51:56All right. That is it for the first guest today. Thank you to Michael Sayman again for being here.
52:00And thank you as always for listening. There's lots more on everything we talked about,
52:04including some of our social AI coverage and just all of our AI coverage in general.
52:09There's a ton happening. Check it all out at theverge.com. I'll put some links in the show
52:13notes, but as always, read the website. It's a good website. And as always, if you have thoughts,
52:17questions, feelings, or other AI bots that I should be hanging out with, you can always email
52:22us at vergecast at theverge.com or call the hotline 866-VERGE11. We love hearing from you.
52:28This show is produced by Liam James, Will Poore, and Eric Gomez. VergeCast is a Verge production
52:33and part of the Vox Media Podcast Network. Nilay, Alex, and I will be back on Tuesday and Friday with
52:38all of your regularly scheduled programming. We have a bunch of fun stuff coming up.
52:42We'll see you then. Rock and roll.
