Is your data yours?


This post is cross-posted from my joint newsletter with Ranjan Roy, The Margins. Please check it out, and consider subscribing.

Data as a Liability, Again

We talk a lot about data governance here. The main argument is that data comes not just as an asset, but also as a liability. The asset part is obvious. If you are in the ads business, data about your customers allows you to target them better, and charge more for your ads.

The liability part is less obvious. For years, the thinking was that data is an unadulterated good. Yet, the tide is slowly turning. For example, as GDPR came into force and its enforcement picked up, even companies that you’d not immediately associate with data collection end up having to pony up millions in fines. It’s not like airlines are particularly profitable to begin with.

Again, my argument has never been that data is without value. Rather, we simply haven’t been able to account for its liabilities until now. Storage is practically free nowadays, but governance is not. How do you make sure the data is accessed only by those who need to access it? Or, more subtly, how do you make sure it is only used in the way you said you would use it? We are still figuring out these answers.

For example, if you are asking people to enter (and verify!) their phone numbers to improve your security, you end up with a bunch of phone numbers in your database. Now, can you use those phone numbers to target people as an advertiser? You probably shouldn’t. It’s tempting, however, to think that you have all this data in your database, so why not use it? It’s so much work to build all those controls and flags and tags to make sure data is not just stored, but also used, properly. Engineers aren’t cheap, after all.
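Those controls can start small. Here is a toy Python sketch of what tagging stored data with its allowed purposes might look like; the field names, purpose labels, and the `ALLOWED_PURPOSES` table are all hypothetical, invented for illustration only.

```python
# A toy sketch of purpose-based access control over stored user data.
# The fields and purposes below are hypothetical, not from any real system.

ALLOWED_PURPOSES = {
    "phone_number": {"two_factor_auth"},   # collected for security only
    "email": {"login", "receipts"},
}

def fetch_field(record, field, purpose):
    """Return a field only if the stated purpose matches why it was collected."""
    if purpose not in ALLOWED_PURPOSES.get(field, set()):
        raise PermissionError(f"{field!r} may not be used for {purpose!r}")
    return record[field]

user = {"phone_number": "+15555550100", "email": "a@example.com"}

fetch_field(user, "phone_number", "two_factor_auth")  # allowed
# fetch_field(user, "phone_number", "ad_targeting")   # raises PermissionError
```

The point of a gate like this is that the tempting misuse fails loudly in code review and at runtime, instead of silently working.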

Of course, that data has to come from somewhere. And it’s not that users are always losing something when they part with their data. Again, some of the examples are more obvious than others. The majority of people do not pay a dime to Google or Facebook to use their services. If you want to be academic about it, you could argue that businesses that advertise on Google and Facebook charge you higher prices, but how do you know they’d not charge you more if they bought ads in the local paper?

Cheap, Smart, Private. Choose Two?

There are more interesting cases, however. For example, if you bought a smart TV in the last few years, it was probably subsidized to some degree by the data it collects on you. That’s why non-smart TVs are sometimes more expensive than the smart ones. The tech that goes into making a TV “smart” is not expensive, but the data you get from figuring out what people watch is enough to make the TV cheaper. Is that a win-win?

That’s more of a philosophical question, and hard to answer. How you approach it really comes down to your personal politics too. If you are an individualist, you could argue that people who bought those TV sets consented to various privacy policies and knew what they were getting themselves into. Don’t want to get your data collected? There’s no dearth of dumb TVs to be bought.

But that sounds a bit simplistic. First of all, it doesn’t seem reasonable to argue that we should subject people to hundreds of pages of legalese to catch the latest episode of The Bachelor (or The Bachelorette, I don’t judge). Smart TVs might be an improvement over dumb TVs (bear with me) in terms of functionality, but maybe we can take a more holistic approach.

The more subtle argument is whether collecting this much personal data is something we should as a society put some brakes on. When you give up your data, it’s not just *you* that’s affected. When you end up with a huge collection of personal data in one place, there are larger risks to society. You don’t have to go that far back, even, to come up with examples here.

Data, In Your Face

Facial recognition is an interesting case. I remember when identifying a face in a photo (i.e. “is there a face in this photo?”) seemed like magic, but now it seems like table stakes; a first-year CS student could do it. Then the problem became facial recognition (i.e. “who is in this photo?”). Again, for a while, it seemed like you would really need a lot of computing power and millions of tagged photos to figure that out. But now, the smartphone I carry in my pocket is able to miraculously tag my parents in photos that I took several years ago, without ever having to talk to a bunch of expensive servers in the cloud. Is there a cloud in my pocket?

Again, one of my beats is that people (or me, at least) really underappreciate how fast technology develops sometimes. What was a hard CS problem that took a whole bunch of PhDs is now a library that you can import into your application. This stuff is going to be everywhere. Last week, the internet was abuzz with FaceApp, a fun app that lets you see yourself (or anyone, really) aged. Of course, it didn’t take people long to realize it’s an app made in Russia. Since all-things-Russia is obviously bad, even American senators got involved, trying to figure out what’s going on.

This is an interesting moment for me to ponder. I do not have an antagonistic view of the government, like many Americans do. Generally, I believe most people in public service have good intentions, are capable, and have the best interests of the people they serve at heart. So, while I may not agree that worrying about FaceApp is the best use of a senator’s time, given, ahem, today, it’s fine.

At the same time, however, I worry about where our responsibilities as individuals start and end. If millions of people willingly download an app from various app stores, and then willingly upload their photo (yes, I know, you can upload any photo), who is at fault? Is it Apple’s fault for allowing such easy access to the device camera, or for not disclosing to people that it’s a Russian-made app?

It simply seems hard for me to imagine a regulatory framework by which we can prevent people from uploading their selfies. How do you make sure that FaceApp’s activity is restricted but people can still use Instagram freely? There are millions of selfies there too, and most of them even have a location tagged along. Isn’t that even creepier? You could make the argument that Instagram is equally creepy and we shouldn’t let Facebook have access to so many photos either, but I don’t think you’d get a lot of support for that one.

Does Data Have Borders?

Or do we go full nationalist and segregate by origin? That doesn’t seem right either. What does the origin even mean, anyway, if the application is made by a Russian company but the photos are uploaded to a public cloud operated by an American company, on American soil? Or what would it mean if it was an American company that operated those servers in some other country, say, Singapore? Would it make it less creepy, or more so? 

There are societal benefits to collecting a lot of data, but there are also risks. My personal view is that we can mitigate a lot of the risks by making sure the data doesn’t get stored forever, and is responsibly discarded. Moreover, there are probably ways to get the value of the data, even in aggregate form, without building dossiers on every mere mortal on the planet, so we should invest more in those.

However, how we generate, send over, and store that data is also a personal question. It’s tempting, and largely valid, to point the lens of scrutiny at those who have the data and the power, but as individuals, we are also responsible. We control what apps we use, and what we do with them. It’s fun to enjoy the fruits of technology, but part of the entry fee is being a knowledgeable consumer of it.

What I’m Reading

A Spreadsheet Way of Knowledge: There’s no end to how people use and abuse spreadsheets. Steve Ballmer reportedly uses Excel as a personal calendar, and I’ve met people who track the personal favors they’ve given and received in a Google Spreadsheet. Fun! There are few pieces of software that have changed our world as deeply and widely as spreadsheets. They are everywhere, and they model not just our businesses but our lives, our thinking. This is a good historical narrative on how they came to be.

[…] He ran fifteen different scenarios on his computer, including one in which he took the money set aside for renovation and invested it elsewhere. What Maxwell found was startling: Not only would renovation be foolhardy, but “even the ‘best case’ showed I’d get nearly as good a rate of return on my investment in a money market fund as staying in the restaurant business.” Get out of the restaurant business! the spreadsheet said. What the spreadsheet left out, of course, was the unquantifiable emotional factor — Maxwell loved what he did. He kept the restaurant (though scuttled the renovation).

How to Hire: There are many ways tech companies compete with each other, but there’s no competition like the one for talent. It’s hard to hire good people, and harder to keep the better ones. But then, there are few more important decisions than who to hire and how. What’s a company, other than a bunch of people working together on a shared mission? This is a good talk by the Carta CEO, in text format, about how the company makes decisions. Nothing too controversial, but it’s a good overview and some bits are interesting.

I want to repeat this point. We are increasing overhead by 50% because we failed to execute. It is not something to be proud of. It is humbling to go back to the labor market, hat-in-hand, asking for help. We did this when we hired you. We asked each of you to help us. You did not need us. There are plenty of great jobs. But we needed you. And thank goodness you came. We wouldn’t be here without you. But each of you was hired because the team before you failed to execute without you. And this is still true today.

Software Eats Itself



One question I get often is why I decided to get an MBA, and one abroad at that. I had a degree from a fancy CS school, and brand-name companies on my resume. And besides, aren’t all MBAs basically the same; a bunch of young kids who spend a year or two partying around the world and then get the same jobs, be it in consulting or product management?

There’s both a personal and a professional answer to that question. The personal side is simple. I wanted to be momentarily away from the Silicon Valley bubble I had spent a good 8 years in. While I thoroughly enjoyed most of that almost-decade, it felt like I was developing a blind spot to the rest of the world. Most of my conversations revolved around tech and most of my friends were in tech. Spending a year abroad, split between Singapore and France, surrounded by people not in tech, seemed like a good idea. And it was!

The professional side is a bit more controversial. But here’s my contrarian bet on technology: while there’s a seemingly insurmountable shortage of technology workers, especially in the Bay Area, the tech industry will also be automating itself, faster than most people are ready to appreciate. The lower end is already experiencing this pain, but the bar will slowly move up.

The Cloud is not just a Server

Let me give you an example. When I started my career in the Bay Area at Digg, we owned hundreds of physical servers, colocated in a facility. There were people hired specifically to maintain those computers, making sure the blinkenlights kept on blinking in unison. While Digg had a sizable user base, especially for its time (and even by contemporary standards), it’s hard to imagine a company doing the same today. You simply click a button on the AWS console, and more machines than you could ever need will be at your disposal.

And this is really the most obvious example, but the bar keeps going up and up. What used to take 10 engineers 10 days to program is now a simple library on GitHub that you can import into your application with a few hours of tinkering. The always-eloquent Benedict Evans likens former white-collar workers to cells in Excel. But you can take that analogy even further. Pivot tables replaced the lowest levels of analysts, and Tableau reports moved the bar up even further.

Looking at just the offerings of AWS, it’s hard not to feel the same fate holds for technologists too. Most of the products on that chart used to be someone’s full-time job at a sizable tech company. Why do you need a full-time database administrator when AWS has a turn-key solution for you? I am simply taking Amazon as an example here as it is The Cloud for most tech companies, but it’s hardly alone. Its Seattle neighbor, Microsoft, has basically the same products, plus a few more especially focused on machine-learning-infused services. Simply drive along 280 in the Peninsula and you’ll be inundated by billboard after billboard of companies offering similar services.

Of course, you could argue that I am simply contradicting myself here; clearly these are tech products built by tech workers. Yes, you don’t need to build your own payment infrastructure if you just use Stripe, but someone needs to build Stripe. And the same goes for all those features at Amazon, and Microsoft, and so on. These companies need to hire people, but there are qualifiers.

Doing more with Less

And that’s the other side of my argument. I believe as more and more of the bottom rung of the technology work is either automated or abstracted away, there’ll be more and more demand for the upper end. In other words, more specialization which will require deeper, more intensive training.

Given most tech jobs are in states with unenforceable non-competes and at-will employment laws, these job markets are relatively liquid. Ideally, this would mean compensation stays relatively close to the equilibrium suggested by supply and demand.

Yet, there are other factors, like the housing shortage in California and New York and a frothy venture capital boom, artificially increasing salaries. Of all the companies I worked at over 8 years, almost none turned a profit, and even the one that eventually did go public is still not profitable. There’s a horizon issue here, as profitability is hardly the goal of any growth-stage firm. Yet, during all this time, I was gainfully employed and made a decent living, primarily thanks to the largesse of various venture capitalists and a zero interest rate environment. Is this sustainable forever?

And then there’s the curious market of coding bootcamps and the like. It’s overall a good thing for society if more people can get high-paying jobs without having to spend 4 years in college. And it’s a good thing for tech companies to come to their senses and give up their obsession with fancy degrees, when a significant chunk of the work at even the most “high tech” firm is tying together different services, which hardly requires a Carnegie Mellon degree.

On the other hand, what seemed like an amazing market, charging a fraction of what would otherwise cost people tens and sometimes hundreds of thousands of dollars in student debt, is now littered with dead company after dead company. Most of the smaller players are already out, and some of the big ones are having to offer more and more specialized degrees. There’s also the Lambda School model, with its clever financial incentive alignment and heavy focus on initial student quality, but its success (if you take its outspoken CEO at his word) is at this point more the exception than the norm. Just because there’s big demand for more tech workers, it doesn’t mean the demand is limitless, or that we can or should convert our entire workforce into a bunch of coders.

Cash does not Rule Everything Around Me

Compensation is hardly the reason I wanted to switch functions, though. If anything, the salary prospects of a software engineer are brighter than those of most other functions in tech. The reason for my switch is that I firmly believe in the increasing abstraction of technology. While discussing this piece with my co-host Ranjan, we initially disagreed on what I mean by that word: abstraction. Let me explain.

In software engineering, abstraction means something specific. Namely, the idea is to hide complexity behind an interface and only expose the important bits. A good analogy is the automatic transmission, which is an abstraction over the manual transmission. You can either forget about changing gears entirely, which does come at a cost, or, with more recent paddle-shift designs, focus on changing gears without ever using a clutch pedal.

This notion of abstraction means that you can do more with less. In the case of driving, your cognitive load is reduced, and driving becomes easier. In the case of the larger software engineering industry, it means that you can slowly focus less on what software is, and more on what it does. When you don’t have to worry about the intricacies of your software and infrastructure, you can focus on the larger vision, and increase your leverage and impact.
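To make the transmission analogy concrete, here is a toy Python sketch; the shift points are made-up numbers, and the only point is that the driver’s interface (`accelerate`) hides the gear logic entirely.

```python
# Toy sketch of abstraction: the driver never touches gears directly.
# Shift-point speeds are invented for illustration.

class AutomaticTransmission:
    SHIFT_POINTS = [0, 20, 40, 60, 90]  # speed (mph) at which each gear engages

    def gear_for(self, speed):
        gear = 1
        for i, threshold in enumerate(self.SHIFT_POINTS, start=1):
            if speed >= threshold:
                gear = i
        return gear

class Car:
    def __init__(self):
        self.speed = 0.0
        self._transmission = AutomaticTransmission()  # hidden complexity

    def accelerate(self, delta):
        """The only control the driver needs; gears are handled internally."""
        self.speed += delta

    @property
    def gear(self):
        return self._transmission.gear_for(self.speed)
```

The `Car` class could later swap in a different transmission without the driver-facing interface changing at all, which is the whole appeal.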

In a world eaten by software, being a software person is going to be not just an advantage, but a requirement. But in a world of increasing software abstraction, the most in-demand skill will not just be the ability to think in software. Software people will also have to think in terms of the things software touches, which is everything. This, in the end, is why I decided to pursue a degree in business.

And hey, if it all fails, there’s always philosophy.

What I’m Reading

Wimbledon 2010 live blog: 23 June: The Wimbledon final between Federer and Djokovic was epic, but it’s still nothing compared to the 2010 Wimbledon match between Isner and Mahut. At 11 hours, it’s the longest professional tennis match in history. And this liveblog from The Guardian was an epic hit when it came out. I am both amused and disturbed that I remember it so dearly:

I’m wondering if maybe an angel will come and set them free. Is this too much to ask? Just one slender angel, with white wings and a wise smile, to tell them that it’s all right, they have suffered enough and that they are now being recalled. The angel could hug them and kiss their brows and invite them to lay their rackets gently on the grass. And then they could all ascend to heaven together. John Isner, Nicolas Mahut and the kind angel that saved them.

Principles for a More Informed Exceptional Access Debate: I talked here about end-to-end encryption before. As more happens in bits over atoms, the base assumptions on which our civilizations are built are tested. Who gets to see what? How do law enforcement and intelligence organizations work, and how do we balance their demands on data with individuals’ rights to privacy? This is a good primer on some of the basics of end-to-end encryption. I don’t agree with everything here, but it’s sober and introduces some good points.

Law enforcement and intelligence agencies have been “going spotty” for some time, in that changes in technology continually change what is available to law enforcement. There’s no panacea that can solve all the problems law enforcement has with access to information. This article outlines how to enable the majority of the necessary lawful access without undermining the values we all hold dear. For the purposes of this article, we’ll use the term “exceptional access” to mean a targeted government authorization to access, with the assistance of the service provider, the data belonging to a user when needed, for example as part of a criminal investigation or to stop terrorists. It’s exceptional because almost all users aren’t affected by it and it’s not very common, on the scale of the total number of devices and the total number of communications enabled by the platforms.

Rise and Waterfall of Apple Maps



We talked about the 2019 WWDC last week. Yet, it was such a densely packed presentation that it is worth pulling out a few more threads. During the presentation and the Apple pressers afterwards, one thing that caught my attention is how Apple has redesigned its Maps app, again.

If that phrase sounds familiar, it is because this is the second time Apple has announced it’s built the Maps experience from the ground up, after its initial less-than-stellar launch. This time, Apple says, its maps are even better. Now there’s Apple’s version of Street View, which is sleeker. The maps have more detail. It all sounds good, but I do worry whether Apple really has what it takes to develop this type of software, and more importantly the infrastructure behind it, in a way that can compete with Google.

Tech companies like Apple are giant behemoths, and the way software is developed does affect the kind of software that comes out the door. We talked about this a bit in the context of the Service Oriented Architecture Amazon had followed for years, and how it enabled them to build AWS. But there’s more to it, so let’s discuss two different ways of building software.

Down the Waterfall

You can divide the ways you develop and launch big software products into two. The first one is basically how you’d think you’d develop and launch software: talk to some people who you think will use your software, and collect those requirements into a document. Then you talk to your software people and tell them what needs to be done; they argue with you a bit about why it’s so hard to do X and Y, but eventually everyone agrees on a specification. Then the software people go and do their thing for a few weeks (months, years, whatever) and you have software ready to launch.

This approach is called waterfall where there are sequential steps (requirements gathering, planning, development, testing) and your software falls from step to step.

Of course, this approach has its problems, the largest being that you really do not know how your users are going to use (and, equally important, abuse) your software before it is delivered into their hands. The moment people start interacting with it, they will realize their most requested feature is pointless, but if you could do this other thing, they’d happily pay double.

Alternatively, you can adopt a more iterative approach where the sequential steps are turned into smaller cycles. Instead of developing a giant specification that no one fully agreed on (but rather gave up discussing, in reality), you develop something and put it in front of people. They tinker with it a bit, and see what they like and what they don’t. You then take this feedback to your team, they tinker a bit more, and on it goes.

Can We Do Agile Instead?

This approach is called agile, and it has been the more common approach to software development lately. The main idea is that your iterative cycles are much, much smaller. Ideally weeks or days, instead of the months or years of the waterfall model.

There are, of course, some costs to this model. The obvious one is that there’s a ton more management you need to do, constantly collecting and collating feedback from your users and relaying it to your developers. Your software people need to be more in contact with the users, which they may or may not like. You also need to educate your users a bit; they need to be OK with their products being constantly in flux, with things coming and going at times. The much-maligned Move Fast and Break Things slogan is a good way to summarize this model.

In the software-eats-the-world world of online, most stuff gets built with an agile model now. There are some baseline requirements, of course, that every agile product needs to cover. For a consumer product, for example, you need to build some account management features, and for enterprise software there are regulations and standards you have to adhere to no matter what. Yet, the bulk of the value-add is slowly moving to the agile-developed part.

Agile is still mostly about developing the said software, but you can also see how the delivery method plays into it. In the waterfall world, the feedback came much later, so it didn’t matter whether the software was shrink-wrapped or downloaded. In the agile world, however, you need the delivery and the feedback as soon as possible. In fact, if you could skip the delivery step altogether and immediately onboard your customers onto the new version, that’d tighten the loop even further.

41 Shades of Blue

For Google, this method of developing and delivering software is ingrained in the culture. In fact, the company was so adamant about rationalizing the agile process and feedback collection that at one point it famously tested 41 shades of blue. If your software is essentially a web app accessed by thin clients (like web browsers), you can skip asking for feedback entirely and just throw out automatically generated versions of the same thing and see what sticks. That’s partly crazy, but really, why not?
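Experiments like that usually rest on deterministic bucketing: hash a user identifier together with an experiment name so each user always lands in the same variant. This is a generic sketch, not Google’s actual system; the `variant_for` helper and the list of hex blues are invented for the example.

```python
# A minimal sketch of deterministic A/B bucketing, in the spirit of the
# 41-shades-of-blue experiment. All names and colors are illustrative.

import hashlib

SHADES = [f"#0000{b:02x}" for b in range(215, 256)]  # 41 hypothetical blues

def variant_for(user_id, experiment, variants):
    """Hash user and experiment together so assignment is stable per user."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment is a pure function of the inputs, no per-user state needs to be stored, and a returning user sees the same shade every visit.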

Apple, on the other hand, has the DNA of delivering software in yearly cycles that generally correspond to its hardware launches. Every June, you get a sneak peek at the new stuff, and by September you can put it on your phone, along with the new hardware to go with it.

Now, the company has been working to make development of its new apps decoupled from OS updates, but you can still see the organizational culture is much, much different than Google.

This brings us back to Apple Maps. Now, it’s true that on iOS Apple Maps has the larger market share, but that’s mostly a function of immutable defaults. Yet, you’d be hard pressed to find anyone who finds Apple’s product to be as good as Google Maps. The fact that Apple itself had to apologize and point people to alternatives like Google Maps speaks to how misplaced the company’s initial confidence in its product was.

Apple Maps, Back Again

It’s hard to tell whether Apple Maps is as good as Google Maps now. For some use cases, it probably is. But, speaking from experience and some data I had seen while I was at Uber, when it comes to demanding users, Apple Maps simply doesn’t cut it. My personal experience echoes the same; in every city I travel to, I make a small concerted effort to switch to Apple Maps first, but soon give up and switch back to Google Maps. Sometimes it is because Apple Maps doesn’t have transit information, and sometimes its location database is simply lacking in quality.

I am aware I am being a bit unfair to Apple here, mixing software development practices (waterfall vs. agile) with what’s essentially building a big database that you need to keep updated. But I’d argue the culture that surrounds software development permeates how the database that supports the product is maintained too.

Moreover, it’s true Google had a huge headstart in building maps compared to Apple. There’s a huge fixed cost here; no matter how hard you try, you just need to spend a bunch of time building a ton of stuff before you get to something usable. In addition, Google Maps being the default on Android, and Google’s relatively liberal attitude to data collection, help Google in ways that Apple will not even try to match. There could be some smart tricks you can do with anonymization, but I doubt they’d ever get you the fidelity of data you’d get from simply tracking your users’ every move forever.

Apple will of course keep updating its location database on the server every day, and its transit database will get better too. Yet, looking at what Apple demoed at WWDC and how it was presented, I can’t help but notice that the Cupertino company still hasn’t fully come to appreciate how people have come to expect their products to get better every day, not every year. This is especially key for something like maps, where the info starts going stale the moment you put it in the database.

Former Apple employee Justin O’Beirne has written many in-depth comparisons of Apple and Google Maps. One of his concerns, in his latest essay, was whether Apple was too focused on shapes, which are pleasing, while Google focused on information, which is useful. I think that comparison does speak to what different things matter to each company. But I also think that, as important as the priorities, is how the work gets done, and how that affects the outcome. Apple has its work cut out for it.

What I’m Reading

The day we discovered our parents were Russian spies: Reading about spycraft is always fun. But then you realize spies are people too. They have lives outside of work, friends, families, and sometimes, kids. This is a fascinating story about two Canadian kids who, one day, learn that their parents are actually Russian spies, and have their entire lives taken away from them. If you’ve watched the famous TV show The Americans, this is the story it is loosely based on:

The programme was the only one of its kind in international espionage. (Many assumed it had been stopped, until the 2010 FBI swoop.) Many intelligence agencies use agents operating without diplomatic cover; some have recruited second-generation immigrants already living abroad, but the Russians have been the only ones to train agents to pretend to be foreigners. Canada was a common place for the illegals to go, to build up their “legend” of being an ordinary western citizen before being deployed to target countries, often the US or Britain

What I Learned Writing a Book: My twice-manager and long-time friend Will Larson’s book, An Elegant Puzzle, is finally out. Will is a compassionate manager first, but also a very analytical thinker who loves analyzing the human and systemic aspects of software development. He’s been writing on his blog for years, and this newsletter is partly inspired by his writings as well. I hope you take the time to read his book, or check out this review. And in the meantime, here is Will on the process of writing a book.

There is this sort of classic bind in life that sometimes your biggest opportunities arrive when you’re least equipped to take advantage. Towards the end of finishing the book, I found my patience and joy in writing declined – I was burning out – which isn’t too helpful as my continued writing is a big part of marketing the book, and it’s also a unique opportunity to get more folks reading this humble blog if I can keep writing good things.

Apple enters the identity ring



I love WWDC, Apple’s annual developers conference. Every year Apple announces new software in early June, and then in September the new devices come out. It’s always fun to play with the new iGizmos, for me at least, but there’s something fundamentally more fun, more exciting about seeing new software bestow your existing stuff with new functionality.

This year’s WWDC was sort of special, too. As many others have pointed out, it seemed like the first developers conference Apple has held since it got over the iPhone. Mind you, the iPhone is still the rainmaker, but it’s not the main focus anymore. There are new technologies like SwiftUI, setting the tone for the next few decades of iPhone UI development. There’s Catalyst, bridging the gap between iPad and macOS apps. But what stole the show for me was Sign In with Apple.

Here’s the short version: Apple now allows (more on this “allows” later) developers to eschew building their own authentication mechanism and just delegate it to Apple. In more human terms, you can now sign in to your favorite app using your Apple ID, just like you could with Facebook or Google. And of course, it works everywhere, including the web. In one swoop, Apple joined the fight to be the identity provider online.

There were many things that leaked before this conference, but as far as I could tell, this announcement came out of nowhere, which makes it doubly exciting, as well as scary.

Of course, this new feature comes with Apple-esque twists, the main one being privacy. First of all, Apple will not share anything with the developers other than your name and email (and a stable key that you can use in your database). That is a big departure from Facebook's or Google's systems, where developers can request a myriad of information on the users. Then again, Apple doesn't have the same high-fidelity data on its users that social media / adtech companies do in the first place.

Adtech Won’t Like This

The real surprise came late, though. Apple will not just allow a third-party sign in, but also allow users to hide their email addresses from the developers. When a user wants to "hide their email", Apple will generate a throwaway email address (per user and account combination), pass that on to the developer, and relay all the messages to the user.
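Mechanically, the relay is simple to picture. Here is an illustrative sketch (my own, not Apple's actual implementation, with made-up names and domains) of generating a stable throwaway address per user-and-app pair and remembering where to forward incoming mail:

```python
import hashlib

# Maps each throwaway alias back to the user's real address,
# so the relay knows where to forward incoming messages.
relay_table = {}

def relay_address(real_email, app_id, domain="relay.example.com"):
    # Hash the (user, app) pair so the same user + app always gets the
    # same alias, while different apps see unrelated addresses.
    digest = hashlib.sha256(f"{real_email}:{app_id}".encode()).hexdigest()[:12]
    alias = f"{digest}@{domain}"
    relay_table[alias] = real_email
    return alias

def deliver(to_alias, message):
    # The relay looks up the real recipient and forwards the message.
    real = relay_table[to_alias]
    return f"forwarding to {real}: {message}"

a1 = relay_address("jane@example.com", "app.fitness")
a2 = relay_address("jane@example.com", "app.shopping")
assert a1 != a2  # the two apps can't correlate the aliases
assert a1 == relay_address("jane@example.com", "app.fitness")  # stable per app
```

The key property is that each developer sees a working, stable email address, but no two developers see the same one.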

This is not a novel feature; you could always do this to some degree with Gmail's "+" trick, appending a label to your address (yourname+someservice@gmail.com). This isn't perfect, but it at least allows you to give out an individual email address per service. There are also services that will generate throwaway, disposable emails. However, both of those options remained popular only among a small group of people. Deploying this relatively sophisticated approach to privacy in such a user-friendly manner to billions is a through and through Apple move.

This will change things. There are two main reasons why developers want you to create an account on their platform. The first is having a persistent identity. This buys you the ability, for example, to store data on the server so that you can later sign in from a different device, or after a device reset. If this explanation sounds bland, it's only because this is something we take for granted; every service should do this.

The other reason why developers do want you to login has only emerged in the last few years. As more and more of our lives moved into the apps, and those apps began sucking more and more types of data, smart people realized all this data can be turned into cold-hard cash, especially in the form of hyper-precise (though rarely hyper-accurate) profiles for ad targeting. The interesting thing is, the sum of all this information these apps collect is generally bigger than its parts.

Put differently, if you want to maximize the value, you need to "join" (or merge) different types of data from different apps. Now, you can see where I am going with this. The unique identifier that ties together your data from all the different apps is your personal email address. This is why Apple providing a unique email, in essence hiding the primary key that would allow those accounts to be merged, is a big, big deal. For many years, many developers, especially small ones, would essentially build apps not to make money via the app itself, but rather to gather enough users to sell their users' profiles to the highest bidder.
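To make the "join" concrete, here is a minimal sketch (with made-up records and field names) of how two unrelated apps' data can be merged on a shared email address, and how a per-service relay address breaks that merge:

```python
# Hypothetical user records collected by two unrelated apps.
fitness_app = [
    {"email": "jane@example.com", "avg_daily_steps": 9200},
]
shopping_app = [
    {"email": "jane@example.com", "recent_purchases": ["running shoes"]},
]

def join_on_email(records_a, records_b):
    """Merge two datasets using the email address as the shared key."""
    by_email = {r["email"]: dict(r) for r in records_a}
    merged = []
    for r in records_b:
        if r["email"] in by_email:
            # Same key in both datasets: combine into one richer profile.
            merged.append({**by_email[r["email"]], **r})
    return merged

# With a real email, the two apps' data links into one profile:
print(join_on_email(fitness_app, shopping_app))

# With a per-service relay address, the key no longer matches,
# so the same person looks like two strangers:
shopping_app_relay = [
    {"email": "x1y2z3@privaterelay.example.com", "recent_purchases": ["running shoes"]},
]
print(join_on_email(fitness_app, shopping_app_relay))  # -> []
```

The data broker's entire business rests on that first join succeeding; hiding the key makes the second, empty result the norm.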

Will Developers Adopt Apple Sign In?

Let’s assume for now Sign in With Apple is “good” for the end-users. Yet, Apple would still need to get the developers on board. So, will they? I’m going to try to answer two questions at the same time, from the developers’ perspective: A) Are third party logins good? B) Is Sign in With Apple a good option?

First of all, these third-party sign-ins generally work. If anything, they work too well. Facebook Login, for all its flaws, is much easier to use than having to enter a username and password for every app you use. Sure, very rarely it is down, and you are royally screwed for a few minutes, but that's few and far between. And yeah, it does add a bit of complexity to maintain multiple identity profiles for each account, but that's a small variable cost, on top of the fixed cost of supporting third-party logins in the first place.

But the wins are huge. Having worked on this stuff professionally across many companies, I can tell you that there are more ways for users to get confused and mess things up filling in the most basic form than there are stars in the sky. If you can make a user simply tap a blue button that says “Login with Facebook”, you’d much rather do that than build a huge login form with all its intricacies. 

If anything, I’d expect Apple’s solution to be even more frictionless since they fully control the OS and they can build native interactions. Things like Facebook Login do work, but only through elaborate hacks. Every once in a while, you’ll have users who get stuck in some weird loop because their connection to Facebook dropped, or their phone failed to open a browser app. Apple’s system, at least on the UI side, will be more robust and less error prone.

Lastly, this might make a tiny sliver of users more likely to use your product. There’s a small, but arguably vocal, set of people for whom privacy is a major concern. An Apple provided login system where your identity is protected could make your product more attractive to some. 

It also helps that the system will come with some anti-fraud mechanisms. Apple's login system will tell you whether a user who just signed in might be a fraud-y user, based on device-level data. Google can, to some extent, provide similar functionality, but Facebook is limited to what it can gather via its apps and server-side data. If you want to adjudicate a user's identity, it's generally better to do it closer to the user. Such a set of features might be attractive to some developers and users alike, but it probably won't move the needle much.

That’s…about it? What about the cons? There are not that many, but they matter.

Well, the major one is that using a third-party login system severs your direct relationship with your users. This is not a binary thing, of course. In either case, you will still have an email address to reach your users, but in the case of Apple Sign In, it'll always be mediated by Apple. Not only will you be unable to work with data brokers for some cheap ad dollars, but you'll also lose the ability to buy services from other brokers to link your users' profiles with data from other providers, with whom you might have legitimate agreements.

But that's not all. Apple obviously didn't mention this in the presentations, but it didn't take long for people to find the stick in the documentation. If you are using a third-party login provider in your app, you have to support Sign in with Apple as well. Now, that is fun! I have been thinking about this since I watched the keynote and read the documentation up and down (hope your weekend was as fun as mine!).

The Verdict on Apple Sign-In

I think Apple Sign-In is a good thing for the world. However, I do not think it's something many developers will be jumping at implementing. But they will have to, and we'll be better for it.

First of all, while I acknowledge the ease of using third-party logins on the apps I use, the privacy implications of using them make me uneasy. I try not to use third-party logins whenever possible, and instead create individual accounts that I manage with 1Password. This is admittedly a bit more work, but not too tedious. Thus, I welcome Apple butting its way into the apps and providing a more privacy-focused alternative.

Of course, you could make the argument that Apple "forcing" its developers to use these products is a cheap shot. This, I think, overlooks a bit of reality. Look, I see the aesthetic appeal of letting the "better product win in the free market", where Apple converts developers by building a "better" login system. But it's not that simple.

I am not a huge free-market dogmatist, but if you were to take the free-market idea to its extreme, you'd also concede that this is, after all, Apple's walled garden, and it has the right to enforce any rule it so damn wishes. Of course, market definition is a tough one, and the increasing scrutiny of Apple using its dominant position in one market to hurt competition in others should temper this instinct a bit.

Moreover, I do think that Apple will have to work hard to get this right. There's a common trope in Silicon Valley that Apple doesn't get social. Now, I don't think authentication systems are particularly "social" things, but they are still not in Apple's usual bailiwick. The reason third-party logins work is partly that they absorb the complexity and expose very little to the end user. This stuff isn't that trivial to build, and nothing is ever easy at Apple's scale.

Other companies like Snapchat and Twitter have built similar systems before, but neither gained much traction, simply due to the vast popularity of Facebook's and Google's systems. Network effects are hard to overcome. More ironically, Facebook itself touted an anonymous login option a few years back, but it was never rolled out.

It's worth thinking about whether Apple could make Apple login more attractive to developers by providing certain privileges to certain apps. There are definitely some levers here. Screwing with the App Store search results would be a bridge too far, but Apple could potentially feature apps it likes prominently on the App Store. Then again, I think Apple would try to steer clear of explicitly rewarding apps that adopt it over those that don't, especially to protect itself against regulatory scrutiny, but also to maintain the quality of the App Store.

A lot of this will also depend on how constraining Apple's guidelines will be for developers who use Sign in with Apple. For example, the company requires that the Apple Sign In option be placed above other providers, and be very prominent. The Cupertino-based company can be notoriously picky, but also capricious, with such details.

In the end, I view the world of business through the lens of competition. It’s the fear of competitors that your consumers can flock to that aligns a company’s incentives with its users. For many years, social networks like Facebook and search engines like Google enjoyed a relatively relaxed marketplace where there wasn’t much to worry about in terms of rivalry. This came at huge costs to us all, including erosion of privacy. I am glad Apple is taking a stab at this. Whether this will work, that’s a different question. I am keeping my hopes up.

What I’m Reading

WWDC 2019: The Things You May Have Missed: The list of things Apple announced on stage and in developer sessions at WWDC this year was staggering. I watched the keynote and perused some of the new documentation, and definitely missed some stuff. This piece by Patrick Balestra is a good, comprehensive list. Most of them are on the technical side, but there are some interesting gems here that point to where Apple might be going next. A few definitely jumped out at me. (Ordering mine)

  • IMDF (Indoor Mapping Data Format) is a new concept introduced by Apple that provides a generalized, yet comprehensive model for any indoor location, providing a basis for orientation, navigation and discovery. @ortwingentz
  • Apps are to request “Always” access to the device location but users will see an alert when an app tries to access the location from the background that prompts them to “Always Allow” or to “Change to Only While Using”. Users are also presented with a map clearly showing that the app was tracking the location. In this way, users are not forced to take a decision upfront when installing the app for example. quicklywilliam
  • App Store Connect will soon get a near real time sales view showing you the last 24 hours. @lesmond
  • App deletions statistics will also be available as part of App Analytics in App Store Connect. @ilyakuh

Why Bankers Can't Stop Running (Subscription Required): I've only recently picked up running as a hobby. There are definitely days I dread going for a run. But never have I felt, after a run, that it was a bad idea. It messes with your head. This Financial Times columnist runs through (hah!) a bunch of high-powered executives who make time for their runs, no matter how busy they are, and wonders why and how. I went on a 5-mile run after reading this.

Mora also runs with colleagues and is one of the organisers of an annual event for Goldman’s summer interns and other runners at the bank. “For a junior out of college looking for a job, it’s another way for them to connect,” Mora says. “One day they’re sitting at a desk working with them, the next day at 5.30am they’re running the Brooklyn Bridge with a managing director, a partner, someone from the firm, running alongside them.”

Between 40 and 50 people join the run annually. Hu says he tries “to run with colleagues in every city” when he travels —something that I’ve done too, with varying levels of success. On a trip to Dublin, where Citigroup Europe is based, he went running with Ireland’s then junior finance minister Eoghan Murphy, who took him on the same stretch of beach where I once enjoyed my daily runs.

Is White Supremacy Good Business for Twitter?


This post is cross-posted from my joint newsletter with Ranjan Roy, The Margins. Please check it out, and consider subscribing.

There are certain things reasonable people can disagree on. Should you douse your fries in ketchup or mayo? Which is better, Star Trek or Star Wars? Does god exist, and if so, could we tell? There was a time, back when I had a lot more energy and a lot less concern for the psychological wellbeing of others, when I thought these were debate-worthy questions.

Now, I don’t particularly care about any of those, but I also don’t think having strong opinions on such questions makes anyone a bad person. You might be a bit tedious, and maybe I’d ask you to stop Redditing in real life, but still, I’d grab a beer with you.

There are also other types of questions I don’t debate anymore. Not because I don’t care, but because having a strong opinion on these questions other than the ones I hold does make you a person I’d rather not interact with. For example, I don’t think earth is flat. Nor do I think vaccines cause autism, or that US government faked the moon landing. These are generally settled debates, if there was one to begin with.

When is white supremacy not good?

Yet Twitter The Company, seemingly, is still divided on this issue.

Motherboard reports (emphasis mine):

Twitter is conducting in-house research to better understand how white nationalists and supremacists use the platform. The company is trying to decide, in part, whether white supremacists should be banned from the site or should be allowed to stay on the platform so their views can be debated by others, a Twitter executive told Motherboard.

Now, in 2019, you’d think we’d have also settled the debate on white supremacy. If not, let me share my views. White supremacy is a bad, vile, sick, horrible ideology that is based on nothing but pure hatred for other human beings. It has no redeeming quality and it has no place in modern discourse. You definitely do not need to or want to engage with a white supremacist, unless you are a professional politician and / or ethicist. None of those are up for debate. I am not even sorry if this absolutism bothers you.

I do not want to make the assumption that Twitter executives think white supremacy is good. Statistically speaking, there’s probably an employee or two who thinks that way, and to be kind, they can go fuck themselves. I also do not consider Jack Dorsey to be particularly #woke, but I also don’t think you need to be socially progressive to be a good CEO.

I am, however, dead curious about how on earth you embark on a mission where you have to answer the question "Do I want to have white supremacists on this platform, which I run for profit?" and expect to come up with any answer other than "No, Jack. White supremacists are bad".

Obviously I am caricaturizing things a bit. Repeatedly yelling "white supremacy is bad!" is probably not a good way to de-radicalize those who have been lost, or to make the world safer for those who are threatened by such sick ideologies. Social media companies' laissez-faire approach is partly to blame for the increasing radicalization, but it's not the only reason.

Yet, on a more logistical level, the idea that Twitter The Company has to go on this long soul searching mission to figure this out is quite crazy. I do not want to harp on Jack Dorsey too much here, but it’s really hard not to. The man’s entire brand is built on the idea that you should always think as hard as possible, to the point of not doing anything ever.

Here, let me lay down some hard truths on the table for all of us to consider, because really, we are all part of the problem.

It’s just Business

White supremacists make Twitter money. They count as daily active users. They create engagement. Twitter shows ads to white supremacists, and takes a cut when those ads make money. White supremacists and their activity are forever embedded in the machine learning models. You don’t have to see a single Nazi tweet to have interacted with them in some way. Your tweets, your likes, everything you do on Twitter, everything you see on your timeline is influenced, monetized and funded by some white supremacist somewhere.

There’s so much shit smeared on the walls of this house, we don’t even notice it anymore. Instead, we are just discussing what color of brown we like.

There are some clichéd objections to the idea that Twitter should just call it a day and ban white supremacists off of its platform. The first is that Twitter selectively banning people would amount to a curbing of free speech. The flaw with this argument is almost too obvious to point out; Twitter is a for-profit company that has no obligation to keep any sort of speech on its site.

This is really beating a dead horse, but Twitter is not a public square, nor is it a marketplace of ideas run as a courtesy to its users. Twitter exists only to make money for its shareholders, and every day Twitter keeps the white supremacists on its site, it is making money off of that activity. EU-funded research puts the number of alt-right users on the site at a minimum of around 100,000. Subjecting itself to the whims of the sickest people on earth based on the naive belief that the only antidote to bad speech is more speech is one thing. Pretending this does not make you money, or that it's not part of the calculus, is insulting everyone's intelligence.

Will They, Won’t They?

Till now, I have been assuming Twitter has the ability (as opposed to merely the willingness). This is admittedly a generous assumption, but not a crazy one. A common argument against bans is that Twitter actually may not have the ability to identify, ban, and keep the white supremacists off of its platform. But let me flip the argument on its head. Is Twitter worth anything if it cannot keep a modicum of decorum on its site?

Partly, I do not buy the idea that there are so, so many white supremacists on Twitter that even an expansive manual cull couldn't make a substantial difference. The aforementioned EU research puts a floor of 100,000 alt-right members on the platform, which is a big number, but not an unmanageable one for a well-run company. A big operation might be costly, and there could be some false positives. But if the de-platformings of people such as Alex Jones and Milo Yiannopoulos have shown anything, it is that they work, and the resulting frenzy around censorship generally dies off once the media cycle moves on to the next Trump tweet.

We talk about Balkanization or “splinternet” often on this newsletter. It’s worth pointing out Twitter already blocks certain content, and bans people often in countries like Germany, and yes, Turkey, where I am originally from. Twitter’s cooperation with the Turkish authorities for silencing dissent is dishonorable, but I do not particularly fault them for it. 

However, what Twitter wants to do and what it is being forced to do are two different things. Lumping them together doesn't help. Not many people at Twitter HQ are excited about blocking journalists' accounts within Turkey on Erdogan's request. It is, however, very clear (I think?) that Twitter does think white supremacists are bad, yet it prefers to have them on its platform.

In the end, I will wholeheartedly concede that these questions are more easily answered from the outside. From my time at Uber, I've seen first-hand how what appears to be a small fix, a minor change in policy, could be impossible to put into action for reasons unknown to even the most knowledgeable experts. But then, there were also a lot of legitimate concerns with Uber's previous management, and it resulted in a hell of a year for the company, and the eventual ousting of its CEO. Twitter might very well be afraid of losing not just users and engagement, but the actual physical safety of its employees and executives.

And that’s really the rub. Twitter made this bed, and now has to sleep in it. Once you associate yourself with the sickest of all, you are forever stuck there. There’s no way out. Unless, that is, they choose to find one.

What I’m Reading

Grow Smarter, Faster: How Axios drives engagement with user-level data: Normally, Ranjan is loath to promote his company's stuff on The Margins. It keeps us indie. But he's got me thinking a lot about newsletter analytics. One thing I never thought about was focusing on individual readers, as opposed to the crowds, as is common in web marketing. His team just interviewed the VP of Growth at Axios on exactly this subject, and it fits my experience building this newsletter well:

“You don’t simply get a 50% open rate by having a 100% opener and a 0% opener. You get two distinct cohorts that you act upon in different ways.” Simply put, some of your audience is engaged and some isn’t. So why do we treat them all the same? We should not measure success as an aggregate, but instead try to understand if the right people are highly engaged.

The Incels Getting Extreme Plastic Surgery to Become Chads: There's no burying the lede here. The pick-up artists gave way to the incels ("involuntarily celibates"), and now they are undergoing surgery to make themselves look more like those they hate. Cringing doesn't even begin to describe my feelings, but I still couldn't stop reading. The internet does weird things to people:

Mike recently got a jaw procedure called BSSO, plus a hair transplant. After the surgeries, he met two girls at his other job, teaching comedy, whom he considered “cute,” and he took this as a sign of success. Now he’s investing in cryptocurrency in hopes of getting more procedures with Eppley. In a recent forum thread, he posted a selfie specced out with angles and degrees, measurements of his features; he then found a photo of Tom Cruise and gave it the same treatment. (Mike’s jaw angle was 69.02 degrees; Tom’s was 76.31.) “I want to solve this woman thing,” he told me.

Is Fake News spam?


This post is cross-posted from my joint newsletter with Ranjan Roy, The Margins. Please check it out, and consider subscribing.

Bill Gates probably never said "640K ought to be enough for anybody", but he definitely did say email spam would be solved in two years, back in 2004. "Two years from now, spam will be solved" were his exact words, in front of a bunch of big-wigs at Davos.

Needless to say, spam was not solved in 2006, but it was eventually solved. There's still a ton of email spam, mind you, clogging the tubes, but all in all, most of what people consider spam rarely hits their inboxes, going instead to their spam folders. This is progress!

There are a bunch of reasons why and how spam was "solved" in the narrow sense. First of all, lots of stakeholders decided to play together, from industry to governments to the individual players in the field. There was a bout of regulation in the US, the motherlode being the cutely named (I should know…) CAN-SPAM act. As a result, there was a bunch of high-profile cases both in the US and other countries, and people did go to jail. To top it all off, the email people came together and agreed on a few protocols to better authenticate both their servers (like SPF) and the emails themselves (like DKIM).
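Both of those mechanisms live in DNS. Here is a hedged sketch of what such records look like for a hypothetical example.com (the selector name, mailer domain, and key material are all placeholders):

```text
; SPF: declares which servers are allowed to send mail for the domain
example.com.                  IN TXT "v=spf1 include:_spf.example-mailer.com ~all"

; DKIM: publishes the public key receivers use to verify the
; cryptographic signature on the messages themselves
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
```

A receiving server checks the sending IP against the SPF record and the message signature against the DKIM key; spam sent from random botnet machines fails both.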

But there's also the fact that the technology to detect and file away the actual emails just got better. We always had the technology to send a ton of emails for cheap, but that's much easier than being able to read each of those emails individually and make a decision on the spot. The first is a horizontally scalable problem; you can just throw money at it as long as you make more on the other end. Making computers think and understand requires more of a breakthrough.
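As a toy illustration of the "computers reading each email" side, here is a bare-bones naive Bayes word classifier with made-up training data; real filters use vastly more data and features, but the principle is the same:

```python
from collections import Counter
import math

# Tiny made-up training set: 1 = spam, 0 = not spam.
training = [
    ("win a free prize now", 1),
    ("claim your free money", 1),
    ("meeting moved to tuesday", 0),
    ("lunch plans for tuesday", 0),
]

spam_words, ham_words = Counter(), Counter()
for text, label in training:
    (spam_words if label else ham_words).update(text.split())

def spam_score(text, alpha=1.0):
    """Naive Bayes log-odds: positive means 'looks like spam'."""
    spam_total, ham_total = sum(spam_words.values()), sum(ham_words.values())
    vocab = len(set(spam_words) | set(ham_words))
    score = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out the product.
        p_spam = (spam_words[word] + alpha) / (spam_total + alpha * vocab)
        p_ham = (ham_words[word] + alpha) / (ham_total + alpha * vocab)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free prize inside"))  # positive: spam-like
print(spam_score("tuesday meeting"))    # negative: looks legitimate
```

The more labeled mail you see, the better these word statistics get, which is exactly the scale advantage the next paragraph describes.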

Obviously, there's a bit of a chicken-and-egg problem (solution?) here. If you are going to use machine learning to detect spam, the more signal you have, the better your algorithms are going to get. This is why, for example, more and more emails going through a few providers like Google and Microsoft helps. Not just for machine learning, but also for being able to stop a ton of spam in one go with blacklists and such.

There are big downsides to this, as we keep talking about here again and again, but it is what it is. Economies of scale are a powerful force.

Fake News: Artist formerly known as Spam?

I mention all this because if you look at the spam problem long enough, and squint a bit, it starts to resemble the fake news problem. Replace Eudora with Facebook and Nigerian princesses with some Russian-government trolling, and you have a system where the cost of distributing material is cheaper than the returns, and the entire thing flies off the wheels. This isn't really a new line of thinking, and I'll credit some Benedict Evans tweets (who ironically blocked me on Twitter) for some of the terminology I'm using here.

Anyway. It’s natural to think that the previous approaches should work on this problem too; 1) centralize to get better data and leverage (i.e. one tweak fixes everything) 2) apply machine learning. Rinse, repeat. Simple enough, really.

If you are, say, Facebook, dealing with a huge anti-trust problem, this could be a bit of a godsend. If the problems you have created are so big that they are putting entire liberal democracies in the West at risk, and fanning genocidal flames in Southeast Asia, then you can make the argument that "only someone as big as me (centralized) and someone who has the technical chops (machine learning) can solve this problem". I am not saying that Facebook would rather have the fake news problem around the world than the anti-trust troubles at home, but I am saying you would be incentivized to think that way. It'd at least color your thinking a bit.

It’s good to check your assumptions every once in a while.

What if fake news is not a spam-like problem but actually is something else, that requires different types of solutions?

For example, a defining quality of spam is not just that it is unsolicited, but that it is annoying. It gets in the way of the useful stuff. Not only that, it is crap that you do not want to read, even though there are enough people who do read it to make it worthwhile to send.

Fake news, on the other hand, is almost always the opposite. You want to read that stuff. For example, Casey Newton pointed to this study in his Interface newsletter that says some of the “fake news” is even more engaging than the real news.

It is eye-opening.

On Facebook, while many more users interact with mainstream content overall, individual junk news stories can still hugely outperform even the best, most important professionally produced stories, drawing as much as four times the volume of shares, likes, and comments.

This sort of makes sense, if you think about the entire genre of literature called urban legends, or conspiracy theories in general. A secret cabal that runs the world is definitely more interesting than a bunch of old people mangling legal documents and yelling at each other on C-SPAN.

And before you think only a nutjob here and there would believe in conspiracy theories, consider that more than 1/3 of Americans don’t even buy into the climate science. This is the stuff your boring real news that takes hours of research to produce has to compete against:

A quarter believe that our previous president maybe or definitely was (or is?) the anti-Christ. According to a survey by Public Policy Polling, 15 percent believe that the “media or the government adds secret mind-controlling technology to television broadcast signals,” and another 15 percent think that’s possible. A quarter of Americans believe in witches. Remarkably, the same fraction, or maybe less, believes that the Bible consists mainly of legends and fables—the same proportion that believes U.S. officials were complicit in the 9/11 attacks.

Good luck fitting all that to print, The New York Times.

And there’s also the difference between the motivations of people who send spam and those who create and distribute the fake news.

Fake news is not about profits

The reason why spam flared in the first place, making a quick buck, also made it easy (I mean, bear with me) to both detect and punish those behind it, further making it less attractive. There are only so many ways to get people to make a purchase on your website and get that money in your bank account. In the global financial system, there are ways (and loopholes) to track people and tip the law enforcement to knock on someone’s door. Laundering money is equally hard, which is why you only see relatively large amounts being laundered (and caught). 

Fake news, however, comes in many forms. A big chunk exists for the same reason spam does: the zero-cost distribution means that if you can make something go viral on a platform and slap a few ads on it, you can make a quick buck.

But how about politically motivated fake news? Stuff that a bored Redditor creates by slowing down a politician's speech to make her sound drunk and incoherent (have they even listened to Trump?) is an interesting example. How do you protect against a lone wolf, when the wolf can inflict damage at a massive scale?

We’ve seen this happen multiple times in India, for example. You can just crop out a video from one event, add a new caption to it and get a bunch of people violently lynched to death. Obviously, the bulk of the blame lies on the physical perpetrators of the crime. But you can’t just shrug this behavior off as people being crazy, when it happens over and over again, to the point of genocidal action, while you are raking in the profits by the billions.

Not that we are making the problem easier on ourselves. One of the big gains of a centralized system, the argument goes, is that it allows you to collect more data and build better algorithms. Will Facebook be able to gather enough data when it can't look at the content at all, because all the chats are now entirely end-to-end encrypted? Will just looking at the metadata be enough? We don't really have good answers to these questions.

What Do We Do?

There are some easy wins here, at least in theory. I think a great deal of fake news is spam-like and can be eliminated by similar techniques. Yet, I don't think that will make the pain go away as much as it did for spam. We'll need a multi-pronged approach.

Lack of timely, accountable information from social media companies encourages a reactive approach, often too late to fix the damages, let alone prevent them or really understand what happened. Similarly, without the fear of competitors that users can flock to keep them in check, companies engage in extremely risky behaviors.

Moreover, these behaviors and their results are generally hidden from public or hard to even detect, and only discovered by painstaking investigation by journalists. This doesn’t scale, and the power asymmetry, let alone the animosity, between the two industries will only get worse. Regulations around other critical industries (like finance) and individual companies are much tighter, and can be a starting point.

But there are also some other fundamental issues we'd want to discuss. Do we really want a truly anonymous internet? For years, the anonymity the internet allowed was considered a feature, including by yours truly. But dogmatic anonymity fervor should not disallow accountability.

Furthermore, we should think about whether we want to run our major information distribution channels on advertising-based networks, and whether we want to get all our news from a few sources that aren’t accountable to anyone.

What I’m Reading

How the Kleiner Perkins Empire Fell: Kleiner Perkins is as iconic and blue-chip as Silicon Valley venture capital firms come. (Disclaimer: I worked at a company where Kleiner Perkins was a major investor, and John Doerr was on our board.) In recent years, however, the firm has gone through a bit of turmoil, and arguably lost a bit of its (never intentionally claimed) luster. This is an interesting overview:

The firm’s heart may have been in the right place, but its investments flopped. Some, like electric-car maker Fisker Automotive, went bankrupt. Others, like fuel-cell manufacturer Bloom Energy, took 16 years from Kleiner’s investment in 2002 to go public. The result was a tarnished brand at a time Kleiner’s competitors were killing it with investments in the digital economy. Accel Partners, for example, was the early backer of Facebook. Union Square Ventures was among the first to put money into Twitter. And Benchmark Capital, which scored in the web’s first era by investing in eBay, staked Uber in its early days.

The problem with Ben Thompson’s ‘aggregation theory’: I am a big fan of Ben Thompson’s Stratechery, and have been a paying subscriber for years. This is, in my humble opinion, a fair criticism of his famous Aggregation Theory. It argues that aggregation theory simply uses new terms for old concepts. Thompson had a response on his newsletter later:

The problem I have with the [aggregation] theory is that it implies there is something fundamentally new or unique about the economics of the brave-new-world of tech, when in reality, the old economic rules still work just fine. This, in turn, creates the raw material to rationalize bubble thinking/valuations, instead of more level-headed analysis. The reality is that from time immemorial, it has always been the case that certain points in the supply chain make more money than others, reflecting differences in market power. Porter’s Five Forces, for instance, has long been used as a framework for analysing where and how much market power exists, and explaining and predicting why some firms make more money than others. If your suppliers for e.g. have a lot of bargaining power, all else held constant, you tend to be less profitable, and vice-versa.

WhatsApp too gets hacked.


Intrinsic motivation is hard to muster, but it is powerful. Back when I worked at a cloud storage company, our CTO really wanted us to be excited about our end-to-end encrypted (E2E) offering. He believed, rightly so, that without E2E, any rogue employee could look at any customer’s data. So we built a small web application that randomly pulled photos from employees’ accounts and put them on a giant TV screen for everyone in the office to see. There was a small backlash; employees were encouraged to use the product for their daily needs, but no one had really agreed to have their coworkers see photos of their kids.

The giant screens, however, stayed for a few months, until we actually shipped the fully integrated E2E features.

Encryption was thrust back into the headlines, albeit in a roundabout way. The Financial Times reported last week that the Israeli spyware company NSO Group developed a tool that used Facebook’s WhatsApp voice call feature to install surveillance software directly on targets’ phones.

It is scary stuff (emphasis mine):

WhatsApp, which is used by 1.5bn people worldwide, discovered in early May that attackers were able to install surveillance software on to both iPhones and Android phones by ringing up targets using the app’s phone call function.

The malicious code, developed by the secretive Israeli company NSO Group, could be transmitted even if users did not answer their phones, and the calls often disappeared from call logs, said the spyware dealer, who was recently briefed on the WhatsApp hack.

You get a missed call, and game-over. You may not even be aware that you’ve been hacked! It doesn’t get much worse (better?) than that.

Facebook’s WhatsApp is famous for deploying end-to-end encryption to billions of people worldwide. That seems like a noble thing. It is likely that WhatsApp’s founders actually believed in the benefits bestowed by the encryption scheme. But then, they also said advertising sucks, so who knows? You can’t buy loyalty, they say, but it turns out you can rent it.

I’ve talked at length about whether Facebook merging all its chat applications into a giant Voltron of a messaging app while also introducing E2E is a privacy-forward act.

I wrote back then:

First, the encryption. Zuckerberg might appear to leave data on the table when he decides to encrypt all communications, but that’s hardly the case. Facebook doesn’t use the contents of the messages today for advertising. Yet the company’s targeting is so good, and people are more predictable than they think, that people accuse the company of listening to their private conversations. Moreover, even when Facebook encrypts all the messages you send and receive, it will still be collecting tons of other data, such as the metadata about the messages, location information gathered by the apps, your browsing habits via the various trackers on the web, data shared by apps that use Facebook SDKs, and the huge troves of data it buys from data brokers. None of that, seemingly, is changing.

In some ways, the NSO Group’s hack (seemingly) has little to do with end-to-end encryption; rather, it relies on a bug in the larger app to install a surveillance tool that captures things before they are encrypted by the app.

The “end” in “end-to-end” sort of hides the fact that there are several layers that exist before the data is fully encrypted, in a way that makes it invisible to the transport layer. First of all, you have to type the message into your phone, which exposes what you type to people (or cameras, mind you) around you. Even if your screen and keyboard are covered, you are still leaking data from your keyboard, both visually and acoustically.

But then there’s also the operating system that your app is running on; you simply rely on the fact that your keyboard isn’t logging things as you type them, your camera isn’t recording when it shouldn’t, and so on and so forth. There are a lot of “loose” ends before end-to-end encryption shrouds your messages in mathematical secrecy. And then, there’s the recipient. In most cases, you have no idea what situation the recipient is in or who he or she might be. For all you know, they might just be broadcasting your texts to the building across from them.

Encryption is just part of the puzzle; it is definitely not a panacea.

Relatedly, Bloomberg writer Leonid Bershidsky stirs the pot:

“End-to-end encryption” is a marketing device used by companies such as Facebook to lull consumers wary about cyber-surveillance into a false sense of security. Encryption is, of course, necessary, but it’s not a fail-safe way to secure communication.

Bershidsky’s piece generated its own controversy and I admit I hesitated before linking to it, granting it further clicks and page views. The provocative tone makes it hard to tell if it was written in good faith, and the original headline (“WhatsApp hack shows End-to-End-encryption is pointless”) did not do it many favors. Something about WhatsApp encryption does make people say dumb things, I think. *cough*Guardian*cough*.

To make the obvious painfully obvious, I do not think E2E is a marketing ploy, but rather a necessity at this point. Whether that necessity is driven by public demand for privacy (good!) or Zuckerberg et al wanting to defer any sort of responsibility for what happens on its platform (bad!) is a different discussion. 

However, the point Bershidsky tries to make, but which gets lost in his inflammatory rhetoric, is that if you are targeted by a state-level actor, you are probably done for. The Mariana Trench depths of the hardware and software stack ensure that someone will forget to plug a hole somewhere. And of course, the many, many points of leverage a government has over the people around you practically ensure that only the most dedicated can evade Big Brother’s watchful eyes for life. If all else fails, there’s always a wrench somewhere.

Then, a more interesting thing to ponder is whether you would want truly unbreakable E2E communications widely available to everyone at all times. My knee-jerk reaction to this is “Yes” but at the same time, “But how?”. Think hard enough, and you might even end up at “Maybe not?”.

We’ve seen that as long as there are E2E communications, there’ll be ways to work around them. It is painfully naive to think we’ll hit on a technology that fixes all those holes before the technology to break it all develops. I am not a quantum technology expert, but some people are worried.

And there’s the human side. Be it Signal, Facebook’s WhatsApp, Wire, Telegram, Apple’s iMessage, or Wickr, we are at the mercy of a few people to get a ton of software and hardware right, and to do the right thing all the time. We practically ran the internet on a buggy cryptography library for more than two years before anyone noticed, and that was open-source software.

I admit I don’t have a good answer here.

On one side, I do not want people over at Menlo Park to peer through my chats on Facebook’s WhatsApp, nor do I want people in Switzerland to go through my ProtonMail email. I am not sure whether they can right now, but I know that without E2E, they could. I’ll take that side of the deal, and you should too. Similarly, basic encryption protects you from a customs officer at the border having a bad day, or an ex-boyfriend who just wants some dirt. The same argument goes for mitigating dragnet surveillance. Not everyone, yet, can afford NSO Group’s software.

Moreover, E2E makes data stored in the cloud much, much less valuable. I believe there are unaccounted liabilities in data, one of which is that a vast quantity of it presents a nice fat prize to focus all hacking efforts on. Proper encryption turns the data into an amorphous blob that is of no use to anyone.
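To make the amorphous-blob point concrete, here is a toy sketch. This is emphatically not real cryptography (real systems use vetted ciphers such as AES-GCM); it just illustrates that without the key, the encrypted bytes carry no usable information:

```python
import secrets

message = b"phone numbers, photos, chats"
key = secrets.token_bytes(len(message))             # random one-time pad
blob = bytes(m ^ k for m, k in zip(message, key))   # "encrypt"
restored = bytes(b ^ k for b, k in zip(blob, key))  # decrypt with the key

print(restored == message)  # True: with the key, the data comes back
print(blob == message)      # False: without it, an amorphous blob
```

The cloud provider holding only `blob` (and not `key`) is, in effect, the dumb hard drive I mention below.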

Yet, how do you explain to people in India or Myanmar that you simply cannot control users’ behavior, when you are the one who mostly benefits from the encryption? Apple put on a brave face when it resisted the FBI’s attempts, but would it be able to do the same if there were a bigger threat to national security? Would Microsoft? Would we even know that these companies cooperated with the government? If Google dropped a keylogger on your phone tomorrow, I am not sure anyone would be the wiser.

This stuff is not going to be fixed by us being miserable about it, but rather having a real debate between technologists and other stakeholders. This will mean working with governments, but also investing in new technologies. The other options are not workable.

Going back to the company I mentioned in the beginning: I am not sure how much the shame-board helped, but we eventually finished implementing what we called The Vault at the time: a folder that you could optionally put your data in. It’d be slightly slower, and some features like search and thumbnail generation wouldn’t work on all devices, but it “worked”. Yet, it turns out, turning yourself into a dumb hard drive in the cloud is not much of a business model. So that idea got scrapped.

There’s a lot more to say about that, but hey, I am not going to put it in writing or even tell you online. I’ll tell you in person. Between us 😉

What I’m Reading

Why Books Don’t Work: Andy Matuschak, a well-known software engineer, talks about why books (or lectures, for that matter) aren’t great mediums for people to actually learn from and integrate, and presents his own (experimental) solution. Andy is always at the forefront of the learning sciences, and I’m looking forward to seeing where he goes with this:

Instead, I propose: we don’t necessarily have to make books work. We can make new forms instead. This doesn’t have to mean abandoning narrative prose; it doesn’t even necessarily mean abandoning paper—rather, we can free our thinking by abandoning our preconceptions of what a book is. Maybe once we’ve done all this, we’ll have arrived at something which does indeed look much like a book. We’ll have found a gentle path around the back of that intimidating slope. Or maybe we’ll end up in different terrain altogether.

The Night The Lights Went Out: This is part harrowing, part hilarious. Writer Drew Magary describes in gory detail how he woke up from a chemically induced coma after a traumatic brain injury. I don’t want to spoil anything, but you owe it to yourself to read this:

[…] But I do know that I’m different. Still me, but not quite. All the pieces of me aren’t all lined up exactly as they were, and I haven’t fully accepted this yet. I liked who I was before all this. I’m not sure about this new fella.

No surge pricing for $UBER


Taking Stock of Stocks

What determines the price of a given stock? If you want to be academic about it, you’d expect it to be the net present value of all future expected cash flows to the stockholders. In reality, though, it’s set by supply and demand; a stock price goes up when other people want to buy it. Of course, the stockholders do expect some benefits, so those two theories do say the same thing. This is all Financial Markets 101, and you don’t even need an MBA to know this stuff, as my co-host kindly pointed out.
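For the academically inclined, the textbook model fits in a few lines. All the numbers below (the per-share cash flows, the 8% discount rate) are made up purely for illustration:

```python
def npv(cash_flows, rate):
    """Discount each year's expected cash flow back to today and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical expected per-share cash flows for the next five years:
print(round(npv([5.0, 5.5, 6.0, 6.5, 7.0], rate=0.08), 2))  # → 23.65
```

Supply and demand then decides how far the market price drifts from that theoretical anchor.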

Anyway, speaking of stock prices, the ride-share behemoth Uber (Disclaimer: my former employer; I have some stock) went public last week.

It hasn’t been going particularly well. 

DealBook from The New York Times:

Uber suffered the worst first-day dollar loss of any U.S. I.P.O. ever. That’s a terrible start for the biggest market debut in years. What was supposed to be a celebration turned into an exercise in expectations management: “I think we came public on a tough day, and a tough week,” Dara Khosrowshahi, the company’s C.E.O., told Mike Isaac of the NYT.

Let’s be real: it’s a fool’s errand to do any sort of deep dive on a stock that’s a couple of days old. And it’s easy to go full Pessimists Archive and sneer at news outlets that called doom and gloom on other tech stocks that didn’t do well on their IPO days, only for those stocks to rise to never-before-seen heights. Amazon this, Facebook that. Again, Intro Corp Finance stuff.

On the other hand, there are certain expectations of a company on its IPO day, and one of them is that its stock goes up a bit. Not too much, since the spread between the opening price and the eventual price ends up in the underwriters’ pockets instead of the company’s, but a few points up is good for the soul.

Of course, it’s pointless to judge a company by its IPO. But that doesn’t mean that the stock price is entirely meaningless. This stuff matters to some people! If you were, say, an Uber employee with stock options (or Restricted Stock Units), you’d rather have the stock go up. Maybe your employer doesn’t get much more on the listing day, but you do, or at least feel that way, since you’d be locked up for 6 months. I don’t think there are many people whose options are underwater (and Uber switched to RSUs in late 2014), but either way, a higher stock price is good for most people.

Stocks Rule Everything Around Me

I’ve talked about stock options here before, since they’re a major part of compensation packages in the tech industry. Prospective, current, and former employees all follow the stocks of their favorite companies closely. If the stock goes down enough, you can see the recruitment funnel tighten and talent attrition go up.

A fair question here is why tech companies favor such equity heavy compensation packages. A satisfyingly folksy answer is that early stage companies with not much revenue but lots of growth potential don’t have much money, so equity is all they have. And, sure, it has the nice side-effect of aligning the interests of The Company with its employees, which should ideally make you work…better? There’s a hint of socialism at play in this arrangement too, if you squint a bit.

Again, I’ve gone on record saying that if you are joining an early-stage firm, stock is where you want to be, since in our system the profits flow to capital as opposed to labor. It’s just the smart thing to do. But the origin story of equity-heavy stock packages does sound a bit more financial-engineering-y than a rosy, meritocratic system.

Take it from Aswath Damodaran, the towering figure of valuation at NYU Stern: (Emphasis mine)

In particular, accounting rules allowed companies to grant options to employees and show no cost, at the time of the grant, if the options were at the money. Not surprisingly, companies treated options as free currency and gave away large slices of equity in themselves to employees (and, in particular, to the very top employees), while claiming to be spending no money. If and when the options were exercised later, companies would report a large expense (reflecting the difference between the stock price at the time of the exercise and the exercise price) and show that expense either as an extraordinary expense in the income statement or adjust the book value of equity for it.

After a decade of fighting to preserve this illogical status quo, the accounting rule makers finally came to their senses in 2006 and changed the rules on accounting for option grants. Companies were required to value options, as options, at the time of the grant and expense them at the time (with the standard accounting practice of amortizing or smoothing out softening the blow). This is the law that is triggering the large stock-based employee option expenses at Twitter and other companies like it, that continue to compensate employees with equity. It is worth noting that the change in the accounting law has also resulted in many companies moving away from options to restricted stock (with restrictions on trading for a few years after the grant), since there is no earnings benefit associated with the use of options any more.
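Valuing an option “as an option” at the time of the grant typically means a model like Black-Scholes. Here is a minimal sketch; the inputs (an at-the-money grant on a $10 stock, four years to expiry, a 2% risk-free rate, 50% volatility) are made-up numbers for illustration:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def black_scholes_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(s / k) + (r + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# An at-the-money grant (strike == current price): "no cost" under
# the old accounting rules, yet clearly worth something.
print(round(black_scholes_call(s=10, k=10, t=4, r=0.02, sigma=0.5), 2))
```

The point of the exercise: even with zero intrinsic value at grant, the time value of the option is real money, which is exactly what the 2006 rule change forced companies to expense.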

Valuation is hard, and even seasoned professionals make mistakes all the time. And while the financial facade of numbers and jargon lends the industry an aura of objectivity, the reality is quite different. There are issues around integrity (people lie), motives (some people want high prices, some want low), and then competence (well, people suck).

Let’s say you magically were able to account for all that. It still doesn’t help. Many highly educated people who have studied at a small number of schools (which is itself a problem), and learned the material from an even smaller number of canonical sources, differ in their analyses.

And then there is the issue of comparison. Different companies describe similar businesses in different ways, which makes comparisons extremely hard. This gets exponentially harder when not just the companies but also their industries are new. As a fresh-faced almost-MBA grad, I read the Uber and Lyft S-1 documents a couple of times over, and my head was spinning.

Turns out I wasn’t alone; even people whose job is reporting on this stuff are confused. Shira Ovide tweeted: “I’m not kidding when I say I have read this Uber S-1 glossary section every day for a month. And I still have to check the definitions of all its customized financial metrics.”

A knee-jerk reaction to such dizzying complexity is that these companies are hiding behind it, but I am not convinced. This ride-hailing stuff is quite new as a business, and there are no real precedents for some of the key metrics. We went through such adjustment periods when social media companies were growing up too: eyeballs gave way to Daily and Monthly Actives, and vanity figures like cumulative user numbers to more business-relevant ones such as Average Revenue Per User. As Uber and Lyft mature, they will get better at telling their stories. Markets, in their infinite wisdom (one hopes?), will figure out what metrics really matter.
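For the record, here is what one of those business-relevant metrics looks like; the revenue and user figures below are entirely invented:

```python
# Average Revenue Per User (ARPU): revenue over a period divided
# by the active users in that period. All figures are made up.
quarterly_revenue = 15_000_000_000    # dollars
monthly_active_users = 2_400_000_000
arpu = quarterly_revenue / monthly_active_users
print(round(arpu, 2))  # → 6.25 dollars per user for the quarter
```

Unlike cumulative sign-ups, a number like this actually ties user counts to the business.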

But, the key question remains: When there are tons of people who constantly get it wrong, what are you supposed to do as an individual tech employee to value your stock?

Show Me the Money

A good way to think here is about how your compensation package is set. Similar to the stock price discussion above, one way is to anchor it to how much you make for the firm. The firm can’t pay you the exact amount of value you add, because then it would make no money. It also clearly can’t pay you more, since then why would the firm hire you? So, you end up making just a bit under what you make for the firm.

But of course, in reality, in tech and other relatively liquid labor markets, companies end up paying most people just enough to keep them employed here rather than there. If you are an efficient-markets person, like I am, the ultimate way to price those options would be to get as many offers as possible, and see the point they converge to for your private stock options.

This isn’t really ideal, since different companies will judge you differently (a self-driving expert is worth more to Google than to Netflix, but a UI engineer could make more at Facebook than at either), but it’s one way. If you are particularly enterprising, you can peruse the H-1B salary filings or find someone with access to Option Impact or one of those storied salary databases. Or, of course, you could just move to Norway or Sweden, where such data is more publicly available. That does sound like cheating, though.

Stock-based compensation is here to stay, whether anyone likes it or not. And this stuff is not always pleasant: watching your net worth tumble as Jim Cramer screams on CNBC. Just ask LinkedIn employees how they felt before the Microsoft acquisition closed.

They didn’t feel good:

The rapid devaluation has posed more than just a problem for investors. LinkedIn’s employees are paid largely in stock, and therein lies the rub: Around the company’s new 26-story skyscraper that opened in downtown San Francisco in March, as well as the corporate headquarters in Mountain View, Calif., there have been persistent whispers about whether LinkedIn could retain its top talent as the marketplace clobbered their incomes.

Yet, Yet

I’ve argued before that the situation is not ideal, and that the industry should change its terms to give early employees a more realistic chance at building wealth. Before that happens, though, employees should do their best to evaluate their portfolios for the long horizon, avoid rash short-term decisions, and most importantly, diversify their holdings. Seriously, this stuff is so easy you could fit it on an index card.

There are established financial dynamics to IPOs in general, but what captures the attention is the human aspect. Every big IPO is fodder for some drama, and this being the Uber IPO, something had to be out of the ordinary, unexpected, and utterly polarizing. The plummeting stock price is what stole the show this time.

Now, ask yourself: would the same people who are claiming that such a dramatic drop is actually good be saying the opposite if the price went up? 

I have my guesses. Now, if you’ll allow me, I’m going to look at some stock tickers…

What I’m Reading

The dangerous world of being paid in shares: How tech firms’ massive rewards are coming back to bite them. Well, this is fitting. Alex Stamos, the former Facebook Chief Security Officer, and others argue that tech stocks cause employees to outsource their morality to Wall Street, which I guess is not as good as Silicon Valley. The piece is behind a paywall, but you can log in to read it:

[Alex Stamos:]”Markets have demonstrated that they don’t care about social responsibility – they only care about what the quarterly numbers look like and what guidance they are given on future revenues… it’s incongruous with our beliefs about changing the world in a positive way that we’re inheriting the lower Manhattan school, or the City of London school, of what makes a responsible company. There’s more to responsibility than returning value to shareholders.”

Chris Eberle, a former director at Facebook who gave out and received many “secret taps on the shoulder”, agrees. “When you’re incentivised through stock grants, everything becomes about what’s important to Wall Street,” he says. At Facebook, that led employees to “not look too closely” at anything that might diminish Facebook’s most important numbers, such as user growth and engagement.

When Bitcoin Grows Up: Seems like a million years ago now, from the madness of 2017. But Bitcoin is up again, for better or worse. A good time to re-read this piece by John Lanchester in the London Review of Books. The story of how the founder of Silk Road got caught is alone worth reading the entire thing for:

On 1 October 2013 Ulbricht was sitting in a public library in San Francisco, logged into Silk Road via the library’s wifi. He was in an online chat with an FBI agent whose job was to make sure Ulbricht was still online when his colleagues swooped. Ulbricht was at a desk across from a slight young Asian woman when a couple of typical San Francisco street people began arguing loudly just behind him. He turned to look, and the young woman grabbed his laptop: she was an FBI agent. So were the street people. Nice one, the Feds. Ulbricht was logged into Silk Road under the account ‘/Mastermind’. Game over for Dread Pirate Roberts. Ulbricht went on trial in 2015, was convicted, and is serving two life sentences without the possibility of parole.

Who Controls the Internet?


If you asked a ton of people “Which country controls the internet?”, what would the answer be? Most people, I am guessing, would first balk at the question but then probably say United States. 🇺🇸!🇺🇸!🇺🇸!

There’s a bunch of reasons to think that way. On the surface, most companies that people associate with the “Internet” are concentrated in a tiny, earthquake-prone region of the US. It’s not that Tim Cook is dying to do Trump’s bidding, but there’s some truth to the idea that if Uncle Sam really flexed his muscles, say by sending some people with guns over to Silicon Valley, he could get all those folks to cooperate. My co-host Ranjan thinks this is a bit extreme, but then, I am Turkish and he’s not.

ICANN Headquarters, where TLDs are Born

But there are some technical realities too. For example, ICANN, the non-profit that controls the DNS scheme, is based in California. To gloss over a ton of technical details, that gives ICANN the ability to own the relationship between the human-readable addresses you type into your browser and the IP addresses that refer to the servers. Now, ICANN has a tumultuous relationship, to say the least, with the US government, and every few years there are calls to move ICANN’s authority to an international body. To this day, though, the organization remains in sunny Southern California, only occasionally being thrust into the headlines when it tries to raise some revenue by introducing questionable Top-Level Domains, like .amazon.

“I come from Cyberspace”

Yet, there’s also the globally shared sensation that the internet is somehow above the regular, day-to-day international drama. It’s all digital, global, connected, and, you know, good. It was designed to be supranational, in some sense, rather than international. It rises above those pesky, arbitrary notions of land borders, regional disputes, and sectarian differences. The internet is just there, encompassing us all, like the air we breathe.

Not my words! Take it from John Perry Barlow. The iconic figure once penned a fiery manifesto at a World Economic Forum, after being struck by world leaders’ arrogance toward, and dismissal of, the incoming cyber revolution. He even called it, provocatively, “A Declaration of the Independence of Cyberspace”, and boy, did he not mince his words:

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.

Barlow later walked those claims back a bit, but the libertarian thrust of his manifesto never really died down. We are in this beautiful mess, with Facebook accidentally kindling genocides and YouTube promoting anti-vaccination content and god-knows-what-else to kids, partly due to this line of thinking.

Enter Russia

Yet, the borders seem to persist. The internet was recently abuzz with the news that Russia is now setting up a new perimeter for its own internets.

Here’s Financial Times:

The Balkanisation of the internet has entered another phase, with Russian president Vladimir Putin signing a law to give the country a “sovereign internet” that the Kremlin will be able to disconnect from the global web.

The move was expected and follows other attempts to cut off users from the world wide web. There’s been the Great Firewall of China, an Iranian move to isolate itself, and the recent temporary blocking of Facebook and other social networks by the Sri Lankan government after the Easter Sunday bombings.

Balkanization is not a technical term, but it largely refers to dividing the “global internet” into more local internets (or intranets; the distinction is kind of arbitrary here) that are controlled by individual countries. Of course, the fact that Russia is the one doing it makes it extra uneasy, given the country is hardly a bastion of free expression. This feels bad, as in not done in good faith, at least to my Western ears. You don’t have to be a technolibertarian to think Balkanization is not ideal.

But, here’s an idea: ask the question that I posed in the beginning “Which country controls the internet?” to someone in China. I don’t know the answer, but the Chinese friends I’ve asked said “Well, people don’t really think of US internet companies”, so that’s sort of an answer. 

Is Chinese internet the same as our (?) Western internet? If not, what’s that relationship? Maybe it’s a subset, or maybe a federated one that occasionally talks to ours, on Chinese government’s terms? We really do not have good models to fully understand them yet.

Even if they did know of the American internet (bear with me here), it’s not clear they would even care at this point:

Two economists from Peking University and Stanford University concluded this year, after an 18-month survey, that Chinese college students were indifferent about having access to uncensored, politically sensitive information. They had given nearly 1,000 students at two Beijing universities free tools to bypass censorship, but found that nearly half the students did not use them. Among those who did, almost none spent time browsing foreign news websites that were blocked.

As much as we’d like to believe that Internet (internet?) is not just a set of technologies, but in fact a manifestation of the notion that “information wants to be free”, a force of nature that just cannot be held back due to its sheer size and complexity, China seems to be doing fine with their firewall.

In fact, not just fine: China’s internet protectionism has not only kept Chinese dissidents at bay, but also allowed the country to nurture and develop its own technology giants such as Tencent, Baidu, and more recently (more on this soon!) Bytedance. It’s hard to argue, if you are a Chinese investor, that the Great Firewall has not been a good thing.

China decided to carve out its own internet from the greater network, yet it’s still the same internet, running on the same technologies. But that’s not the only way you could have your own internet. If you are especially enterprising, and have a tendency to generally do things your own way, you could also just build an entire internet, or something that resembles it, by inventing a whole new set of technologies.

Comme ci comme ça

Take a look at France, where I temporarily live. Unbeknownst to many in the United States, this beautiful land of wine and cheese had its own “internet”, way before Al Gore invented it across the Atlantic. Allow me to introduce you to Minitel.

Essentially an end-to-end system with its own terminals, Minitel allowed people all over France to communicate, do commerce, and generally have a good time. You could set up a “website”, browse other sites, chat with people, and of course, get your rocks off. The closest analogue I can think of in the US would be the Bloomberg terminal, which, like Minitel, runs on its own “parallel” internet, with its own protocols and its own terminals.

Minitel enjoyed some limited success, but in the end it was shut down in 2012, and it remains one of those ahead-of-its-time technologies that historians fawn over, and more fodder for my French friends to assert their arrogance. But it’s also an interesting experiment in a country developing its own set of technologies from the ground up, and building a national network that works well.

And some of those tendencies stick around. Just a few weeks ago, the French government announced they would be switching to Tchat, an internally developed instant-messaging system based on the Matrix protocol. The switch did not go swimmingly (French), with embarrassing security mishaps allowing strangers to enter government chat rooms. Yet, you can imagine French intelligence not being too psyched about Macron using Telegram (which I bet he still does). And there’s also Qwant, a European search engine that parts of the French administration are encouraged to use.

It’s a time-tested tradition to make fun of French eccentricities. Yet, in the United States, you still can’t read a single newspaper without hearing about Huawei and its ascent to being the 5G backbone provider of choice around the world. Can you say in the same breath that the internet is truly global, and then argue that the nationality of the technology provider is a deal-breaker?

Mind in Cyberspace

Maybe the answer is “yes”. The same United States recently forced Grindr, a dating app popular with the gay community, to divest its Chinese ownership, over fears that the sensitive data it holds on American citizens could become a liability. I talk often here about data as a liability, but the issue here is larger than that.

Whether we like it or not, some notion of borders, along with national sovereignty and protections seem to be slowly making their way to the digital space.

Some companies will surely be better equipped to handle these new challenges than others. For example, you can view Facebook’s new push towards end-to-end encryption in this light. While E2E is most likely a hedge against antitrust regulation and a defense tool against surveillance, it also has the nice feature of turning data into amorphous blobs that you can’t really meaningfully “manage”. In other words, you either allow Facebook entirely in your country, or not at all.

Remember DVD regions?

Some previous attempts at borders in cyberspace, such as region locks on DVDs, have fallen flat. The long-term effects of GDPR are yet to be seen, but it did have a slight Balkanizing effect, with some US firms like the LA Times and Instapaper simply stopping their operations in Europe. On the other hand, if California has its way with its GDPR-lite, and there is no federal equivalent, things could get even hairier in the US.

What’s certain now is that the old rules of the internet are being rewritten. And whether we like it or not, the borderless, stateless cyberspace is not going to happen anytime soon.

What I’m Reading

The 5 Years That Changed Dating: A wonderful piece about how Tinder both changed dating, and not, for the better, and for the worse. The many anecdotes about the compartmentalization of romance, and how apps like Tinder both foster and hamper that dynamic, are fascinating.

“People used to meet people at work, but my God, it doesn’t seem like the best idea to do that right now,” Finkel says. “For better or worse, people are setting up firmer boundaries between the personal and the professional. And we’re figuring all that stuff out, but it’s kind of a tumultuous time.”

A Conspiracy To Kill IE6: An early YouTube engineer talks about how a few renegade engineers started a skunkworks effort to wean people off of Internet Explorer 6, without any approval from the Google corporate machine. A fascinating play-by-play, but it also goes to show how much power a few engineers can wield.

The code was designed to be as subtle as possible so that it would not catch the attention of anyone monitoring our checkins. Nobody except the web development team used IE6 with any real regularity, so we knew it was unlikely anyone would notice our banner appear in the staging environment. We even delayed having the text translated for international users so that a translator asking for additional context could not inadvertently surface what we were doing. 

What’s in a Username?


Two weeks ago my co-host wrote about the digital exhaust, and mentioned how a surreptitious Nest thermostat can keep tabs on the new owners of a house. I’ve experienced something similar myself. My previous partner had a Google Home smart speaker in our living room. After I moved out, it took me a few days to realize that I was still logged in to the Google Assistant on my phone and could literally see what she was saying into the speaker. There wasn’t anything particularly scandalous, yet idly observing the activities of your former partner from your phone, albeit in extremely low fidelity, has a tinge of voyeurism to it.

Ranjan’s post was more about the “exhaust”, the data that gets inadvertently generated and forgotten. Yet, there’s an even more fundamental issue that I think deserves attention here: identity management.

It’s hard to pinpoint a number, but most people seem to have around 100 or so accounts online. My own highly biased Twitter survey of people who use password managers puts that average at over a few hundred. That is an obscene number of identities for a single person to handle.

I shouldn’t have to spill more ink on why you should use a password manager, and how the initial minor pain of setting all that stuff up on your devices pays off hugely later. But this is my soapbox for now: You should use a password manager. I use 1Password on all my devices, and enable two-factor authentication where possible. I have only 3 passwords in my memory, and they are actually all passphrases.
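A passphrase, as opposed to a password, is just a handful of words picked uniformly at random from a wordlist, which makes it far easier to memorize without sacrificing strength. A minimal Python sketch of the idea (the tiny wordlist and function name are my own illustration; a real diceware list has 7,776 words):

```python
import secrets

# Toy wordlist for illustration only; a real diceware list has 7,776 words,
# giving roughly 12.9 bits of entropy per word.
WORDLIST = [
    "correct", "horse", "battery", "staple", "orbit", "velvet",
    "cactus", "lantern", "mosaic", "ripple", "thunder", "ember",
]

def make_passphrase(n_words=4, sep="-"):
    """Join n_words picked with a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDLIST) for _ in range(n_words))
```

The key detail is `secrets.choice` rather than `random.choice`: the former is backed by the operating system's CSPRNG, which is what you want for anything security-sensitive.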

There’s a part of me that enjoys watching this rather complicated (if not convoluted) setup work like butter with FaceID and TouchID and all the rest of Apple’s biometric wizardry. As much as it creeps me out that my phone is taking a biometric photo of me every time I open up WhatsApp, I enjoy being able to pay for the Tube in London with a combination of mathematical models of my face and some radio waves. I paid for this iPad I am writing on by buying it on Apple’s website, which used TouchID on my laptop. The entire flow feels both cool and secure.

But there’s also another part of me that finds this setup insanely complicated and brittle. For every website that 1Password’s browser extensions work with, there are a few more where I have to copy a password from one app and paste it into another. The digital UI trickery involved in generating correct identities in 1Password, with the “Website” field properly set, is barely within my reach, and I’ve built such UIs myself for years. The way the 1Password app matches the passwords it has on file to the accounts I have on different services is smart, but it does require you to understand it fully (so maybe not so smart).

Moreover, I live in constant fear of my database of passwords somehow getting out of sync across my devices, or of losing all my devices at the same time. Every time I enter a new password on one device, I make a mental note to open up 1Password on the other devices to make sure it gets picked up.

This stuff is just bonkers.

And this is just the tip of the iceberg, the part that I have a modicum of control over, and a tiny bit of visibility into. Behind each of those accounts lie separate databases, which are connected to other databases, that hold dossiers of information on me. Some of that data is stale the minute it is entered, some of it is utterly incorrect. Yet it lies there dormant, until someone does something (maybe good, maybe bad) with it. These databases, as I’ve mentioned before, tend to make their way into the public sphere often, exposing their inaccuracies for the whole world to exploit. Let’s not even get into what happens when the companies that own these databases change owners, and the new management has different ideas on what to do with the data.

This is admittedly a pessimistic view of the world. For most people, the small amount of data they enter into an app is quite irrelevant, and the damages are quite minuscule even in the worst of all outcomes. Modern economies have ways to hedge these possible downsides, like insurance. We are probably not pricing the risks correctly yet, but it’s definitely possible. Nevertheless, you simply can’t deny things are slowly getting out of hand, with more and more of our lives taking place in the territory of bits, instead of atoms.

I’ve written before that another way to minimize these types of risks is to move to a more ephemeral model of data storage. The point I’ve made before wasn’t that we should never hold on to any data, but that we should be thinking of the entire lifecycle of data, including its disposal:

If every product manager in Silicon Valley thought about how their teams would eventually have to delete the data, we wouldn’t be in this mess in the first place. If right to erasure was part of the technical calculus, alongside maintenance and performance requirements done by tech leads, deletion would also work. If every engineer thought about the data she’s sending over the wire when they log an error message or send it through a PubSub system, she would be writing better code in the first place. The data wouldn’t seep into the machinery, like a viral infection that you can’t even diagnose, incubating for years and years, only to have an outbreak that almost destroys Western democracy.

Writing pieces extolling the long-term benefits of such a vision is fun, but I also try to practice what I preach. I, somewhat performatively, frequently delete all my tweets, in order to keep a more fleeting presence on the platform.

It’s not a particularly novel idea, but it’s one becoming more common and even attracting investment capital. Just recently, the makers of the famed Sunrise calendar app started a new company called Jumbo. Their app is essentially a productized version of what I do with a mish-mash of Ruby scripts to delete my tweets and likes.
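The core of such a script is nothing more than a loop over your own timeline that destroys anything older than a cutoff. Here is a minimal Python sketch of the idea (mine are Ruby; the 30-day cutoff, function names, and use of the third-party tweepy library against Twitter's v1.1 API are my own illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE_DAYS = 30  # my own retention window; pick whatever suits you

def is_stale(created_at, now, max_age_days=MAX_AGE_DAYS):
    """True once a tweet has outlived the retention window."""
    return now - created_at > timedelta(days=max_age_days)

def delete_old_tweets(api):
    """Destroy every stale tweet reachable through an authenticated
    tweepy.API handle. The v1.1 user_timeline endpoint only reaches
    back about 3,200 tweets, hence the platform limitation."""
    import tweepy  # third-party; deferred so the filter above stays stdlib-only

    now = datetime.now(timezone.utc)
    deleted = 0
    for tweet in tweepy.Cursor(api.user_timeline, count=200).items():
        if is_stale(tweet.created_at, now):
            api.destroy_status(tweet.id)
            deleted += 1
    return deleted
```

The deletion itself hides behind an authenticated API handle; the only pure logic is the staleness filter, which is deliberately kept as a separate function.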

Platforms such as Facebook and Twitter provide such tools on paper, but in reality they are barely usable. Zuckerberg’s promised “Clear History” functionality is still nowhere to be seen. Twitter only allows deleting your last 3,200 tweets programmatically. The aforementioned deletion wizard Jumbo seems to rely on a liberal reading of the platforms’ Terms of Service agreements, and brittle hacks to impersonate user behavior.

The larger insight behind apps like Jumbo is that users own their data only to the extent that they can manipulate it as they wish, including deleting it altogether. This notion of ownership predicated on operability is much more comprehensive, and more reflective of how people think of owning a good, than the narrow legal sense tech companies espouse.

This is where identity management and data ownership tie back together. One way to think of your identity online is as the combination of all the data that’s spread around behind hundreds of different accounts. Ephemeral data makes each of those individual accounts both less risky and more reflective of how things work in the real world, with timeliness as a natural part. This is the part Jumbo attacks.

And identity management approaches the other variable, all the different logins and accounts on all the services. This is where companies like 1Password and LastPass operate.

I see these two approaches as attacking the problem from two different angles. The enterprise side of identity and access management has already made huge strides. Until very recently, demand on the consumer side hasn’t been high, but clearly things are different now.

It remains to be seen how these trends, along with aggressive regulatory moves like Europe’s GDPR or California’s best imitation of it, will change the landscape. However, to me, it feels like we are on the cusp of big changes in how we think of identities online, and in how technology will let us manage our presence online better.

What I’m Reading

Definite Optimism as Human Capital: Dan’s blog is one of my new favorites, and he just became a Bloomberg writer too. This piece about how optimism is a hard-to-renew resource made me question my own skepticism and cynicism, and is one that I keep going back to often.

It’s straightforward to measure a recession’s effects on employment and output. But what if the psychological impact of a recession is much more severe than we thought, to the extent that it could make a dent in long-term productivity growth? If we accept the idea that recessions linger in the form of psychological scars, lower expectations, and greater risk aversion, then it makes more sense to do a lot to avoid them. And it weakens the Austrian case for recessions as healthy corrections that improve capital allocation, because they cause a great deal of unseen harm as well. If we treated definite optimism as a function in human capital and productivity growth, then we could be slightly more rigorous in considering the broader effects of recessions.

How Game Theory Helped Improve New York’s High School Application Process: I have an odd fascination with the admission-industrial complex, especially in highly selective sectors. This 2014 piece is more about the former, with a mathematical tinge, and is fascinating.

Before the redesign, the application process was a mess. Or, as an economist might say, it was an example of a congested market. Each student submitted a wish list of five schools. Some of them would be matched with one of their choices, and thousands — usually the higher-performing ones — would be matched with more than one school, giving them the luxury of choosing. Nearly half of the city’s eighth graders — many of them lower-performing students from poor families — got no match at all. That some received surplus offers while others got none illustrated the market’s fundamental inefficiency.