Every company is a tech company, and everyone is a techie.

I work in tech, or used to, like most of my circle in San Francisco. But it was never clear to me what I really did. I changed the world, of course, but what did I really do? My father ran his own gas station business, and also sold cars. My lawyer friends wrote up legal documents and endlessly argued about stuff, and doctors did what doctors did. Teachers taught kids, professors taught slightly older kids, writers wrote, and I worked in tech. I worked at capital-T Tech companies, and at tech companies that were more or less a custom CMS. The term lost all its meaning, we all kind of knew, but we all played along.

Google and Facebook, for example, by most people’s standards are tech companies. If you ask media companies, however, Facebook especially is also a media company. Facebook doesn’t like that comparison, mostly because of the scrutiny attached to being a media company. But it feels right, in that for more than 50 percent of people, it’s where they get their news.

Of course, some others disagree. The argument goes that companies that fall within the same category should be comparable, and Business Insider is nothing like Facebook. Facebook is a tech company that's in the media business by accident. That also feels right; Facebook shuttles engineers back and forth on the 101 by the hundreds, and Business Insider mostly has reporters. They are in the same business on the demand side, attention and page views, but how they go about generating and commanding that attention is so different that we shouldn't call them both media companies. That seems somewhat generous to Facebook, but still fair.

But what is a tech company then? It's easy to write off WeWork and Hampton Creek as being hippies who want to catch a whiff of the tech vibe. But where do you draw the line? Mayo is not tech, and self-driving cars are, but, say, is textual analysis of content? What if I analyze some news, and make financial decisions on it; does that make me a finance company, or a fintech one? If I decide to show that news to someone based on that analysis, am I a tech company or a media one? Am I a tech company if I help people route shipping containers across the world, using computers? Or what if it's not containers, but trash? Call yourself the "Uber of Trash" all you want, but you won't get this guy, who worked at Uber, to call you a tech company.

I worked at 5 different companies, all of which were tech companies. Three of them primarily sold ads against content and hired engineers to keep the blinkenlights on while salespeople brought in cash. One produced original content; two had the users do the content generation. This is the business Facebook and Google are in too, but they managed to delegate the content generation to users for free, and automate away the sales part, practically minting money out of thin air. Twitter arguably managed the former, but failed at the latter. It's not that this stuff is trivial; all three companies I worked at failed at one side of this or the other.

Then another company I worked at built a file system, and then built products on it, and sold those to people, which felt like something a tech company would do. We built something with code, and then charged people to use it.

Then came Uber, which built a platform that brought drivers and riders together. It felt like a tech company in that people used an app to get where they were going, but a lot of the work initially was really about keeping the lights on, while we either wired together off-the-shelf tech, or should have. It wasn't until the company started building its own maps, its own self-driving tech, and some nifty security stuff (which I worked on) that it felt like a tech company.

During all this time, a career spread across 5 companies in 3 cities, I was a techie who worked at a tech company. I wrote code, reviewed code, sat in meetings, interviewed candidates. There wasn't much that I did at one company, as a techie, that was different from what I would have done at any other place. Business folks, we thought, were replaceable, as they came and went, but we never realized we were as amorphous as they were.

One of my friends worked at a photo sharing app for a few years, only to switch to a self-driving car company. A few friends made the opposite transition, going from hardware companies to app companies. Another, who worked on software used by astronauts, now builds software sold to city administrators. If you ask any of them, they all work in tech too. We read the tech news, raise money from tech VCs, get harassed by those who hate tech. In the end, the entire discussion becomes so abstract that it becomes pointless. But you can take it even further.

Tesla, for example, is a car company that also sells batteries. But look deeper: they really want to be a transportation company, where you can use the Tesla network to get where you want in a Tesla you may not have bought. That's probably why the Model 3 has a driver-facing camera, and comes with no key. You use an app to get transported; your ownership of the car is incidental. It's almost like an ICO, where instead of buying tokens, you buy Teslas to fund the Tesla transportation network. So is Tesla a car company, a transportation company, or an energy company?

A friend once told me that datacenter colocation companies are mostly in the HVAC business. That seemed odd at the time, but I see her point now. The company bought electricity from the grid, turned it into cooled aisles, and leased space. There's a running meme in the popular business books you buy at airports that McDonald's is mostly a real estate company, but how different is keeping meat at a certain temperature from doing it for racks? Is anything not a tech company by this definition?

That's the crux of the issue. The term "tech company" means as much as calling your local bodega —not that Bodega— an electric company because it uses electricity to keep its fridges running. A few years ago, one of the content companies I worked at used to own and operate its own servers; today that seems crazy. Most, if not all, technology gets commoditized once it's put to use, as others figure out how it works and build cheaper versions of it. Tesla dazzles people with its self-driving tech, but ask Continental, and they will sell you the same tech used in most other cars.

You can think endlessly about what makes a tech company a tech company. Is it the fact that a company creates leverage using technology? The number of patents it has? Is it that it hires engineers, and mostly engineers? Maybe it's the DNA of the founders, because, as the adage goes, the only real product of a company is its culture. It's definitely good cannon fodder for blog posts and hot takes.

I think the discussion itself is not a useful one, which possibly makes this essay even less so. The term has lost its meaning, for the most part, and it's at best aspirational, at worst misleading. But I'll also chime in, not to make a distinction, not to help decide whether a company is a tech company or not, but to decide what a company does.

See who gives the company money, and who the company gives money to. Try to figure out who the masters are. It seems awfully reductionist, but so is the term “tech company”. And if this doesn’t make sense, maybe just retire the term altogether. There was a time it had a meaning, but not anymore.

Smoking as a parable to tech addiction

When I talked about how people's addiction to smartphones is akin to a public health crisis, I compared it to smoking. It's not a particularly insightful analogy, of course. Ian Bogost, for example, wrote about it as far back as 2012. He compared the fall of BlackBerry to the slow burn of Lucky Strike with this note:

But calling Blackberry a failure is like calling Lucky Strike a failure. Not just for its brand recognition and eponymy, but even more so, for the fact that its products set up a chain reaction that has changed social behavior in a way we still don’t fully understand–just as our parents and grandparents didn’t fully understand the cigarette in the 1960s.

One of Bogost’s points is that our relationship with smartphones is so unique and so personal, that we may not fully understand or even predict what our society will look like when it bubbles up to the population level. For smoking, it turns out, that effect was widespread cancer. Not great.

One of my points was that the addictive nature of smartphones, and technology overall, was always visible to those who build them. Here is Bill Gates in 2007:

“She could spend two or three hours a day on this Viva Pinata, because it’s kind of engaging and fun.”

Gates said he and his wife Melinda decided to set a limit of 45 minutes a day of total screen time for games and an hour a day on weekends, plus what time she needs for homework.

I argued that while Apple focuses on physical health, it casually ignores the mental health implications of the addictive nature of its products, even though its designers already know it's a problem. Here is Jony Ive, on stage at the New Yorker TechFest in 2017:

REMNICK: How can — how can they be misused? What’s a misuse of an iPhone?

IVE: I think perhaps constant use.

Another point I made in passing is that smoking had huge interest groups backing it, with lots of public relations presenting it as a beneficial, progressive, useful activity. The dangers of smoking were not well known, but they weren't exactly hidden either.

Here is a quote from a 2015 article by Richard Gunderman, a medical doctor. Gunderman talks about Edward Bernays, the father of modern public relations, and how he wanted people to smoke, but not his wife.

In the 1930s, he promoted cigarettes as both soothing to the throat and slimming to the waistline. But at home, Bernays was attempting to persuade his wife to kick the habit. When he would find a pack of her Parliaments in their home, he would snap every one of them in half and throw them in the toilet. While promoting cigarettes as soothing and slimming, Bernays, it seems, was aware of some of the early studies linking smoking to cancer.

Good times. The entire article is an excellent, if sobering, read. I also "Like"d the part where Bernays channels a certain tech executive prone to apologizing. They didn't have Medium back then, so he couldn't apologize there, but he quipped in his autobiography:

They were using my books as the basis for a destructive campaign against the Jews of Germany. This shocked me, but I knew any human activity can be used for social purposes or misused for antisocial ones.

I have mentioned that history repeats itself, and The Cyber is not an exception, but it's kind of unsettling how often Nazis make an appearance. I guess when you manage to manipulate millions at such scale to commit such atrocities, it scrambles all notions of rationality, ethics, morality, and technology.

And a passing point here. There's a general sense that filling up the ranks of tech companies with STEM majors may not be the best idea, when those kids with little knowledge of history end up shaping the new public spaces. I agree with the overall sentiment, but I have some reservations. The problem is less people's majors than those majors' general lack of appreciation for history. In other words, we will always need STEM majors, and probably more of them as time passes, so curbing that supply is not an option. But maybe we could educate (or build?) better-rounded ones.

I harp on America a lot on Twitter, as an expat-cum-immigrant. But one thing America has over Europe and/or Turkey is that almost no one smokes in the US. It is uncanny. But it wasn't always this way, and it took a lot of effort to get things to where they are. It's doable, though.

Apple created the attention sinkhole. Here are some ways to fix it.

Your attention span is the battleground, and the tech platforms have you bested. Social media platforms like Facebook, Twitter, and Instagram get the bulk of the blame for employing sketchy tactics to drive engagement. And they deserve most of the criticism; as Tristan Harris points out, users are at a serious disadvantage when competing against companies trying to lure them in with virtually endless resources.

However, one company that is responsible for this crisis goes relatively unscathed. Apple jumpstarted the smartphone revolution with the iPhone. Our phones are no longer an extension of our brains but, for many, a replacement. Somewhere along the way, things went south. Your phone is less a digital hub and more a sinkhole for your mind.

I believe that, having built a device that demands so much of our attention, Apple has left its users in the dark when it comes to using it for their own good. It has built a portal for companies to suck away as much of our time as they demand, without giving us the ability to protect ourselves. While there have been some attempts to solve the problem, with features like Do Not Disturb and Bedtime, most of them have been half-assed at best. The market has tried to fill the void, but the OS restrictions render most efforts futile.

Currently iOS, "the world's most advanced mobile operating system" as the company calls it, is built to serve apps and app developers. Apple should focus on its OS serving its users first, and the apps second.

1 · Attention

I have touched on this before, within the context of the Apple Watch, but I believe Apple has built a device that is so visually compelling, and so connected to apps that literally have PhDs working to get you addicted to your phone, that users are treated like lab mice pressing pedals to get the next hit. This is unsustainable, and also irresponsible.

I believe Apple should give users enough data, in both raw and visually appealing formats, to help them make informed choices. Moreover, the OS should allow people to limit their (or their kids') use of their phones. And lastly, Apple should use technology to help users, if only to offset the thousands of people trying to get them addicted.

1.1 · Allow Users to See Where Their Time Went

First of all, Apple needs to give users a way to see how much time they spend on their phones, per app. There are clumsy ways to get this data today. The popular app Moment does it by literally inspecting screenshots of the battery usage screen. The lengths developer Kevin Holesh went to make this app useful are remarkable, and the application itself is definitely worth it, but it shouldn't be this hard. And it is not enough.

A user should be able to go to a section either in the Settings app, or maybe the Health app, and see the number of hours —of course it is hours— they have spent on their phone, per day, per app. If this data contains average session time, as defined by either the app being in the foreground or, in the case of the iPhone X, being looked at, even better. The sophisticated face tracking on the new iPhone can already tell if you are paying attention to your phone; why not use that data for good?

FaceID Demonstration
Paying serious attention

In an ideal case, Apple would make this data available with a rich, queryable API. This is obviously tricky given the privacy implications; ironically, this kind of data would be a goldmine for anyone looking to optimize their engagement tactics. However, even a categorized dataset, with app names discarded, would be immensely useful. This way, users can see if they really should be spending hours a day in a social media app. At the very least, Apple could share this data, in aggregate, with public health and research institutions.
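No such API exists today, of course, but to make the idea concrete, here is a minimal sketch of the kind of per-app report that data could power. The session log, app names, and field layout are all hypothetical:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical foreground-session log; iOS exposes nothing like this today.
# Each entry: (app name, session start, session end).
sessions = [
    ("Instagram", "2018-01-15 08:00", "2018-01-15 08:25"),
    ("Instagram", "2018-01-15 12:10", "2018-01-15 12:40"),
    ("Mail",      "2018-01-15 09:00", "2018-01-15 09:05"),
]

def daily_report(sessions):
    """Total minutes and average session length per app."""
    fmt = "%Y-%m-%d %H:%M"
    totals, counts = defaultdict(float), defaultdict(int)
    for app, start, end in sessions:
        delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
        totals[app] += delta.seconds / 60
        counts[app] += 1
    return {app: (totals[app], totals[app] / counts[app]) for app in totals}

for app, (total, avg) in sorted(daily_report(sessions).items()):
    print(f"{app}: {total:.0f} min total, {avg:.0f} min/session")
```

Even a report this crude, surfaced in Settings or Health, would tell a user more than the battery screen does now.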

1.2 · Allow Time Based and Screen Time Limits for Apps

Second, Apple should allow users to limit time spent in an app, possibly as part of parental settings, or Restrictions, as Apple calls them. There is already precedent for this. Apple allows granular settings to disable everything from downloading apps altogether to changing privacy settings, allowing location access, and such.

Users should be able to set duration limits per app (e.g. 1hr/day, 10hrs/week), time limits (e.g. only between 5PM and 8PM), or both. Either of these would be socially accepted, if not welcomed. Bill Gates himself limits his kids' time with technology, and so did Steve Jobs, and Jony Ive. Such features should be built into the OS.
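The core of such a restriction could be very simple. As a sketch, with limit values and structure made up for illustration, not anything Apple ships:

```python
from datetime import datetime

# Hypothetical parental-control limits, the kind this section proposes:
# a duration cap per day and an allowed time-of-day window, per app.
LIMITS = {
    "Instagram": {"max_minutes_per_day": 60, "window": (17, 20)},  # 5PM-8PM
}

def may_launch(app, used_minutes_today, now):
    """Return True if the app is within both its duration and time limits."""
    limit = LIMITS.get(app)
    if limit is None:
        return True  # no restriction configured for this app
    if used_minutes_today >= limit["max_minutes_per_day"]:
        return False  # daily allowance used up
    start_hour, end_hour = limit["window"]
    return start_hour <= now.hour < end_hour

print(may_launch("Instagram", 30, datetime(2018, 1, 15, 18, 0)))  # True
print(may_launch("Instagram", 65, datetime(2018, 1, 15, 18, 0)))  # False
print(may_launch("Instagram", 30, datetime(2018, 1, 15, 9, 0)))   # False: outside window
```

The hard part is not the check itself but enforcing it at the OS level, which is exactly why only Apple can ship it.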

Steve Jobs and Bill Gates on stage
Low tech parents

As an aside, I think there are lots of visual ways to encourage proper app habits. Apps' icons could slowly darken, show a small progress indicator (like when they are being installed), or change in other ways. This way, someone could tell at a glance that they have Instagrammed enough for the day.

1.3 · Make Useful Recommendations

With the new Apple Watch and watchOS 4, Apple is working with Stanford to detect arrhythmia by comparing current heart rate data to that user's known baseline. Since its inception, the Watch has used rings to encourage people to "stand up" and move around. Even my Garmin watch keeps track of when I have been standing still for too long.

Apple can do this for maintaining attention too. Next time you find yourself stressed, notice how you switch between apps, over and over again. Look at how people sometimes close an app, swipe around, and come back to the same app just to send that one last text. These are observable patterns of stress.

Apple can, proactively and reactively, watch for these patterns and recommend that someone take a breather, maybe literally. With the Watch, Apple went out of its way to build a custom vibration to simulate stretching on your wrist for breathing exercises. The attention to detail, and the license to be playful, is there. Using just on-device learning, Apple can tell when you are stressed, nervous, or just swiping back and forth, and recommend a way to relax. Moreover, the OS could even see if a user's sessions between apps are too short, or too long, and make suggestions based on that kind of data.

Display on a Mercedes Car showing Attention Assist
Attention Assist, Indeed

As mentioned, there's a lot of precedent for determining mental state using technology, and making recommendations. Any recent Mercedes will determine your fatigue based on how you drive, and recommend you take a coffee break. Many of GM's new cars have driver-facing cameras that can tell whether your eyes are open and you are paying attention during self-driving mode. Using your phone is not as risky as driving a car, but for many, a phone is a much bigger part of their life.

2 · Notifications

Notifications on iOS are broken. With every iOS release, Apple tries to redo the notification settings, in a valiant effort to allow people to handle the deluge of pings. There are many notification settings hidden inside the Settings app, with cryptic names like banners, alerts, and many more.

Apple Notification Guidelines
If only

However, currently all notifications from all apps live on a single plane. An annoying campaign update from a fledgling app trying to re-engage you gets the same treatment as your mom trying to say hi. Moreover, apps abuse notification channels; the permissions are forever, but the users' interests are not. And of course, the data is sorely missing.

2.1 · Allow Users to See Data about Notifications and their Engagement

Again, this is a simple one. Apple should make both the raw data and easily digestible reports about notifications available to users. It is easy for this to get out of hand, but I think even a single listing where apps are ranked by notifications per week or day would be useful. Users should be able to tell that the shopping app they used once has been sending them notifications that they have been ignoring.

2.2 · Categorize and Group Notifications

Apple should allow smarter grouping of notifications, similar to email. Currently, as mentioned, notifications largely have a single channel. However, this doesn't scale. Tristan Harris and his group make a good suggestion: separate notifications by their origin. Anything that is directly caused by a user action should be separated from other notifications, to start with. This would mean that your friend sending you a message would be a different type of notification than Twitter telling you to nudge them.

I think there are even bigger opportunities here; without getting too much into it, Apple could help developers tie notifications to specific people, and start categorizing them by intent. Literally anything over what is currently available would be an improvement.

This idea would definitely receive a ton of pushback, especially from companies whose business relies on getting users addicted to their products. However, maintaining toxic business models shouldn't be a priority. If a user does not want to launch Facebook, then they shouldn't have to. If an app can drive engagement, or whatever one might call mindless scrolling, only with an annoying push notification, maybe it shouldn't be able to.

This is the kind of storm Apple can weather. While Apple cherishes its relationships with apps, it is primarily beholden to its users. And such a change would almost certainly be welcomed by them.

2.3 · Allow Short Term Permissions for Notifications

For many types of apps, notifications are only useful for a limited amount of time. When you call an Uber, or order food, you do want notifications, but other times, an email or a low-key notification would suffice. Users should be able to give apps a temporary permission to nudge them, and then the window should automatically close.

This is something some people are already familiar with. Many professionals, such as doctors, college professors, and lawyers have office hours when you can talk to them freely, but other times, you cannot.

2.4 · Make Useful Recommendations

Once again, Apple can even take a more proactive role and help users manage their notifications by making recommendations. For example, the OS can keep track of notifications one engages with meaningfully, or not. This way, the phone can ask the user if they would like to silence an app that they never use.

Apple already does this, to some degree, with app developers; if your app's notifications are too spammy, and users rarely engage, you'll get a call. However, the users should have a say. An app that is meaningful to one user might be spammy to another. The OS can make these decisions, or at least make smart recommendations. A feature like this already exists to help you save space on your phone; why not for your notifications too?
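A heuristic like this could be as simple as ranking apps by how often their notifications are actually opened. The per-app stats and the thresholds below are invented for illustration; a real system would track this on-device:

```python
# Hypothetical per-app notification stats the OS could keep:
# how many notifications were sent, and how many the user opened.
stats = {
    "Messages": {"sent": 120, "opened": 100},
    "ShopApp":  {"sent": 45,  "opened": 1},
    "News":     {"sent": 200, "opened": 8},
}

def suggest_silencing(stats, min_sent=20, max_open_rate=0.05):
    """Apps that are noisy enough, and ignored enough, to suggest muting."""
    return sorted(
        app for app, s in stats.items()
        if s["sent"] >= min_sent and s["opened"] / s["sent"] <= max_open_rate
    )

print(suggest_silencing(stats))  # ['News', 'ShopApp']
```

The phone could then ask, once, "You've ignored 44 of ShopApp's last 45 notifications. Silence it?" and be done.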

Ending Thoughts

I believe that an attention-based economy, where millions of people are in a constant state of distraction, with tiny short bursts of concentration, is dangerous to our mental health as individuals, and to society as a whole. Wasting hours switching between apps, not accomplishing anything, is one thing, but a constant need to be entertained, a lack of ability to be alone with one's thoughts, not being able to just be around people without pulling out a phone, are all going to cause wide social issues we'll grapple with for years. When the people who built these tools are scared, it's a good sign that we have lost control of our creations.

Surprisingly, iOS is lagging far behind Android in this respect. I have almost exclusively used an iPhone since its launch, and I wrote the bulk of this piece without doing much research. I was surprised, and somewhat embarrassed, to see that most of what I proposed in the Attention section, such as bedtimes and app limits, already exists in Android as part of Family Link. And of course, tools like RescueTime have existed for Mac and Windows to help people see where their time went, but their functionality is next to useless on iOS. As mentioned, even the Moment app can only do so much within the confines of Apple's ecosystem.

I wholeheartedly think that unless we approach this issue like we did smoking, and elevate the discussion to a public health issue, it won't get solved. However, there are ways to help curb the problem, and it is time Apple took the matter into its own hands.

Unlike most other tech companies, Apple makes most of its money by selling hardware to consumers. Every couple of years, you buy an iPhone, and maybe an app or two, and Apple gets a cool thousand bucks. Apple's incentives, although recently less so with the increasing services revenue, lie with those of its users, not the advertisers or the marketers. If Apple is serious about its health focus, now is the right time to act.

Fighting Spam at Facebook

A couple of days ago, I wrote about how "Fake News" on Facebook is a spam problem caused, or at least exacerbated, by the economics of attention. Since there's a limited amount of attention people can give in a day, and Facebook controls so much of it, if you can reverse engineer the mechanics of the News Feed, you can fan out your message, or "boost" it in Facebook parlance, to millions of people at a minuscule cost.

On this blog, I use a combination of my experience as a software engineer, what is reported in the press, and some light rumor treading to explore ideas. But it is hard not to come off as navel-gazing. No one writes about spam at Facebook when it's not a problem. And these systems are complex, involving hundreds of people working on them over many years. They have their own compromises. The inner workings aren't always hidden (though they are, more than they should be), but they aren't always easily accessible to an outsider.

Luckily, I had some help. Melanie Ensign is a former co-worker of mine at Uber. And more importantly, she used to work at Facebook with teams fighting spam. Melanie has been at Uber for almost a year now, but I think her experience from Facebook (where a lot of Uber's security team hails from, including the CSO) is well worth exploring.

Through a few tweets (embedded at the bottom of this post), she told me how Facebook used to combat spam, why certain approaches worked better than others. It’s worth noting that her comments were about Facebook posts, not about ads (a bit more on those in a bit).

Ensign says that Facebook fights spam primarily by targeting not the content of the posts, but the accounts that post them. She says "systems were trained to detect spam based on behavior of accounts spreading malware. It's never really been about *content* until now". She adds that monitoring content is tricky, for several reasons.

The first is obvious; with more than 2 billion monthly active users and billions of pieces of content posted on the site every day, it's a big, unwieldy undertaking to even start monitoring that much content. Facebook is an engineering force to behold, but scaling an operation like that, building systems to analyze that much unstructured material, and doing it effectively in real time, is not a simple task.

The second reason is the obvious risk around censorship. Facebook admittedly wanted to keep a neutral position on the content posted on its site (save for legal requirements). False negatives are bad; you let in "spam". But a false positive is akin to censorship. This might be less controversial now that Facebook works with fact-checkers to annotate content, but the Facebook promise was always one of extreme, sometimes admittedly labored, editorial impartiality.

The third is harder to appreciate, but one I can understand. When you build systems that recognize spammy content, you inherently give away your secret; it becomes much easier to work around. Ensign points to the Ray-Ban spam that was going around Facebook a couple of years ago. She says that since the content itself had the proper bona fides, the team fighting spam instead relied on the characteristics of the accounts posting it. Facebook engineers who presented at the Spam Fighting @Scale conference share similar insights; "Fake accounts are a common vector of abuse across multiple platforms." and "It is possible to fight spam effectively without having access to content, making it possible to support end-to-end encrypted platforms and still combat abuse" are two that are worth mentioning.
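To make the behavior-versus-content distinction concrete, here is a toy score over account characteristics. The features, weights, and thresholds are entirely made up; real systems like Facebook's use trained models over far richer signals. But note that nothing here ever looks at what the account actually posted:

```python
# Illustrative only: scoring accounts by behavior, not content.
def spam_score(account):
    """Crude 0.0-1.0 risk score from account characteristics alone."""
    score = 0.0
    if account["age_days"] < 7:
        score += 0.4          # brand-new accounts are higher risk
    if account["friends"] < 5:
        score += 0.2          # few real connections
    if account["posts_per_hour"] > 20:
        score += 0.3          # posting far faster than a human would
    if account["identical_posts"] > 10:
        score += 0.3          # copy-pasting the same link everywhere
    return min(score, 1.0)

bot = {"age_days": 2, "friends": 1, "posts_per_hour": 50, "identical_posts": 40}
human = {"age_days": 900, "friends": 250, "posts_per_hour": 1, "identical_posts": 0}

print(spam_score(bot), spam_score(human))  # 1.0 0.0
```

Because the signal is behavioral, an approach like this keeps working even when the content looks perfectly legitimate, or is end-to-end encrypted and invisible to the platform.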

Running a user generated content site is hard. When I first started at Digg, the thing that shocked me most was how much of Digg's engineering was really about keeping the site remotely clean. We had tools that worked to recognize botnets, stolen accounts, and everything in between. Every submission was automatically evaluated for many characteristics, such as "spamminess", adult content, and a few more. There were tools to block certain content only in certain countries. Brigades, as they were called, would form on Yahoo! Groups to kick stuff off the site, or promote it. As we plugged one hole, some social media consultant would find a new way to use Digg's various tools to send traffic to his or her ad-infested site.

And there was also the scaling. Digg was a lean and mean engineering organization compared to Facebook. But still, we always struggled with scaling challenges, and so did everyone else. The Failwhale might be gone now, but running a site that's under attack 24/7, with features being added left and right causing unforeseen performance issues, while scaling an organization to support more than a few million users, is an exercise that few can appreciate. Keeping a site that complicated up and running at that scale is a challenge. Doing that while keeping the site fast is a whole other beast. Former users of Friendster or Orkut might feel the same way; performance issues were what caused their users to leave those sites.

I stand by my initial assessment of the problem. Facebook built a massive attention pool, sliced and diced it, packaged it nicely, and is now making bank selling it to the highest bidder. The problems it faces, from spammy content to fake news, are inherent to the medium of exchange: attention. Sketchy characters flock to frothy marketplaces, like bees to honey. What makes or breaks a marketplace is being the trustworthy intermediary between the buyer and the seller. By being so large, and so influential, Facebook owns this problem.

And to be clear, this is not a dig (or Digg?) at the company; I rely on Facebook to keep in touch with friends scattered around the world. My WhatsApp groups, like those of many others, are my support system. I think that as the world's address book, it is where any business, activist, or community organizer finds customers, supporters, or members. And of course, while I have no significant others working at the company, nor any financial exposure to it, I do have close friends who are former or current employees.

My main qualm with social networks has always been the commercializing of individual and collective attention spans. As we spend more of our waking hours plugged in, and move more and more of our political discourse, both in the United States and around the world, to these walled gardens with rules written by a few people living in California, we risk losing more than just the integrity of a single election.


Fake News is an attention economy problem

A common theme of this blog is that history repeats itself. There are some fundamental dynamics of information that are innate to the internet, and most companies coast along those trends. There are occasional shifts, like the smartphone with its always-on connectivity and sensors, but things more or less follow certain trends.

The recent rise of "fake news", the cheap information that plagues the web and that Facebook, and to a smaller degree Google, is dealing with, has precedents and can be explained (and predicted, as many did) by a basic look at the economics of attention, which is another theme of this blog. Being somewhat reductionist, the problem can be viewed as a spam issue, on steroids. I admit the integrity of presidential elections is a more serious problem than loss of productivity, but a more sterile approach might help us come to some immediate solutions.

Facebook might be everyone's punching bag these days, especially journalists', but Google had its fair share of spam issues too. Not too long ago, around 2009, the Mountain View company was fighting a fierce war against what were then called "content farms". These companies would basically figure out the trending Google searches, create extremely cheap content, real fast, do some SEO magic, and get traffic from Google, against which they could sell ads. As long as your cost of production was lower than your revenue from ads, you were golden.

This was a big, lucrative business. The biggest player in the game, aptly named Demand Media, was a billion-dollar public company. This Wired feature on the company is full of amazing anecdotes. The company ran many, many websites targeted at virtually every vertical, including one called Livestrong, a franchise of none other than Lance Armstrong.

Google soon woke up to the danger and issued an update to its “algorithm”, called the Panda update, that effectively kneecapped the entire industry. Today we are looking to hear from Facebook CSO Alex Stamos, but Matt Cutts of Google was all the rage back then.

Facebook has had its fair share of “spam” problems too, and while the company might now seem paralyzed in an effort to satisfy both sides, it wasn’t always that way either. Zynga figured out the dynamics of the News Feed, as well as the psychological reward mechanisms of unsuspecting “gamers”, and built a billion-dollar business around it. In the meantime, though, Zynga and its flagship FarmVille game became synonymous with spam. When Facebook woke up to the problem and took action, the resulting tweaks nearly killed Zynga too. The gaming company is still around, as a public company, but it’s struggling even to pay for its HQ. The same pattern played out with companies like Upworthy, and many other “viral” news sources.

As an outsider, it’s not clear to me how much of an existential crisis this is for Facebook. Google’s struggle with content farms was an existential risk; users losing trust in its search engine could jump ship to Bing or any other competitor. Facebook users are locked into the platform, and by virtue of network effects, as more users join, it gets harder for the next user to leave. The social network is more or less the world’s biggest address book for many, and the filter bubbles make fake news a problem only someone else can diagnose for you, not unlike a mental disorder. Some, like Sam Biddle, even argue that Facebook inherently benefits from our endless craving for drama. Russian interference in US elections propelled the problem into the mainstream media, but that was unintentional.

Moreover, the numbers themselves make it a challenge. Unlike a few content farms (or virtual farms, in Zynga’s case) that can be easily identified, Facebook has 5 million advertisers who can push any sort of content into users’ news feeds. Still, it doesn’t seem like an unmanageable number. There are many businesses with a similar number of customers that seem to keep a handle on them.

It wouldn’t be great for Facebook’s bottom line to have to increase its cost per customer, but it is probably the right approach for the long term. The media and tech analyst Ben Thompson argues the same in his column (subscription might be required). Facebook flew past its competitors partly by being the saner, more refined, Ivy-grad-built-and-approved alternative. Google probably doesn’t miss the revenue it used to earn from the content farms, and Facebook certainly doesn’t miss Upworthy. A longer-term vision would help. A company that’s building solar-powered planes that communicate with each other via gyroscopically stabilized lasers should be able to solve some spam issues.

As a side note, it’s worth mentioning the opposite examples. These cheap SEO or virality games do not always end badly for companies. For each Demand Media, there’s a “success” story like Business Insider, and the like. The journalistic pasts of these organizations are questionable. They have built their businesses on borrowing content from other organizations and employing fewer, more junior staff, while playing the SEO game better than anyone. Similarly, Buzzfeed is now a serious journalistic powerhouse, but the company was decidedly built on subsidizing actual journalism with more viral, bite-sized content.

That solutions will emerge, however, only points to the chronic nature of the problem. Facebook, Google, or any platform can solve the spam problem, given enough resources and focus. But an economy based on commodified attention poses not just passing economic challenges to tech behemoths; it poses existential risks to a regime that is somewhat predicated on an educated public. The history of the attention economy is the subject of Tim Wu’s excellent book Attention Merchants, which I can’t recommend highly enough.

When people’s attention can be sold to the highest bidder, the producers with the lowest fixed costs will rule the world. A few years ago it was Demand Media, then Zynga, then Upworthy and Huffington Post, and today it’s everyone. As costs of production go down (which is a good thing), the challenge will get harder. Moreover, as the targeting of not just ads but any content becomes more precise, yet more opaque, the shared context that holds a society together will inevitably decay.

It might be a libertarian pipe dream to live free of interference from anyone, in one’s own digital and physical cocoon, but that seems untenable in the long run for a liberal democracy. At some point, we will have to have our rights to our information laid down in a more robust fashion, instead of relying on the goodwill of a few people living in California. Spam, as a risk to productivity, was solved by better technology, as well as by regulation that required transparency from widely distributed emails. But most importantly, it got solved after we acknowledged the problem, saw the long-term risks, and attacked it at its mechanics.

With Big Data Comes Big Responsibility

It’s getting harder to suppress the sense of an impending doom. With the latest Equifax hack, the question of data stewardship has been propelled to the mainstream, again. There are valid calls to reprimand those responsible, and even to shut down the company altogether. After all, if a company whose business is safekeeping information can’t keep the information safe, what other option is there?

The increased attention to the topic is welcome, but the outrage misses a key point. The Equifax hack is unfortunate, but it is not a black swan. It is merely the latest incarnation of a problem that will only get worse unless we all do something.

The main issue is this: any mass collection of personally identifiable data is a liability. The individuals whose data is vacuumed en masse, the companies who do the vacuuming, and legislators should all become aware of the risks. It is fashionable to say “data is the new oil”, but the analogy only goes so far, especially when you consider the current situation of the oil-rich countries. Silicon Valley itself is especially vulnerable here.

A big part of the tech industry in the Bay Area is built on mass collection of such private data, and on deriving some value from it. A significant part of that value comes, somewhat depressingly, from ever more precise ad targeting. The problem with this model was long known, if not tacitly admitted by its creators, but it wasn’t until the Snowden revelations that a real national debate picked up. With the recent brouhaha following the 2016 elections, and a real risk of an authoritarian government in the US, the questions are louder this time.

Public outcry does help, but the change is very slow. Part of it is that the business models are wildly successful. Combined, Alphabet (née Google) and Facebook are a trillion-dollar duopoly. With a cottage industry around these two companies, and practically every stakeholder in the area either beholden or financially tied to the industry, the motivation to change is small. Some companies, like Apple, try to raise the issue to a higher plane of morality, partly for ethical reasons, partly for competitive ones. But the data keeps getting collected, at an ever increasing pace, and it’s getting more and more likely that a catastrophic event will occur.

Let’s first talk about how data gets exposed. Hacking, or unauthorized access, is the most talked about way, but it’s far from the only one. A lot of the time, it’s just a matter of a small mistake. Take Dropbox: the cloud storage company once allowed anyone to log into anyone else’s account by entirely ignoring the password check. The mistake was caught quickly, but it’s a dire reminder of how small mistakes can happen. And that is a point worth pondering, separate from the recent hack Dropbox suffered.

As easily as data is collected and stored, it changes hands even more easily. Companies and their assets change hands, and so do the jurisdictions they live in. The Russian tech sector is a prime example. Pavel Durov, the founder of the oddly popular instant messaging platform Telegram, first built VKontakte, a Russian social network much more popular than Facebook in the country. But then came the Russian government with demands of censorship. Durov fled, and the Russian social network is now owned by a figure much closer to the government. And there’s always LiveJournal, which likewise got sold to a Russian company, putting all its data under Russian jurisdiction.

And sometimes, companies open themselves up to being hacked. Once an internet darling, Yahoo! was put in the spotlight when its own security team found a poorly designed hacking tool, installed by none other than the company itself. Initially designed to track certain child pornography related emails for the government, the tool was built without the knowledge of the company’s Chief Security Officer, Alex Stamos, a well regarded security professional. He departed the company soon after, only to join Facebook. And this is in addition to the Yahoo! hack that affected 1 billion users and almost derailed a multi-billion dollar acquisition.

Government surveillance is a touchy subject, and moral decisions are always fuzzy, with someone left unhappy. Governments should use the tools at their disposal to keep their citizens safe, and this might sometimes require uncomfortable measures. That doesn’t mean they should be given direct access to millions of people’s private data, however. Intelligence efforts should be directed, not dragnets. Living in a liberal democracy requires a certain amount of discomfort, not pure order.

But it is hard to deny the evidence at hand: from once-liberal darlings like Turkey to known autocratic regimes like China, any government will find it impossible to resist the temptation to take a peek at the data, one way or another.

Governments are made up of people, just like corporations are. The solutions to these problems won’t be easy; with so much already built, tearing it all down is neither an option nor preferable. The industries built on this data add value and employ thousands, if not millions. But we have to start somewhere, as individuals, technology companies, and legislators.

First, individuals need to be more cognizant of the decisions they make about their data. Some of this will require education, starting from a much younger age. But even today, there are a lot of easy steps one can take.

For many uses, more private, less surveillance-oriented tools already exist. Instant messaging tools like WhatsApp (bought by Facebook for a whopping $19 billion) are easy to use while employing cutting-edge end-to-end encryption technology borrowed from Signal. One can wonder whether essentially playing spies is worth the hassle, but the risks are real, and getting more so every day, even for members of Congress in the US.

For regular browsing, things are in worse shape. Practically every site on the internet tracks you across every other site; shopping and news sites are particularly bad. Users are fighting back, with sometimes clunky, equally heavy-handed tools. Thanks to the overzealous adoption of ads, both intrusive and sometimes malicious, ad blocking is on the rise around the world. It is hard to fault consumers; most would benefit from using an independently owned ad blocker like uBlock Origin, or a browser like Brave that has such technology built in. Apple recently updated its Safari browser on both macOS and iOS to “intelligently” curb cross-site tracking.

For things like email and cloud storage, things are trickier. For many users, their data is safer with a big company with a competent security team than with a smaller service provider. There’s a balance here; while big providers are much juicier targets (including for governments, who can request data legally), they also have the benefit of being hardened by such attacks. Companies like Google use their own services, further incentivizing them to safeguard data, at least from hackers.

Even then, however, most people would benefit from tightening security beyond the defaults. For users of Gmail, Dropbox, and virtually any other cloud service, using two-factor authentication, coupled with a password manager, is a must.
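As an aside for the curious: the rotating six-digit codes most two-factor apps generate are not magic. They are typically TOTP (RFC 6238), an HMAC over a shared secret and the current 30-second time step. A minimal sketch, using only the Python standard library; the secret below is the RFC’s published test vector, not a real one:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    then "dynamic truncation" down to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because both sides derive the code from the shared secret and the clock, nothing secret travels over the network at login time; a phishing site can still relay a freshly typed code, though, which is why hardware keys go one step further.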

And going back to cognizance, individuals must be aware of the data they provide and be at least minimally informed. When you sign up for a new service, before sharing all your data with it, see if it at least has a way to delete that data, or export it. Even if you never use either of those options, they can be good signs that a company treats your data properly, instead of letting it seep into its machinery.

For the creators of such technology, things are harder, but there’s hope. The first step is obvious; companies should treat personally identifiable data as a liability, collecting as little as possible, and only for a specific purpose. This is also the general philosophy behind the EU’s new General Data Protection Regulation (GDPR). Instead of collecting as much data as possible in the hope of finding a good use for it later, companies should collect data only when they need to. And most importantly, they should delete the data when they are done with it, instead of hoarding it.

Moreover, companies should invest in technologies that do not need data collection at all, such as client-side computation instead of server-side. Apple is the prime example here; the company uses machine learning models that are generated on the server from aggregate data, for things like image recognition or speech synthesis, on the devices themselves. Perhaps as a sign of poetic justice, the intelligent cross-site tracking prevention Apple built into its browser is based on data collected in aggregate, rather than personally identifiable, form.

It is not clear whether such technologies can keep up with server-based solutions, where iteration is much faster, but the investments might pay dividends. Today’s smartphones easily compete with the servers of just a few years ago in performance. Things will only get better.

And for the times when mass collection of data is required, companies should invest in techniques that allow aggregate collection instead of personally identifiable data. There are huge benefits to collecting data from big populations, and the patterns that emerge from such data can benefit everyone. Again, Apple is a good example here, though Uber is also worth mentioning. Both companies aggressively use a technique called differential privacy, where private data is essentially scrambled enough to be unidentifiable while the patterns remain. This way, Uber analysts can view traffic patterns in a city, or even do precise analysis for a given time, without knowing any individual’s trips.
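Neither Apple’s nor Uber’s actual machinery is public in full, but the classic building block behind differential privacy, randomized response, fits in a few lines and shows the core idea: each individual report is noisy and deniable, yet the aggregate statistic survives. A hedged sketch; the 75/25 split and the 30% “true” rate are arbitrary illustration values, not anyone’s production parameters:

```python
import random


def randomized_response(truth, p_truth=0.75, rng=random):
    """With probability p_truth report the real answer; otherwise flip a coin.
    Any single report is deniable, yet the population rate is recoverable."""
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5


def estimate_rate(reports, p_truth=0.75):
    """Invert the noise: observed = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth


# Simulate 200,000 users, 30% of whom have the sensitive attribute.
rng = random.Random(42)
reports = [randomized_response(rng.random() < 0.30, rng=rng) for _ in range(200_000)]
print(estimate_rate(reports))  # close to 0.30, without any single honest report
```

The analyst recovers the population rate to within a fraction of a percent, yet no single “yes” in `reports` proves anything about the person who sent it.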

And more generally, companies should invest in and actively work on technologies that reduce the reliance on individuals’ private data. As mentioned, a big ad industry will not go away overnight, but it can be transformed into something more responsible. Technologists are known for their innovative spirit, not defeatism.

End-to-end encryption is another promising technology. While popular for instant messaging, the technology is still in its infancy for things like cloud storage and email. There are challenges; the technology is notoriously hard to use, and recovery is problematic when someone forgets their encryption key, such as their password. Maybe most importantly, encryption makes the data entirely opaque to storage companies, severely limiting the value they can provide on top of it.

However, there are solutions, some already invented, some being worked on. WhatsApp showed that end-to-end encryption can be deployed at massive scale and made easy to use. Other companies like Keybase are working on more user-friendly ways to do group chat, and possibly storage, while also working on a new paradigm for identity. And there are also more futuristic technologies like homomorphic encryption. Still in the research phase, if it works as expected, the technology might allow cloud storage services where the core data stays private while still being searchable, or indexable. Technology companies should direct more of their research and development resources to such areas, not just to better ways of collecting and analyzing data.
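To make the homomorphic idea concrete: even textbook RSA happens to be multiplicatively homomorphic, so a toy sketch can show a server combining ciphertexts without ever seeing the plaintexts. To be clear, this is an illustration only; the unpadded, tiny-key RSA below is completely insecure, and the fully homomorphic schemes under research are far more elaborate:

```python
# Toy "textbook" RSA with a deliberately tiny key (p=61, q=53).
n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent


def enc(m):
    """Encrypt with the public key: c = m^e mod n."""
    return pow(m, e, n)


def dec(c):
    """Decrypt with the private key: m = c^d mod n."""
    return pow(c, d, n)


a, b = 12, 5
# The "server" multiplies two ciphertexts without ever seeing 12 or 5...
c = enc(a) * enc(b) % n
# ...yet the key holder decrypts the product of the plaintexts.
print(dec(c))  # → 60
```

That is the promise in miniature: the server computed something useful on data it could not read. Doing this for arbitrary computations, securely and at practical speed, is exactly what the research is chasing.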

And lastly, legislators need to wake up to the issue before it is too late. The US government should enshrine the privacy of individuals as a right, instead of treating it as a commercial matter. Moreover, mass collection of personally identifiable data needs to be brought under supervision.

The current model, where an executive responsible for leaking 140 million US consumers’ data can get away with a slap on the wrist and a $90M payday, does not work. Stronger punishment would help, but preventing such leaks at the source, by limiting the size, fidelity, or longevity of the data, would be better.

Moreover, legislators should work with the industry to better educate consumers about the risks. Companies will be unwilling to share details about what is possible with the data they have on their users (and unsuspecting visitors), but it is better for consumers to make informed decisions in the long run. Target made headlines when it reportedly figured out a woman was pregnant before she could tell her parents. Customers should be aware of such borderline-creepy technology before they become subject to it, especially considering that Target itself was also the victim of multiple major hacks. Facebook was recently the subject of a similar report, in which the company surfaced a family member of a tech reporter (the same reporter who broke the Target story), with no one clear on how. Individuals should not feel this powerless against corporations.

The current wave of negative press against Silicon Valley, caused mostly by the haphazard way social networks were used to amplify messages from subversive actors, is emotionally charged but not wholly undeserved. Legislators can and should help technology companies earn back people’s trust by allowing informed debate about their capabilities. A bigger public backlash, when it happens, would make today’s pessimism seem like a nice day in the park.

There are huge benefits to mass amounts of data. There is virtually no industry that wouldn’t benefit from having more of it. Cities can make better traffic plans, medical researchers can study diseases and health trends, governments can make better policy decisions. It can be commercially beneficial too; with more data we can make better machine learning tools, from cars that drive themselves to medical devices that identify a disease early on. Even data collected for boring purposes can become useful; Google’s main revenue source is selling ads on top of its search results, which no user would want to give up.

Data might be the new oil, but only with mindful, responsible management of it will the future look like Norway, rather than Venezuela or Iraq. In its essence, personally identifiable data in huge troves is a big liability, and the benefit we currently derive from such data is largely better ad targeting. No one wants to go back to a time without Google, or Facebook. But it is possible to be more responsible with the data. The onus is on everyone.

iPhone stole your attention. Your watch might help.

Apple announcements never fail to entertain. Over the years, the most amusing moments have come when an Apple executive claims that their products not only contain amazing technology, but embody larger-than-life qualities. A couple of years ago, when Apple removed the headphone jack from its phones, they called it, without a hint of irony, “courage”. This year’s announcements had their share of squirming moments too, from Apple Town Squares to soul-sucking visualizations of face scanning technology. But for me, the real kicker was when Apple decided to associate the Apple Watch with cellular connectivity with “freedom”.

It’s hard not to cringe when Apple’s first promo video for the cellular Watch shows a surfer who receives a call right in the middle of her sick trick. How is that a good thing? Do people not go on vacation to unplug? The eye rolls didn’t stop there; Apple decided to demo making a phone call with nothing but a watch by showcasing an executive answering a call during a paddle-boarding session on Lake Tahoe. I wrote off the proclamations of freedom via a $400 watch, combined with a $120-a-year bill hike, as garden-variety Apple navel-gazing.

It wasn’t until I read a review of the watch by Hodinkee, a high-end watch blog, that I realized the freedom Apple was promising was nothing more than freedom from its own device, the phone. It’s a great read overall, with lots of interesting insights into the watch industry itself. But what caught my eye was how the watch changed, or reduced, how he used his phone.

In the few days I’ve been using the Series 3 Edition as my only communication device, I’ve found myself checking Instagram less. Texting less. Dickin’ around on the web less. I use the watch to text or make phone calls when I need to – and that’s it. My definition of “need” has changed completely – and frankly I don’t miss having my phone in my pocket at all.

The smartphone promised us always-on connectivity, and we welcomed it with open arms. The ability to respond to an email immediately wasn’t new, but add an actual web browser, and an App Store that extended the phone’s functionality virtually endlessly, and we got hooked. As the fidelity of the medium increased, it slowly became not just a device used for a specific purpose, but something we use, more or less, for the sake of using it. In short, we traded in our attention for the promise of always-on connectivity.

The reasons our phones are so addictive are numerous, and we are just discovering the results, both personal and societal, of such an enormous shift in how we manage our attention spans. Although the research is still taking shape, there are already a few loud voices telling us that the commodification of our attention is nothing less than a full-scale war by the brightest minds of our generation against our identity.

I am no Luddite; I have earned my living for the past 7 years working at technology companies. As I moved across first cities, and then countries, I have relied on technology to stay connected to those dear to me. I also think that technology is an essential tool to slowly bring down humanity’s arbitrary barriers, democratize access to information, and generally make the world a more just place.

The Apple Watch stands here as an interesting device, promising connectivity with a much smaller drag on one’s attention. It has a screen, but a much smaller one than your phone’s; you simply can’t look at it for hours on end. Its input methods are similar to a phone’s (with the notable exception of a camera), but voice plays a much bigger role on it, ironically, than it does on the phone. You can, realistically, use your watch via voice, both as an input and output method, and rely on the screen only for an occasional glance.

Of course, the same dangers that made the smartphone an attention hog loom over the watch. Unlike a phone, a watch is always attached to your body, with the ability to jolt you at any time with its vibration motor. And Apple is not being subtle about its goals; while it is admirable that the company is using the heart-rate sensor to detect heart conditions and generally provide data to researchers around the world, there’s something off-putting about your heart rate being measured constantly and uploaded, even in aggregate form, to some datacenter somewhere. And maybe all this will be moot when the tech industry actually puts its resources, as it hasn’t so far, behind developing apps for the watch that become as addictive as their phone counterparts.

It is early in our technological evolution to tell what the prevailing way of interacting with technology will be, and for what purposes. Smartphones seem ubiquitous now, but it’s important to note that they have existed for merely 10 years, a blink of an eye even at the fast-changing pace of technology. It would be very unlikely, and depressing, if interacting with a 6-inch glass slate littered with apps whose raison d’être is to collect more data about you to sell better ads were the conclusion of human-computer interaction.

In some ways, Apple’s proclamation of the freedom you can get with a watch is an admission of this guilt. What the watch promises is freedom from your phone. More than any other company, Apple created this world where we feel a compulsive desire to be entertained and never bored. And maybe, with the watch, Apple can help undo some of the damage. This is not to suggest that the main reason Apple sells devices is to advance human civilization, or anything other than to make unfathomable amounts of money, only to spend it on absurd buildings; nor should we ask a giant corporation for salvation from our sins.

Unlike many of the other tech giants, Apple makes most of its money (though increasingly not all of it) from directly selling products to its customers. Without other intermediaries to take a cut, the company’s incentives are more directly aligned with those of its users. And more than that, with its size and reach, Apple is a company that sets the tone for the industry.

Our mode of interaction with our technology is still evolving. It is not reasonable to roll back to a world where always-on connectivity isn’t the norm. But that doesn’t mean that our attention should be up for sale. A device, or a combination of devices, that makes a conscious effort to be less in your face and more out of your way is one way to ensure that.

The cyber history repeats itself

With a new unicorn popping up seemingly every other week, it’s easy to forget that the new behemoths that shape our lives, the technology firms, have existed for more than a few years. Behind the shiny veneer, however, there is a rich history of how this world came to be. And just like any other history, it’s one that keeps repeating itself.

The latest iteration of the history, though, is not its finest one. Nazis are back.

A quick recap. The informed citizens of the greatest country on earth have collectively voted to elect a white supremacist sympathizer, with the overt, covert, voluntary, and involuntary help of practically every tech company and its acolytes. By the time we all woke up to what we had done, it was too late; the Nazis were emboldened, chanting in the streets of Virginia, among many other places. Then a guy woke up, literally, and decided to kick the Nazis off the internet, until they found a new home.

“I woke up this morning in a bad mood and decided to kick them off the Internet.”

— Matthew Prince, Cloudflare CEO

For some observers of technology, this latest kerfuffle might be just a new chapter in the upcoming book by a Vanity Fair writer. Those a bit more in the know would note that the Nazis (a word I am using as shorthand for white supremacists) never really left the internet. They populated practically every platform you did; they were on newsgroups, mailing lists, 4chan, reddit, Facebook, Twitter, and probably still are.

But go a bit farther back in the Wayback Machine, and it’s easy to remember that the Nazis and some part of their history were on the internet as far back as the 2000s. This points to one of the most interesting tensions of the Internet with a capital I: it is borderless, yet its levers are controlled by just a few. That is the subject of this essay: how the current gatekeepers’ aim to create a new type of stateless state is just a clumsy reiteration of past attempts.

“Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”

— John Perry Barlow, EFF CO-Founder

The aspirational extraterritorial culture of the internet is a messy and deep subject, but the “Declaration of Independence of Cyberspace” is a good start. Penned by John Perry Barlow, one of the founders of the Electronic Frontier Foundation (EFF), at the World Economic Forum, the declaration pulls no punches. In fact, more than statelessness, you can hear in the subtext that cyberspace is not just an international entity but almost a supranational one. It is a good read, both as a way to understand the libertarian thinking of the cyberspace’s early residents, and as a Marxist take on how the zero marginal cost of production in technology changes the entire dynamics of the economy and, of course, of societies. It is also remarkably prescient, not necessarily about the type of world the early adopters would eventually create, but about the conflicts they would face.

Scroll your way back to 2000. Not just to the days Before iPhone or Before Facebook, but Before Google. In 2000, a French human-rights organization discovered that Yahoo, on its auction platform, allowed the sale of Nazi and Third Reich memorabilia. While distasteful, such activity was not illegal under US law at the time, though it was quite illegal under French law. In what’s considered a landmark case, a French court eventually ordered Yahoo not just to pull such items from its French store (fr.yahoo.com), but also to make the items in the US store inaccessible from France.

Front page of the internet, 2000

The entire discourse around the case is extremely fascinating, and some of the statements from both sides have a timeless quality. To an American audience, for whom only freedom of speech is more paramount than the right to carry a firearm, interference by a French court, of all courts, was an international overreach of unseen proportions. This analysis, however, misses the continent-wide trauma Europeans experienced with Nazism in the 1940s. While America has its fair share of World War 2 scars, they pale in comparison to the destruction Europe endured. The suffering was so profound, so widespread, and so deep, and Nazism such a vile idea, that the continent’s new identity, the European Union, is largely built around the reaction to it.

It is worth pulling out a few quotes here, just to see how prescient some of the predictions from the French side were. Mark Knoebel, the French activist whose letters sparked the entire shebang, said that the American internet was becoming a “dumping ground” for racists from all over.

No discussion of censorship on the internet would be complete without bringing up everyone’s once-favorite liberal reformer turned autocratic strongman, Recep Tayyip Erdogan, the president of Turkey. As far back as 2008, just 4 years after Google’s IPO, the Turkish government was clashing with YouTube over a couple of videos making fun of Mustafa Kemal Ataturk, the founder of the modern Turkish Republic. In what would become the norm for the Turkish government (or already was, depending on your ethnicity in Turkey), the state decided to block YouTube entirely and demand the videos be taken down. The case went on for literally years, during which YouTube stayed blocked in Turkey for almost two years. Turkish bloggers took matters into their own hands, shutting down their own sites to protest the government’s block. The block itself, however, was so ham-fisted that even the then-Prime Minister Erdogan himself mentioned that “everyone knows how to access YouTube”.

“I think the Decider model is an inconsistent model because the Internet is big and Google isn’t the only one making the decisions”

— Nicole Wong, Google

Still, the details of this 2008 case already signal the awkward situations tech companies would find themselves in with governments. Though it is impossible to imagine now, Google employees felt comfortable jokingly calling themselves “The Decider” with a New York Times journalist in the room. The employees in charge, many with law degrees, were aware of their power and felt obviously uncomfortable with the levers they held, but in the end, they held on to them.

A common theme underlying most Silicon Valley thinking is that computers, the internet, and associated technologies change everything: from modes of production to distribution, from how information is generated to how it is disseminated. No incumbent is too big to upend, no industry without inefficiencies that a couple of scripts can eliminate. A common complaint from the less STEM-focused side of the world, then, is that Silicon Valley’s casual disregard for the history and rules of the world borders on recklessness.

This is largely a political argument, which means it’s an everything argument, but the singular point is that sometimes internet companies’ casual disregard for history is hurtful not just to the entire world, but also to themselves (a statement whose irony is quite obvious to yours truly).

Silicon Valley companies love to invoke legal talismans, a phrase (I think) coined by Kendra Albert. In short, they love to evoke the feeling of a legal proceeding, such as due process, where there is none, mostly to justify their own decision making. But sometimes such invocations are just symptoms of delusions of grandeur, and they do come with consequences for everyone, as mentioned, including the companies themselves.

Consider the time Twitter’s UK General Manager called Twitter not just a bastion of free speech but the “free speech wing of the free speech party” in 2012, and try not to cringe. But you can definitely see a direct line from the EFF declaration to such an inane statement. A new world is being born, called cyberspace (as opposed to what, meatspace?), and the rules are written by whoever is creating this world. Considering the situation Twitter finds itself in right now, with user growth barely chugging along, a stock hugely under its IPO levels, and its value possibly held up significantly by an orange White House resident, it’s hard to imagine Twitter would be behaving the same way if it had a better understanding of the nuances of free speech laws, and of how they protect people from the state because, unlike corporations, the state is allowed to jail and, sometimes, kill its people.

“That means more than one-sixteenth of the average user’s waking time is spent on Facebook”

Of course, this aspirational statelessness of the guardians of cyberspace does go the other way too. It’s easy to write off your overzealous application of freedom of speech as a mistake, but it is harder when you do the opposite. When a tech company counts ⅓ of the world’s population as its users (and 80% of online Americans), and those users spend a considerable amount of their waking moments looking at things pushed onto them by that company, it’s practically impossible for a one-in-a-million event not to happen with exceeding frequency when you are dealing with billions.

Probably one of the more eye-opening cases of this American overreach into other cultures involves bodies, or more specifically naked ones. For Americans, the sight of a briefly exposed breast at a sporting event is cause for national debate, but for many Northern Europeans, nudity is just another state of dress, as normal as any other. Especially so when it is presented in a historical, artistic, or just non-sexualized context. And even more especially so when it is the conservative Norwegian Prime Minister who happens to share a Pulitzer-prize-winning photo. Is Facebook, run largely by a bunch of white men in America, not making cultural statements about an unashamedly progressive country?

Banned in California

It is easy to write off these high-profile instances as simple mistakes, and having worked at a similar user-generated-content site before, it is mind-blowing to me that Facebook is as free of spam as it is. But what does it mean when these types of incidents happen so often that you slowly start shifting other cultures’ values toward your own, which, whether you like it or not, were shaped by your own American upbringing? One cannot just create a culture in such a transactional manner.

It is one thing, as an academic exercise, to imagine a world without governments, a libertarian paradise. And if someone wants to take that academic exercise to the seas or to other planets, it is well within their rights to do so.

But for a generation that wants to eventually govern not just cyberspace but also one of the most important states in the world, the utter clumsiness of the entire enterprise should give one pause. A common joke in Silicon Valley, the place, about Silicon Valley, the hit HBO show, is that many of the absurd plot twists in the series are actually toned down to be believable to the general public.

Consider the case of Reddit. When a bunch of celebrities’ iCloud accounts got hacked and their private photos were posted on the site, the company decided, reasonably, to remove that content. But in doing so, the CEO of the company said that they considered Reddit not just a private company, but “a government for a new type of community”. He even went on to describe how he sees the actions of the moderators as akin to those of law enforcement officers. But how do you reconcile such great ambition with the fact that your CEO, or president, resigns from the government over a seating arrangement issue? (Disclaimer: I worked at a Reddit competitor briefly, around 7 years ago, partly because I was and still am quite interested in the space. I even wore a Reddit t-shirt when they came to visit us.)

“We consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community”

— Yishan Wong, Former Reddit CEO

Building a new world, one that is more just, more humane, one that is safer, cleaner, more efficient: these are all great goals. When I decided to study computer science in 2005, my main motivation was similar. I grew up in a town in Turkey where I didn’t always fit in, and it was through the internet that I could see more of the world easily enough and find people I could connect with, on many levels. I wanted to extend that world, which seemed reasonably better than the one I lived in, further into the real world.

And personal politics matter too. As an immigrant to the US, unlike most of my more left-leaning friends, I find the idea of statelessness, of a post-nation-state world, an experiment that humanity owes itself to try. While supranational organizations such as the EU and the World Trade Organization have their flaws, and globalization comes with an unsettling feeling of homogeneity, I remain largely optimistic that as a species, we are better off in a more integrated society.

However, that does not mean I advocate for a world where we outsource our thinking, our values, our cultures, our judicial decisions and certainly not our free press wholesale to a small number of people, who are unelected, unvetted, and largely unaccountable.

What I would like to see, however, is less of this reckless attitude and more of a thoughtful approach. An informed, inclusive, global debate about the kind of digital world we can create together. One that learns from our previous mistakes, and does better. The time for this discussion is running out, and we have repeated our mistakes enough times. We need to do better now.

On quiet

Istanbul is not a quiet place. The streets are filled to the brim with cars, honking. The kid is screaming at his mom, the girlfriend at her boyfriend, the police at the street vendor. It’s not pleasant, but it is Turkey.

However, the real noise is not the people, or the cars, or the ferries. It is the news. Everyone in Turkey is always watching the news. It’s on in the background when you are at home with your parents. It’s blaring at you at the corner store, from the TV hung in the corner. It’s shouting at you at the bank, from the small radio sitting next to the framed photo of the teller’s daughter. It’s even on in the waiting room at the doctor’s office, because that’s when you really need a pick-me-up.

And when you are, by some miraculous happenstance, out of earshot of a TV, there’s Twitter. Everyone is always on their phones, and if they are not checking Instagram, they are checking the news on Twitter. It never ends. It wasn’t always that way, I want to say, but for the life of me, I can’t remember when it wasn’t.

It used to be fashionable to call Turkey the “Little America”, largely due to an overzealous adoption of neoliberalism and all the joys and pains that come with it. It used to be a thing, a family tradition, to enjoy even the most inane of American traditions. Having visited America was a sign of not just wealth, but also a checkmark in the pursuit of a more enlightened world.

Now, slowly, it looks like America is on its way to becoming a “Little Turkey” itself, starting with people’s addiction to the news and a constant state of screaming.

Much has been said about the 24/7 cable news networks in the US. How the inane, and insane, need to fill up the hours drives networks to just put talking heads on TV. That current boogeyman of the orange man in the White House is, people argue, partly responsible for him being there. When I was a kid, CNN for me was the night-vision imagery from the first of the seemingly endless Iraq wars. Now it’s a bunch of talking heads that are always there.

And then, there’s Twitter. And push notifications. Always the push notifications. It used to be different, though. When I first moved to the US, in 2006, we also had a scandalous president. He didn’t seem to be all that coherent, and his policies didn’t earn him many favors in or outside the US. There was some political turmoil, maybe even a war, but it happened on a different timescale. There were other things going on.

One of the first things America lost when Trump got elected was the quiet, the personal space millions had to themselves. You had time to yourself to be in love, to be with your friends. There were conversations that never touched on politics. Some things were downstream of politics, but most things were not. There was a time when you could just be angry at your own things, in your own world, on your own time. Now you are required to be angry all the time because of something you didn’t do, have no control over, and with seemingly no end in sight.

Has it been 6 months since Trump took office, or 6 years? Is anyone even counting anymore? How different would it feel if we weren’t just 1/8 (hopefully) of the way into the dumpster fire that is this administration, but already halfway there? I am aware that I am speaking from a privileged position here, as a white man with a stable job in a well-paying industry, as opposed to being a minority. Maybe things were always this loud, if you always had to worry about your job, or your livelihood.

But in the objective space I can carve out, I feel that things got worse. And we need to do something about it.

I am not suggesting that people ignore the news or disengage from the public discourse, or disconnect entirely or at all. I don’t think a democracy works with a fully disengaged public. And it certainly does not work with a public that is informed only about topics that interest them. We all have a responsibility to be informed, including about things that don’t matter to us but do matter to those around us. But it also matters what we each decide to think about, what we choose to care about. We built ourselves empires on capturing attention, and we are slowly realizing that our minds cannot keep up with their demands. But I think we have yet to realize that our minds are also not capable of being outraged all the time. We can’t always be mad, lest we lose our connection with reality. Everything is political, but politics isn’t everything.

One goal of politics is to arrange relationships between big groups of people. Not necessarily to divide or unite them, but to establish some sort of structure. A network of roads, where connections happen. It doesn’t care if you run tanks on them, or ice-cream trucks. You can drive away, or run towards someone. But the world is not about those roads. It’s not not about them either, of course (just ask any commuter), but they are just a part of it.

Somewhere along the way, we need to park our cars, get off our bikes, and look around at the world as it is. The quiet is easily disturbed, but in the end, it’s what makes each of us human and unique, and it’s what keeps society humming along. We can’t always scream; we need to be quiet so that everyone else can have quiet too.

Accents and Blowhards

Each time I get in a cab in San Francisco, I make it a point to talk to the driver, not just because I believe it’s awkward otherwise, but because, as a somewhat assimilated Turkish person living in the U.S., I enjoy conversations with cab drivers, who a lot of the time happen to be foreigners themselves. As we speak and they ask me where I’m from, they often tell me how surprised they are that, for someone who has lived in the US for 7 years, I have almost no accent.

Similarly, at bars and other places where I meet new people, especially if I make a point of slowing down my speech just a bit (and I speak pretty fast), I am able to maintain an accent indistinguishable from an American one, or so I am told. In fact, for a long time, I considered having an accent a failing, as I was leaking information the moment I started speaking; there have been times when I’d have preferred that the people I was talking to didn’t know I was a foreigner.

Recently, Paul Graham, the founder of the premier Silicon Valley venture capital firm Y Combinator, made some comments in an interview about how, according to his data, a strong accent in an entrepreneur is strongly correlated with their company failing. Unsurprisingly, as the notion of an accent is tightly correlated with nationality and race, a big kerfuffle arose, so much so that Mr. Graham himself had to write a piece explaining himself to people calling him ugly names.

As someone who has been interested in languages and accents, both personally and academically, I have found the entire debacle fascinating to watch, somehow reminding me of the Turkish saying “a madman throws a stone into a well and a thousand clever men can’t get it out”. But as I read more and more of the blog posts and comments and tweets, I decided that it is now my turn to throw a stone into the well of the internets.

Before I came to the U.S. in 2006, I attended an American high school in Turkey where the primary language of instruction was English. During my time there, I was heavily involved in the Model United Nations club, which meant I spoke English outside of school as well, and I was lucky enough to give public speeches, in English, to thousands of people when I was 17.

When it was time for me to pick a college in the U.S., my choice of CMU was partly driven by the fact that it had a small Turkish community, which would allow me to make more American and international friends, which I did. Since 2006, I would guess I have spoken and read more English than Turkish by orders of magnitude; most of my close friends in the U.S. are Americans, and at this point I find myself even slurring my speech in Turkish, speaking certain words with an English tonality. Moreover, I actually studied cognitive science (not computer science!) in college, specializing in linguistics, which puts me in a special place to aim my stone strategically, so that no one can get it out.

Let’s first look at what the venerable Paul Graham actually said about accents, before making any judgments.

One quality that’s a really bad indication is a CEO with a strong foreign accent. I’m not sure why. It could be that there are a bunch of subtle things entrepreneurs have to communicate and can’t if you have a strong accent. Or, it could be that anyone with half a brain would realize you’re going to be more successful if you speak idiomatic English, so they must just be clueless if they haven’t gotten rid of their strong accent. I just know it’s a strong pattern we’ve seen.

Taken verbatim, or parsed like a computer would, this is a benign statement. Paul Graham and the folks at Y Combinator have probably worked with more startups than most people in Silicon Valley, and it’s a natural tendency to look for patterns and explanations for interesting phenomena when you are exposed to so many similar things at once.

Nevertheless, what Paul Graham seems to be missing is that communication doesn’t happen just through the words we speak; the context in which those words are uttered matters equally, if not more. The context brings along all sorts of prejudices and preconceived notions, and especially for a semi-public figure like Paul Graham, who owes part of his fame to his eloquent essays, it’s the author’s responsibility to adjust his narrative to the audience.

There’s a curious and slightly frustrating tendency in people with scientific backgrounds to assume that the audience they are speaking to has the same level and type of sophistication, and that it’s simply “phony” to adjust the way they speak, in tone and in content, to make themselves easier to understand, or at least to not horribly offend the other party. It’s also curious that this tendency, or social oddity if you will, seems to be amplified in people working with computers, where it’s tempting to reduce all sorts of information to its pure essence. In doing so, you actually lose the information that isn’t easily coded in words and phrases but is transient and context dependent, meaning that to represent a specific piece of information fully, you’d have to code the entire state of the world, almost literally.

Note that Paul Graham mentioned not just people with accents but actually said “people with strong foreign accents” (emphasis mine). Surely, you can argue that I’m nitpicking words, but hey, Mr. Graham is the native speaker here, and one can only assume (or hope) he’s picking his words with the utmost care and precision, given we are talking about languages here, and I’m just giving him and his words the respect they deserve.

So, as soon as you start talking about “people with strong foreign accents”, you immediately bring race and nationality into play, which even, or maybe especially, in the Land of True and Unadulterated Meritocracy that is Silicon Valley, is a third rail. Thus, after those words were published and publicized, as if on cue, people of all sorts started calling Paul Graham racist, a xenophobe, a hypocrite, and many other unspeakable things. I’m sure his fame, his wealth, and the “rich, white men” stereotype that he unfortunately seems to fit didn’t help matters much, and his close ties to the technology sector, where there seems to be a disproportionate number of people with accents and foreign-born individuals, made it an even juicier subject.

I have no reason to believe that Paul Graham is any of the things people call him. If anything, from what I can tell, he’s passionate about allowing more foreign-born nationals into the U.S., for one reason or another. It could be an altruistic motive, or, for the purposes of this discussion, it could simply be that he wants a bigger labor force available to his companies. In either case, Paul Graham argued many times, on Twitter, in the comments on Hacker News, and in his response, that he is on the founders’ side in this debate and that he is frantically trying to help people, and I believe him. But sometimes there seems to be some room for improvement in his tone, delivery, and the actual content of his messages.

Take the second part of that sound bite, where Mr. Graham argues that “anyone with half a brain would realize you’re going to be more successful if you speak idiomatic English, so [the entrepreneurs] must just be clueless if they haven’t gotten rid of their strong accent”. Now we are at the point of not just calling out people with strong foreign accents, but essentially saying that people who haven’t gotten rid of their accents are lazy and stupid, because they aren’t able to understand how people perceive them. That’s very, very hapless coming from Mr. Graham (and is pretty offensive to people who have had lobotomies for medical conditions; they are surprisingly normal). And there’s another underlying implication here: not only are you not as smart as a person with half a brain if you haven’t gotten rid of your “strong foreign accent”, but it’s also a sliding scale, where the common decency, mind you, of getting rid of your accent is strongly tied to your intelligence.

The more surprising thing is that Mr. Graham seemed shocked at the response such a sound bite generated. While a significant chunk of the responses were simply people being angry, a couple of smart people touched on how someone as prominent as Mr. Graham is still propagating stereotypes and providing more ammo for those who are truly racist and short-sighted. My personal qualm has been more about the haphazard way Paul Graham seems to throw around phrases with a false sense of authority, without realizing their implications or fully grasping the subject matter at hand.

Going back to the aforementioned quote, Mr. Graham himself mentions that he’s not sure what actually causes entrepreneurs with strong foreign accents to fail, and he enumerates a couple of possible explanations. In other words, we have an interesting phenomenon, a couple of possible explanations, and some preliminary data. This is a pattern that should be familiar to anyone with half a brain, though a college education would help too. This is where a person would simply engage in a battle-tested way of solving this problem: apply science! Or more specifically, apply the scientific method: test your hypothesis, measure your data, rule out other possible explanations such as confounding variables, rinse, repeat, until there’s a reasonable level of confidence.
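Just to make the confounding-variable point concrete, here is a toy simulation in Python with numbers I made up entirely; the “weak local network” confounder and every probability in it are my own hypothetical assumptions, not anything from Y Combinator’s actual data. In the simulation, an accent has no effect whatsoever on failure, yet the naive comparison makes it look damning:

```python
import random

random.seed(0)

# Hypothetical setup: the accent itself has NO effect on failure.
# A made-up confounder (say, a weaker local investor network) independently
# makes a strong accent more likely AND makes failure more likely.
founders = []
for _ in range(100_000):
    weak_network = random.random() < 0.5                      # the confounder
    strong_accent = random.random() < (0.7 if weak_network else 0.2)
    fails = random.random() < (0.8 if weak_network else 0.4)  # independent of accent
    founders.append((strong_accent, weak_network, fails))

def failure_rate(rows):
    return sum(f for _, _, f in rows) / len(rows)

accented = [r for r in founders if r[0]]
non_accented = [r for r in founders if not r[0]]

# Naive comparison: accented founders appear to fail noticeably more often...
naive_gap = failure_rate(accented) - failure_rate(non_accented)
print(f"naive gap: {naive_gap:.2f}")

# ...but stratifying by the confounder makes the gap all but vanish.
for network in (True, False):
    a = failure_rate([r for r in accented if r[1] == network])
    n = failure_rate([r for r in non_accented if r[1] == network])
    print(f"weak_network={network}: gap {a - n:+.2f}")  # close to 0 in both strata
```

Stratifying by the confounder makes the spurious effect disappear, which is exactly the kind of check, done on shared data and reviewable methods, that turns “empirical evidence” into something worth the name.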

And in fact, Mr. Graham does seem to understand this. Reading his response piece, he indicates that he does, in fact, have some data on this:

We have a lot of empirical evidence that there’s a threshold beyond which the difficulty of understanding the CEO harms a company’s prospects. And while we don’t know exactly how, I’m pretty sure the problem is not merely that investors have trouble understanding the company’s Demo Day presentation

Note the phrases like “empirical evidence” and “threshold”. I’ll give you a freebie: while common among the nerderati, regular people don’t generally speak in such scientific terms. In fact, anytime someone invokes jargon, you can assume that they are trying to raise the conversation to a higher plane, where they are either trying to make a better point or simply coming down to crush you (although in common conversation, it’s a pretty big faux pas). It’s admirable that Mr. Graham is trying to base his arguments on evidence, but he comes up pretty short when he tries to draw the almighty scientific sword to cut through a controversy that has surely been hurting him, personally and financially.

The scientific method, while far from perfect, is simply the best tool we have at hand to establish some semblance of truth and figure out causal relations (although you’d be surprised how many big areas with rich scientific evidence are still highly contested). But the scientific method requires not only using the correct terminology, but actually walking the walk. More specifically, the empirical evidence Mr. Graham mentions is worth next to nothing unless he’s willing to share the data he has collected publicly, along with his methods, and have them peer reviewed. Again, if you think I’m creating a straw man where there’s none (since Mr. Graham never actually said that he’s doing “science”), I’d just urge you to look at the definition of the word “empirical”, read that sentence to yourself a couple of times out loud, and come to your own conclusion as to why Mr. Graham used such language.

Paul Graham, in his response, clearly argues that he has no problem with accents per se; it’s only when people have accents so strong that it’s hard to understand them that there’s an issue.

Everyone got that? We all agree accents are fine? The problem is when people can’t understand you.

Putting aside the curiously defensive tone of those question marks, this again makes me think that Mr. Graham doesn’t fully understand how accents work, or how people will inevitably interpret his messages.

Over the course of my life, as my Turkish accent has become less noticeable, I have noticed that some people are simply better at understanding different accents, and some people understand different accents than others do; in other words, it’s pointless to argue that there’s a discrete point past which an accent becomes more or less understandable to everyone. After a strenuous workout, even my college girlfriend had a hard time understanding me, while my Mexican roommate never missed a beat. I still don’t fully understand some Southern accents, and neither do friends of mine who have never left California in their entire lives. Some people’s Russian accents still trip me up, I am a sucker for a French accent, and the New England accent is still a bit of a mystery to me (I kid, kind of), but I’m getting better at it.

Attributing any perceived advantage or handicap in understanding different accents is an interesting problem in itself; putting my cognitive scientist hat on, I can tell you that the set of phonemes you can both speak and hear is determined by what you grow up hearing, starting when you are as little as 6 months old. In other words, it gets progressively harder to even hear phonemes different from those spoken in your native language (and, more interestingly, babies who have no language yet seem to be able to hear and produce all of these phonemes). The most dramatic and well-known manifestation of this is many Japanese speakers not hearing the difference between “beer” and “beel”, and I personally have a hard time pronouncing “wedgie” and “veggie” differently unless I’m trying, which makes for funny moments at BBQs. Again, this phenomenon is part of the reason why you have people who seem to have spent 20 years in a different country but still speak with an accent, while their kids speak two languages without an accent by the time they are 10.

Again, that’s not to say someone can never get rid of their accent; anyone with a cursory knowledge of statistics knows that statistics don’t apply to individuals, and most natural phenomena fall along a bell curve. There will undoubtedly be outliers on both ends of the spectrum.

So, now, everyone got that? We all agree that sometimes people can’t meaningfully get rid of their accents and even if they do, there’s no point where they become universally intelligible at the same level?

Every once in a while, when I’m on the subway or in a movie theater or somewhere with a lot of people of different nationalities, I realize how the U.S., and the Bay Area in particular, is such a diverse land, where everyone is accepting of all cultures, races, languages, and nationalities.

But unfortunately, even in the U.S., a nation of immigrants (and the unfortunate natives), there’s still a long road ahead when it comes to understanding and accepting differences. Luckily, we all realized pretty fast that having accent monitors in our classes was a bad idea. There are many studies (the scientific ones) documenting that having an accent is simply a handicap when it comes to hiring. Similarly, many studies show that people find those with certain accents “smarter”, and inevitably, others dumber. Even world-renowned celebrities aren’t immune to such thinking. The famous German supermodel and America’s Got Talent host Heidi Klum received a significant amount of criticism because of her accent. Judging by how her accent has changed over time, I find it pretty likely she took a lot of speech classes, which she could fortunately afford, to make her accent more palatable to an American clientele.

When you hear people of such respect and influence as Paul Graham make such audacious claims with such seeming authority, even if he is unaware of how he’s perceived, it’s a great reminder that human communication is a wide, fascinatingly complicated field, an area of very active research where there’s already significant debate among established scientists about how it works at every level, with all sorts of public policy, social, and other implications.

All in all, I believe Mr. Graham’s heart is in the right place and he’s simply trying to help people be more successful. In the same vein, as someone who experienced problems, and even a couple of unpleasant incidents, with my accent back in the day, I can attest that even in the great melting pot that is the U.S., there are, to this day, benefits to being able to communicate clearly and effectively. But we should strive for better: find ways to help people communicate clearly, and make social progress towards inclusion, not exclusion, as humanity. And when it comes to individuals, everyone should certainly strive to make themselves better understood, but really, that’s advice we can give not just to people with strong foreign accents but to simply everyone, including Paul Graham himself.