Developing Shared Code with Principles

One of the highest-leverage pieces of work in a technical organization is building shared libraries or frameworks. A common library (a piece of code that can be used as is) or a framework (a system that codifies certain decisions and allows further work to be built on top) has the opportunity to benefit many people at once. Not only that, such projects also institutionalize shared knowledge: they put knowledge that lives in people’s heads into code for future employees. And of course, there are other benefits, such as the possibility of open-sourcing the work, which brings its own advantages for hiring and onboarding.

Of course, there are risks with such a venture. The biggest risk is the value risk: that the work goes unused by other teams. That is bad enough, but sometimes it gets worse. Sometimes adoption comes, but the work, instead of enabling teams, hinders them. It gets in the way of actual work, and the benefits of standardization are overshadowed by the pains of integration and customization.

So how do we make sure our shared work, be it a library or a framework, achieves its goal? In my experience, there are three principles that separate successful shared projects from failed ones. Two of them are about how to build the project, and the third is about how to get it into the hands of others.

Start with a Real Project

Start with the origin. The ideal way to come up with a shared tool is to extract it from an actual project. This seems straightforward enough. The biggest benefit of this approach is that the framework has an immediate customer, so its creators are incentivized to solve actual problems, for actual people. In practice, the most important gain is in “developer ergonomics”: the usability of the project improves automatically because its creators are also its users. This is basically the cornerstone of the agile movement anyway.

Take Rails, which was extracted from Basecamp, a project management tool. Or Django, which was extracted from a content-management system built for a local news site. These two frameworks have different cultures, but they attack the same problem: how do you make your developers more productive?

I realize this way of working is not always easy, or even possible. For example, it’s easier to imagine a front-end library like Bootstrap being extracted from Twitter’s internal UI than, say, a library used for an internal communications platform. But it is possible.

Keep It Small

The second way to ensure success in a shared project is to keep its surface area as small as possible. In other words, the shared project should try to provide value as soon as possible. In practice, this means the framework should probably undershoot its feature set.

This might sound counter-intuitive. If we are aiming for high adoption, should we not try to cover all the possible use-cases? Shouldn’t we try to solve as many problems as possible, to make sure people find some use in our work?

The problems with trying to solve too many problems at once are plenty. First of all, as modern software development methodologies have discovered, problem discovery is really a continuous process. Instead of trying to predict what the problems will be and solving them up front, we should deliver value as soon as possible, and then iterate.

The more subtle problem with a large feature set is that, in my experience, more tenured teams see it as a liability rather than leverage. They realize that a big investment, especially in a new project, will likely result in more work down the line.

A small word of caution here. Especially in the front-end world, this advice to keep the feature set small is sometimes taken to an unfortunate extreme. No project should need a library to add left padding to a bunch of strings. The line between a small project that does one thing really well and a comically tiny one is a fine one. A good rule of thumb is that the project should provide some immediate value and be meaningful by itself. That is, one should be able to do something “production” level with your project and only that.

Take a look at the first version of Rails, which was essentially a bunch of ActiveRecord classes that used Ruby’s dynamism to build an object-relational mapper (ORM). All the other features most Rails developers take for granted came years later. Similarly, React (in addition to being extracted from an internal Facebook project) barely had many of the features it has now; support for many common HTML tags and ES6 classes came later.
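
For a sense of how small that initial surface was, here is a minimal sketch of the ActiveRecord idea, written against today’s API (the in-memory database and the Post model are just for illustration):

```ruby
require "active_record" # gem install activerecord sqlite3

# Point ActiveRecord at a throwaway in-memory SQLite database.
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

ActiveRecord::Schema.define do
  create_table :posts do |t|
    t.string :title
    t.text   :content
  end
end

# The class body is empty: the table's columns become attributes at
# runtime, which is exactly the Ruby dynamism the first release leaned on.
class Post < ActiveRecord::Base
end

post = Post.create(title: "Hello", content: "Extracted from a real app")
puts Post.find(post.id).title # => "Hello"
```

That one abstraction was enough to do “production”-level work, which is what made it meaningful by itself.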

Evangelize Constantly

And lastly, but maybe most importantly, teams should actively evangelize their projects to drive adoption within the company. This might be uncomfortable at first. Many technical teams have a negative opinion of any sort of marketing. They believe other teams should be able to evaluate their work on its merits, and that everything else is either throwaway or disingenuous.

This is a short-sighted way of looking at it. Developers of shared tools should consider evangelism not only as advertising, but also as a two-way communication channel with their customers.

Take marketing. Many potential customers inside your company might have heard of your project, but unless they know it is well supported and actively maintained, they will probably not consider using it. When you market your project via emails, presentations, and the like, you are not only letting people know about your project, but sending an active signal that it’s an active, maintained project. Many times, just having a face attached to the project that is known inside the company is the difference between real adoption and leverage, as opposed to a repository full of insights bit-rotting away.

Moreover, evangelism is also about forming that feedback loop with your customers. When you actively work with prospective customers, you get immediate feedback about the pain points they are having. You see which under-invested parts of your project might hold low-hanging fruit for future wins.

Again, a bit of warning is in order here. Evangelism doesn’t come naturally to most developer teams. Moreover, evangelism sometimes comes with sacrifices. It is not unheard of to do one or two small pieces of one-off work for a big internal customer to get some initial adoption. This might feel impure, but sometimes it is necessary. The key is to keep the scope of the one-off demands as small as possible, and to do them only for customers who would be game-changers.

Building shared libraries or frameworks is extremely fulfilling; seeing your code adopted by others, making their lives easier, is why many people get into software development in the first place. And it is an amazing source of high-leverage work in a technical organization. The ability to positively impact the work of tens, hundreds, or even thousands of fellow developers is something most executives would be excited about.

I believe that with these guidelines in mind, you will be in a much better place to both deliver value for your company and have some fun doing it.

Re-engineering News with Technology

Years ago, in college, I went to a presentation by a big internet company as part of a recruitment event. At the time, I was working at the college newspaper, and the talk was about the company’s “front page”. They said it was the biggest news site at the time, so I was excited.

The bulk of the talk was technical. But the presenter mentioned that one of the biggest challenges was staying on top of what they called the “National Enquirer effect”. The problem, as she described it, was this: the main goal of the front page is to drive traffic to other properties, and the system was always optimizing both the selection of content on the front page and its ordering based on raw clicks. She said that, while no one admits to it, the content with the best click-through rate was always “bikini women”, so left alone, the algorithms would turn the front page into the National Enquirer. Ironically, this means that over a long enough period, no one would visit them. They said they were trying to fix this with some longer-term optimizations, but for now, there was essentially a team for each locale that monitored the site and kept it “clean”.

A couple of days ago, I saw a tweet about a NYTimes wedding post. It said “Trevor George asked Morgan Sarner out to dinner 10 nights in a row, and won her heart”. The person whose retweet I saw said she’d probably get a restraining order. It was funny; I “liked” it, but it seemed odd that the NYTimes would promote such creepy behavior. So I clicked on the link. It turns out the groom did not just ask her out 10 nights in a row, but actually took her out that many times. It’s a minor difference between the text of the tweet and the actual content, but it was enough to get that person to retweet a mock of it, and more ironically, to get me to click on it.

Ev Williams, the co-founder of Twitter and Medium, likened the algorithms that govern the internet to a deus ex machina that provides you with the most extreme version of what you want: you think car crashes are interesting? Here’s a pile-up! It feels true, and it goes a long way toward explaining the lingering global nausea.

Looking at it another way, though, this is just a specific application of the paperclip maximizer. Instead of the natural resources of the earth, we are mining minds. And instead of making more paperclips, we are making some people in the Bay Area richer. I live in the Bay Area, for now, so of course I shouldn’t complain.

But what’s really missing from the debate is how technology has really failed to find a way to attract attention of its readers without sacrificing the content. And while some of it is done automatically, some of it is self inflicted.

There are structural and economic explanations for the problem. The internet first destroyed the newspapers’ monopoly on advertising. Then came the glut of content, with the democratization of publishing tools, further pushing down the value of any individual work. The unbundling of pieces from the newspapers and magazines that carried them reduced the value of a brand, and in turn of the pieces that made up the bundle. And as social media platforms flattened all content into the same structure, be it from The New York Times or some kids in Macedonia, any semblance of product differentiation disappeared.

The number of knobs publishers have is dwindling, and their editorial decisions are one of their last levers. When you are competing with so much content, and you don’t control how your content is distributed, your only option is to change your content to fit your distribution channels. Legitimate news organizations long ago erected a wall between “the church and the state”, or rather editorial and advertising, but at least in terms of packaging, the wall no longer exists. The only difference is that it’s not the advertisers that determine your content now, but your distributors.

I saw this first hand, too. At Digg, we would casually tell big publishers and famous individuals alike that if they worded headlines a specific way, they would get more clicks. Sometimes it worked, sometimes it didn’t. Google has entire guides, mostly technical but with editorial hints, on how to get more traffic. Facebook does it too, though for slightly different reasons: they want people to click on the content, but not too much, so publishers had better avoid clickbait titles. And of course most publishers, especially smaller ones without big subscription revenues or rich patrons to back them, get in line.

This is not a jab at newspapers, although it is a bit of one. My real qualm is that we still don’t have a proper way to consume the news in which the hook doesn’t dictate the content. We built search engines that can scour the entire web in less than a second, but I still can’t tell whether a piece of content is worth my time or just fluff. I can take a virtual tour across the globe, but I cannot tell what a federal policy change means for me as a resident of California. The primary problem is funding and revenue, but is there a lack of imagination as well?

I also don’t know if the solutions to these problems will exist on the supply side, or the demand side. Probably, it will need to be both. Publishers need ways to authenticate and brand their content, and consumers need reading experiences that respect those. Moreover, consumers need a better way to find and consume content that respects the integrity of it, and not let it be violated for distribution.

There are a lot of attempts to build a new stack for consuming news. Services like Blendle attempt to fix monetization by removing the hurdle of micropayments and by consolidating subscriptions. Facebook and Google are trying various things too: AMP is a way to clean up the reading experience (and, cynically, to move more of the content onto Google’s servers); Facebook’s Instant Articles is a more locked-in, heavy-handed way of doing the same. Both Facebook and Google also want to help publishers gain more subscribers, and to give subscribed users more fluid, integrated experiences on the web through their own platforms.

And of course, publishers are trying their hands too. One of my personal favorites is what Axios does, with its telegraphic, lightly structured way of presenting content. It feels respectful of my time as a reader, and cuts through the fluff without sounding too clinical. I wish more publishers experimented with radically different, but still thoughtful, ways of producing and presenting content like they do.

At that talk I attended, they said one of the ideas was to have a fluff lever: slide it to one side and you get practically smut; slide it all the way to the other and it’s all dreary politics, which wasn’t smut at the time. As far as I know, they never launched it.

The internet has undermined, intentionally or not, the workings of all news organizations. It took over their advertising and their users’ attention, and now a few companies are, inadvertently, guiding more of the content too. The responses to this change lie across the political spectrum. What is common, though, is that the problems will not go away, and the economics that govern newspapers will not go back to what they were. But maybe there are ways to attack this problem with technology, as well as with policy.

I am not sure that is the answer, but it might be worth trying.

Apple created the attention sinkhole. Here are some ways to fix it.

Your attention span is the battleground, and the tech platforms have you bested. Social media platforms like Facebook, Twitter, and Instagram get the bulk of the blame for employing sketchy tactics to drive engagement. And they deserve most of the criticism; as Tristan Harris points out, users are at a serious disadvantage when competing against companies with virtually endless resources trying to lure them in.

However, one company responsible for this crisis goes relatively unscathed. Apple jumpstarted the smartphone revolution with the iPhone. Our phones are no longer an extension of our brains but, for many, a replacement. Somewhere along the way, things went south. Your phone is less a digital hub and more a sinkhole for your mind.

I believe that, for having built a device that demands so much of our attention, Apple has left its users in the dark when it comes to using it for their own good. It has built a portal for companies to suck up as much of our time as they demand, without giving us the ability to protect ourselves. Sure, there have been some attempts to solve the problem, with features like Do Not Disturb and Bedtime, but most of them have been half-assed at best. The market has tried to fill the void, but OS restrictions render most efforts futile.

Currently iOS, “the world’s most advanced mobile operating system” as the company calls it, is built to serve apps and app developers. Apple should focus on its OS serving its users first, and the apps second.

1 · Attention

I have touched on this before, in the context of the Apple Watch, but I believe Apple has built a device so visually compelling, and connected to apps that literally have PhDs working to get you addicted to your phone, that users are treated like mice in a lab, pressing pedals to get the next hit. This is unsustainable, and also irresponsible.

I believe Apple should give users enough data, in both raw and visually appealing formats, to help them make informed choices. Moreover, the OS should allow people to limit their (or their kids’) use of their phones. And lastly, Apple should use technology to help users, if only to offset the thousands of people trying to get them addicted.

1.1 · Allow Users to See Where Their Time Went

First of all, Apple needs to give users a way to see how much time they spend on their phones, per app. There are clumsy ways to get at this data today. The popular Moment app does this by literally inspecting screenshots of the battery usage screen. The lengths developer Kevin Holesh went to to make this app useful are remarkable, and the application itself is definitely worth it, but it shouldn’t be this hard. And it is not enough.

A user should be able to go to a section of either the Settings app, or maybe the Health app, and see the number of hours (of course it is hours) they have spent on their phone, per day, per app. If this data includes average session time, defined by the app being in the foreground or, in the case of the iPhone X, being looked at, even better. The sophisticated face tracking on the new iPhone can already tell whether you are paying attention to your phone; why not use that data for good?

[Image: FaceID demonstration, captioned “Paying serious attention”]

In an ideal case, Apple would make this data available through a rich, queryable API. This is obviously tricky given the privacy implications; ironically, this kind of data would be a goldmine for anyone optimizing their engagement tactics. However, even a categorized dataset with app names discarded would be immensely useful. This way, users could see whether they really should be spending hours a day in a social media app. At the very least, Apple could share this data, in aggregate, with public health and research institutions.
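
To make the idea concrete, here is a toy version of the report I have in mind, in Ruby for illustration; the session records are invented, since no such API exists today:

```ruby
require "date"

# Hypothetical usage records: one row per foreground session.
Session = Struct.new(:app, :started_at, :ended_at) do
  def minutes
    (ended_at - started_at) / 60.0
  end
end

sessions = [
  Session.new("Social", Time.new(2018, 1, 8, 8, 0),  Time.new(2018, 1, 8, 8, 45)),
  Session.new("Social", Time.new(2018, 1, 8, 13, 0), Time.new(2018, 1, 8, 13, 20)),
  Session.new("Mail",   Time.new(2018, 1, 8, 9, 0),  Time.new(2018, 1, 8, 9, 5)),
]

# Total time and average session length, per app per day -- the two
# numbers a user needs in order to see where their time went.
sessions.group_by { |s| [s.app, s.started_at.to_date] }.each do |(app, day), group|
  total = group.sum(&:minutes)
  puts format("%s on %s: %.0f min total, %.0f min avg session",
              app, day, total, total / group.size)
end
```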

1.2 · Allow Duration and Time-of-Day Limits for Apps

Second of all, Apple should allow users to limit the time spent in an app, possibly as part of parental settings, or Restrictions, as Apple calls them. There is already precedent for this: Apple allows granular settings for everything from disabling app downloads altogether to changing privacy settings, such as allowing location access.

Users should be able to set duration limits per app (e.g. 1hr/day, 10hrs/week), time-of-day limits (e.g. only between 5PM and 8PM), or both. Either of these would be socially accepted, if not welcomed. Bill Gates limits his kids’ time with technology, and so did Steve Jobs and Jony Ive. Such features should be built into the OS.
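
A sketch of how simple the policy could be (Ruby for illustration; the policy format and the numbers are invented):

```ruby
# Invented policy format: a daily budget plus allowed hours, per app.
LIMITS = {
  "Instagram" => { max_minutes_per_day: 60, allowed_hours: 17...20 }, # 5-8 PM
}

# Would opening this app right now violate the policy?
def allowed?(app, minutes_used_today, now)
  policy = LIMITS[app] or return true # no policy means no restriction
  policy[:allowed_hours].cover?(now.hour) &&
    minutes_used_today < policy[:max_minutes_per_day]
end

puts allowed?("Instagram", 35, Time.new(2018, 1, 8, 18, 30)) # => true
puts allowed?("Instagram", 75, Time.new(2018, 1, 8, 18, 30)) # => false, over budget
puts allowed?("Instagram", 10, Time.new(2018, 1, 8, 22, 0))  # => false, outside hours
```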

[Image: Steve Jobs and Bill Gates on stage, captioned “Low tech parents”]

As an aside, I think there are lots of visual ways to encourage proper app habits. App icons could slowly darken, show a small progress indicator (like when they are being installed), and so on. This way, someone could tell at a glance that they have Instagrammed enough for the day.

1.3 · Make Useful Recommendations

With the new Apple Watch and watchOS 4, Apple is working with Stanford to detect arrhythmia by comparing current heart-rate data to the user’s known baseline. Since its inception, the Watch has used rings to encourage people to “stand up” and move around. Even my Garmin watch keeps track of when I have been standing still for too long.

Apple can do this for maintaining attention too. Next time you find yourself stressed, notice how you switch between apps, over and over again. Look at how people sometimes close an app, swipe around, and come back to the same app just to send that one last text. These are observable patterns of stress.

Apple can, proactively and reactively, watch for these patterns and recommend that someone take a breather, maybe literally. With the Watch, Apple went out of its way to build a custom vibration that simulates stretching on your wrist for breathing exercises. The attention to detail, and the license to be playful, is there. Using on-device learning, Apple could tell when you are stressed or nervous, just swiping back and forth, and recommend a way to relax. Moreover, the OS could notice whether your sessions in apps are too short or too long, and make suggestions based on that kind of data.
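
As a sketch of how observable this is (Ruby for illustration; the events and the switch-counting heuristic are invented):

```ruby
# Foreground events as (app, seconds) pairs -- the restless back-and-forth
# described above, compressed into well under a minute.
events = [
  ["Mail", 0], ["Social", 6], ["Mail", 11], ["Social", 15],
  ["Messages", 22], ["Social", 28], ["Mail", 33], ["Social", 39],
]

# Invented heuristic: flag distress when the number of app switches
# inside the trailing window exceeds a threshold.
def restless?(events, window: 60, max_switches: 5)
  cutoff = events.last[1] - window
  recent = events.select { |_, t| t >= cutoff }
  recent.each_cons(2).count { |(a, _), (b, _)| a != b } > max_switches
end

puts restless?(events) # => true: time to suggest a breathing exercise
```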

[Image: display in a Mercedes car showing Attention Assist, captioned “Attention Assist, Indeed”]

As mentioned, there’s a lot of precedent for determining mental state using technology, and making recommendations. Any recent Mercedes will determine your fatigue based on how you drive, and recommend you take a coffee break. Many of GM’s new cars have driver facing cameras where the camera can tell your eyes are open and paying attention during self-driving mode. Using your phone is not as risky as driving a car, but for many, a phone is a much bigger part of your life.

2 · Notifications

Notifications on iOS are broken. With every iOS release, Apple tries to redo the notification settings in a valiant effort to let people handle the deluge of pings. There are many notification settings hidden inside the Settings app, with cryptic names like banners, alerts, and many more.

[Image: Apple notification guidelines, captioned “If only”]

However, currently all notifications from all apps live on a single plane. An annoying re-engagement campaign from a fledgling app gets the same treatment as your mom trying to say hi. Moreover, apps abuse notification channels; the permissions are forever, but the users’ interest is not. And of course, the data is sorely missing.

2.1 · Allow Users to See Data about Notifications and their Engagement

Again, this is a simple one. Apple should make both the raw data and an easily digestible report about notifications available to the user. It would be easy for this to get out of hand, but I think even a single listing where apps are ranked by notifications per week or day would be useful. Users should be able to tell that the shopping app they used once has been sending them notifications they have been ignoring.

2.2 · Categorize and Group Notifications

Apple should allow smarter grouping of notifications, similar to email. Currently, as noted, notifications largely share a single channel. This doesn’t scale. Tristan Harris and his group make a good suggestion: separate notifications by their origin. Anything directly caused by a user action should be separated from other notifications, to start with. This would mean that your friend sending you a message would be a different type of notification than Twitter telling you to nudge them.

I think there are even bigger opportunities here; without getting too much into it, Apple could help developers tie notifications to specific people, and start categorizing them by intent. Literally anything over what is currently available would be an improvement.
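
A toy model of that origin-based split (Ruby for illustration; the categories and sample notifications are mine):

```ruby
# Toy model: tag each notification with its origin -- a person acting,
# or a machine trying to re-engage you.
Notification = Struct.new(:app, :text, :origin)

inbox = [
  Notification.new("Messages", "Mom: hi!",                :person),
  Notification.new("Twitter",  "You have new followers!", :machine),
  Notification.new("Twitter",  "@friend mentioned you",   :person),
]

# A person-first notification screen would render these tiers separately.
inbox.group_by(&:origin).each do |origin, group|
  puts "#{origin}: #{group.map(&:text).join(' | ')}"
end
```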

This idea would definitely receive a ton of pushback, especially from companies whose business relies on getting users addicted to their products. However, maintaining toxic business models shouldn’t be a priority. If a user does not want to launch Facebook, then they shouldn’t have to. If an app can drive engagement, or whatever one might call mindless scrolling, only with an annoying push notification, maybe it shouldn’t be able to.

This is the kind of storm Apple can weather. While Apple cherishes its relationships with app developers, it is beholden primarily to its users. And such a change would almost certainly be welcomed by users.

2.3 · Allow Short Term Permissions for Notifications

For many types of apps, notifications are only useful for a limited amount of time. When you call an Uber or order food, you do want notifications; at other times, an email or a low-key notification would suffice. Users should be able to give apps a temporary permission to nudge them, after which the window automatically closes.

This is something many people are already familiar with. Professionals such as doctors, college professors, and lawyers have office hours when you can talk to them freely; outside of those hours, you cannot.

2.4 · Make Useful Recommendations

Once again, Apple could take an even more proactive role and help users manage their notifications by making recommendations. For example, the OS could keep track of which notifications one engages with meaningfully, and which not. The phone could then ask the user whether they would like to silence an app they never use.

Apple already does this, to some degree, with app developers: if your app’s notifications are too spammy and users rarely engage, you’ll get a call. However, users should have a say too. An app that is meaningful to one user might be spammy to another. The OS can make these decisions, or at least make smart recommendations. A feature like this literally exists to help you save storage space on your phone; why not for your notifications too?

Ending Thoughts

I believe that an attention-based economy, where millions of people are in a constant state of distraction punctuated by tiny bursts of concentration, is dangerous to our mental health as individuals and to society as a whole. Wasting hours switching between apps without accomplishing anything is one thing, but a constant need to be entertained, an inability to be alone with one’s thoughts, not being able to just be around people without pulling out a phone: these are all going to cause wide social issues we will grapple with for years. When the very people who built these tools are scared, it’s a good sign that we have lost control of our creations.

Surprisingly, iOS lags far behind Android in this respect. I have almost exclusively used an iPhone since its launch, and I wrote the bulk of this piece without doing much research. I was surprised, and somewhat embarrassed, to see that most of what I proposed in the Attention section, such as bedtimes and app limits, already exists in Android as part of Family Link. And of course, tools like RescueTime have long existed for Mac and Windows to help people see where their time went, but their functionality is next to useless on iOS. As mentioned, even the Moment app can do only so much within the confines of Apple’s ecosystem.

I wholeheartedly think that unless we approach this issue the way we did smoking, and elevate the discussion to a public health issue, it won’t get solved. However, there are ways to curb the problem in the meantime, and it is time Apple took the matter into its own hands.

Unlike most other tech companies, Apple makes most of its money by selling hardware to consumers. Every couple of years, you buy an iPhone, and maybe an app or two, and Apple gets a cool thousand bucks. Apple’s incentives, although recently less so with the increasing services revenue, lie with those of its users, not with advertisers or marketers. If Apple is serious about its health focus, now is the right time to act.

Fake News is an attention economy problem

A common theme of this blog is that history repeats itself. There are some fundamental dynamics of information that are innate to the internet, and most companies coast along those trends. There are occasional shifts, like the smartphone with its always-on connectivity and sensors, but things more or less follow certain trends.

The recent rise of “fake news”, the cheap information plaguing Facebook and, to a smaller degree, Google, has precedents and can be explained (and was predicted, as many did) by a basic look at the economics of attention, which is another theme of this blog. At the risk of being reductionist, the problem can be viewed as a spam issue on steroids. I admit the integrity of presidential elections is a more serious problem than lost productivity, but a more sterile approach might help us arrive at some immediate solutions.

Facebook might be everyone’s punching bag these days, especially journalists’, but Google has had its fair share of spam issues. Not too long ago, around 2009, the Mountain View company was fighting a fierce war against what were then called “content farms”. These companies would figure out the trending Google searches, create extremely cheap content really fast, do some SEO magic, and get traffic from Google, against which they could sell ads. As long as your cost of production was lower than your ad revenue, you were golden.

This was a big, lucrative business. The biggest player in this game, aptly named Demand Media, was a billion-dollar public company. The Wired feature on the company is full of amazing anecdotes. The company ran many, many websites targeting virtually every vertical, including one called Livestrong, a franchise of none other than Lance Armstrong.

Google soon woke up to the danger and issued an update to its “algorithm”, called the Panda update, which effectively kneecapped the entire industry. Today we are looking to hear from Facebook CSO Alex Stamos, but Matt Cutts of Google was all the rage back then.

Facebook has had its fair share of “spam” problems too, and while the company might now seem paralyzed in an effort to satisfy both sides, it wasn’t always that way either. Zynga figured out the dynamics of News Feed, as well as the psychological reward mechanisms of unsuspecting “gamers”, and built a billion-dollar business around them. In the meantime, though, Zynga and its flagship FarmVille game became synonymous with spam. When Facebook woke up to the problem and took action, the resulting tweaks nearly killed Zynga too. The gaming company is still around, as a public company, but it’s struggling even to pay for its HQ. The same pattern played out with Upworthy and many other “viral” news sources.

As an outsider, it’s not clear to me how much of an existential crisis this is for Facebook. Google’s struggle with content farms was an existential risk; users losing trust in its search engine could jump ship to Bing or any other. Facebook users are locked into the platform, and by the nature of social networks, as more users join, it gets harder for the next user to leave. The social network is more or less the world’s biggest address book for many, and filter bubbles make fake news a problem only someone else can diagnose for you, not unlike a mental disorder. Some, like Sam Biddle, even argue that Facebook inherently benefits from our endless craving for drama. Russian interference in the US elections propelled the problem into the mainstream media, but that was unintentional.

Moreover, the numbers themselves make it a challenge. Unlike the handful of content farms (or virtual farms, in Zynga’s case) that could be easily identified, Facebook has 5 million advertisers who can push any sort of content into users’ news feeds. Still, it doesn’t seem like an unmanageable number. There are many businesses with a similar number of customers who seem to keep a handle on them.

It wouldn’t be great for Facebook’s bottom line to have to increase the cost per customer, but it is probably the right approach for the long term. The media and tech analyst Ben Thompson argues the same in his column. (Subscription might be required) Facebook flew past its competitors partly by being the saner, more refined, Ivy-grad built and approved alternative. Google probably doesn’t miss revenue it used to earned from the content farms, and Facebook certainly doesn’t miss Upworthy. Longer term vision would help. A company that’s building solar powered planes that communicate each other via gyroscopically stabilized lasers should be able to solve some spam issues.

As a side note, it’s worth mentioning the opposite examples. These cheap SEO or virality games do not always end badly for companies. For each Demand Media, there’s a “success” story like Business Insider. The journalistic pasts of such organizations are questionable; they built their businesses on borrowing content from other organizations and employing fewer, more junior staff, while playing the SEO game better than anyone. Similarly, Buzzfeed is a serious journalistic powerhouse now, but the company was decidedly built on subsidizing actual journalism with more viral, bite-sized content.

The fact that solutions will emerge only points to the chronic nature of the problem, however. Facebook, Google, or any platform can solve the spam problem, given enough resources and focus. But an economy based on commodified attention poses not just passing economic challenges to tech behemoths; it poses existential risks for a regime that is somewhat predicated on an educated public. The history of the attention economy is the subject of Tim Wu’s excellent book The Attention Merchants, which I can’t recommend highly enough.

When people’s attention can be sold to the highest bidder, the producers with the lowest fixed costs will rule the world. A few years ago, it was Demand Media, then it was Zynga, then Upworthy and Huffington Post, and today it’s everyone. As costs of production goes down (which is a good thing), the challenge will get harder. Moreover, as targeting of not just ads, but any content, becomes more precise, yet more opaque, the shared context that holds a society together will inevitably decay.

It might be a libertarian pipe dream to live free of interference from anyone, in one’s own digital and physical cocoon, but that seems untenable in the long run for a liberal democracy. At some point, we will have to have our rights to our information laid down in a more robust fashion, instead of relying on the goodwill of a few people living in California. Spam, as a risk to productivity, was solved by better technology, as well as by regulation that required transparency for widely distributed emails. But most importantly, it got solved after we acknowledged the problem, saw the long-term risks, and attacked it at its mechanics.

The cyber history repeats itself

With a new unicorn popping up seemingly every other week, it’s easy to forget that the new behemoths that shape our lives, the technology firms, have existed for more than a few years. Behind the shiny veneer, however, there is a rich history of how this world came to be. And just like any other history, it keeps repeating itself.

The latest iteration of this history, though, is not its finest. Nazis are back.

A quick recap. The informed citizens of the greatest country on earth collectively voted to elect a white-supremacist sympathizer, with the overt, covert, voluntary, and involuntary help of practically every tech company and its acolytes. By the time we all woke up to what we had done, it was too late; the Nazis were emboldened, chanting in the streets of Virginia, among many other places. Then a guy woke up, literally, and decided to kick the Nazis off the internet, until they find a new home.

“I woke up this morning in a bad mood and decided to kick them off the Internet.”

— Matthew Prince, Cloudflare CEO

For some observers of technology, this latest kerfuffle might just be a new chapter in the upcoming book by a Vanity Fair writer. Those a bit more in the know would note that the Nazis (a word I am using as shorthand for white supremacists) never really left the internet. They populated practically every platform you did; they were on newsgroups, mailing lists, 4chan, reddit, Facebook, Twitter, and probably still are.

But go down a bit farther in the Wayback Machine, and it’s easy to remember that the Nazis and part of their history were on the internet as far back as the early 2000s. This points to one of the most interesting tensions of the Internet with a capital I: it is borderless, yet its levers are controlled by just a few. That is the subject of this essay: how the current gatekeepers’ aim to create a new kind of stateless state is just a clumsy reiteration of past attempts.

“Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”

— John Perry Barlow, EFF CO-Founder

The aspirational extraterritorial culture of the internet is a messy and deep subject, but the “Declaration of Independence of Cyberspace” is a good start. Penned by John Perry Barlow, one of the founders of the Electronic Frontier Foundation (EFF), at a World Economic Forum, the declaration pulls no punches. In fact, more than just statelessness, you can hear in its subtext that cyberspace is not just an international entity but almost a supranational one. It is a good read, both as a way to understand the libertarian thinking of the early residents of cyberspace, and as an almost Marxist take on how the zero marginal cost of production changes the entire dynamics of the economy and, of course, of societies. It is also remarkably prescient, not necessarily about the kind of world the early adopters would eventually create, but about the conflicts they would face.

Scroll your way back to 2000. Not just to the days Before iPhone or Before Facebook, but Before Google. In 2000, a French human-rights organization discovered that Yahoo, on its auction platform, allowed the sale of Nazi and Third Reich memorabilia. While distasteful, and unpresidential at the time, such activity was not illegal under US law, but it was very much illegal under French law. In what’s considered a landmark case, a French court eventually ordered Yahoo not just to pull such items from its French store (fr.yahoo.com) but also to make the items in the US store inaccessible from France.

[Image: front page of the internet, 2000]

The entire discourse around the case is extremely fascinating, and some of the statements from both sides have a very timeless quality. To an American audience, for whom the only thing more paramount than the right to carry a firearm is freedom of speech, interference by a French court, of all courts, is an international overreach of unseen proportions. However, this analysis misses the continent-wide trauma Europeans experienced with Nazism in the 1940s. While America has its fair share of World War 2 scars, they pale in comparison to the destruction Europe endured. The suffering was so profound, so widespread, and Nazism such a vile idea, that the continent’s new identity, the European Union, is largely built around the reaction to it.

It is worth pulling out a few quotes here, just to see how prescient some of the predictions from the French side were. Marc Knobel, the French activist whose letters sparked the entire shebang, said that the American internet was becoming a “dumping ground” for racists from all over.

No discussion of censorship on the internet would be complete without bringing up everyone’s once-favorite liberal reformer turned autocratic strongman, Recep Tayyip Erdogan, the president of Turkey. As far back as 2008, just four years after Google’s IPO, the Turkish government was in a standoff with YouTube over a couple of videos making fun of Mustafa Kemal Ataturk, the founder of the modern Turkish Republic. In what would become the norm for the Turkish government (or already was, depending on your ethnicity in Turkey), the state decided to block YouTube entirely and demand the videos be taken down. The case dragged on for literally years, during which YouTube stayed blocked in Turkey for almost two years. Turkish bloggers took matters into their own hands, shutting down their own sites to protest the government’s block. The block itself was so ham-fisted, however, that even the then Prime Minister Erdogan himself mentioned that “everyone knows how to access YouTube”.

“I think the Decider model is an inconsistent model because the Internet is big and Google isn’t the only one making the decisions”

— Nicole Wong, Google

Still, the details of this 2008 episode already signal the awkward situations tech companies would find themselves in with governments. Impossible to imagine now, but Google employees felt comfortable jokingly calling themselves “The Decider” with a New York Times journalist in the room. The employees in charge, many with law degrees, were aware of their power and felt obviously uncomfortable with the levers they held, but in the end, they held on to them.

A common theme underlying most Silicon Valley thinking is that computers, the internet, and associated technologies change everything: the mode of production, distribution, how information is generated, and how it is disseminated. No incumbent is too big to upend, no industry without inefficiencies a couple of scripts could eliminate. A common complaint from the less STEM-focused side of the world, then, is that Silicon Valley’s casual disregard for the history and the rules of the world borders on recklessness.

This is largely a political argument, which means it’s an everything argument, but the singular point is this: sometimes an internet company’s casual disregard for history is not just hurtful for the entire world, but also for the company itself (a statement whose irony is quite obvious to yours truly).

Silicon Valley companies love to invoke legal talismans, a phrase (I think) coined by Kendra Albert. In short, they love to evoke the feeling of a legal proceeding, such as due process, where there is none, mostly to justify their own decision making. But sometimes such invocations are just symptoms of delusions of grandeur, and they do come with consequences for everyone, including, as mentioned, the companies themselves.

Consider the time Twitter’s UK general manager called Twitter not just a bastion of free speech but “the free speech wing of the free speech party” in 2012, and try not to cringe. But you can definitely see a direct line from the EFF declaration to such an inane statement. A new world is being born, called cyberspace (as opposed to what, meatspace?), and the rules are written by whoever is creating it. Considering the situation Twitter finds itself in right now, with user growth barely chugging along, a stock hugely under its IPO level, and its value possibly held up significantly by an orange White House resident, it’s hard to imagine Twitter would behave the same way if it had a better understanding of the nuances of free speech laws, and of how they protect people from the state because, unlike corporations, the state is allowed to jail and, sometimes, kill its people.

“That means more than one-sixteenth of the average user’s waking time is spent on Facebook”

Of course, this aspirational statelessness of the guardians of cyberspace goes the other way too. It’s easy to write off an overzealous application of freedom of speech as a mistake; it is harder when you do the opposite. When a tech company counts a third of the world’s population as its users (and 80% of online Americans), and those users spend a considerable amount of their waking moments looking at things pushed onto them by that company, it’s practically impossible for a one-in-a-million event not to happen with exceeding frequency; you are dealing with billions.

Probably one of the more eye-opening cases of this American overreach into other cultures involves bodies, or more specifically, naked ones. For Americans, the sight of a briefly exposed breast at a sporting event is a cause for national debate; for many Northern Europeans, nudity is just another state of undress, as normal as any other. Especially so when it is presented in a historical, artistic, or simply non-sexualized context. And even more especially so when it is the conservative Norwegian prime minister who happens to share a Pulitzer-prize-winning photo. Is Facebook, run largely by a bunch of white men in America, not making cultural statements about an unashamedly progressive country?

[Image: Banned in California]

It is easy to write off these high-profile instances as simple mistakes, and having worked at a similar user-generated-content site before, it is mind-blowing to me that Facebook is as free of spam as it is. But what does it mean when these types of incidents happen so often that you slowly start shifting the values of other cultures toward your own, which, whether you like it or not, were shaped by your own American upbringing? One cannot just create a culture in such a transactional manner.

It is one thing, as an academic exercise, to imagine a world without governments, a libertarian paradise. And if someone wants to take this academic exercise to the seas or to other planets, that is within their rights.

But for a generation that wants eventually not just to govern cyberspace but also one of the most important states in the world, the utter clumsiness of the entire enterprise should give one pause. A common joke in Silicon Valley, the place, about Silicon Valley, the hit HBO show, is that many of the absurd plot twists in the series were actually toned down to be believable to the general public.

Consider the case of Reddit. When a bunch of celebrities’ iCloud accounts were hacked and their private photos were posted on the site, the company decided, reasonably, to remove that content. But in doing so, the CEO of the company said that they considered Reddit not just a private company, but “a government for a new type of community”. He even went on to describe how he sees the actions of the moderators as akin to those of law enforcement officers. But how do you reconcile such grand ambition with the fact that your CEO, or president, resigns from the government over a seating arrangement issue? (Disclaimer: I worked at a Reddit competitor briefly, around 7 years ago, partly because I was, and still am, quite interested in the space. I even wore a Reddit t-shirt when they came to visit us.)

“We consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community”

— Yishan Wong, Former Reddit CEO

Building a new world, one that is more just, more humane, safer, cleaner, more efficient: these are all great goals. When I decided to study computer science in 2005, my main motivation was similar. I grew up in a town in Turkey where I didn’t always fit in, and it was through the internet that I could easily see more of the world and find people I could connect with, on many levels. I wanted to extend that world, which seemed reasonably better than the one I lived in, into the real one.

And personal politics matter too. As an immigrant to the US, unlike most of my more left-leaning friends, I find the idea of statelessness, of a post-nation-state world, an experiment humanity owes itself to try. While supranational organizations such as the EU and the World Trade Organization have their flaws, and globalization comes with an unsettling feeling of homogeneity, I remain largely optimistic that as a species, we are better off in a more integrated society.

However, that does not mean I advocate a world where we outsource our thinking, our values, our cultures, our judicial decisions, and certainly not our free press, wholesale, to a small number of people who are unelected, unvetted, and largely unaccountable.

What I would like to see is less of the reckless attitude and a more thoughtful approach: an informed, inclusive, global debate about the kind of digital world we can create together. One that learns from our previous mistakes, and does better. Time for this discussion is running out, and we have repeated our mistakes enough times. We need to do better now.

On Being a Builder

One of the recurring themes in any technical team is the tension between designers and developers. Many designers complain that their beautifully designed and well-thought-out mocks aren’t faithfully implemented, but merely treated as guidelines. A lot of the time, design details take a back seat to ease of implementation and to how detail-oriented the developer is. While there are plenty of developers who don’t mind going the extra mile to get the design “just right”, most of the time the result ends up less than satisfactory to the designer.

On the flip side of the coin, a lot of developers complain about designers’ seeming disconnect from the realities of building an application. Sometimes this takes the form of a designer designing something that would take an inordinate amount of time to implement, or is simply impossible. Other times, while the design looks great in mocks where every piece of data is exactly as it is supposed to be, once the design is built and tested against real-world data, it breaks down in unexpected ways and has to change dramatically.

While I have mostly been on the developer side of this conundrum, and have definitely done my fair share of complaining, it’s clear that this is a common problem with a lot of negative effects: inferior products that don’t feel right, unnecessary tension between designers and developers, and wasted iteration cycles.

Different companies seem to be attacking this problem in different ways. Some require their “designers” to actually code their designs, with Quora being one of the well-known proponents of that approach: Quora’s job description for its product designer position explicitly lists “Ability to build what you design”, and their product designer Anne Halsall’s answer on the topic pretty much argues that the most important thing is being a builder. Similarly, 37signals’ David Heinemeier Hansson notes in a blog post that “all 37signals designers work directly with HTML and CSS”.

Yet another approach is the rise of the “front-end engineer” position. As more business and consumer applications that were once desktop software are built as web applications, where the meat of the interaction happens in the browser, people who were once simply considered “webmasters” have rightfully claimed their titles as real developers and become front-end engineers. While this is generally considered an engineering position, it is also always assumed (and implied in job descriptions) that these people will have a strong design sense and the attention to detail to bring intricate designs to reality as faithfully as possible.

I think both of these approaches, which aren’t mutually exclusive, are valid and have their uses. Especially in a sizable organization where tens or hundreds of people work together, some extreme specialization is not only desired but almost required to make sure people can work without stepping on each other’s toes.

However, I think the distinction between those who design and those who build an application is an arbitrary one, and one that is slowly eroding. As better abstractions are built, the barrier to entry for realizing your idea and sharing it with the world gets much, much lower. For developers, this means they can prototype much faster and iterate on things themselves.

The real benefit, of course, is for the designers. For them, this means they can just build what they had in mind, without having to convince or wait for someone else to do it for them. I believe this is a game-changing freedom, and it will only get better from here.

As we keep building better frameworks that encapsulate years of decision making, abstractions that hide what’s under the hood under yet another plastic cover, and simply better tools to get our work done, more and more people will be empowered to do things that were once within the technical reach of only a few.

Today, anyone who can open up a Terminal window and type rails generate scaffold Post name:string title:string content:text can have a very basic blog up and running in less than a couple of minutes. Top this off with some Heroku action and you have a live site running on a real database on a server somewhere, in minutes.

While the iOS space is a lot younger than the web and its reach a lot more limited, better tools and abstractions that lower the barrier to entry are coming up fast. The Xcode 4 interface is a huge improvement over Xcode 3, making building an app require interacting with one app instead of two; storyboards are making basic, brochure-like applications essentially a drag-and-drop exercise; appearance proxies and callback-based animations are making building iOS apps feel a bit more like using stylesheets and those familiar jQuery animations. For the more adventurous, tools like Pixate and RubyMotion take the abstraction to a whole new level, where building an iOS app is essentially no different from building a web app.
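
To see why that last claim isn’t crazy, here is roughly what a minimal RubyMotion app looks like, sketched from memory; treat the exact selectors as illustrative:

```ruby
# app/app_delegate.rb in a RubyMotion project: UIKit, driven from Ruby.
class AppDelegate
  def application(application, didFinishLaunchingWithOptions: launchOptions)
    @window = UIWindow.alloc.initWithFrame(UIScreen.mainScreen.bounds)
    label = UILabel.alloc.initWithFrame(@window.bounds)
    label.text = "Hello from Ruby"
    label.textAlignment = NSTextAlignmentCenter
    @window.addSubview(label)
    @window.makeKeyAndVisible
    true
  end
end
```

No Objective-C in sight, which is exactly the web-app feel described above.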

Discussing the pros and cons of using abstractions over specialized tools is beyond the scope of this essay; I’ll just say that while abstractions built in the name of cranking out more products faster tend to result in subpar products, abstractions and frameworks that come out of real-world needs end up getting real traction.

Essentially, I believe we are moving to a world where we will see more builders like Sam Soffes, who can churn out an iPhone app, a web application, and an externally available API all by himself. When a single person can both “design” and “build” an entire product, it’s clear that our nomenclature hasn’t fully caught up with people’s newly found abilities.

That is not to say we should abolish all specialized tools; I think there will always be a need to go really deep under the hood and actually replace some carburetors, but today it is possible to push that need a lot further down the line. Similarly, that is not to say there will never be designers who work away from technical tools; establishing a brand and visual identity will continue to be a job that requires professionals. Nevertheless, I believe the role of the designer as it stands will change.

I think a lot of the responsibility will lie with designers, who will be expected to be comfortable with the technology they are working with. While this seems like the exact opposite of the argument I am making, it is really about adjusted expectations. Just as we expect better times from our Olympic athletes, designers will be expected to simply do a bit more.

As for developers, they will be freed from working as pure implementors of other people’s ideas and will instead work on things they find exciting. It is hard to make a general statement about what that will be, as it is very domain-specific, but in general I think a lot more development effort will be focused on building better abstractions for those who build on top of them, and on solving brand-new problems such as personalization and mining huge amounts of data.

It is an exciting time to be working in the tech industry. As we build ourselves better tools, it is getting easier to simply work on the problems themselves. We’ll all have to learn a couple of new tricks, but hey, to me that’s a small price to pay for progress.