Books for Software Engineers switching to Technical Product Management

As I decided to switch over from being a software engineer to technical product management, I found my theoretical knowledge lacking. Having worked in software companies big and small, I knew the basics. Agile this, MVP that. I knew how to prioritize things, and how to get things out the door. But largely, I was repeating what I’d seen work before. The lack of a mental framework left me unprepared for novel problems where I’d need to make ad-hoc decisions. I never failed miserably, but it seemed imminent.

And it wasn’t just technical matters, but communication too. I generally have an easy-going attitude and never had serious problems communicating up and down an organization. But there have been times when, faced with a challenge, I’d “wing” a solution instead of approaching it systematically. Things generally worked out, but I felt more lucky than accomplished. Not particularly sustainable.

So, I did what anyone does to fill a gaping void in their knowledge: I decided to read some Medium posts. Obviously, that didn’t work. Turns out, reading an endless stream of think-pieces isn’t a good way to absorb information, or to make sense of it.

So I present to you four books I’ve read over the past year and a half that really made an impact on my technical product management thinking. They span the management-product spectrum, leaning towards “management”. That’s partly because I already felt comfortable on the product side, and partly because I haven’t written that post yet.


High Output Management


High Output Management is to management what K&R is to programming in C. From the very first page to the last, it is dense with information, examples, and suggestions. Every sentence, seemingly, is there for a purpose.

The book starts with a hypothetical “breakfast factory”, optimizing the output of a diner, and effortlessly scales the same lessons up to the entire Intel empire chapter by chapter.

Some of the most eye-opening chapters were the ones on how, and most importantly why, many big companies evolve into matrixed organizations with cross-functional reporting structures. It is easy to get disillusioned with such organizations and balk at the seemingly insane criss-crossing of management chains. Similarly, the parts about why (well-run, intentional) meetings are important to an organization, and why long-term plans matter more than the plans themselves, were revelatory.

Similarly, the chapters on how to prioritize and plan projects are full of immediately useful insights. Grove talks about how to streamline a delivery around non-negotiable deadlines and “bulk”, and how to track progress with paired indicators that measure a process from both sides (how much work is done, and how much closer it gets to delivery).

If there’s one thing I’d note, it’s that a dogmatic, straightforward application of all the lessons in this book would optimize your organization for output, but not so much for the organization itself. There were parts where his suggestions made me pause, from an organizational standpoint. I think that’s due to the didactic prose more so than the content.

I’ve read this book a couple of times, and I still find myself highlighting certain parts. It’s a timeless book: dense, full of information and insights.



Inspired

If High Output Management is the seminal technical management book, Inspired might be the seminal product management book. People have recommended this book to me a couple of times, and I wish I had read it much earlier in my career.

The main theme of the book is that in technology companies, especially high-growth consumer products, engineering needs to be part of the decision-making process from day one. Marty Cagan separates the “product” workflow into two: product discovery and product delivery. His main point is that most companies need to spend more time in product discovery, but do it in a structured way by shipping to learn.

He takes care to separate prototyping for different goals, a thought I found interesting. The distinction he makes between delivering features and delivering prototypes seems to be lost on many of the lean-startup folks of the late 2010s era.

The other long-running theme of the book is that successful organizations are full of missionaries, not mercenaries. He talks quite a bit about how important it is to align product teams on goals while giving them their own autonomy. I found myself agreeing with the lessons here, but some of the recommendations seemed a bit high-level, and too obvious.

The book follows a loosely bottom-up approach; Cagan starts from building a team, then moves to process, to products, to culture. The book is seemingly derived from a bunch of blog posts, and it can feel quite repetitive at times. The nearly identical paragraphs about “missionaries vs. mercenaries” and “discovery vs. delivery” do get in the way of reading.

This book has quite a narrow target audience, but it delivers on its promise.

Radical Candor


I literally judged this book by its cover and almost didn’t read it; the title “Radical Candor” reminded me too much of the “Radical Honesty” movement. I’m glad I got over that. Kim takes years of management experience and distills it into a book that’s easy to read and full of insight. The book is quite new, but I expect it to become a staple.

The main theme of the book is that any sizable organization will have lots of conflicts, and that it is best to handle them with grace and structure instead of avoiding them or sulking over them. She talks about many different sources of tension, from how different employees operate to how different teams have different goals and seemingly at-odds ways of getting there.

Her main lesson is that managers should use honesty not as a veiled excuse for being heartless, but as a way to open a channel of communication. She takes great effort to separate being “direct” from being a “dick”. And she stresses that honesty goes both ways; managers need to be open to feedback and actively solicit it.

The book is a true capital-M Management book. Compared to High Output Management, it can be a bit heavy on the jargon and delicately capitalized frameworks. I personally didn’t feel any of the book was filler or repetitive, but if you are not into management books, it can feel slightly MBA-esque.

The Manager’s Path


The Manager’s Path is not about product management but about how to manage engineers. I wish I had read this book earlier in my career (had it been published) when I was transitioning from a junior engineer to a more senior one, and then to a tech lead spearheading a technical project across different teams.

Camille Fournier follows a straightforward structure: a software engineer turns senior software engineer, turns tech lead, turns engineering manager, and so on all the way up to CTO. Fournier has been through the wringer herself, and the book rings true to the core.

Unlike the other books, this one is heavier on practical advice than on theory. That is not to say it doesn’t have guiding principles; but in my reading, it seemed the principles behind the advice came later.

This book also reminded me of High Output Management in the way she builds up the management organization, but in a more thoughtful way. At times while reading High Output Management, some of the lessons felt very heavy-handed, even when I agreed with them. In The Manager’s Path, Fournier takes the time to make the case for why an engineering manager needs to do the things she does.

One improvement to this book would be a better narrative, at least within the chapters themselves. There were a couple of times where the sections jumped from topic to topic too fast, and I found myself distracted. A couple of paragraphs might be about mentoring a team, then immediately turn to more technical, almost infrastructure-level advice.

Hang tight for a list of a couple more books on the product side of product management.

Developing Shared Code with Principles

One of the highest-leverage kinds of work in a technical organization is building shared libraries or frameworks. A common library (a piece of code that can be used as is) or a framework (a system that codifies certain decisions and allows further work to be built on top) has the opportunity to benefit many people at once. Not only that, they also institutionalize shared knowledge, putting what’s in people’s heads into code for future employees. And of course, there are other benefits, such as possibly open-sourcing the work, which comes with its own set of benefits for hiring and onboarding.

Of course, there are risks in such a venture. The biggest is the value risk: that the work goes unused by other teams. That is bad enough, but sometimes it gets worse. Sometimes the adoption comes, but the work, instead of enabling teams, hinders them. It gets in the way of actual work, and the benefits of standardization are overshadowed by the pains of integration and customization.

So how do we make sure our shared work, be it libraries or frameworks, achieves its goal? In my experience, there are three principles that separate successful shared projects from failed ones. Two of them are about how to build the project, and the third is about how to get it into the hands of others.

Start with a Real Project

Start with the origin. The ideal way to come up with a shared tool is to extract it from an actual project. This seems straightforward enough. The biggest benefit of this approach is that the framework has an immediate customer, so its creators are incentivized to solve actual problems for actual people. In practice, the most important value is that the “developer ergonomics”, essentially the usability of the project, are automatically improved. This is basically the cornerstone of the agile movement anyway.

Take Rails, which was extracted out of Basecamp, a project management software. Or Django, which was extracted from a content-management system built for a local news site. These two frameworks have different cultures, but they attack the same problem: how do you make sure your developers are more productive?

I realize this way of working is not always easy, or even possible. For example, it’s easier to imagine a front-end library like Bootstrap being extracted from Twitter’s internal UI than, say, a library for an internal communications platform. But it is possible.

Keep It Small

The second way to ensure success in a shared project is to keep the surface area as small as possible. In other words, the shared project should try to provide value as soon as possible. In practice, this means the framework should probably undershoot its feature set.

This might sound counter-intuitive. If we are aiming for high adoption, should we not try to cover all the possible use-cases? Shouldn’t we try to solve as many problems as possible, to make sure people find some use in our work?

The problems with trying to solve too many problems at once are plenty. First of all, as modern software development methodologies have discovered, problem discovery is really a continuous process. Instead of trying to predict what the problems will be and solving them up front, we should try to deliver value as soon as possible, and then iterate.

The more subtle problem with a large feature set is that, in my experience, more tenured teams especially see it as a liability rather than leverage. They realize that a big investment, especially in a new project, will likely result in more work down the line.

A small word of caution here. Especially in the front-end world, this advice to keep the feature set small is taken to an unfortunate extreme. No project should need a library to add left padding to a bunch of strings. The line between a small project that does one thing really well and a comically tiny project is a fine one. A good rule of thumb is that the project should provide some immediate value and be meaningful by itself. That is, one should be able to do something “production” level with your project, and only that.

Take a look at the first version of Rails, which was essentially a bunch of ActiveRecord classes that used Ruby’s dynamism to build an Object-Relational Mapper (ORM). All the other features most Rails developers take for granted came years later. Similarly, React (in addition to being extracted from an internal Facebook project) barely had many of the features it has now; support for many of the common HTML tags and ES6 classes came later.
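To make that “dynamism” idea concrete, here is a toy sketch of the core trick behind that kind of ORM: deriving attributes and table names at runtime from conventions, instead of declaring every column up front. This is illustrative Python standing in for Ruby, and the class and method names here are made up for the example, not real ActiveRecord API.

```python
# A toy record class: columns are never declared on the class; attribute
# access falls through to the underlying row at runtime, and the table
# name is derived from the class name by convention.

class Record:
    def __init__(self, **row):
        self._row = dict(row)

    def __getattr__(self, name):
        # __getattr__ is only called when normal attribute lookup fails,
        # so any column present in the row "appears" as an attribute.
        try:
            return self._row[name]
        except KeyError:
            raise AttributeError(name)

    @classmethod
    def table_name(cls):
        # Convention over configuration: "User" maps to "users".
        return cls.__name__.lower() + "s"


class User(Record):
    pass


u = User(id=1, email="ada@example.com")
print(User.table_name())  # prints: users
print(u.email)            # prints: ada@example.com
```

The real early versions of Rails went much further, generating finders and SQL at runtime, but the essential move is the same: small surface area, with behavior derived from conventions and runtime metadata rather than up-front declarations.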

Evangelize Constantly

And lastly, but maybe most importantly, teams should actively evangelize their projects for adoption within the company. This might be uncomfortable at first. Many technical teams have a negative opinion of any sort of marketing. They believe that other teams should be able to evaluate their work on its merits, and that everything else is either throwaway or disingenuous.

This is a short-sighted way of looking at it. Developers of shared tools should consider evangelism not only as advertising but also as forming a two-way communication channel with their customers.

Take marketing. Many of your potential customers inside the company might have heard of your project, but unless they know it is well supported and actively maintained, they will probably not consider using it. When you market your project via emails, presentations, and such, you are not only letting people know about it, but also sending a clear signal that it’s an actively maintained project. Many times, just having a face attached to the project that is known inside the company is the difference between real adoption and leverage, and a repository full of insights bit-rotting away.

Moreover, evangelism is also about forming that feedback loop with your customers. When you actively work with prospective customers, you get immediate feedback about the pain points they are hitting. You see which under-invested parts of your project might hold low-hanging fruit for future wins.

Again, a word of warning is in order here. Evangelism doesn’t come naturally to most developer teams. Moreover, evangelism sometimes comes with sacrifices. It is not unheard of to do one or two small pieces of one-off work for a big internal customer to get some initial adoption. This might feel impure, but sometimes it’s necessary. The key is to keep the scope of the one-off demands as small as possible, and to only do them for customers who would be game-changers.

Building shared libraries or frameworks is extremely fulfilling; seeing your code adopted by others, making their lives easier, is why many people get into software development in the first place. And it is an amazing way to create high-leverage work in a technical organization. The ability to positively impact the work of tens, hundreds, or even thousands of fellow developers is something most executives would be excited about.

I believe that with these guidelines in mind, you will be in a much better place both to deliver value for your company and to have some fun doing it.

Planning for Agile

One of the main tenets of agile methodology is that working software trumps extensive documentation. You get something working, and then iterate based on quick feedback. It sounds great in theory and, in my experience, works reasonably well in practice. All software estimates are wrong, so agile is also wrong, but it produces software, and does it without inflicting too much damage on those who build it.

But how do you square this way of working with a long-term vision? If an organization is to align towards a vision, there has to be a roadmap that people follow. And a roadmap, by definition, is a long-term plan. It guides what needs to be done months, and sometimes years, into the future.

These two ideas seem contradictory, and they can be confusing for inexperienced software engineers especially to wrap their heads around, as they were for me years ago. But after spending several years in companies big and small, I found a way to reconcile the seemingly contradictory ways of thinking. For me, two ideas bridge the gap: a) planning is for planning, and b) “agile is a state of mind”. Let me explain.

In his seminal book High Output Management, the famed late Intel CEO Andy Grove talks about how Intel created a five-year roadmap, every year. This seems insane on the surface: every year a bunch of high-powered executives come in and spend many hours creating a five-year plan, only to do it again the next year, seemingly wasting four years of planning! Couldn’t they just do a one-year plan?

Grove points out that the output of the planning process is not really the plan itself, but the mental transformation of the people involved and its effects on the organization. The physical, literal exercise of having people sit around a table and discuss the future establishes a shared vocabulary, and provides a starting point and a framework for all the future ad-hoc decisions that will need to be made.

In other words, once the planning process is finished, it is the people who are transformed; the plan on paper is just a small residue in the crucible. That transformation provides two main things: first, a sane default for all future decisions, and second, a lingering sense of what needs to be done to keep the momentum. In my experience, the sane-default aspect is the more important one. The key is not that the plan is a fallback to be followed blindly, but that it’s a shared framework, a common place to start the conversation. This is what Eisenhower meant when he said that “no battle was ever won according to plan, but no battle was ever won without one”. It’s not what happens to the plan that matters; it’s that the planning happened.

This leads me to my next point, namely “the agile mindset (man)”. In most software projects, especially those in the consumer field, what matters is the cadence of development. And the hardest part of maintaining that momentum is overcoming the inertia of doing nothing. Ironically, most of the time, this inactivity manifests itself as planning. We need plans, for ourselves, surely, but we also need the antidote.

Let me give an example. One of the projects I worked on involved building a new transport layer that extended all the way from the mobile client almost to the storage layer on the backend of a major enterprise. Almost literally, there was a moving part in every single layer of the stack, each owned by a different team on a different schedule, sometimes in a different timezone, with a different set of hopes and dreams.

The number of questions on a project like that is essentially infinite. Some things, like security and privacy, are non-negotiable, but the tail end of the requirements has no end in sight. What is the monitoring story? What about error handling? How do we handle rollbacks, exceptions? Typing? Code generation? Compression and performance? Where do you even start?

This is the point where the agile mindset comes in handy. The idea behind agile is not that documentation isn’t useful (it definitely is, which I’ve learned the hard way) but that it comes after the working software. The trick lies in being able to identify what really matters, and what that initial state of working software looks like. In my experience, it’s always better to err on the side of simpler. Anything more than the bare minimum, sometimes literally, needs justification that’s simply not worth it.

So here’s what we did: we defined the security and privacy guarantees we needed to provide, and only those. Nothing else. Then we started building out something where the client could talk to the server, and the server could respond back. It was extremely uneventful: I tapped a button on my phone, and the random string I had typed on my laptop appeared back on it. But it worked, and the rest just followed. We found a way to do error handling, dealt with the performance bottlenecks as they came along, some brave souls handled code generation, and today, it all works.

This is not to say the process was simple. The art of saying “not today” when people come knocking with their pet feature ideas, from up or down the management chart, while sounding similarly credible when saying “but tomorrow”, is a delicate skill. It requires credibility, resolve, and yes, sometimes a thick skin.

The selling point of “Agile” to management has been that it provides value instantly and is more amenable to a dynamic, fast-changing marketplace. Those things are all true, but such verbiage can come off as MBA-speak to those working in the trenches.

For me, the main guiding principle of “agile”, with an intentional lowercase a, has been the idea of taking into account how the humans who build the software actually work. This isn’t surprising, considering the “Agile Manifesto” was penned by actual software developers in the field. The open embrace of the messiness of doing anything that involves flesh-and-bones people is what makes the process more bearable than other ways of building software.

There are known knowns, there are known unknowns, and yes, there are unknown unknowns. There are temper tantrums, there are executive demands that come out of nowhere at the 11th hour, there are teams that forgot they are involved, and there are those that casually ignore everything until the last moment.

The best we can do is align everyone on the same goals as well as we can, make sure people feel involved in the decisions that affect them, form the personal and organizational connections they will surely need, and have some sense of what success looks like. The rest will follow.

Twitter is throwing in the towel on democracy

When I was growing up in Turkey, one of the more curious political insults was “statukocu”, or “one who favors the status quo”. I remember asking my parents what it meant, and the answer didn’t satisfy me: why would wanting things to stay the same be a bad thing? It took me a while longer to fully understand what it really meant.

Jokes about “move fast and break things” are as original as an Adam Sandler blockbuster these days. And so are essays about them. Sure, democracy is too important to accidentally break by moving fast. We get it. Facebook gets it too; they changed their slogan.

But what if what kills democracy is not Zuckerberg et al. moving too fast, but the crippling inability of Twitter to take a single action? Those jokes haven’t been made yet. Luckily for us, though, Twitter management continues to be that joke. And we are the butt of it, I think.

It’s hard to describe this any other way without sounding mean. As I mentioned yesterday, Trump tweeted some blatantly racist tweets a couple of days ago, showing people getting killed. Twitter first said they didn’t delete those tweets because they were “newsworthy” and provided “both sides” of an argument. I am of the opinion that inciting violence and racism are universally derided, but OK, maybe Twitter knows better?

Then, and here’s the joke part, they walked back not their decision, but their decision making. I wrote yesterday that “newsworthiness is just a fleeting moment of decision making done in San Francisco”, but I was wrong. My larger point was that “newsworthiness” was a sham, a non-falsifiable hypothesis that allowed Twitter management to do as they wished. But with the new “explanation”, I am not sure there’s even anyone in the room anymore. Maybe there’s just a robot that throws darts at a Wheel of Apology?

Imagine getting rejected for a job application, and the recruiter sends you an email the next day: “Sorry, we didn’t hire you not because you are too junior, but because we just didn’t like you”. That would be hugely disrespectful, and would show a startling lack of professionalism. But this has been going on for literally years with Twitter. Is this fine?

At this point, I am not sure what to say. One running joke is that Medium is Silicon Valley’s blog for apologies. Maybe we can say that the main reason Twitter exists at this point is to provide a platform for Trump to spread vitriol and for Jack to come up with post-facto rationalizations of why that’s a good thing, plus some Nazis and, sure, a bunch of startup (read: Uber) drama. This joke works on many levels, considering Medium was also founded by a Twitter co-founder.

I consider myself a progressive. I think favoring the status quo is not a tenable political position in general, and definitely not ideal in today’s America. But here’s the thing; I am from Turkey. I know how fast the sense of normalcy shifts under you when you let a few people play you. My Turkish diaspora jokes about how “we had Trump years ago, no big deal”, but there’s a darker underbelly here. Nothing is too sacred to discuss, but some things are worth saving. Democracy is a good one. Twitter is throwing in the towel here.

Erdogan’s rise to authoritarianism didn’t happen overnight. The Turkish mainstream media did not become a single propaganda machine overnight. Educated Turkish youth, hopeless and tired of the constant political drama that sucks the oxygen out of any room, did not start looking for jobs abroad en masse overnight. There were millions of people screaming about how dangerous Erdogan and the AKP were before he got elected, and before they changed the laws left and right. The madness came slowly, and then all of a sudden.

America is nowhere near Turkey, but it’s also further along than Turkey was when Erdogan first gained power. Turkey and the US are similar in many ways, mostly depressing ones such as the lack of belief in evolution and the income disparity. But it will slip, and it’ll happen both faster than you can think and slower.

So, this is the world in which Twitter (and Facebook and Google and YouTube) operate. Don’t be fooled; these companies are American companies that prospered under American values, are headquartered in America, and are mostly staffed by Americans at the decision-making levels. And those American values are under attack. There are no sides here. There’s only one side: the side of liberal democracies.

The current leaders of our social networks are flailing. In their attempts to keep their businesses afloat and provide a semblance of impartiality, they are picking the side of chaos. As Bret Stephens aptly puts it in his piece, we are all part of Trump’s game now. Politicians of all statures, even heads of state, all across the world are on edge, because they think the US president is unhinged. Everyone who lets these shenanigans slide is going to be on the wrong side of history. A few people who have points of leverage are failing us.

History books are being written. And they will definitely outlast any Medium blog post.

Goodbye, Twitter.

I am done with Twitter, for a while at least, if not forever. I will still read tweets, and might even occasionally tweet, if only to keep my account alive or for major announcements, but I have decided to cut it out of my life.

If you’ve been following me on Twitter, you know that I use it a lot. It is the only social network I use. I have met people through it, made professional connections, and I generally have fun reading it. Unlike the tamed, manicured, creepily synthetic feel of Facebook and Instagram, Twitter feels raw. I loved Twitter.

But it’s over. At this point, the more time I spend on Twitter, the more I feel like I am helping normalize the evil. I feel icky, and disgusted. I feel like I am frequenting the same places as a bunch of skinheads. I am in their lives, and they are in mine. And it’s because Twitter management wants it that way. They are fine with Nazis. Make no mistake. They are not happy about it; I sure hope they wish they didn’t have Nazis on Twitter. But at the end of the day, they are fine with them. And they are also fine with Trump.

Yesterday, I broke. It’s over for me.

I have a very short username on Twitter, one that also happens to be a modal verb in English. So I get on the order of a few dozen “mistaken” mentions a day. I like it. In fact, I love it. I used to joke (to myself, largely) that it’s my escape hatch from the filter bubble. It’s a tiny sliver of all tweets handed to me, a random sampling of real people’s tweets, not the more or less identical tweets from the tech and media people I follow.

But then yesterday, I got a “mention” from a Nazi. And not just any Nazi, but a Nazi whose profile photo is a swastika made from Trump typography. He wasn’t yelling, he wasn’t being obnoxious. He was just quote-tweeting an article about Trump’s tax bill. This is the world we live in now.

I wondered whether I should call him out, like I sometimes do. Or just block him. Should I respond to him and then block him? Would he try to gang up on me? I have been bullied on Twitter before, and people have tried to steal my account many times. But here was a Nazi. And then it hit me. Why am I engaging with Nazis? Why is this on me? Because Twitter wants me to.

There’s a perverse belief in American society that corporations exist on a different plane of reality. It’s not just Main Street vs. Wall Street. It’s the belief that corporations do business, and there are people, and sometimes they interact via #brands or whatever, but largely they are separate. But that’s just dumb. Corporations exist in a society. They are made up of people, and operate via people. They have people on their boards; their employees are people. Software might be eating the world, but it hasn’t finished yet.

Corporations have voices. Here in the western world, they largely operate in democratic societies with a strong rule of law. They trust that no one can come and take their property away. And more importantly, the people who make up these companies trust that their lives won’t be in danger for just being who they are. Yet here we have people who want to throw all that out, and the strongest reaction from most social networks is “meh”. The profits Twitter (tries to) make are predicated on a set of values that these people want to overthrow. Twitter is fine with it.

Silicon Valley has taken that illusion of dichotomy to a new level, though. It’s not just corporations vs. the people, but also the corporations that make up the online world vs. the meatspace portion of our existence. I grew up on the internet. Living in a mid-sized city in Turkey, I was largely alone, save for the internet. It was a conservative town; people called it the Citadel of Islam, and I wasn’t (and still am not) that into it. I spent hours, days, weeks probably, perusing the World Wide Web, establishing DCC connections via mIRC in the hopes of downloading questionable material. For me, back then, the internet was the primary place, but for the rest of the world, it was just another place.

That tide has shifted, I think, with the advent of smartphones and addictive social networks. Now, online is the primary place not just for me, but for many people. It’s not just where you read the news. It’s where you meet people. It’s where people share their deepest secrets with strangers, or apparently write suicide notes often enough that Facebook will try to detect them before your friends and family do. The idea that “online” can be taken out of your life is an antiquated, Luddite belief that nevertheless guides most of the decisions of the tech elite.

So it’s baffling to me that we all sit here and pretend that Nazis befriending us, because Twitter is fine with it, is normal. It’s not. It really isn’t. There’s nothing normal about bringing white supremacists and their imagery to the national stage. I don’t think Nazis should be punched on sight. But I also don’t think they should be given the platform Twitter gives them. I am not going to go online just to see another swastika on the woke guy’s network.

And speaking of Nazi idols, let’s talk about the President of the United States (2017). Two days ago, Trump retweeted some extremely racist content from Britain First, a right-wing group in the UK that has advocated for, among other things, violence against minorities and politicians. It’s disgusting, among other things. Of course the tweet received criticism, not just from the British PM and the Mayor of London; even Paul Joseph Watson, famed alt-right instigator and supposed journalist, and Piers Morgan, another famed alt-right instigator and supposed journalist (again, 2017), chided Trump for it.

But it’s OK. Twitter is fine with it.

The cop-out is there. “Newsworthiness” is the new “the algorithms did it”. But be careful what you are really saying.

Trump might subtly incite violence towards an entire world religion, create animus towards millions of Muslims living in the US, but as long as it’s newsworthy, it’s OK. Twitter is fine with it. Stay woke, my friends.

Let’s run a thought experiment. Would it be newsworthy if Trump tweeted, say, a video of him shooting someone on Fifth Avenue, like he said he could? Or what if he posted a photo of him grabbing a woman’s genitals, like he said he does? Or what if he just goes on Twitter and says “If this tweet gets 200M likes, I am going to nuke North Korea”? Would you RT it? Would any of these be a) surprising b) not newsworthy?

Newsworthiness is not a thing. You can’t hold it, you can’t argue with it. You can’t point to it in a document, or test it in court. It’s a judgement call. It’s a fleeting moment of decision making that happened somewhere on Market Street in San Francisco. You might catch a glimpse of it, but never can really observe it on demand. It’s a sham.

So Twitter is effectively saying that, in their judgement, it’s OK for a possibly delusional president, one who thinks what he said on tape might not be real, to tweet things that incite violence. If any of your friends made a call like this, you’d be worried. Jack Dorsey postures that it’s not about the ratings. But I wish it were, Jack. Because your current look, that you think this is newsworthy enough to keep on your service, is worse.

And that’s the end of it. This is it for me.

When I tweeted that I was done, a couple of people reached out. I told one of them that I was already on the verge, since I was sick of the mentally adversarial relationship I’ve been having with social networks. The instruments of the attention-based economy, I told her, were already taking a toll on my mental health and productivity.

This stuff is out of control. Imagine someone asked me what I was doing tonight and I told her, “Oh, not much, I’m just going to go to this bar called Aryan Paradise. It’s a weird place, but I like it.”

I am done with Twitter. I am done seeing Nazis in my mentions, I am done frequenting the same places as them. I am done contributing meaningfully to a service that caters to Trump. I will still read tweets from people I follow, and check the news via Nuzzel. I might tweet here and there. But for me, it’s too much.

I can’t go on sharing the same space with Trump and Nazis. I am done.

Re-engineering News with Technology

Years ago, in college, I went to a presentation by a big internet company, as part of a recruitment event. At the time, I was working at the college newspaper, and the talk was about their “front page”. They said it was the biggest news site at the time, so I was excited.

The bulk of the talk was technical. But the presenter mentioned that one of the biggest challenges was keeping abreast of what they called the “National Enquirer effect”. The problem, as she described it, was this. The main goal of the front page is to drive traffic to other properties, and the system was always optimizing both the selection of content on the front page and its ordering based on raw clicks. She said that, while no one admits to it, the content with the best clickthrough rate was always “bikini women”, so left alone, the algorithms would turn the front page into the National Enquirer. Ironically, this means that no one would visit them, over a long enough period. They said they were trying to fix this with some longer-term optimizations, but for now, there was essentially a team for each locale that monitored the site and kept it “clean”.

A couple of days ago, I saw a tweet about a NYTimes wedding post. It said “Trevor George asked Morgan Sarner out to dinner 10 nights in a row, and won her heart”. The person whose retweet I saw said she’d probably get a restraining order. It was funny, so I “liked” it, but it seemed odd that the NYTimes would promote such creepy behavior. So I clicked on the link. It turns out the groom did not just ask her out 10 nights in a row, but actually took her out that many times. It’s a minor difference between the text of the tweet and the actual content, but it was enough to get that person to retweet a mocking take on it, and more ironically, to get me to click on it.

Ev Williams, the co-founder of Twitter and Medium, likened the algorithms that govern the internet to a deus ex machina that provides you with the most extreme version of what you want; you think car crashes are interesting? Here’s a pile-up! It feels true, and definitely explains the long-winded global nausea.

Looking at it another way, though, this is just a specific application of the paperclip maximizer. Instead of the natural resources of the earth, we are just mining minds. And instead of making more paperclips, we are just making some people in the Bay Area richer. I live in the Bay Area, for now, so of course I shouldn’t complain.

But what’s really missing from the debate is how technology has failed to find a way to attract the attention of readers without sacrificing the content. And while some of it is done automatically, some of it is self-inflicted.

There are structural and economic explanations for the problem. The internet first destroyed the newspapers’ monopoly on advertising. Then came the glut of content, with the democratization of publishing tools, further pushing down the value of any individual work. The unbundling of pieces from the newspapers and magazines that carried them reduced the value of a brand, and in turn of the pieces that make up a bundle. And as social media platforms flattened all content into the same structure, be it from The New York Times or some kids from Macedonia, any semblance of product differentiation disappeared.

The number of knobs publishers have is dwindling, and their editorial decisions are one of their last levers. When you are competing with so much content, and you don’t control how your content is distributed, your only option is to change your content to fit your distribution channels. Legitimate news organizations have long erected a wall between “the church and the state”, or rather “editorial and advertising”, but at least in terms of packaging, the wall no longer exists. The only difference is that it’s not the advertisers that determine your content now, but your distributors.

I saw this first hand too. At Digg, we would casually tell big publishers and famous individuals alike that if they worded headlines a specific way, they would get more clicks. Sometimes it worked, sometimes it didn’t. Google has entire guides, mostly technical but with editorial hints, on how to help you get more traffic. Facebook does it too, but for slightly different reasons. They want people to click on the content, but not too much, so publishers had better avoid clickbait titles. And of course most publishers, especially smaller ones that do not have big subscription revenues or rich patrons to back them, get in line.

This is not a jab at newspapers, although it is that, a bit. My real qualm is that we still don’t have a proper way to consume the news, one where the hook doesn’t dictate the content. We built search engines that can scour the entire web in less than a second, but I still can’t figure out whether a piece of content is worth my time, or is just fluff. I can take a virtual tour across the globe, but I cannot tell what a federal policy change means for me as a resident of California. The primary problem is funding and revenue, but is there a lack of imagination as well?

I also don’t know if the solutions to these problems will exist on the supply side, or the demand side. Probably, it will need to be both. Publishers need ways to authenticate and brand their content, and consumers need reading experiences that respect those. Moreover, consumers need better ways to find and consume content that respect its integrity, instead of letting it be violated for the sake of distribution.

There are a lot of attempts to build a new stack for consuming news. Services like Blendle attempt to fix monetization by removing the hurdle of micropayments, and also by consolidating subscriptions. Facebook and Google try various things too; AMP is a way to clean up the reading experience (and, cynically, to move more of the content to Google’s servers), while Facebook’s Instant Articles is a more locked-in and heavy-handed way of doing the same. Both Facebook and Google also want to help publishers gain more subscribers, and to give subscribed users more fluid, integrated experiences on the web with their own platforms.

And of course, publishers try their hands too. One of my personal favorites is what Axios does, with their telegraphic, lightly structured way of presenting content. It feels respectful of my time as a reader, and cuts through the fluff without sounding too clinical. I wish more publishers experimented with radically different, but still thoughtful, ways of producing and presenting content like they do.

At that talk I went to, they said one of the ideas was to have a fluff lever; slide it to one side and you get practically smut. Slide it all the way to the other, and it’s all dreary politics, which wasn’t smut at the time. As far as I know, they never launched it.

The internet has undermined, intentionally or not, the workings of all news organizations. It took over their advertising and their users’ attention, and now a few companies are inadvertently guiding more of the content too. The responses to this change vary across the political spectrum. What is common, though, is that the problems will not go away, and the economics that govern newspapers will not go back to where they were. But maybe there are ways to attack this problem with technology, as well as with policy.

I am not sure if that is the answer, but maybe it could be worth trying.

Digg was all about news and nothing else. It didn’t work out.

A couple of days ago, I was having lunch with a friend who used to work at Twitter. Eventually, the issue of Fake News came up. I told him, mostly as a joke, that Facebook could solve the Fake News problem by taking the News out of News Feed and turning it into essentially just a bunch of social updates. He retorted that the product already existed, and it was called Instagram. We both sighed and shrugged and downed a few more drinks.

Now, apparently, Facebook is trying exactly that, and of course publishers are freaking out. You can’t really blame them. For many publishers, Facebook is their biggest source of traffic, which they monetize via ads. But you also can’t just feel bad for them, because that is the risk of building your business on someone else’s platform. Just ask Zynga.

My understanding is that Facebook started promoting news sources and publishers more or less as a defense mechanism against Twitter. It might be ancient history now, but there was a time when the fates of these companies weren’t as far apart as they are now. Facebook noticed that Twitter was getting an undue amount of attention from the media folks, with newscasters and individual journalists signing up in droves and moving the conversation there. Facebook wasn’t a fan, and decided to flex its muscles a bit.

I don’t know if that’s true, but it rings true. And I know this, because I used to work at a company that was in the same boat, at the same time. When we launched Digg V4, one of its goals was to cut down the noise of Twitter and just focus on links instead of the mundane status updates. It didn’t work out, and Digg imploded rather spectacularly, but the idea was solid. Digg was always at the forefront of many ideas that are common now, such as “liking” things both in and out of Digg’s website and apps. But with Digg V4, it all came crashing down.

To understand all this, you need to go back to 2010, if not earlier. Twitter and Digg were both merely curiosities, largely unknown outside of Silicon Valley. However, Digg controlled a significant amount of traffic, and getting on its front page could be a huge boost not just to publishers but really to any company. Even Dropbox, now pretty much a household name, attributed a significant amount of its early users to getting on Digg’s front page.

However, Twitter was already gaining momentum. Although the site could barely stand without failwhaling, it was already signing up big-time users like Ashton Kutcher and Justin Bieber. But they didn’t really drive traffic to anyone, and most people were using it as a more public stream of consciousness than anything.

So that was one of Digg’s plays with V4: we’d be the driver of traffic to publishers, because we didn’t have any of those pesky “I am eating a cheese sandwich” updates that littered your Twitter timeline and that you didn’t know what to do with.

Kevin Rose tweet about Digg V4

The why and the how of Digg’s failure is complicated. But largely, it was a perfect storm of technical issues (mostly of our own doing), management mishaps, and of course the Cold War Digg used to wage on its users finally erupting into thermonuclear skirmishes. Digg always had a delicate relationship with its most influential users; neither side ever really blinked, but with Digg V4, it all changed.

One of the most controversial changes was making My News, the logged-in personalized page, the default option, as opposed to “Top News”, which was The Digg Homepage. With this change, the importance of Top News was significantly reduced, since we essentially distributed the logged-in page views across thousands of personalized homepages. This was both a way to keep more people logged in, by providing them a better and more engaging homepage, and a way to make sure that we had more unique pages that most publishers could get clicks from.

The really controversial change was allowing publishers to automatically submit items to Digg by sucking in their RSS feeds. What this meant was that you could now participate in Digg without really participating; we could just suggest your account to new users, who would see your content, which Digg would automatically ingest, without you ever doing anything. And the fact that we accidentally, I swear, promoted those items over manually submitted items did not help.

In making a conscious decision to prioritize big publishers, we managed to scare away most of the user base. Without users, of course, the traffic publishers received started dwindling. But more critically, without eyeballs on Digg itself, the advertisers slowly fled. The rest is history.

Facebook, for what it’s worth, never had a problem with users leaving its service, and I doubt it ever will. I don’t know if they would, but they could remove all the links to publishers from News Feed and most users wouldn’t give a damn. The genius of News Feed was never the links; it was the ability to give every living and breathing person on earth their own personalized rumor mill. The outrage articles, especially in the age of Trump, are addictive, for sure, but that doesn’t hold a candle to the addictiveness of seeing a new update from one of your friends.

I am a strong believer in the importance of journalism for a liberal democracy. I would not dare wish for publishers and journalists to lose their sources of revenue. But at the same time, I can’t imagine that at least the big publishers did not see this coming. No one in their right mind would put all their eggs in someone else’s basket. And hey, maybe this is a good thing. Maybe this is the wake-up call we all needed.

Every company is a tech company, and everyone is a techie.

I work in tech, or used to, like most of my circle in San Francisco. But it was never clear to me what I really did. I changed the world, of course, but what did I really do? My father ran his own business of gas stations, and also sold cars. My lawyer friends wrote up legal documents and endlessly argued about stuff, and doctors did what doctors do. Teachers taught kids, professors taught slightly older kids, writers wrote, and I worked in tech. I worked at tech companies, and at “tech” companies that were more or less a custom CMS. The term lost all its meaning, we all kind of knew, but we all played along.

Google and Facebook, for example, are tech companies by most people’s standards. If you ask media companies, however, Facebook especially is also a media company. Facebook doesn’t like that comparison, mostly because of the scrutiny attached to being a media company. But it feels right, in that for more than 50 percent of people, it’s where they get their news.

Of course, some others disagree. The argument goes that companies that fall within the same category should be comparable, and Business Insider is nothing like Facebook. Facebook is a tech company that’s in the media business by accident. That also feels right; Facebook shuttles engineers back and forth on the 101 by the hundreds, and Business Insider mostly has reporters. They are in the same business on the demand side, attention and page views, but how they go about generating and commanding that attention is so different that we shouldn’t call them both media companies. That seems somewhat generous to Facebook, but still fair.

But what is a tech company then? It’s easy to write off WeWork and Hampton Creek as hippies who want to catch a whiff of the tech vibe. But where do you draw the line? Mayo is not tech, and self-driving cars are, but, say, is textual analysis of content? What if I analyze some news and make financial decisions based on it; does that make me a finance company, or a fintech one? If I decide to show that news to someone based on that analysis, am I a tech company or a media one? Am I a tech company if I help people route shipping containers across the world, using computers? Or what if it’s not containers, but trash? Call yourself the “Uber of Trash” all you want, but you won’t get this guy, who worked at Uber, to call you a tech company.

I worked at 5 different companies, all of which were tech companies. Three of them primarily sold ads against content and hired engineers to keep the blinkenlights on while salespeople brought in cash. One produced original content; two had the users do the content generation. This is the business Facebook and Google are in too, but they managed to delegate the content generation to users for free, and to automate away the sales part, practically minting money out of thin air. Twitter managed the former, and arguably failed at the latter. It’s not that this stuff is trivial; all 3 companies I worked at failed at one side of this or the other.

Then another company I worked at built a file system, built products on top of it, and sold those to people, which felt like something a tech company would do. We built something with code, and then charged people to use it.

Then came Uber, which built a platform that brought drivers and riders together. It felt like a tech company in that people used an app to get where they were going, but a lot of the work initially was really about keeping the lights on while we either wired together off-the-shelf tech, or should have. It wasn’t until the company started building its own maps, its own self-driving tech, and some nifty security stuff (which I worked on) that it felt like a tech company.

During all this time, a career spread across 5 companies in 3 cities, I was a techie who worked at a tech company. I wrote code, reviewed code, sat in meetings, interviewed candidates. There wasn’t much I did at one company, as a techie, that was different from what I’d do at any other place. Business folks, we thought, were replaceable, as they came and went, but we never realized we were as amorphous as they were.

One of my friends worked at a photo sharing app for a few years, only to switch to a self-driving car company. A few friends did the opposite transition, going from hardware companies to app companies. Another who worked on software used by astronauts now builds software sold to city administrators. If you ask any of them, they all work in tech too. We read the tech news, raise money from tech VCs, get harassed by those who hate tech. In the end, the entire discussion becomes so abstract that it becomes pointless. But you can take it even further.

Tesla, for example, is a car company that also sells batteries. But look deeper: they really want to be a transportation company, where you can use the Tesla network to get where you want in a Tesla you may not have bought. That’s probably why the Model 3 has a driver-facing camera, and comes with no key. You use an app to get transported; your ownership of the car is incidental. It’s almost like an ICO, where instead of buying tokens, you buy Teslas to fund the Tesla transportation network. So is Tesla a car company, a transportation company, or an energy company?

A friend once told me that datacenter colocation companies are mostly in the HVAC business. That seemed odd at the time, but I see her point now. The company bought electricity from the grid, turned it into cooled aisles, and leased space. There’s a running meme in the popular business books you buy at airports that McDonald’s is mostly a real estate company, but how different is keeping meat at a certain temperature from doing it for racks? Is anything not a tech company by this definition?

That’s the crux of the issue. The term “tech company” means as much as calling your local bodega (not that Bodega) an electricity company because it uses electricity to keep its fridges running. A few years ago, one of the content companies I worked at used to own and operate its own servers; today that seems crazy. Most, if not all, technology gets commoditized once it’s put to use, as others figure out how it works and build cheaper versions of it. Tesla dazzles people with its self-driving tech, but ask Continental, and they will sell you the same tech used in most other cars.

You can think endlessly about what makes a tech company a tech company. Is it the fact that a company creates leverage using technology? The number of patents it has? Is it that it hires engineers, and mostly engineers? Maybe it’s the DNA of the founders, because, as the adage goes, the only real product of a company is its culture. It’s definitely good cannon fodder for blog posts and hot takes.

I think the discussion itself is not a useful one, which possibly makes this essay even less so. The term has lost its meaning, for the most part, and it’s at best aspirational, at worst misleading. But I’ll also chime in, not to help decide whether a company is a tech company or not, but to decide what a company does.

Look at who gives the company money, and who the company gives money to. Try to figure out who the masters are. It seems awfully reductionist, but so is the term “tech company”. And if this doesn’t make sense, maybe just retire the term altogether. There was a time when it had a meaning, but not anymore.

Smoking as a parable to tech addiction

When I talked about how people’s addiction to smartphones is akin to a public health crisis, I compared it to smoking. It’s not a particularly insightful analogy, of course. Ian Bogost, for example, wrote about it as far back as 2012. He compared the fall of BlackBerry to the slow burn of Lucky Strike with this note:

But calling Blackberry a failure is like calling Lucky Strike a failure. Not just for its brand recognition and eponymy, but even more so, for the fact that its products set up a chain reaction that has changed social behavior in a way we still don’t fully understand–just as our parents and grandparents didn’t fully understand the cigarette in the 1960s.

One of Bogost’s points is that our relationship with smartphones is so unique and so personal, that we may not fully understand or even predict what our society will look like when it bubbles up to the population level. For smoking, it turns out, that effect was widespread cancer. Not great.

One of my points was that the addictive nature of smartphones, and technology overall, was always visible to those who build them. Here is Bill Gates in 2007:

“She could spend two or three hours a day on this Viva Pinata, because it’s kind of engaging and fun.”

Gates said he and his wife Melinda decided to set a limit of 45 minutes a day of total screen time for games and an hour a day on weekends, plus what time she needs for homework.

I argued that, while Apple focuses on physical health, it casually ignores the mental health implications of the addictive nature of its products, even though its designers already know it’s a problem. Here is Jony Ive, on stage at the New Yorker TechFest in 2017:

REMNICK: How can — how can they be misused? What’s a misuse of an iPhone?

IVE: I think perhaps constant use.

Another point I made in passing is that smoking had huge interest groups backing it, with lots of public relations behind it portraying it as a beneficial, progressive, useful activity. The dangers of smoking were not well known, but they weren’t exactly hidden either.

Here is a quote from a 2015 article by Richard Gunderman, a medical doctor. Gunderman talks about Edward Bernays, the father of modern public relations, and how he wanted people to smoke, but not his wife.

In the 1930s, he promoted cigarettes as both soothing to the throat and slimming to the waistline. But at home, Bernays was attempting to persuade his wife to kick the habit. When he would find a pack of her Parliaments in their home, he would snap every one of them in half and throw them in the toilet. While promoting cigarettes as soothing and slimming, Bernays, it seems, was aware of some of the early studies linking smoking to cancer.

Good times. The entire article is an excellent, if sobering, read. I also “Like”d the part where Bernays channels a certain tech executive prone to apologizing. They didn’t have Medium back then, so he couldn’t apologize there, but he quipped in his autobiography:

They were using my books as the basis for a destructive campaign against the Jews of Germany. This shocked me, but I knew any human activity can be used for social purposes or misused for antisocial ones.

I have mentioned before that history repeats itself, and The Cyber is not an exception, but it’s kind of unsettling how often Nazis make an appearance. I guess when you manage to manipulate millions at such scale to commit such atrocities, it scrambles all notions of rationality, ethics, morality, and technology.

And a passing point here. There’s a general sentiment that filling the ranks of tech companies with STEM majors may not be the best idea, when those kids with little knowledge of history end up shaping the new public spaces. I agree with the overall sentiment, but I have some reservations. The problem is less the people’s majors than those majors’ general lack of appreciation for history. In other words, we will always need STEM majors, and probably more of them as time passes, so curbing that supply is not an option. But maybe we could educate (or build?) more, smarter ones.

I harp on America a lot on Twitter, as an expat-cum-immigrant. But one thing America has over Europe and/or Turkey is that almost no one smokes in the US. It is uncanny. But it wasn’t always this way, and it took a lot of effort to get things to where they are. It’s doable, though.

Apple created the attention sinkhole. Here are some ways to fix it.

Your attention span is the battleground, and the tech platforms have you bested. Social media platforms like Facebook, Twitter, and Instagram get the bulk of the blame for employing sketchy tactics to drive engagement. And they deserve most of the criticism; as Tristan Harris points out, users are at a serious disadvantage when competing against companies trying to lure them in with virtually endless resources.

However, one company that is responsible for this crisis goes relatively unscathed. Apple jumpstarted the smartphone revolution with the iPhone. Our phones are no longer an extension of our brains but, for many, a replacement. However, things went south. Your phone is less a digital hub and more a sinkhole for your mind.

I believe that, having built a device that demands so much of our attention, Apple has left its users in the dark when it comes to using it for their own good. It has built a portal for companies to suck up as much of our time as they demand, without giving us the ability to protect ourselves. Sure, there have been some attempts to solve the problem, with features like Do Not Disturb and Bedtime, but most of them have been half-assed at best. The market has tried to fill the void, but the OS restrictions render most efforts futile.

Currently iOS, “the world’s most advanced mobile operating system” as the company calls it, is built to serve apps and app developers. Apple should focus on its OS serving its users first, and the apps second.

1 · Attention

I have touched on this before, within the context of the Apple Watch, but I believe Apple has built a device so visually compelling, and so connected to apps that literally have PhDs working to get you addicted to your phone, that users are treated like mice in a lab pressing pedals to get the next hit. This is unsustainable, and also irresponsible.

I believe Apple should give users enough data, in both raw and visually appealing formats, to help them make informed choices. Moreover, the OS should allow people to limit their (or their kids’) use of their phones. And lastly, Apple should use technology to help users, if only to offset the thousands of people trying to get them addicted.

1.1 · Allow Users to See where their Time Went

First of all, Apple needs to give users a way to see how much time they spend on their phones, per app. There are clumsy ways to get this data today. The popular app Moment does it by literally inspecting screenshots of the battery usage screen. The lengths developer Kevin Holesh went to to make this app useful are remarkable, and the application itself is definitely worth it, but it shouldn’t be this hard. And it is not enough.

A user should be able to go to a section in either the Settings app, or maybe the Health app, and see the number of hours (of course it is hours) they have spent on their phone, per day, per app. If this data contains average session time, as defined by either the app being in the foreground or, in the case of the iPhone X, being looked at, even better. The sophisticated face tracking on the new iPhone can already tell if you are paying attention to your phone; why not use that data for good?

FaceID Demonstration
Paying serious attention

In an ideal case, Apple would make this data available with a rich, queryable API. This is obviously tricky given the privacy implications; ironically, this kind of data would be a goldmine for anyone looking to optimize their engagement tactics. However, even a categorized dataset, with app names discarded, would be immensely useful. This way, users can see if they really should be spending hours a day in a social media app. At the very least, Apple could share this data, in aggregate, with public health and research institutions.
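To make the idea concrete, the core of such a feature is just aggregating per-app foreground sessions into daily totals. Here is a minimal sketch in Python; the session log format, the app names, and the `daily_usage` function are all hypothetical illustrations of mine, not any real iOS API:

```python
from collections import defaultdict
from datetime import datetime

def daily_usage(sessions):
    """Aggregate foreground sessions into minutes per (day, app).

    Each session is a tuple of (app_name, start, end),
    where start and end are datetime objects.
    """
    totals = defaultdict(float)
    for app, start, end in sessions:
        minutes = (end - start).total_seconds() / 60
        totals[(start.date(), app)] += minutes
    return dict(totals)

# A hypothetical day's session log
sessions = [
    ("Instagram", datetime(2017, 12, 1, 9, 0), datetime(2017, 12, 1, 9, 45)),
    ("Instagram", datetime(2017, 12, 1, 21, 0), datetime(2017, 12, 1, 21, 30)),
    ("Mail", datetime(2017, 12, 1, 10, 0), datetime(2017, 12, 1, 10, 10)),
]
usage = daily_usage(sessions)
# Instagram: 75 minutes on Dec 1; Mail: 10 minutes
```

A categorized, privacy-preserving version would simply replace the app name with its App Store category before aggregating.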

1.2 · Allow Time Based and Screen Time Limits for Apps

Second of all, Apple should allow users to limit time spent in an app, possibly as part of parental settings, or Restrictions, as Apple calls them. There is already precedent for this. Apple allows granular settings to disable things, from downloading apps altogether to changing privacy settings and allowing location access.

Users should be able to set either duration limits per app (e.g. 1hr/day, 10hrs/week), time limits (e.g. only between 5PM and 8PM), or both. Either of these would be socially accepted, if not welcomed. Bill Gates himself limits his kids’ time with technology, and so did Steve Jobs, and Jony Ive does too. Such features should be built into the OS.

Steve Jobs and Bill Gates on stage
Low tech parents
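Evaluating such limits is not complicated, which is part of the argument that the OS should just do it. Here is a hypothetical sketch of the check, assuming a simple rule format of my own invention (nothing Apple ships):

```python
from datetime import datetime

def may_open(app, now, minutes_used_today, rules):
    """Check an app against per-app limits.

    `rules` maps an app name to a dict with optional keys:
      "daily_minutes" - duration limit per day (e.g. 60)
      "allowed_hours" - (start_hour, end_hour) window (e.g. (17, 20))
    Apps with no rule are always allowed.
    """
    rule = rules.get(app)
    if rule is None:
        return True
    daily = rule.get("daily_minutes")
    if daily is not None and minutes_used_today >= daily:
        return False  # daily budget exhausted
    window = rule.get("allowed_hours")
    if window is not None and not (window[0] <= now.hour < window[1]):
        return False  # outside the allowed window
    return True

# Instagram: at most an hour a day, only between 5PM and 8PM
rules = {"Instagram": {"daily_minutes": 60, "allowed_hours": (17, 20)}}
print(may_open("Instagram", datetime(2017, 12, 1, 18, 0), 30, rules))  # True
print(may_open("Instagram", datetime(2017, 12, 1, 9, 0), 30, rules))   # False
```

The real work, of course, is in the UI and in enforcing this at the OS level rather than trusting the apps themselves.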

As an aside, I think there are lots of visual ways to encourage proper app habits. Apps' icons could slowly darken, show a small progress indicator (like when they are being installed), or change in some other way. This way, someone could tell that they have Instagrammed enough for the day.

1.3 · Make Useful Recommendations

With the new Apple Watch and watchOS 4, Apple is working with Stanford to detect arrhythmia by comparing current heart rate data to that user's known baseline. Since its inception, the Watch has used rings to encourage people to "stand up" and move around. Even my Garmin watch keeps track of when I am standing still for too long.

Apple can do this for maintaining attention too. Next time you find yourself stressed, notice how you switch between apps, over and over again. Look at how people sometimes close an app, swipe around, come back to the same app just to send that one last text. These are observable patterns of stress.

Apple can, proactively and reactively, watch for these patterns and recommend that someone take a breather, maybe literally. With the Watch, Apple went out of its way to build a custom vibration to simulate stretching on your wrist for breathing exercises. The attention to detail, and the license to be playful, is there. Using only on-device learning, Apple could tell when you are stressed, nervous, just swiping back and forth, and recommend a way to relax. Moreover, the OS could even see whether a user's sessions between apps are too short or too long, and make suggestions based on that kind of data.
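As a rough illustration of the kind of on-device heuristic this could be, here is a hypothetical sketch that flags a streak of very short sessions as a possible stress pattern; the function name and thresholds are made up for illustration:

```python
def looks_restless(session_seconds, max_short=10, threshold=5):
    """Flag a run of consecutive very short app sessions (a toy stress heuristic)."""
    streak = best = 0
    for s in session_seconds:
        # Count consecutive sessions shorter than max_short seconds.
        streak = streak + 1 if s <= max_short else 0
        best = max(best, streak)
    return best >= threshold

looks_restless([4, 3, 6, 2, 5])    # True: five short hops in a row
looks_restless([4, 3, 120, 2, 5])  # False: one long, focused session breaks the streak
```

A real system would learn per-user baselines rather than use fixed thresholds, but the shape of the signal, rapid back-and-forth with no dwell time, is the same.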

Display on a Mercedes Car showing Attention Assist
Attention Assist, Indeed

As mentioned, there's a lot of precedent for determining mental state using technology and making recommendations. Any recent Mercedes will gauge your fatigue based on how you drive and recommend you take a coffee break. Many of GM's new cars have driver-facing cameras that can tell whether your eyes are open and paying attention during self-driving mode. Using your phone is not as risky as driving a car, but for many, a phone is a much bigger part of life.

2 · Notifications

Notifications on iOS are broken. With every iOS release, Apple tries to redo the notification settings in a valiant effort to let people handle the deluge of pings. There are many notification settings hidden inside the Settings app, with cryptic names like banners, alerts, and many more.

Apple Notification Guidelines
If only

However, currently all notifications from all apps sit on a single plane. An annoying campaign update from a fledgling app trying to re-engage you gets the same treatment as your mom trying to say hi. Moreover, apps abuse notification channels; the permissions are forever, but the users' interests are not. And of course, the data is sorely missing.

2.1 · Allow Users to See Data about Notifications and their Engagement

Again, this is a simple one. Apple should make both the raw data and easily digestible reporting about notifications available to users. It would be easy for this to get out of hand, but I think even a single listing where apps are ranked by notifications per week or day would be useful. Users should be able to tell that the shopping app they used once has been sending them notifications that they have been ignoring.

2.2 · Categorize and Group Notifications

Apple should allow smarter grouping of notifications, similar to email. Currently, as said, notifications largely share a single channel. However, this doesn't scale. Tristan Harris and his group make a good suggestion: separate notifications by their origin. Anything that is directly caused by a user action should be separated from other notifications, to start with. This would mean that your friend sending a message would be a different type of notification than Twitter telling you to nudge them.
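As a toy sketch of what origin-based triage could look like (the origin tags, `triage` function, and routing rules are hypothetical, not any existing notification API):

```python
# Hypothetical origin tags a developer could be required to attach to each notification.
PERSON = "person"        # directly caused by another person (a friend's message)
SYSTEM = "system"        # caused by the user's own action (ride arriving, order shipped)
MARKETING = "marketing"  # re-engagement campaigns, "someone you may know", etc.

def triage(notification):
    """Route a notification to a delivery channel based on its declared origin."""
    routes = {PERSON: "alert", SYSTEM: "banner", MARKETING: "silent digest"}
    # Anything undeclared gets the least intrusive treatment by default.
    return routes.get(notification["origin"], "silent digest")

triage({"app": "Messages", "origin": PERSON})    # "alert"
triage({"app": "Twitter", "origin": MARKETING})  # "silent digest"
```

The interesting design choice is the default: unlabeled notifications falling into the quietest channel would give developers an incentive to categorize honestly.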

I think there are even bigger opportunities here; without getting too much into it, Apple could help developers tie notifications to specific people and start categorizing them by intent. Literally anything over what is currently available would be an improvement.

This idea would definitely receive a ton of pushback, especially from companies whose business relies on getting users addicted to their products. However, maintaining toxic business models shouldn't be a priority. If a user does not want to launch Facebook, then they shouldn't have to. If an app can drive engagement, or whatever one might call mindless scrolling, only with an annoying push notification, maybe it shouldn't be able to.

This is the kind of storm Apple can weather. While Apple cherishes its relationships with developers, it is beholden primarily to its users. And such a change would almost certainly be welcomed by them.

2.3 · Allow Short Term Permissions for Notifications

For many types of apps, notifications are only useful for a limited amount of time. When you call an Uber or order food, you do want notifications, but at other times an email or a low-key notification would suffice. Users should be able to give apps temporary permission to nudge them, and then the window should automatically close.
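Mechanically, this is just a grant with an expiry date. A hypothetical sketch, with invented function names and durations, purely to show the shape of the idea:

```python
from datetime import datetime, timedelta

grants = {}  # app name -> instant the notification permission expires

def grant_notifications(app, minutes, now):
    """Open a temporary notification window for an app."""
    grants[app] = now + timedelta(minutes=minutes)

def may_notify(app, now):
    """The window closes automatically once the grant expires; no user action needed."""
    return app in grants and now <= grants[app]

t0 = datetime(2018, 1, 15, 19, 0)
grant_notifications("Uber", 30, t0)
may_notify("Uber", t0 + timedelta(minutes=10))  # True: ride in progress
may_notify("Uber", t0 + timedelta(minutes=45))  # False: window expired
```

The key property is that silence is the default state the system returns to, rather than something the user has to go restore.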

This is something some people are already familiar with. Many professionals, such as doctors, college professors, and lawyers have office hours when you can talk to them freely, but other times, you cannot.

2.4 · Make Useful Recommendations

Once again, Apple could take an even more proactive role and help users manage their notifications by making recommendations. For example, the OS could keep track of which notifications one engages with meaningfully, and which not. This way, the phone could ask the user if they would like to silence an app they never use.
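One possible heuristic, sketched here with hypothetical names and thresholds, nothing Apple actually exposes: rank apps by how often their notifications are actually opened, and suggest muting the chronic offenders.

```python
def suggest_mutes(stats, min_sent=20, max_engagement=0.05):
    """Recommend silencing apps whose notifications are almost never opened.

    stats maps app name -> (notifications sent, notifications opened).
    min_sent avoids judging apps on too little data.
    """
    suggestions = []
    for app, (sent, opened) in stats.items():
        if sent >= min_sent and opened / sent <= max_engagement:
            suggestions.append(app)
    return suggestions

stats = {"ShoppingApp": (120, 1), "Messages": (300, 280)}
suggest_mutes(stats)  # ["ShoppingApp"]
```

Even a crude engagement ratio like this would surface the one-time shopping app from section 2.1 while leaving a heavily used messaging app alone.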

Apple already does this, to some degree, with app developers; if your app's notifications are too spammy and users rarely engage, you'll get a call. However, the users should have a say. An app that is meaningful to one user might be spammy to another. The OS can make these decisions, or at least make smart recommendations. A feature like this literally exists to help you save space on your phone's storage; why not for your notifications too?

Ending Thoughts

I believe that an attention-based economy, where millions of people live in a constant state of distraction punctuated by tiny bursts of concentration, is dangerous to our mental health as individuals, and to society as a whole. Wasting hours switching between apps without accomplishing anything is one thing, but a constant need to be entertained, an inability to be alone with one's thoughts, and not being able to just be around people without pulling out a phone are all going to cause wide social issues we'll grapple with for years. When the people who built these tools are scared, it's a good sign that we have lost control of our creations.

Surprisingly, iOS lags far behind Android in this respect. I have almost exclusively used an iPhone since its launch, and I wrote the bulk of this piece without doing much research. I was surprised, and somewhat embarrassed, to see that most of what I proposed in the Attention section, such as bedtimes and app limits, already exists in Android as part of Family Link. And of course, tools like RescueTime have existed for Mac and Windows to help people see where their time went, but their functionality is next to useless on iOS. As mentioned, even the Moment app can do only so much within the confines of Apple's ecosystem.

I wholeheartedly believe that unless we approach this issue the way we did smoking, and elevate the discussion to a public health issue, it won't get solved. However, there are ways to curb the problem, and it is time Apple took the matter into its own hands.

Unlike most other tech companies, Apple makes most of its money by selling hardware to consumers. Every couple of years, you buy an iPhone, and maybe an app or two, and Apple gets a cool thousand bucks. Apple's incentives, although recently less so with the increasing services revenue, lie with its users, not with advertisers or marketers. If Apple is serious about its health focus, now is the right time to act.