Audacious Fox

•••

Tom Warren, The Verge, on the news that not only is Microsoft rebuilding Edge on Chromium, but they’re also bringing it to the Mac:

A lot of web developers use a Mac to develop and test sites, but Edge doesn’t exist there, and it’s currently difficult to test Microsoft’s web rendering engine on a Mac without dual booting Windows. Microsoft is now bringing Edge to the Mac. We understand it’s not a move designed to grab more market share specifically; it’s more about making it easier for developers to test Edge. Microsoft hasn’t committed to a specific date for Edge on the Mac, but we expect to see it later next year.

I sort of get the argument here, and I’m not against more browsers on MacOS, but (at least for the foreseeable future) any websites that work in Chrome should also work in the built-on-Chromium Edge. You likely won’t need to test a site in both browsers because they’re—largely—using the same engine.

(Fun fact: it’s been 15 years since Microsoft stopped developing Internet Explorer for Mac.)

Microsoft is exiting the browser engine market, and the next version of their Edge browser will be powered by Chromium, the Google-led open-source web engine that powers Chrome. Ferdy Christant:

The web now runs on a single engine. There is not a single browser with a non-Chromium engine on mobile of any significance other than Safari. Which runs webkit, kind of the same engine as Chromium, which is based on webkit.

On desktop, Edge’s departure from running their own engine, means there’s only one last man standing to counter the Chromium dominance: Firefox. Which is falling from a cliff, on its way to join the “everybody else” gang of insignificant browsers. With no serious way to truly counter it due to their near-absence on mobile, and their lack of control in pushing browser installs.

So Chromium it is. If you’re now waiting for a message of hope or a happy ending, I have none.

Go read the whole piece, because it’s a terrific overview of the current browser wars (if you can even still call it that) and the future of the open web.

I personally don’t like using Chrome, but I’m also on MacOS where Chrome isn’t what I would call the best example of a Mac app. Still, with around 70% market share and the nearest competitor only holding about 10%, Chrome doesn’t have to woo me to win.

And this is the crux of the issue. With Microsoft’s exit, Apple is the only company left with both the money and browser engine to push back against Google’s dominance. But as Mr. Christant points out, Safari’s market share is largely due to the popularity of iOS, and the browser often lags behind the others when it comes to updates. Apple seems content with where Safari’s at, because they don’t face any browser engine competition on iOS where all browsers (Chrome, Firefox, etc.) have to use the WebKit (Safari) rendering engine.

Finally, as I wrote last year, “Google controls the searches, the ads, and the window through which a majority of us see the internet.” With Microsoft’s adoption of Chromium, Google’s influence over how a majority of the world sees the Internet will only grow.

To put it another way, consider this. Last quarter, Google’s parent company Alphabet made just under $29 billion from ad revenue. Alphabet’s advertising business is far and away where the company gets the majority of its money. Now think of how this ad-driven, privacy-lax company controls the world’s most popular web browser with a near-monopolistic market share. Given that information, I find it hard to believe that Google will forever be an objective and faithful steward of what’s best for the web, instead of what’s best for Google—especially if their advertising chips are ever threatened. Thankfully, for us and for now, it seems like Google doesn’t have too much to worry about in that regard.

Oh, wait.

Brian Krebs:

Recent data from anti-phishing company PhishLabs shows that 49 percent of all phishing sites in the third quarter of 2018 bore the padlock security icon next to the phishing site domain name as displayed in a browser address bar. […]

This alarming shift is notable because a majority of Internet users have taken the age-old “look for the lock” advice to heart, and still associate the lock icon with legitimate sites. A PhishLabs survey conducted last year found more than 80% of respondents believed the green lock indicated a website was either legitimate and/or safe.

Years ago, I remember personally telling friends and family to “look for the lock” as an easy way of verifying that the site they were on was legitimate. Back then, paying for and setting up an SSL certificate was time-consuming (or expensive) enough that most phishing websites didn’t bother. Those days are long gone.

I still believe that wide (and cheap) availability of SSL is a good and necessary thing. But all HTTPS and the accompanying lock symbol guarantee is that your connection with a server is encrypted—not that you’re safe.

I’ve linked to Marvin Visions before, but I simply can’t get over how great this font looks. The latest update makes Marvin Visions available as a variable font, which means you have granular control over both weight and optical size without a dramatic increase in file size. There’s a two-dimensional slider about a quarter of the way down the page if you want to see what I mean.
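If you’re curious what that granular control looks like in code, here’s a minimal sketch of wiring two sliders to a variable font via the standard font-variation-settings property. The element IDs are made up for the example, and I’m assuming the font exposes the registered wght and opsz axes; check the axes any given font actually ships with.

```typescript
// Minimal sketch: driving a variable font's weight and optical size from two
// sliders. The element IDs and the "wght"/"opsz" axis names are assumptions
// for illustration only.
const specimen = document.querySelector<HTMLElement>("#specimen");
const weightSlider = document.querySelector<HTMLInputElement>("#weight");
const opszSlider = document.querySelector<HTMLInputElement>("#optical-size");

function updateAxes(): void {
  if (!specimen || !weightSlider || !opszSlider) return;
  // One font file covers the whole design space; no extra weights to download.
  specimen.style.setProperty(
    "font-variation-settings",
    `"wght" ${weightSlider.value}, "opsz" ${opszSlider.value}`
  );
}

weightSlider?.addEventListener("input", updateAxes);
opszSlider?.addEventListener("input", updateAxes);
updateAxes();
```

The appeal is that a single file responds smoothly to both sliders, rather than the browser swapping between separate weight-specific font files.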

Companies that aren’t tracking towards profitability either die or raise more cash—that’s obvious and not why I’m linking to this story. What makes this piece by Jim Edwards, Business Insider, interesting are the readership stats and the hints about Medium’s future product direction:

The site currently has 90 million unique users each month, and publishes 20,000 articles per day, mostly from writers creating one-off articles. It recently moved away from selling advertising as a revenue model to a subscription paywall, in which readers are asked for money if they see more than three stories per month. […]

The investment will go into Williams’ effort to make Medium a bit more like a vast, thoughtful magazine. What started as a longform blogging platform is looking more and more like The Atlantic crossed with Wikipedia. Williams has a corp of editors who are paying fees or commissions to writers for high-quality material, and then keeping that high-quality content behind the paywall. While the majority of Medium’s writers are amateurs writing for coffee money, the professional work — which feels a lot like the journalism you’d see in a monthly magazine — will get much of the investment, he said.

(Some additional, unverified—but I trust them—stats from Twitter user Kontra add that Medium has around 20 million articles across the entire site.)

If Medium is to truly become a “vast, thoughtful magazine,” then I think the days of using Medium as an online home for your identity and writing are stone-cold dead. Find a new platform—many writers already are. Starting a blog on Medium today is the equivalent of buying the deed to a house that’s already on fire.

Which, so long as you know what you’re getting into, is fine, I guess. At least Medium is finally acknowledging that Medium is the brand, not you. In which case, a high-quality, diversely written digital magazine sounds sort of compelling. With enough contributors and content, the law of large numbers suggests Medium’s editors could curate a fresh batch of good reads every day. Will it be good enough for people to pay, though?

And all of this sidesteps the fact that reading anything on Medium is a mess right now. If I’m not being interrupted to sign up for an account, then the abundance of ancillary visual garbage taints what might otherwise be an OK reading experience. Add on the litany of trackers, analytics, and other JavaScript that every page seemingly requires and it’s, well, bullshit.

It’s gotten to the point that medium.com has joined linkedin.com as a URL that elicits a Pavlovian dread whenever there’s something there I want to read. Yet, regularly, there is something there I want to read, which at least for now seems to be one of the few positive things Medium has going for it.

Nick Statt, The Verge:

Wikipedia founder Jimmy Wales’ digital media company, the WikiTribune, is shifting its focus away from traditional news-gathering and moving to a “community oriented” strategy that prioritizes working with contributors. 

In the process, the company has laid off its 12 original editorial staffers, reports The Drum, following the April departure of CNN and Reuters journalist Peter Bale, who initially assembled the team. The WikiTribune began in August of last year, and Wales and his co-founder Orit Kopel posted a note to the site earlier this week first mentioning the “major personnel changes” and the reframing of its focus on the community.

The original idea was to have professional journalists working side-by-side with the community to create an ‘evidence-based’ news site. Without the editorial staff, how is WikiTribune any different from Wikipedia’s portal for current events or Wikinews?

Good eye from Marius Masalar. I doubt I’ll use this feature, though—especially since it requires physically connecting your Kindle to a computer and dragging font files around. For most texts (and most people), I think Amazon’s custom-made Bookerly is good enough.

Earlier this month, Google announced their intention to publicly test a video game streaming technology called Project Stream. Beta testers would have the opportunity to stream Ubisoft’s latest blockbuster, Assassin’s Creed Odyssey, in Google Chrome, so long as they had a 25 Mbps Internet connection and relatively low ping.

A week later, several outlets have had the chance to test Project Stream, and the results are positive:

Jason Schreier, Kotaku:

There’s something a little funky about playing a game like Assassin’s Creed Odyssey in your internet browser. It almost feels obscene, like you’re getting away with something that shouldn’t be legal. Google’s Project Stream might have some obstacles on the way to global dominance, but it’s still pretty damn impressive.

Sam Machkovech, Ars Technica:

What’s more, Project Stream’s source servers appear to render the game at near-max PC settings, especially in crucial categories like ambient occlusion and shadow-map resolution. (These categories, in particular, render at least “one higher” than their pro-console equivalents.) AC:O’s focus on lengthy dialogue trees—and, thus, tight zooms on human faces—is all the better when that shadow-and-light pipeline enjoys as many pixels and bounce opportunities as possible.

Austen Goslin, Polygon:

To say that the streaming service and its presentation of Assassin’s Creed Odyssey were impressive would be an understatement. Given the choice between playing the standard PC version of the game and the Project Stream version, I’d probably choose streaming. With Project Stream, the game launches a little quicker, and you only really lose the top end of quality. For those with the internet connection to play — but without a suitable computer to handle the traditional install — it’s hard to imagine a better setup than Project Stream, even in these early days.

Remember, we’re talking about streaming a massive, detailed AAA video game in a web browser. While there are already services that let you stream video games to your console or PC, the level of quality delivered via Project Stream is impressive. That it’s possible at all is a victory; that it’s graphically rich and enjoyable is something of a triumph.

For those wondering why video games are more difficult to stream than the music and video we’ve had for years, here it is: video games require a near-instant response to user input. If you press a button to make your character jump and that action takes a full second to register, the game is unplayable. Today’s video game consoles and PCs are mere feet from your controller or input devices, so input is received and registered nearly instantaneously. A streaming service means the console/computer/server will be in another zip code, if not further away. If input lag can’t be measured in mere milliseconds, there’s a problem.
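To put some rough numbers on that, here’s a back-of-the-envelope latency budget. Every figure in it is an assumption I’ve picked for illustration, not a measurement of Project Stream.

```typescript
// Back-of-the-envelope latency budget for streaming a game to a browser.
// Every number below is an illustrative assumption, not a measurement.
const budgetMs: Record<string, number> = {
  inputToServer: 15,  // button press travels from your browser to the data center
  gameFrame: 17,      // one frame of simulation and rendering at ~60 fps
  videoEncode: 5,     // compress the rendered frame for streaming
  frameToClient: 15,  // encoded frame travels back over the network
  videoDecode: 5,     // browser decodes the frame
  display: 8,         // on average, half a refresh interval before it's shown
};

const totalMs = Object.values(budgetMs).reduce((sum, ms) => sum + ms, 0);
console.log(`Estimated input-to-screen delay: ~${totalMs} ms`);
// Roughly 65 ms in this sketch. A local console or PC only pays the frame and
// display costs, which is why every network millisecond matters when streaming.
```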

So far, Project Stream seems to be handling delays relatively well, with little to no criticism from the testers. Granted, this is with a limited beta pool — who knows how tens or hundreds of thousands of players will affect the service — but the overall experience is doing much better than I would have guessed.

Looking forward and assuming streaming limitations are no longer a major obstacle, the only questions I have left are about price and competition. PlayStation and NVIDIA both already offer somewhat similar streaming services, and it’s not clear whether Google intends to be a consumer option or simply offer the technology to other companies. Regardless, Google’s message with Project Stream is clear: game on.

Customers, Not Users

A lot of software isn’t free. Plenty of people pay to use products. Yet, we call these people users in most of our copy and internal communications. Should we?

What if we tried calling them what they actually are: customers?

  • User story becomes customer story.
  • User review becomes customer review.
  • User issue becomes customer issue.

I think the difference is important. Facebook has users. We have customers. By referring to your customers as customers, you’re respecting the fact that someone is paying you for your work. We don’t need to sell customer data—their hard-earned dollars keep us running. Also, a sentence like “15 minutes of downtime” hits harder when you know it’s a customer on the other end.

You can argue that both paying and non-paying people are subsets of the generic user. And certainly, your company culture plays a larger role in whether you respect your customers (or users) than the terminology.

But I still think there’s something here. Someone, somewhere is paying us for our products. That’s a distinction worth making.

One of my favorite features of the MacOS Finder is that I can display my folders and files in “column view,” where clicking on a folder in one column will reveal its contents in a column directly to the right. It’s a wonderfully efficient way to dig through your files, and up until today I had no idea this interface paradigm had a name: Miller columns.

Jacob Kastrenakes, The Verge:

In the past, Wi-Fi versions were identified by a letter or a pair of letters that referred to a wireless standard. The current version is 802.11ac, but before that, we had 802.11n, 802.11g, 802.11a, and 802.11b. It was not comprehensible, so the Wi-Fi Alliance — the group that stewards the implementation of Wi-Fi — is changing it.

All of those convoluted codenames are being changed. So instead of the current Wi-Fi being called 802.11ac, it’ll be called Wi-Fi 5 (because it’s the fifth version). It’ll probably make more sense this way, starting with the first version of Wi-Fi, 802.11b:

  • Wi-Fi 1: 802.11b (1999)
  • Wi-Fi 2: 802.11a (1999)
  • Wi-Fi 3: 802.11g (2003)
  • Wi-Fi 4: 802.11n (2009)
  • Wi-Fi 5: 802.11ac (2014)

What an excellent, clarifying change. I’m not sure anyone will, you know, actually refer to their Wi-Fi in this way (at least for a few years), but the previous naming scheme was as useless as it was unclear.

Author Joe Moran had quite a long-winded piece in The Guardian about writing, but this bit is worth saving:

A sentence is much more than its literal meaning. It is a living line of words where logic and lyric meet – a piece of both sense and sound, albeit the sound is only heard in the reader’s head. Rookie sentence-writers are often too busy worrying about the something they are trying to say and don’t worry enough about how that something looks and sounds. They look straight past the words into the meaning that they have strong-armed into them. They fasten on content and forget about form – forgetting that content and form are the same thing, that what a sentence says is the same as how it says it.

I’m all for excellent technical writing (see my style guide), but most of the writing I consider great has an almost lyrical quality to it.

According to Dunbar’s Number, “humans can comfortably maintain only 150 stable relationships.” Path, at its core, was a social network built around this idea. When the service launched, you could only have 50 friends. They later expanded that number to 150 (and then 500), but in those early years the network limitation forced you to only connect with people who really mattered to you. No old high school acquaintances, no business fan pages. Just you and your closest friends or family.1

Additionally, Path benefited from being an attractive competitor to Facebook at a time when you could get your friends and family to switch social networks. Today, Facebook’s too large to directly compete with, even if your defining feature is one they’d never want to copy. I want to believe the ideals of Path could be repackaged and relaunched today, but you would need a lot of money to keep a free social network afloat for long enough to maybe reach the critical mass necessary for advertising or subscriptions to sustain the service.

Anyhow — adieu, Path. You were the social network we needed but never deserved.


  1. Today, the only small (social) networks I find solace in—and enjoy, frankly—are iMessage groups and iCloud Shared Albums. ↩︎

Megan Farokhmanesh, The Verge:

Telltale Games, creators of episodic adventure games like The Walking Dead, The Wolf Among Us, and Batman: The Enemy Within, laid off a large number of its staff today. The company will retain a small team of 25. According to multiple sources The Verge spoke with, employees were let go with no severance.

“Today Telltale Games made the difficult decision to begin a majority studio closure following a year marked by insurmountable challenges,” the company said in a statement. “A majority of the company’s employees were dismissed earlier this morning.” The remaining employees will stay on “to fulfill the company’s obligations to its board and partners,” according to Telltale.

Co-founder and former CEO Kevin Bruner, who left Telltale over a year ago, on his personal site:

Today, I’m mostly saddened for the people who are losing their jobs at a studio they love. And I’m also saddened at the loss of a studio that green-lit crazy ideas that no one else would consider. I’m comforted a bit knowing there are now so many new talented people and studios creating games in the evolving narrative genre. While I look forward to those games and new developments, and continuing to contribute, I will always find “A Telltale Game” to have been a unique offering.

This news comes several months after Ms. Farokhmanesh first reported on the turmoil inside of Telltale Games following their meteoric rise. Yet, a few months after Ms. Farokhmanesh’s reporting, the studio announced it had (among other developments) partnered with Netflix to create a game adaptation of Stranger Things, which led me to believe things were getting a little bit better.

I really wanted them to pull through. Telltale had some of the best art- and narrative-driven games I’ve ever played. There was a lot of talent inside that studio, and I hope they all end up alright.

When you perform a right-click in MacOS via the trackpad — by clicking or tapping with two fingers — there’s a small delay before you see the contextual menu appear. Apparently, to my delight, this lag can be removed by disabling the “Smart zoom” gesture, which frees MacOS from waiting a beat to see whether another two-finger tap/click is on the way (the signal that you want to zoom the current content).

Thankfully, I never use this zoom feature (pinch-to-zoom is more precise anyhow), and I was able to disable the gesture by going to:

System Preferences > Trackpad > Scroll & Zoom

and then unchecking “Smart zoom”. It’s amazing how much faster my secondary click feels now — the contextual menu appears instantly. Sometimes it’s the small things.

Google Inbox, the gesture-driven email experiment that turned your inbox into an actionable list of tasks, is going away at the end of March 2019. I get why they’re shutting it down — Google wants to focus their efforts on Gmail, which recently received a brand new UI — but I’m still disappointed. Inbox made it easy to keep your, uh, inbox tidy because the app grouped related types of messages, allowing you to send an entire category to the archive with a single swipe. I don’t mind Mail on iOS, but Inbox always felt a bit faster for interfacing with Gmail, and Mail probably won’t ever support Google’s unique approach to labels-as-folders.

The Medium Exodus of 2018 continues. This week, it’s a tweet from CTO and co-founder of Basecamp, David Heinemeier Hansson (DHH):

After further review, we’re going to be leaving Medium at some point in the near-to-mid-term future. Thanks for all the fish, @ev! You built a beautiful typewriter, the early community was awesome, and I respect trying something different. Shame about the VC pressures. Adieu!

If you’re not familiar with Signal v. Noise, it’s a well-known “publication about the web” with almost 20 years of history. I’ve been reading Signal v. Noise for over a decade now, and their strong opinions on product, design, and business tend to be succinct and influential in those communities.

I was always a bit surprised that Basecamp — staunchly in support of companies that choose profit over potential growth — would go all-in with a platform like Medium. A VC-funded, zero-profit publishing company that restricts your ability to control the design of your content felt like the antithesis of what Signal v. Noise was all about. I’m not sure where they’ll go next (their previous blog was a homegrown Rails app), but if they’re not going back to something in-house I could see them trying out Ghost; the 2.0 release looks slick.

Regardless, this isn’t a great sign for Medium. Ever since they dropped support for custom domains, I don’t see why any new publications would seriously consider moving there. And if you already have a custom domain with Medium, I’d urge you to start looking at alternatives.

I should really just put that in my site’s header.

Executive Editor and co-founder at Polygon, Chris Plante:

Games have changed since we launched Polygon. We’re changing with them. 

We believe that a new strategy, focusing on criticism and curation, will better serve our readers than the serviceable but ultimately limited reviews rubric that, for decades, has functioned as a load-bearing pillar of most game publications.

As part of this evolution, Polygon will no longer score reviews.

Polygon’s updated review strategy is built around two new programs: Recommends (labeling to endorse a particular title) and Essentials (curated lists of the best games available). As Mr. Plante points out, Polygon is following the lead of Kotaku, Waypoint, and Eurogamer; all of which have stepped away from numerical score systems.

This is a good move for Polygon and, I’d argue, any video game review site. When it comes to reviews, 5-star scales are worthless, to say nothing of the 10-point variant sites like Polygon previously used. If it were up to me, I’d make all review sites use a simple thumbs-up/down grade, with maybe a “neutral” for something neither terrible nor worth endorsing. A thumbs up tells me the game is worth playing — a 7.6/10 doesn’t.

Behavioral Biometrics Are a Cookie You Can’t Clear

Stacy Cowley, the New York Times, on how banks and retailers are using your taps, swipes, and other device sensor data to verify you’re you:

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

Some use the technology only to weed out automated attacks and suspicious transactions, but others are going significantly further, amassing tens of millions of profiles that can identify customers by how they touch, hold and tap their devices.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics,” to help prove whether a digital user is actually the person she claims to be.

This sort of invisible “continuous authentication” — where my taps and swipes are tracked and checked against how I’ve tapped and swiped in the past — sounds great from a security perspective but not from a privacy one. Metadata can be incredibly revealing, and while I don’t think behavioral biometrics are a bad idea, I dislike how the data would be collected in the background without my knowledge. Couldn’t behavioral biometrics be an additional, opt-in security feature like two-factor authentication? Or is that too weird for most people to think about?

For now, it appears behavioral-tracking companies like BioCatch are focused on banking and retail. But how long until these techniques are applied by the advertising industry to further track and maintain a profile on who you are? If advertisers rely more on a combination of sensor data and how a user behaves on a webpage, then it’s possible that the user themselves becomes the ultimate cookie — one that’s almost impossible to clear.

To prevent this sort of misuse, our devices should prompt us for permission whenever a website or app tries to read device data that could be revealing. Android and iOS already require permission prompts for certain kinds of user data, like access to your contacts or location. But these permission requests should extend to data coming from the device itself, whether that’s device orientation, ambient lighting conditions, or battery level.
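To make the scope concrete, here’s a sketch of the kinds of signals a page can gather today with standard browser APIs and, in most browsers, no permission prompt at all. The sampling code and the particular combination of signals are purely illustrative; this isn’t BioCatch’s pipeline or any vendor’s actual method.

```typescript
// Illustrative only: standard browser APIs a page can use, typically without a
// permission prompt, to accumulate behavioral data. Not any vendor's pipeline.
type Sample = { t: number; kind: string; detail: Record<string, number> };
const samples: Sample[] = [];

// How you touch: position, contact size, and timing of every movement.
window.addEventListener("touchmove", (e: TouchEvent) => {
  const touch = e.touches[0];
  samples.push({
    t: performance.now(),
    kind: "touch",
    detail: { x: touch.clientX, y: touch.clientY, radius: touch.radiusX },
  });
});

// How you hold the device: orientation sensors report continuously.
window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  samples.push({
    t: performance.now(),
    kind: "orientation",
    detail: { alpha: e.alpha ?? 0, beta: e.beta ?? 0, gamma: e.gamma ?? 0 },
  });
});

// How you type: inter-key timing alone is a surprisingly distinctive signal.
window.addEventListener("keydown", () => {
  samples.push({ t: performance.now(), kind: "keydown", detail: {} });
});

// Battery level is readable in Chromium-based browsers with no prompt.
(navigator as any).getBattery?.().then((battery: { level: number }) => {
  samples.push({
    t: performance.now(),
    kind: "battery",
    detail: { level: battery.level },
  });
});
```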

Allowing websites and apps to read device data isn’t inherently bad, and as our devices become more powerful we’ll want our software to have access to those capabilities. But unfettered visibility into my device and how I’m using it shouldn’t be available without my consent. Anything less and I see it as an intentional leaking of private user information.

Nice overview of Liverpool F.C. by Kevin Draper for the New York Times, in which he correctly identifies the club’s primary advantage — its manager:

The birth of the new Liverpool may have been Oct. 8, 2015, the day F.S.G. announced the hiring of [Jürgen] Klopp, the former Borussia Dortmund manager. In less than three years, Klopp has become the exuberant, backslapping and hugging face of the club. His aggressive gegenpressing, or counterpressing, system is the key to Liverpool’s ruthless attack, and it can be a pleasure to watch — provided you’re not supporting the team being subjected to it.

Klopp’s enthusiasm during matches is absolutely intoxicating.

The Sweet Setup Staff (via Michael Rockwell), on using the password manager 1Password as a digital will:

Unlike a conventional will, this document (or database) is not as much about who gets your stuff, but more about helping your family member unwind the countless online accounts and collections of media and digital property that you have.

When my mother passed away a number of years ago, handling all of the online bills, email accounts, and digital subscriptions for my family would have been a nightmare if not for 1Password. After a death, there’s enough to worry about in the physical world, let alone the digital one a person leaves behind. As the person who handles most of the online bills and subscriptions for my family, I find it a relief to know that my wife has a one-stop shop for all of those life details.

Back to the linked-to piece: 1Password is my go-to recommendation whenever someone asks for a good password manager, but there’s so much more it can do. The team at the Sweet Setup have done a great job writing up the 1Password-as-a-will angle, and they’ve also just launched an entirely new 1Password course that goes deep on the many other ways 1Password can bring security and sanity to your digital life. There’s a special launch price of $23 (cheap!), and any course purchase gets you an extended 90-day free trial of 1Password itself. It’s an excellent deal.

I try not to write in absolutes, but 1Password is one of the few services (if not the only one) that I’ll happily pay for until the day I die. I really don’t know how else to put it.

Over at the Sweet Setup, I wrote down some brief thoughts on a few of the best Markdown editors for iOS. My pick — for most people — is the excellent iA Writer, which has been a staple of my writing kit for years. This was a fun piece to work on, and I think it turned out well.

Deborah Bach for the Microsoft company blog:

Packaging can be annoying for any consumer (see: wrap rage). But for people with disabilities, it often creates yet another challenge in a world riddled with them, an unnecessary obstacle that leads to frustration and a delay getting to the object inside.

Recognizing that reality, Microsoft’s Packaging Design team faced a unique challenge in creating a box for the new Xbox Adaptive Controller, designed to accommodate gamers with limited mobility. […] It had to enable gamers with limited dexterity, who might be using just one hand or arm, to easily open the box and remove the controller. And it had to be as high-quality and aesthetically appealing as any other Xbox packaging.

I love everything about this project. The whole thing is a wonderful example of thoughtful design and innovative problem solving — and all for a group of gamers who are often overlooked.

MIT postdoctoral fellow Douglas O’Reagan, writing for Physics Today:

Over the course of centuries, a struggle has been playing out about who gets to own ideas. Is it the person who comes up with them? The employer who funds the research? Or should the ideas be somehow shared between them? […]

By the 1990s teams of MBAs and business-school scholars joined forces to see if advances in information technology, management techniques, law, and sociology could allow them to extract workers’ know-how so that the company could store and own it indefinitely. The resulting academic research field and management fad became known as “knowledge management.”

This article traces changes in US law, business practices, and social expectations about research and invention in order to illuminate the history of business control over scientists’ ideas.

I found this a short and fascinating look into the world of academia and scientific research. It got me thinking too, and now I’m curious how big tech companies like Google or Apple approach this sort of “knowledge management”, especially as it relates to academic- and research-driven departments like machine learning or artificial intelligence.

Brian Feldman for New York Magazine, in a piece titled, “The Most Important Video Game on the Planet”:

Analysts estimate that Fortnite is currently raking in more than $300 million a month, and has made its maker, Epic Games, more than $1.2 billion since its battle royale mode launched in late September. That’s all from a game that’s free to download and play unrestricted.

To clarify: anything you can buy in Fortnite is purely cosmetic and doesn’t give you a better chance at winning. You buy outfits or dance moves to taunt your friends with. We know microtransactions are a profitable business plan, but even Epic’s success here is somewhat staggering.

The article goes into a few reasons why Fortnite — not the first game to have a battle royale mode — is currently experiencing a moment. A combination of wide availability (it’s on almost every game console and smartphone you can think of), cartoon-style graphics, and kid-friendly goofy “violence” (there’s no blood when you eliminate an opponent) leaves you with a game that’s literally everywhere and enjoyed by seemingly everyone.

One final bit from Mr. Feldman’s piece that caught my eye was this aside about one of Fortnite’s most famous players, Tyler “Ninja” Blevins:

Ninja, whose real name is Tyler Blevins, makes an estimated half a million dollars every month streaming Fortnite rounds on Twitch, a service for livestreaming video games that is owned by Amazon.

I don’t know which scenario is more astonishing: Epic making $300 million a month from a free-to-play game, or Ninja collecting $500,000 in the same amount of time for playing said game.

James Ball, Columbia Journalism Review, on the effect that years of a “compliant and often cheerleading media” have had on the technology press’ ability to be a watchdog in the industry:

The result is the big four tech giants have a head start of 25 or more years in building their business models and laying their groundwork ahead of receiving serious scrutiny—and today, detailed scrutiny could hardly be more important. For most of the past decade, these companies were untroubled by media incident, perhaps to their own detriment: If Facebook had faced tougher questions on moderation earlier, it would have been much easier to address and build in as it scaled up—which would have helped the global information ecosystem, too. […]

Maybe we should simply scrap the idea of a “tech desk” altogether: The sector needs scrutiny, but since technology now touches every aspect of our society, keeping it siloed from the rest of the newsroom now feels artificial. Let it be covered, extensively, across desks.

Quite the completionist.
Thanks for reading.
More? Go to the archive.