Over at The Sweet Setup, I spent a few thousand words exploring some of the best text editors available for macOS. Few topics start such heated debates as those about why one text editor might be better than another, but I don’t think you can go wrong with any of the apps on our list.
“Whenever you click on a link, send an email, open a mobile app, often one of the first things that has to happen is your device needs to look up the address of a domain.” That’s Matthew Prince, CEO and co-founder of Cloudflare, in his company’s blog post announcing their new public DNS service, 1.1.1.1.
What is this?
1.1.1.1 is a DNS service. A DNS service lets you visit websites by entering word-based domain names like audaciousfox.net instead of an obscure (and changing) IP address. Technically, you can get to a site by typing in either the domain or the IP address, but the domain name is far easier to remember.
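A DNS lookup is just a tiny binary question sent to a resolver (Cloudflare’s listens at 1.1.1.1) over UDP port 53. As a rough illustration of what’s inside that packet, here’s a sketch in Python that hand-builds a minimal query for an A record — the domain and transaction ID are arbitrary choices for the example:

```python
import struct

def build_dns_query(domain: str) -> bytes:
    """Build a minimal DNS query packet asking for an A record."""
    header = struct.pack(
        ">HHHHHH",
        0x1234,   # transaction ID (arbitrary for this example)
        0x0100,   # flags: standard query, recursion desired
        1,        # QDCOUNT: one question
        0, 0, 0,  # no answer/authority/additional records
    )
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("audaciousfox.net")
```

Sending those bytes to a resolver and parsing the reply is what your device does, invisibly, before nearly every connection it makes — which is exactly why your choice of resolver matters.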
Why does it matter?
New competition for existing, core Internet infrastructure is a really healthy thing to have, especially when the new product is more privacy-conscious than the incumbents.
Cloudflare’s network operates on a global scale with nearly 150 data centers around the world, which means they have the infrastructure and experience to run this type of service.
It might surprise you to know that you even have a choice in DNS providers. Most people probably use their ISP’s default DNS service without knowing it. For why this isn’t the best idea, we’ll go back to Mr. Prince:
What many Internet users don’t realize is that even if you’re visiting a website that is encrypted — has the little green lock in your browser — that doesn’t keep your DNS resolver from knowing the identity of all the sites you visit. That means, by default, your ISP, every wifi network you’ve connected to, and your mobile network provider have a list of every site you’ve visited while using them.
Network operators have been licking their chops for some time over the idea of taking their users’ browsing data and finding a way to monetize it. In the United States, that got easier a year ago when the Senate voted to eliminate rules that restricted ISPs from selling their users’ browsing data. With all the concern over the data that companies like Facebook and Google are collecting on you, it worries us to now add ISPs like Comcast, Time Warner, and AT&T to the list. And, make no mistake, this isn’t a US-only problem — ISPs around the world see the same privacy-invading opportunity.
If you’ve never switched your DNS resolver before, it’s really easy to do, and Cloudflare has quick, two-minute tutorials for all of your devices — phone, computer, and router. And if the privacy benefits aren’t a compelling enough reason to switch, there are speed advantages too. 1.1.1.1 currently sits at, ahem, #1 among the fastest worldwide DNS resolvers. As of today, Cloudflare’s DNS is already 28% faster than Cisco’s OpenDNS and around 60% faster than Google’s own 8.8.8.8.
You’ve heard the adage that “when the service is free, you’re the product being sold,” and that’s been true for a long time. But it becomes dangerous when whatever free service you’re using is the only comparable option available. That’s how we end up with Facebook’s monopoly on social networking or Google’s hold on search and video. Having good, privacy-focused alternatives to our standard, core digital and social infrastructure — whether DNS resolvers or social networks — is phenomenally important. And when an alternative is both more private and faster than what’s already out there, switching is an easy call.
If you have a few hundred dollars, a recent MacBook, and a desire to play modern video games at a decent frame rate, you can now buy an external graphics card to give your laptop a performance boost. There are, however, some asterisks. Jacob Kastrenakes, The Verge:
For one, only select models are officially supported. And, surprise, Apple is only supporting some of AMD’s Radeon cards, which it already includes in select Macs. That doesn’t strictly mean a GeForce card won’t work — people have gotten some to work while the feature was in beta — but it means you’re gambling a bit around whether it’ll continue to work.
You also won’t be able to use external GPUs on Windows through Boot Camp. And just because you have an external GPU plugged into your computer when it’s running macOS doesn’t mean it’s going to be doing anything, either; developers have to enable support for it. Finally, you’ll also need to have a new enough Mac, since external GPUs rely on the super-fast speeds provided by Thunderbolt 3. That includes 2016 and 2017 MacBook Pros, 2017 iMacs, and the iMac Pro.
For now, the list of caveats with external GPUs is perhaps longer than the list of things you’re able to do with them, but this is certainly a look at the future. Imagine all the benefits of today’s portable machines, but without sacrificing the ability to do intensive video editing or high-end gaming. Additionally, this should make it easier to upgrade your graphics card — something video editors or gamers will do every couple of years — as you won’t need to open your main machine or send it somewhere to do so.
Gennie Gebhart for the Electronic Frontier Foundation:
You shouldn’t have to do this. You shouldn’t have to wade through complicated privacy settings in order to ensure that the companies with which you’ve entrusted your personal information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.
Facebook’s Platform API is what allows third-party applications to access your Facebook data. Disabling this will also disable your ability to “log in” with Facebook, but if you’re looking for a way to tighten down your account without deleting it, this is worth considering.
“We have a responsibility to protect your information. If we can’t, we don’t deserve it.” Signed by Mark Zuckerberg.
This reminds me of when Slack took out a full page ad ahead of Microsoft’s announcement of their new team-based chat platform. There’s something ironic about these digital companies feeling compelled to go to print when the stakes are high, no?
Dave Morin, former CEO and co-founder of private social networking app, Path:
Overwhelmed by requests to rebuild a better @Path. Considering doing it. If you are interested in working on such an idea, DM me. Let’s see if a passionate team forms. If so, we’ll do it.
Path, if you’ve never heard of it, was one of the many social networks launched in the late 2000s, but it had a unique twist: a 50-person network limit. I only used Path for a year or so — it shared a lot of similarities with Instagram in the early days, minus the whole discovery part — but I found it simple and enjoyable. The 50-person limit ended up being a healthy limitation, because when your only connections are current friends and family, there’s a real sense of authenticity and calm. Unfortunately, privacy and friend limits don’t necessarily help grow a social networking company, and in 2015 Path was sold to the Korean company Daum Kakao, as the team doubled down on maintaining their traction in the Asian market.
Path’s still available today, but they’ve lifted their network size limit and haven’t done much to the core product since the sale. It’s not the same experience that it was in 2010. It’s also worth remembering that Path wasn’t perfect. Before iOS required apps to request address book permissions, Path was caught quietly uploading all of your address book contacts to their servers, and then spamming those numbers as a way to help you make more connections. Scummy.
Still, I think there’s room for a mobile-first, affordable (as in paid, because otherwise we’ll be right back to the data-selling square one we’re in right now with Facebook), limited social network. Mr. Morin’s tweet generated a lot of enthusiastic replies from investors, developers, and designers all interested in helping get such a project off the ground, but if Facebook’s shown us anything, it’s that there’s a chasm between showing support for a cause on social media and actually doing anything practical.
With Mr. Morin, though, creating a new, better Path could be a real possibility — it just might take a while. Mr. Morin, an ex-Facebooker himself, is currently helping run the venture capital firm he co-founded, Slow Ventures. Slow’s modus operandi, if the name didn’t give it away, is that “the most powerful ideas, companies, and industries aren’t created overnight.”
At this point though, it’s not about our hypothetical, private social network becoming as powerful as Facebook — it’s about having an alternative. A well designed, private, sustainable alternative. In 2010, Path’s features and limitations were interesting — today, they’re downright compelling.
Paul Ford, for Bloomberg Businessweek, on the United States’ need for an agency dedicated to regulating companies that handle large amounts of personal, sensitive data:
The activist and internet entrepreneur Maciej Ceglowski once described big data as “a bunch of radioactive, toxic sludge that we don’t know how to handle.” Maybe we should think about Google and Facebook as the new polluters. Their imperative is to grow! They create jobs! They pay taxes, sort of! In the meantime, they’re dumping trillions of units of toxic brain poison into our public-thinking reservoir. Then they mop it up with Wikipedia or send out a message that reads, “We take your privacy seriously.”
Given that the federal government is currently one angry man with nuclear weapons and a Twitter account, and that it’s futile to expect reform or self-regulation from internet giants, I’d like to propose something that will seem impossible but I would argue isn’t: Let’s make a digital Environmental Protection Agency. Call it the Digital Protection Agency. Its job would be to clean up toxic data spills, educate the public, and calibrate and levy fines.
Whether it’s through laws or a separate agency, the U.S. needs a new approach to supervising and safeguarding the enormous amount of personal user data in the hands of today’s companies. At the moment, I don’t have much hope in law-based protection, given that Congress has largely failed to punish Equifax for compromising the personal data of more than 140 million Americans. However, an independent, empowered, and funded agency could be a promising first step, even if it takes years to realize its potential.
Megan Farokhmanesh, The Verge, with an excellent feature on the rapid rise and subsequent crash of Telltale Games:
When Telltale released the first episode of The Walking Dead in April 2012, even some of the people who worked on the game were surprised by how positive the audience reaction was. By January 2013, the game had sold more than 8.5 million copies — or episodes — raking in more than $40 million in sales. In October 2013, the company claimed to have sold more than 21 million different episodes individually across all of its platforms. Telltale started to expand, signing partnerships with Gearbox Software, HBO, and Mojang and transitioning from a small studio to a midlevel company with multiple licensed properties.
The culture of the company changed dramatically as a result. Former employees describe Telltale in its early days as a small, tight-knit group with a strong sense of camaraderie. New hires trickled in slowly. Upper management had been much less involved in the day-to-day, and developers were given more freedom to do their jobs as they saw best. But the success of The Walking Dead spurred the company to expand rapidly: in order to suit both its growing ambitions and keep investors happy, it became a company that many long-standing employees no longer recognized. “We went from a small and scrappy team to kind of a giant studio full of 300-plus people,” says former Telltale programmer and designer Andrew Langley, who worked at the studio from 2008 to 2015. “You walk around the office, and you don’t really recognize anybody anymore.”
Within Telltale’s portfolio are some truly excellent examples of how strong writing and simple mechanics can create a thoroughly compelling video game. It’s a risky thing, making a game that relies so heavily on dialogue driven by user choice, but Telltale made it engaging, challenging, and authentic. Here’s to hoping they can do it again.
Should every page you visit on the Internet be served over HTTPS? For banks and online stores, the answer is an obvious yes. But what about blogs, decades-old web archives, and other mundane online content? Do these documents deserve secured connections?
So now Google points a gun at the web and says “Do as we say or we’ll tell users your site is not secure.” What they’re saying doesn’t stand up to a basic bullshit-test. There’s nothing insecure about my site. Okay I suppose it’s possible you could get hurt using it, I’ll grant you that. But I could get hurt getting up out of my chair and going into the kitchen to refill my coffee cup. Life is insecure. When Google says my old site is insecure what they really mean is “This is our platform now, and you do as we say or your site won’t work.” I don’t believe for a minute that Google’s motivation is protecting users. They seem to believe they can confuse users (they can) and that means they can do anything to the web they like. I suppose they can do that too. But it doesn’t mean the web will cooperate. Imho, it won’t.
The web is not safe. That is correct. We don’t want every place to be safe. So people can be wild and experiment and try out new ideas. It’s why the web has been the proving ground for so much incredible stuff over its history.
Lots of things aren’t safe. Skiing. Bike riding in Manhattan. We do them anyway. You can’t be safe all the time. Life itself isn’t safe.
If Google succeeds in making the web controlled and bland, we’ll just have to reinvent the web outside of Google’s sphere. Let’s save some time, and create the new web out of the web itself.
PS: Of course we want parts of the web to be safe. Banking websites, for example. But my blog archive from 2001? Really there’s no need for special provisions there.
We’ve got two arguments here: 1.) Google’s change in Chrome to display “Not Secure” on sites that don’t have HTTPS is the first in what could be a series of steps that eventually lead to HTTP sites being automatically blocked by Chrome, effectively killing the HTTP protocol; and 2.) the world isn’t safe, HTTPS isn’t a silver bullet, and there are simply some types of content that provide no risk and don’t deserve to be called out as insecure.
There’s an additional argument, tangential and articulated by Nick Heer:
Which brings us to 3.) raising the barrier to entry (e.g., requiring someone to understand how to set up HTTPS before they can get a site online) harms the approachability of creating something new online.
I disagree on all three arguments, but I don’t think they come from unfounded places. I also have great respect for Mr. Winer’s contribution to the web. When Mr. Winer writes, which he does a lot, I read.
However, the last few weeks have left me scratching my head. I don’t disagree with Mr. Winer’s general distrust of Google — I’m skeptical of Google’s motives when it comes to Chrome’s ad-filter or the likes of AMP — but his recent articles leave me feeling that we’ve missed the forest for the trees; that we’re overlooking the importance of encryption because we’re hung up on our sites being labeled insecure, which, truthfully, they are.
Regardless of what Chrome, Firefox, or Safari do, HTTPS is good for the web, and more sites should enable it for their content. Another way to put it: HTTPS is like fluoride. Fluoride is a proven, safe chemical that we add to water to help prevent cavities. Do you need it if you consistently brush and floss twice a day? Ostensibly, no, but if there’s a way to help protect your teeth beyond relying entirely on your own self-discipline and understanding of the risks, why wouldn’t you take advantage of it? The World Wide Web is different today than it was when Mr. Winer first created the content he’s now struggling to find a reason to serve over HTTPS, but that’s not the visitor’s fault. It’s not even his — yet.
Unfortunately, the world wide landscape today desperately calls for us to encrypt what we can. We, as creators on the web, are obliged to help protect the privacy and security of our readers. Enabling HTTPS on a domain doesn’t hurt existing content, but it does provide your visitors with a little more protection, and — critically — it doesn’t require a change in their behavior. They get to keep just using the web.
Not requiring a change in user behavior is important, because most users won’t change. Recently we had some friends going on a mission trip, and we wanted to give them some money to help cover the costs. They sent us the link to the organization’s site, but when I pulled it up and navigated to the donation page, it was still being served over HTTP. Yet, the page had all the trappings of a secure location. Little lock symbols near the form, a NORTON SECURE sticker — everything but the HTTPS. To someone not scrutinizing the location field for the missing https://, every other visual indicator suggested that one could safely submit their credit card information. A large “Not Secure” label would have made the actual page security (or lack thereof) immediately apparent.
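The only signal that actually matters is the URL scheme itself, and that’s trivial to check programmatically. A minimal sketch with Python’s standard library — the donation URL here is made up purely for illustration:

```python
from urllib.parse import urlparse

def is_served_over_https(url: str) -> bool:
    """True only if the URL's scheme is https.

    Lock icons and security badges drawn in the page body prove
    nothing about the transport; only the scheme does.
    """
    return urlparse(url).scheme == "https"

# Hypothetical donation page used for illustration:
print(is_served_over_https("https://example.org/donate"))  # True
print(is_served_over_https("http://example.org/donate"))   # False
```

Browsers run this exact check for every page load; “Not Secure” labels simply surface the result to people who would never think to inspect the address bar.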
As for Google’s motives here, this change in Chrome doesn’t set off red flags for me quite yet. They’re doing what their peers are — trying to educate and protect a vulnerable population. I think a more secure web is good for everyone, and if Google wants to start calling out sites that don’t use HTTPS, that’s their prerogative. And unlike Chrome’s built-in ad filtering, Google doesn’t make tens of billions based on whether or not a website uses HTTPS.
The web is a dangerous place, to be sure, but in contrast with skiing or bike riding in Manhattan, the consequences of an unsecured web often aren’t immediately felt. If I break my leg while skiing, I’m damn aware that it’s broken — the cause and effect are instantly apparent. But if I’m inadvertently tricked into submitting sensitive information on a site that’s not secure, I may not know about it until months later. Additionally, depending on what sort of information was compromised, it could affect parts of my life unrelated to the original incident. If someone gets access to my email or collects enough metadata on the sites I’m visiting, it could damage me (or others) in ways I can’t even imagine. It’d be like waking up one day, months after a skiing trip, to find your ankle is now sprained, but having no idea when or where it happened.
Finally, regarding HTTPS as contributing to the barrier preventing newcomers from getting started on the web — I think that’s a temporary problem. There used to be a time when I wouldn’t have recommended WordPress to someone starting out in web publishing. It took too much time to configure a server, create the database, and manage the updates. But today, you can go to any web host, pay them $5 a month, click one button, and have a WordPress site up and running in minutes. Eventually, enabling HTTPS on a domain could be just as easy. Some web hosts are already offering free, one-click HTTPS, and with services like Let’s Encrypt, the technology to make HTTPS easy and accessible is rapidly improving. In short, the overhead required to get a site secure is quickly diminishing, and in a few years, it may well be one of the simplest parts of creating your next new thing.
Publishing to the web should be easy, accessible, and extremely affordable. But the content you publish should also be made available over a secure connection, even if you don’t think it warrants being encrypted in transit. I think providing an HTTPS connection to your content will be as much a moral duty for web developers in the future as making accessible, open, and fast webpages is today. And although the browser vendors need to be kept in check, I don’t think their efforts to call out insecure sites are nefarious — rather, our world has changed, and our experience of using the world wide web needs to change with it. The more we push toward a fully secured web, the faster it will get here, and the easier it will be to maintain.
Yet, these sorts of programs should really be opt-in, not opt-out. I’m not against using data to do novel and interesting things, but if your company wants to use my data in some extraneous endeavor, it’s on you to convince me why it’s worthwhile, not on me to remember to tell you to stop.
Indie studio Snowman and the brilliant artist/developer Harry Nesbitt have created a simple, yet stunning, sequel to the sublime, endless snowboarding game Alto’s Adventure. It’s the type of game that feels larger than your phone screen, which is fitting, because you’ll be wishing for something bigger to fully appreciate the visuals and art direction.
I’d recommend Odyssey at twice the price. For $5, it’s a steal.
In the four or so years since it launched, end-to-end encrypted messaging app Signal has become the security community’s gold standard for surveillance-resistant communications. Its creators have built an encryption protocol that companies from WhatsApp to Facebook Messenger to Skype have all added to their own products to offer truly private conversations to billions of people. And it’s done so as a non-profit with, at any given moment, a tiny staff that includes just two or three full-time coders. […]
On Wednesday, the creators of Signal announced the launch of the Signal Foundation, which will build and maintain Signal and potentially other privacy-focused apps to come, too. WhatsApp co-founder Brian Acton has also joined as the foundation’s executive chairman, his first new role since leaving WhatsApp last fall. And Acton’s not only devoting the next phase of his post-WhatsApp career to Signal, but a fair-sized chunk of his WhatsApp billions, too: He’s personally injecting $50 million into the project.
If you follow the information security crowd, you’ll quickly pick up on a general cynicism toward technology. Who can blame them? Between the Internet of (unsecured) Things and this quarter’s rendition of “guess which retailer leaked your credit card,” there’s plenty of room for criticism.
However, whenever I see Signal come up, it really does seem to live up to that “gold standard” label. It’s not perfect, and the app has some problems, but the encryption code is peer reviewed and open source, and it’s trusted by some of the biggest public targets in the world. The United States Senate uses it, and, famously, Signal is Edward Snowden’s preferred messaging app.
Sidestepping my own cynicism that comes out whenever “loved app X takes large investment from vc/company Y,” it’s heartening to see the Signal team get a little structure and financial breathing room. I hope the money goes to fund further development and stability of the service, while avoiding the distractions or gimmicky features — like stories — that every messaging app seems to have these days. Signal is not like other messaging apps, and that’s a good thing. The world needs an incredibly secure, focused messaging protocol, and Signal’s now got the resources to continue building just that.
Today, we are rolling out Twitter Lite, a new mobile web experience which minimizes data usage, loads quickly on slower connections, is resilient on unreliable mobile networks, and takes up less than 1MB on your device. We also optimized it for speed, with up to 30% faster launch times as well as quicker navigation throughout Twitter.
the app is designed to be offline first and improve the experience of watching videos on a slower network; it gives you more control over data usage, by providing choice and transparency into the amount of data spent on streaming or saving videos.
Kindle Lite is the new lightweight app built specially for a great reading experience even on slow networks and with patchy connectivity. It is less than 2MB, works on slow networks, and occupies less space on your smartphone.
In the new Lite mode things look a little different — we keep the headlines and trim the rest of the components down to their essentials so that the app loads more quickly (and uses less than one-third of the data).
With our new and reimagined Google apps, we’ve focused on making them not only smaller, but smooth and fast too. For example, Google Go—a new app to find the information you want—optimizes data by up to 40 percent, weighs less than 5MB in size, and makes it faster to find popular and trending information with a simple, tappable interface.
In Hurricane #Irma’s path with a weak phone connection? Stay up to date with the text-only version of our website http://lite.cnn.io
What part of being fast, data conscious, and reliable is exclusive to old devices or those on poor networks? Why does Twitter Lite feel more like Twitter than anything the company’s done with their main website or app over the past few years? Are Facebook, Twitter, and Google truly so married to ads, analytics, and A/B testing frameworks that their only shot at making a reasonably sized, fast app is to start fresh? Will these lite variants actually stay that way, or will the bloat slowly creep back in?
Here’s a thought: the lite version of your app, service, or website should be your only app, service, or website. And if you’re just starting out, build the lite variant first, then stop.
That said, I do think “lite” is the appropriate moniker. Not because it’s the best label for these lightweight alternatives, but because the regular offerings are tragically obese.
I remain highly skeptical of Google — who made $27.2 billion in ad revenue last quarter — having any say in what ads Chrome will or won’t display. That said, the initial launch and implementation of Chrome’s native ad filtering seems honest enough, for now.
If, however, the whole thing leaves you feeling a bit icky, Firefox Quantum is a great alternative to Chrome. I’ve been using it for the past few months and have yet to find a reason to switch back.
Today, we’re announcing AMP for Email so that emails can be formatted and sent as AMP documents. As a part of this, we’re also kicking off the Gmail Developer Preview of AMP for Email, so once you’ve built your emails, you’ll be able to test them in Gmail.
Help your content stay up-to-date and interactive for your users.
Create more engaging and actionable email experiences
Go check out the gif of what an AMP email can do. Basically, it brings the interactivity of a tiny webpage to your email. Devin Coldewey, TechCrunch:
The moat between communication and action is important because it makes it very clear what certain tools are capable of, which in turn lets them be trusted and used properly.
We know that all an email can ever do is say something to you (tracking pixels and read receipts notwithstanding). It doesn’t download anything on its own, it doesn’t run any apps or scripts, attachments are discrete items, unless they’re images in the HTML, which is itself optional. Ultimately the whole package is always just going to be a big, static chunk of text sent to you, with the occasional file riding shotgun. Open it a year or ten from now and it’s the same email. […]
AMP is, to begin with, Google exerting its market power to extend its control over others’ content. Facebook is doing it, so Google has to. Using its privileged position as the means through which people find a great deal of content, Google is attempting to make it so that the content itself must also be part of a system it has defined.
Google being hellbent on slowly, methodically suffocating simple, durable, and universal tools like RSS and email frustrates me. Email thrives in its lack of sophistication and — as anyone who’s accidentally pressed send too early knows — permanence once delivered. This, at times, can be annoying or limiting, but the alternatives would undermine email’s immense usefulness.
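That lack of sophistication is easy to see in code. Here’s a small sketch using Python’s standard email module to parse a raw message — the addresses and text are invented for the example:

```python
from email import message_from_string

# A raw RFC 5322 message is nothing but text: headers, a blank
# line, then the body. (Addresses here are hypothetical.)
raw = """\
From: alice@example.org
To: bob@example.org
Subject: Static by design

Open this in a year or ten and it reads the same.
"""

msg = message_from_string(raw)
print(msg["Subject"])             # Static by design
print(msg.get_payload().strip())  # the body, verbatim
```

Any mail client from the last few decades can parse that same message the same way; that durability is precisely what a bifurcated, AMP-flavored email format puts at risk.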
This isn’t about innovation, either. AMP critics aren’t against matured technologies becoming better, but you have to do it without bifurcating the core format. Additionally, if Google’s concerned about the user experience of email, they already have a good initiative going: email actions. These are small tags in emails that allow Gmail to extract flight previews, add one-click “track this package” buttons and more to your messages. These are invisible, additive, and — frankly — convenient things to have; and all without fundamentally changing the original email. Extract all you want, but don’t replace the spec.
And we haven’t even talked about spam. Can you imagine interactive spam? Maybe Google’s spam filtering is robust enough to save Gmail users, but if AMP in email becomes as widely used as Google intends, they’ll have handed spammers and malicious actors a whole host of new tools to phish and deceive users.
The email experience can certainly be improved, but improvements need to come as supportive tools around the email message, not as a replacement for the message entirely.
The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.
“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”
Computer science would benefit from an equivalent of the medical profession’s Hippocratic oath. As computer systems — especially A.I. and machine learning — grow more complex, it becomes easier to disregard or remain ignorant of the damage these tools can inflict. Personally, I’m still unsure where the ethical line should be drawn, or to what degree, say, an open source software maintainer is responsible for the eventual uses of her code. Maybe some? Not at all? This, to me, is where any comparison between the medical field and the computer science field becomes futile; a doctor performs actions, while a computer scientist creates tools. Both can be used unethically, but a tool can operate independently and in ways its creator never imagined. So should the tool have never been created in the first place?
I always liked Google’s now-defunct mantra of “don’t be evil,” because even if the motto was only paid lip service during its final years, it served as a reminder that technology can be and is used for evil every day. So while these systems are too large to blame any one developer or computer scientist, it’s on all of us to not only discuss, but also come to an agreement on the boundaries of what technology should do and how wide-ranging its influence should be.
Angela Guzman, retelling the story of her 2008 summer in Cupertino, where she and fellow designer Raymond created several hundred of Apple’s original emojis:
My first emoji was the engagement ring, and I chose it because it had challenging textures like metal and a faceted gem, tricky to render for a beginner. The metal ring alone took me an entire day. Pretty soon, however, I could do two a day, then three, and so forth. Regardless of how fast I could crank one out, I constantly checked the details: the direction of the woodgrain, how freckles appeared on apples and eggplants, how leaf veins ran on a hibiscus, how leather was stitched on a football, the details were neverending. I tried really hard to capture all this in every pixel, zooming in and zooming out, because every detail mattered. And for three months I stared at hundreds of emoji on my screen. […]
Sometimes our emoji turned out more comical than intended and some have a backstory. For example, Raymond reused his happy poop swirl as the top of the ice cream cone. Now that you know, bet you’ll never forget. No one else who discovered this little detail did either.
Apple’s visual approach to emoji is not only beautiful, but also fascinating when you consider how flat-looking iOS and MacOS are today. In fact, if you put a designer in front of iOS for a few hours and then had them draw up a few emoji concepts, you’d probably get images with far fewer textures, no gloss, and little to no depth. But that’s not what we have, and I’m glad. Additionally, I’ve always liked how Apple’s emoji feel like a distillation of and tribute to the original Mac OS X interface style, Aqua. I don’t necessarily miss all the realistic leather patterns and pill-filled buttons, but sometimes a little skeuomorphism goes a long way, and the current emoji feel just right.
Primitive Technology was created two years ago by a man in Queensland, Australia, who builds huts, weapons, and tools using only naturally occurring materials. In all of his five- to ten-minute videos, the man wears only navy blue shorts, rarely looks at the camera, and never speaks.
It’s a niche concept, to be sure. The channel does not focus on historically accurate building techniques. It does not offer explanatory tutorials. It will not even help you survive in the wilderness: the “fire sticks” with which he ignites tinder require at least twenty-four hours to prepare and look fiendishly hard to use. So why have the videos attracted millions of viewers? And what do viewers like myself seek when we watch the channel on loop? What do we get from it?
One answer is often floated. Amid the online flood of glossy DIY demonstrations, the paranoiac alarums of super-wealthy “preppers” (people preparing for an apocalyptic event), and the cynical commentary of survivalists, Primitive Technology offers something different: quiet. A few minutes of the channel can make you feel as though you are out in the Australian forest, breathing the sun-steeped, eucalyptus-tinged air, washed clean by rain. The slow precision with which the man undertakes each step of his projects—from finding materials to shaping his tools to assembling his finished structures—lends the videos a soothing sense of purpose. On the Internet, where lunacy sometimes seems to prevail, these videos bring a kind of meditative calm.
Last November, Strava — the “social network for athletes” — released their annual global heatmap of user activity, or “a direct visualization of Strava’s global network of athletes.” The heatmap is built from 1 billion activities, 3 trillion latitude/longitude points, and over 10 terabytes of raw data. In short, it’s a staggering amount of personal data, anonymized, aggregated, and overlaid on a map.
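To get a feel for how a map like this comes together (this is a rough sketch, not Strava’s actual pipeline), imagine snapping every GPS point to a small grid cell and letting the per-cell counts drive the map’s brightness — popular routes accumulate counts and “glow”:

```python
from collections import Counter

def heatmap_counts(points, precision=3):
    """Snap (lat, lon) points to roughly 100 m grid cells and count activity per cell."""
    counts = Counter()
    for lat, lon in points:
        # Rounding to 3 decimal places groups nearby points into one cell.
        cell = (round(lat, precision), round(lon, precision))
        counts[cell] += 1
    return counts

# Four samples from a hypothetical jogging loop; they all land in one cell,
# which is exactly what makes a well-trodden route stand out on the map.
route = [
    (34.0500, -118.2500),
    (34.0501, -118.2501),
    (34.0502, -118.2502),
    (34.0500, -118.2500),
]
hot = heatmap_counts(route)
```

Even this toy version shows the core issue: the aggregation hides *who* ran the route, but not *that* a route exists.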
For two months, the heatmap drew little attention. But this week, Nathan Ruser, an Australian university student studying the Middle East and security, pointed out on Twitter that it revealed more than just popular jogging paths. Alex Hern, The Guardian:
In locations like Afghanistan, Djibouti and Syria, the users of Strava seem to be almost exclusively foreign military personnel, meaning that bases stand out brightly. In Helmand province, Afghanistan, for instance, the locations of forward operating bases can be clearly seen, glowing white against the black map.
Zooming in on one of the larger bases clearly reveals its internal layout, as mapped out by the tracked jogging routes of numerous soldiers. The base itself is not visible on the satellite views of commercial providers such as Google Maps or Apple’s Maps, yet it can be clearly seen through Strava.
Strava does allow users to geofence “private” areas where activity isn’t recorded, but that’s not the default. If you don’t want to share every movement with Strava, you have to opt out. Most users don’t, and most are seemingly unaware of how much data they’re leaving behind.
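At its core, a geofenced privacy zone is just a distance check: before a point is published, test whether it falls within some radius of a user-defined location. Here’s a minimal, hypothetical sketch (Strava’s real implementation isn’t public, and the zone format here is invented for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    r = 6_371_000  # mean Earth radius
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def filter_track(points, zones):
    """Drop any GPS point that falls inside a (lat, lon, radius_m) privacy zone."""
    return [
        (lat, lon)
        for lat, lon in points
        if all(haversine_m(lat, lon, zlat, zlon) > zr for zlat, zlon, zr in zones)
    ]

# Example: suppress points within 500 m of a (made-up) home location.
cleaned = filter_track(
    [(40.0, -75.0), (40.01, -75.0)],
    zones=[(40.0, -75.0, 500)],
)
```

The check is cheap and simple, which is part of why the debate isn’t technical: it’s about whether a protection like this should be on by default.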
This “metadata” — something our government refers to as harmless when gathered in bulk — can result in real-world security issues.
No one is really at fault here, other than individual users who may have violated security procedures. What the heat map does illustrate, though, is that we’re living in a very different age than the one where we developed a lot of our ideas about deterrence and strategic stability.
The amount of data the average smartphone user generates on a day-to-day basis is tremendous. Even when that data is anonymized and presented in aggregate, the results can reveal patterns and routines we might otherwise think are private. It’s fair to ask whether Strava should have attempted to scrub the more sensitive data from their results, but the long-term solution is to educate friends, family, and our military about the personal information we’re passively giving away.
This whole story reminded me of 2010, when the website Please Rob Me used public Twitter and Foursquare check-ins to demonstrate how easy it was to know when someone was away from home. Please Rob Me was the first social-networking PSA I remember in which freely shared, public data was used to illustrate the opportunity for malicious intent. Since 2010, the issue has only become more widespread, as hundreds of millions more smartphones have started cataloging the edges of everything we do.
One inexorable side effect of this explosion of mobile devices is that we now live under a sort of reverse herd immunity when it comes to privacy. Even if I don’t have any social media accounts or smart devices, my face (and voice) can still end up in the background of an Instagram photo or video, to be later analyzed by Facebook’s image-processing A.I. and added to some database of faces — along with the time and place the data was captured. All of this just by walking in the park, going out to eat, or doing any number of public and private activities. I’m not saying it’s reasonable to expect complete anonymity when you’re out in public — that’s never been the case. Rather, when you consider how our devices are not only exposing our own routines and habits, but also filling in the metadata portraits of those around us, it’s easier to see how important the next decade will be for personal digital privacy and any laws that support or strip away those rights.
Given that companies won’t stop trying to learn more about their customers, smartphones won’t become less capable of recording our surroundings, and we the people won’t suddenly become any less lax about clicking through “I agree” prompts, the situation can seemingly only be improved by our laws or the device manufacturers. Writing privacy into law takes time, but the European Union has already taken steps in this direction with the General Data Protection Regulation (GDPR), and I expect (hope) we’ll see similar efforts or echoes of it from other countries in the future.
However, for now, the fastest road to broadly available security and privacy protection lies in the hands of the smartphone/speaker/Internet-connected-device manufacturers. They control the hardware and (to an extent) the underlying operating systems. End-to-end encryption, differential privacy, and even making it visually apparent when an app is using your location are all ways to help educate, inform, and protect us while we wait for more comprehensive, enforceable protections to be written into law. It’s not an ideal situation, but it’s all we have. In the meantime, take some time to poke around the privacy settings of your most-used apps. You’ll probably be surprised at the data you’ve been giving away.
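Differential privacy, mentioned above, deserves a quick illustration. The idea is to add carefully calibrated random noise to aggregate statistics so that no single person’s data can be teased back out of the result. A toy sketch of the classic Laplace mechanism for a count query follows (the epsilon value is an illustrative choice, not a recommendation, and real deployments involve much more than this):

```python
import math
import random

def dp_count(true_count, epsilon=0.5):
    """Release a count with Laplace noise scaled to 1/epsilon (sensitivity 1)."""
    b = 1 / epsilon  # noise scale: smaller epsilon means more noise, more privacy
    # Sample Laplace(0, b) by inverse transform from u in (-0.5, 0.5).
    u = random.random() - 0.5
    return true_count - b * math.copysign(math.log(1 - 2 * abs(u)), u)
```

Each individual release is fuzzed, but averaged over many queries the statistic stays useful — which is exactly the trade-off that lets a company publish aggregates without exposing any one user.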