Category Archives: Technology

Safe Steps to a New Generation

For the third time in its history, Apple is in the process of moving its Mac computers to a new line of chips. In 1994 it switched from Motorola’s 680x0 chips to the PowerPC chips of the Apple–IBM–Motorola alliance, in 2005 it began the switch from PowerPC to Intel’s x86 chips, and now in 2020 it’s leaving Intel behind in favour of its own Apple Silicon, in the form of the new M1 chip.

These three switches have given Apple more organisational knowledge of how to accomplish such transitions than perhaps any other company. You can see this in how they’ve become better at it over time. Moving to PowerPC allowed Apple to compete against Windows/Intel PCs, but the advantages provided were thin and disappeared over the course of a decade. In contrast, Apple secured for itself favoured customer status from Intel when it switched and enjoyed several years of chip designs perfectly suited for its Macs. As for the switch to Apple Silicon? Well, we’ll get to that.

The transition to Apple Silicon was announced in June 2020, but few details were offered at that point. Only with the November 2020 announcement and release of the first new M1 Macs did people get a proper look at what the new chips offered. And though this is just the first step in what Apple has described as a two-year transition of its desktop and laptop line, the early results are very promising.

The three Macs switching to the new M1 chip are the absolute entry-level devices in its lineup: the Mac Mini desktop and the MacBook Air and 13” MacBook Pro. These are some of Apple’s best-selling Macs, but they’re also some of the cheapest, so choosing them for the switch to new, faster chips is the opposite of risky—it provides Apple and its customers with a safe and secure first step into a new era.

First of all, the new machines look entirely like their Intel-based predecessors. Apart from a few differences in coloration and port selection (most notably with the Mac Mini), these devices are dead ringers for the machines they replace. For customers worried that the new Macs might be cut-down replacements, more akin to iPads than “proper” Macs, this familiarity will be reassuring.

More to the point, the low-end machines being replaced have the weakest performance in the existing Mac line. If the new M1 chip has any performance gains to offer, they will show best in this setting. And by all accounts, the performance gains are substantial. The M1 chip may be a derivative of the A-series chips found in iPhones and iPads, but Apple has been honing its chip design expertise over the past decade, and the M1 looks like it’s coming out of the gate competitive with the best that Intel and its main competitor AMD have to offer. And this, remember, is arriving in low-end Apple devices.

More exciting for laptop users, perhaps, are the battery life gains that the M1 chips offer. Eking out battery life from Intel chips has forced Apple to make trade-offs between speed and power consumption for years, but the M1 seems to have broken that deadlock, with early reviews reporting that both laptops can run for a full 8-hour work day without needing to be plugged in.

Perhaps the most interesting factor about the three new Macs is how little performance differentiation there is between them. Despite their differing form factors, they all use the same CPU, with a simple choice to double the RAM from 8GB to 16GB when buying. The most notable physical difference lies in cooling, with the MacBook Air being entirely passively cooled whereas the MacBook Pro and Mac Mini sport cooling fans of varying sizes. The upshot is that while the Pro and Mini should enjoy better top speeds, the experience in day-to-day use should be much the same.

If the machines aren’t differentiated much among themselves, then they do at least offer Apple some degree of differentiation between Macs and iPads. Especially in the form of the iPad Pro, iPads have been creeping into the territory of Mac laptops for a while, but the MacBook Air and MacBook Pro, with their greater RAM, better selection of ports, and larger batteries, not to mention the improved M1 chip, should be able to maintain a comfortable distance for a while.

For Apple users, and even PC fans, considering new machines, these M1 Macs are as safe a bet as Apple could make them. Every indication is that they already run existing Mac software as fast as the machines they’re replacing, with software recompiled for the new chips running far faster. They’re arriving in recognisable form factors, so no peripherals will have to be abandoned, and the fact that the M1 is arriving at the low end of the market means that the price is right for those interested in trying the new systems out.

Might Apple be playing it too safe? Maybe, as there were a few complaints that the new machines weren’t exactly exciting and new, but the performance and battery gains of the M1 seem more than enough for now. The excitement is likely to arrive with the second generation of machines, in the form of new designs and form factors. There are already rumours of a redesigned slimline iMac in 2021, probably with an M1+ or the equivalent at its heart. Beyond that, it’ll be very interesting to see how PC manufacturers respond to Apple’s new machines.

As for myself, the time has at last come to put one of my old Macs out to pasture and try something new. Not my nine-year-old MacBook Air, which has been mostly superseded by an iPad Pro, but rather my ten-year-old Mac Mini, which has kept chugging away with the benefit of various upgrades but was never a speed demon in the first place. I wavered for a moment before deciding to order in advance of the first reviews, but that’s mainly because I’m expecting the existing Mac Mini form factor to disappear once the full Mac lineup is upgraded. Its size and shape are still based on the DVD drives it no longer sports, after all.

Regardless, I eventually put the order in and will have my new Mac in a few weeks. It’s the cheapest of the M1 Macs at the moment (cheaper even than the Intel Mac Mini it’s replacing) and I expect it to act as a media server and general purpose PC for many years to come. Life’s too short to spend it waiting for the next big thing to come out. Sometimes you just have to enjoy yourself while you can.


Cancer Update

Too grim a segue? Maybe, but please allow me my fun. I’m as locked down as anyone is these days, so apart from watching people pass by my ground floor window, the days are not full of entertainment.

Last week was particularly stressful, as I had a CT scan to check on the progress of my treatment and a meeting with the doctor to find out the results of said scan two days later. This usually causes a spike in my worrying, during which any minor complaint becomes a potential symptom. My head was not in a good space creatively, and my NaNoWriMo output was knocked out for a week. (And if that isn’t the most bourgeois whine I’ve ever made, I don’t know what is.) Luckily, the scan results came back positive, and the medicine I’m on continues to do its job.

As a result, locked down though I may be, I’m in a much better frame of mind this week and doing my best to catch up from my NaNoWriMo lapse. As long as this excessively mild winter persists, I’ll get out into the sun when I can too, and hopefully before too long I’ll get to visit my family again. Until then, and until vaccines start rolling out, keep safe and keep strong.

Wage Epic War

Apple Inc. is, by some measures, the biggest company in the world. From a near-bankrupt state in 1997, it has turned itself into a globe-spanning colossus, worth somewhere in the region of a trillion dollars. In an age of corporate technology titans, it’s been at or near the head of the pack for years.

And this week Epic Games declared war on it.

Not just on Apple either. On Google too, which, along with Facebook and Amazon, forms a modern tetrarchy of technology. It’s a war that’s being fought on legal and public fronts, but exactly how does Epic plan to win? And how did this corporate David and its Goliaths come to be at odds?

Founded in 1991, Epic Games started as a video game developer before segueing into developing the tools that others use to make video games, most notably its Unreal Engine game engine. Just as sellers of shovels made more money during Gold Rushes than most miners, so Epic did pretty well out of that move. Then, a few years ago, it released Fortnite.

You’ve probably heard of Fortnite. Even if you don’t play it, you know kids who do, or maybe just kids who watch video streamers who do. A free-to-play game with battle royale, creative, and cooperative elements, its in-game purchases have proved a massive cash cow for Epic, pushing the company’s valuation into the tens of billions.

With all that cash weighing it down, Epic decided to throw its weight around. Casting itself as Robin Hood, it first took on Steam, the dominant storefront for PC games, promising players cheaper games and developers a bigger cut of the revenue. The verdict on this ongoing war remains open, as while the Epic Games Store continues to host exclusive titles and offer free games to tempt new customers, many PC gamers are heavily invested in Steam. However, it’s now clear that this was just a warm up for Epic’s biggest fight.

Apple has faced years of criticism for its “walled garden” approach to releasing software on its iPhone and iPad devices. In short, if you want your software to run on an iDevice, you follow Apple’s rules and give Apple its 30% cut. While the ecosystem for Android devices is more open, the Google Play store, which has adopted similar rules and a similar revenue cut, is the quickest and easiest way to find and install new software. Hence, most users will use it.
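To put rough shape on that cut, here’s a trivial Python sketch (the prices are hypothetical, purely for illustration) of the difference between selling through a 30%-cut store and selling direct:

    # Illustrative only: what a 30% platform cut means for a developer.
    PLATFORM_CUT = 0.30  # the standard App Store / Google Play rate

    def developer_take(price, cut=PLATFORM_CUT):
        """What the developer keeps from a single sale."""
        return price * (1 - cut)

    price = 9.99                           # hypothetical in-game purchase
    print(developer_take(price))           # ~6.99 through the store
    print(developer_take(price, cut=0.0))  # 9.99 sold direct (before card fees)

Sold direct, a developer can discount the price, absorb the card-processing fees, and still come out ahead, which is exactly the pitch Epic made to Fortnite players with its new payment option.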

This week, Epic said “nuts to that” and implemented a new feature in Fortnite, whereby users could make in-game transactions directly from Epic without giving a cut to Apple or Google. Apple swiftly removed Fortnite from its App Store: if you already have it, you can continue to play, but there’ll be no new users and no updates. Google followed suit not long after, delisting Fortnite from the Google Play store.

For players, the immediate impact is minimal. The difference will only really start to show when Fortnite’s new season begins. Unable to update, iOS and Google Play users will miss out on the new content. But Epic wasted no time in letting them know about it. Not only did it slap Apple and Google with a lawsuit accusing them of monopolistic practices, but it also hosted an in-game video that mocked Apple’s famous “1984” advert, arguing that Apple now held the same position as the corporate behemoths it once opposed.

It’s a fair comment. Apple is “the man” now, just as Google has long since ceased being a scrappy garage startup. Both companies have their share of questionable practices and wield ludicrous economic and social power. Yet the fact that Apple got a video whereas Google didn’t suggests that Epic is relying on public opinion being on its side in this fight. Specifically the public opinion of millions of young Fortnite gamers who might end up missing out due to this corporate spat over revenue sharing.

Apple’s argument is the same two-pronged one that it has used to fend off anti-competitive claims in the past. First, it built the App Store, and the host devices, and their operating systems. If Epic uploads a free-to-play game and makes billions through in-app purchases, it’s effectively freeloading if Apple doesn’t get a cut. To which Epic might respond, well, isn’t 30% a bit much? In its turn, Apple can say that the same rules apply to everyone, no matter their size. Epic might then point to Steam, which responded to the competition posed by Epic by lowering its revenue-sharing cut for the biggest-earning titles. It’s a back-and-forth argument but not Epic’s strongest suit.
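For comparison, here’s a sketch of how a tiered scheme along the lines of Steam’s revised terms works. The thresholds below are as I understand the 2018 changes (30% on the first $10M of a title’s lifetime revenue, 25% up to $50M, and 20% beyond that), so treat the exact figures as approximate:

    # A sketch of a tiered platform cut, modelled loosely on Steam's
    # revised revenue-sharing terms; the exact thresholds may be off.
    TIERS = [
        (10_000_000, 0.30),  # first $10M of lifetime revenue at 30%
        (50_000_000, 0.25),  # from $10M to $50M at 25%
    ]
    TOP_RATE = 0.20          # everything beyond $50M at 20%

    def platform_cut(revenue):
        cut, lower = 0.0, 0.0
        for upper, rate in TIERS:
            if revenue <= lower:
                break
            cut += (min(revenue, upper) - lower) * rate
            lower = upper
        if revenue > lower:
            cut += (revenue - lower) * TOP_RATE
        return cut

    revenue = 1_000_000_000.0
    print(platform_cut(revenue) / revenue)  # effective rate of ~20.3%

On terms like those, a Fortnite-sized earner hands over closer to a fifth of its revenue than a third, and that gap is exactly the comparison Epic can point to.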

Apple’s second argument sees it on shakier ground: it controls its walled garden by checking the content it hosts. This has kept Apple’s App Store largely free from the knockoff apps and rubbish that plagued Android in the past, but it also means that everything on the App Store has to be Apple-approved. With Apple having recently banned Microsoft and Facebook from hosting their own game-streaming services on iPad and iPhone, this is an opportune moment for Epic to draw attention to how Apple’s corporate culture defines what its users get to experience.

The stakes are high. Apple makes a good chunk of its earnings from hardware sales, and losing Fortnite could see a slice of those sales go with it (it’s already facing threats to its Chinese market from Trumpian “diplomacy” to add to its vulnerability). On the other hand, Apple has more cash in hand than most countries and can weather the storm, whereas Epic is for the first time putting its cash cow at risk.

Then again, if Epic can’t quickly find acceptable terms with Apple and Google, some of its players and streamers might just move on. No game lasts forever as “the big thing,” and my own nieces and nephews are pretty happy with Roblox. Epic is not lacking in competitors who would be more than happy to carve off slices of the Fortnite billions.

Of course, Epic has its own war chest to fight this war, and the lawsuit against Apple and Google may prove to be nothing more than a negotiating tactic. After all, forcing change through the courts requires a justice system with the will to act, and the U.S. has its own issues at the moment. Europe would be a more friendly venue in which to argue the merits of the tech giants’ market power, but that’s not where the lawsuit was served (as far as I can tell).

Which is where Epic’s social media strategy comes in. The video mocking Apple was a call to arms for Fortnite players to rally to the game rather than the platform. To think about a world where Apple doesn’t take a 30% cut of Epic’s earnings. Which, given that the game deliberately targets younger players with its marketing and in-game purchases, comes across as just a little bit skeezy.

Ultimately, this is a fight between companies worth billions about who gets how much money. Just because it’s the little guy doesn’t make Epic virtuous. As shown in its conflict with Steam, it’s quite happy to leverage its riches and fight dirty. Similarly, just because Google began in a garage and had a motto of “Don’t be evil” for years doesn’t make it the good guy either. And though I’ve been an Apple user for most of my life, I’m more than happy to see people calling it out when it’s getting things wrong.

This is particularly true in the area of games. It’s something that Apple has never quite got to grips with, a legacy of the Steve Jobs era. Now that Apple offers its own subscription-based games service, Apple Arcade, throwing roadblocks in front of Microsoft and Facebook looks dodgy. It’s a sore point that Epic has targeted, and one where Apple could do with reviewing its practices.

I’m just not convinced that there’s much more to this fight than money. There’s a possibility of a more even playing field that delivers benefits for consumers emerging from this spat, but believing in that takes optimism that’s in short supply in 2020. Epic wants more money, and it believes that it can force Apple and Google to the table. Time will tell if it’s calculated correctly, and in the meantime Fortnite users will be the ones to pick up the tab.

Fictionally Humane

The gameplay of Ion Storm’s Deus Ex, released twenty years ago, begins with a choice. Preparing to deal with a group of terrorists, the player chooses one of three extra weapons: a rocket launcher, a sniper rifle, or a mini-crossbow loaded with tranquilliser darts. Unusually for a game of that era, Deus Ex announced from the start that the player’s choices mattered.

On my first full playthrough of the game, I selected the mini-crossbow. I had already been an active player of roleplaying games for years at that point, and I was happy to play into the fiction of the game that those trying to kill you might have valid reasons for doing so. (The sniper rifle is the choice for players unconcerned with lethality, whereas the rocket launcher is best suited to taking out robots and inconveniently locked doors.) This fiction is carried through the game’s plot, in which the initial truths you’re presented with are undermined and other characters react to whether or not the player is happy to shed blood along the way.

Although seminal in showing how player choice and morality could be integrated into games, Deus Ex proved a hard act to follow. It received only one sequel, and its 2011 prequel, Deus Ex: Human Revolution, drew notable criticism for forcing players into unavoidably lethal boss fights.

One year later, in 2012, a spiritual successor appeared in the form of Arkane’s Dishonored. In this title, the player adopts the role of a vengeance-seeking assassin, but the developers leaned into stealth as a mechanic and decided to provide the player with the opportunity to find non-lethal resolutions to all of their goals. Not that these are any less dark in some cases: one “fate worse than death” sees a society hostess and supporter of the corrupt regime delivered to an obsessed stalker as an alternative to being murdered.

The twelve-year gap between these very similar games shows a degree of progression. Deus Ex asked players to think about how they solve problems and whether casual murder can be justified within the fiction of the game. Dishonored asked that question again, and reinforced it by pointing out that just avoiding murder wasn’t enough to make you a good person. Vengeance takes you to dark places.

At this stage, I ought to point out that both Deus Ex and Dishonored are first-person action games. The player literally sees out of the eyes of the protagonist, so opportunities to distance themselves from the morality of what they’re doing are limited. Action-oriented first-person shooters, such as Halo or Doom, tend to either present inhuman enemies like aliens, demons, or zombies as cannon-fodder or lean towards the multiplayer experience, where the targets are usually other players and an immersive narrative is tossed out in favour of an arena atmosphere: you get shot, you respawn, it doesn’t really matter.

The multiplayer-focused Call of Duty series does engage with this issue, but in a fashion that passes over player choice. A mission in Call of Duty: Modern Warfare 2’s single-player narrative sees the player participate in a mass shooting. The mission is flagged for “disturbing content” and players can choose how to interact, but the massacre happens regardless. The narrative requires the deaths to happen, whatever the player does. They’re a necessity of the plot, just as the mission itself is seen as a necessity on the part of the player character. No moral choice is made.

Step forward another decade or so and we come to 2020’s The Last of Us Part II.* Once again the protagonist is hell-bent on what initially seems to be justified vengeance. As with Modern Warfare 2, the player has no choice but to deal with the deaths they cause. Worse still for the player, an in-game switch in narrative perspective does its best to rob them of any belief that their vengeance was just in the first place.

Admittedly, The Last of Us is not the same type of game as Deus Ex and Dishonored. It’s played from a third-person perspective and is less a playground for player creativity than a canvas for the creators to tell a story. It’s also unrelentingly grim in tone, and its apparent theme of just how much choosing to kill costs is one that many players seem to have resented confronting. Even so, it’s another step on the same spectrum: engaging in a work of fiction requires emotional investment, and regret, shame, and horror are all valid emotions to feel around making the choice to kill.

Narrative is one of the strongest tools that artists have to generate feelings in consumers of their art. We have literally centuries of practice when it comes to affecting emotions through stories and of making listeners, readers, and viewers reconsider their preconceptions. Video games, as an interactive art form, are much newer on the block, and it’s hardly surprising that they’re going to crib from what came before. The first few decades of film, after all, copied heavily from theatre until the new art form developed its own language.

Yet the linear narratives that other art forms have developed sit uneasily within video games. The Last of Us hews closely to linearity, and while it clearly knows the story it wants to tell, it gives players little real moral choice. Even Dishonored, where the player has the freedom to devise their own solutions to problems, has a linear narrative to follow and an external marker of morality: the more murderous the player is, the more the city they inhabit falls into chaos around them.

Deus Ex had things easy, after a fashion. Technology wasn’t advanced enough to create realistically human opponents, so the moral choices facing a player had a level of abstraction. Twelve years later, Dishonored provided a more sophisticated world with more sophisticated inhabitants, but it was still a playground of sorts. That twinge of discomfort when handing over Lady Boyle is one of the strongest memories for players of that game because they were forced to reflect on their choices. In that moment, they were reminded that whatever the narrative might tell them, they might not be wholly the good guy.

A decade further on and The Last of Us Part II is even more sophisticated in its world-building and character portrayals, but its directed narrative might be a dead end. By all accounts it is an amazing achievement and perhaps a pinnacle for current-generation technology, but if the player has no agency in the choices the narrative makes, how powerful can the moment be when the game forces them to reflect on the morality of those choices?

This problem of ludonarrative dissonance is hardly new, and people within the games industry have been hacking away at it for years.** In these few examples, I wanted to take a look at how some games flag the choice to be a killer and how they can either lead or force the player to reflect on that. The technological capacity for doing so has definitely advanced over the years, and narrative sophistication has likewise grown, but it doesn’t feel like the two have come together yet. I wonder if and when they will.


*Having never been a person of the PlayStation persuasion, I haven’t played The Last of Us, but I have watched Noah Caldwell-Gervais’s deep dive into the two The Last of Us games, which I heartily recommend.

** I specifically limited myself to a few examples to restrict the length of this piece. The Mass Effect series is one that deals heavily with morality within the narrative, though less so with the morality of killing.


Cancer Update

Yes, it’s been a while, I know. For what I hope are understandable reasons, my enthusiasm for writing anything here was at a low ebb for a while. Restoring my mental momentum took time, and a recent recurrence of the whole coughing-up-blood thing distracted me a bit too.

As a general overview though, I’m doing fine. A round of scans and another bronchoscopy found nothing too egregious (well, nothing that they didn’t already know was there) and I’m back on track, taking my medication and doing my best to dodge Covid by the simple expedients of wearing a mask, limiting the number of people I meet, and washing my hands (not all at the same time, admittedly).

My biggest worry for the moment is becoming a couch potato, which is all too easy when the couch in question is only two feet away from your work chair. Still, I have an isolation break to look forward to shortly, and in the face of Ireland’s fitful summer, it’s not so bad to be indoors. I’ll try to keep up with the posting in future, though no promises. In the meantime, I hope you’re all keeping well and safe.

The Fringe of the City Beast

I’ve completely let updating this blog slip, haven’t I? I’m not going to pretend it’s not my fault either. I had a big piece planned on revolutions—how they happen and why we might be staring down the barrel of a few of them—but the subject slipped away as I got distracted, and it’s still lurking in my drafts folder, far from finished. It’ll have to lurk there for a while yet, as I’ve not the time to devote to making it worth showing to the masses.*

In the meantime, here’s something more ephemeral but personal for your delectation. After an extended period of joblessness and temporary work, I am once more gainfully employed. (I ensured this would come to pass by such actions as renewing my library card, which I’ll now never use, and taking up time-consuming hobbies like, oh, keeping this blog filled with content.) This job is a bit of a departure for me in one specific way though: after many years of working within walking distance of home and the city centre, I am now out in the wilds. Not quite outside the city of Dublin, but not quite inside it either.

This has wreaked merry hell on my previously relaxed commuting habits. (As opposed to my even more relaxed non-commuting habits of the past few months.) A four-hour walk to work is clearly untenable, a one-hour-plus cycle might work if it didn’t route me through the horror that is Dublin city centre traffic, and a two-hour bus trip was only acceptable for the first few weeks. Which means that after years and years of avoiding it, I now have a car.

But it’s not the new experience of driving to and from work, or the multitudinous indignities of trying to get a used car insured, that I’m writing about. No, this post is about the things I’m seeing on that commute, out where the city meets the countryside.

Dublin’s geography is pretty traditional, by and large. The city centre, which clusters around the River Liffey, is surrounded by neighbourhoods that were once towns and villages in their own right, before ravenous Dublin swallowed them up. The further out you go, the larger the spaces between those neighbourhood centres, and into those spaces have grown suburban sprawls and small industrial estates, served by buses and the occasional tram (if you’re lucky). Beyond those lies the ring of the M50, alternately artery and car park, depending on traffic conditions.

And beyond the M50? Well, that’s where I am now.

This is very much the edge of the city, the place where its tendrils have stretched out but not yet taken over. The new and the old rub shoulders, and green spaces have been marked off for future use but not yet inhabited. I’ve spotted hawks and pheasants around the fields near work, fitting into ever smaller corners as their territory becomes someone else’s. Country houses with ample space can now see massive warehouses and data centres from their back doors, and ruined and abandoned buildings stand ready for reuse or demolition, as fate or fashion require.

Cities grow not just in extent but in time. The collision between a city and the spaces it expands into is a collision between two different eras. All around my new workplace, roads are being ripped up and resurfaced, provided with ample pavements and cycle lanes, as current trends require. Of course, the trend now may not have been the trend during an earlier era, and so those cycleways tend to disappear as they reach the inner, older city. In time, those older, more interior areas may catch up with the fresher outskirts, but here and now, this is where things are newest.

The idea of cities as living things, growing organisms, whether benevolent or parasitic, is not a new one. There’s a lot of evidence for it, if you look. Imagine hanging a camera high in the sky above Dublin and taking a time-lapse video spanning months and years. Humans would disappear from the city organism, which would itself be seen to expand in pulses. Like a tree, the heart of the city would change little, and instead all the activity would be seen on the edges, as economic factors drive the need to swallow up more space.

Is this a good thing? Cities are necessary to the way the world works now. Populations have grown and civilisation has become complex to the point where a return to rural life is only an option for a few. Even so, the way that cities swallow up the green spaces and quiet villages around them is naturally unsettling. Speed and a lack of planning leave a sense that the process is out of balance. Dublin’s a particular case in point. A combination of planning restrictions and the presence of major multinational companies has made life in the city unbearably expensive for many, and that expense and those multinationals have pushed the sprawl out further and further.

[Image: uprooted hedgerows piled behind a prefab stone and metal fence. Not a better outcome.]

I’ve been lucky up to now in not having to confront the results of this. The first few weeks saw me spending four hours a day commuting by bus, into town and out to my new employer, then back in the evening. Getting a car was close to a necessity, as it is for many others, but in doing that I’ve just added to the congestion that strangles routes into, out of, and around the city at different times of the day. In the meantime, the city continues to grow, and I’ll be far from the last to hop on this treadmill.

The living fringe of the city isn’t a place I’ve ever worked or lived before, so it’s interesting to see how it works. Whether you count it as growing into or devouring the space around it, it’s a process that’s going to continue. We need to get better at managing it, and at using the space the city already occupies, both so we can move around our cities and so we can live in them. The city beast is one we have to live with—it’s up to us whether or not it runs wild.


*By which I mean however many of you actually read these occasional sound bites from my brain.

A Month with the Apple Watch

I’m one of those terrible people who opt for Mac instead of Windows, iPhone instead of Android. I have an excuse—I’ve been using Apple devices since the Mac Plus, back in the 1980s—and I’ll argue the advantages, but I know the costs too. As much as Apple devices tend to be reliable and enjoyable to use, they’re not cheap. So if I’m going to add to my collection, I don’t do it without a lot of thought.

I’ve been eyeing the Apple Watch for years now. Partly because I like having new shiny technology to play with, and partly because of my mini-ecosystem of Apple devices that it can interact with. However, the earlier versions suffered the limitations of new technology in such a small form factor, and I had cheaper options available to me. It was only with the release of the Apple Watch Series 4 a while ago that I decided the time had come. I broke open my piggy bank and availed myself of some new wrist decoration.

My resulting purchase is the 44mm Apple Watch Series 4 with a Space Grey Aluminium Case and a black Sport Loop wristband. The Series 4 represents a step up in screen quality and device speed over previous iterations, but the basic functions are essentially the same: it’s a combination of fitness tracker, mobile phone adjunct, and, well, watch.

All of these things my previous smartwatch, the late, lamented Pebble Time, also did to some degree, and its colour e-ink screen allowed around five days of battery life, at the cost of much slower responsiveness. However, Fitbit’s buyout of Pebble has finally led to support being cut off. Given that the Pebble Time no longer works with my favoured fitness app, Runkeeper, moving to the new platform was an idea whose time had come.

Initial impressions of the Apple Watch were as favourable as they usually are for Apple products. Out of the box, it paired with my phone and set about downloading watch apps to match those on my phone. The build quality is good too—a month in, and there are no signs of any scratches or damage, which is something that the plastic-bodied Pebble couldn’t boast for as long. Battery life testing revealed that it wouldn’t match the Pebble, but I get two days out of it without struggling, which feels pretty solid.

As for what it’s like in use, the responsiveness that Apple’s custom silicon provides means that simply raising your wrist (it asks during the setup procedure which hand you wear your watch on) brings it to life. Tapping the screen will do the same, and both of these actions will also wake Siri (of which more later). You can pick and choose among a wide range of watch faces, most of which are customisable in terms of look and utility. I opted for the Infographic watch face, which makes a scattering of commonly used apps and functions available through on-face “complications.”

Fitness Tracking

Fitness tracking has become a major feature of the Apple Watch because that’s what people wanted. Not only does it use Apple’s three-ring system to track calories burned, exercise duration, and hourly activity, but it also regularly reminds you to keep up a constant level of activity. I can see how these reminders (which can be turned off through the companion watch app) might become annoying, but as someone who has a tendency towards laziness, especially in the winter months, it’s a useful goad to avoid couch potato status.
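For the curious, the rings boil down to simple fractions of a daily goal. Here’s a toy model in Python (the Move goal is personalised in reality; these numbers are just the defaults as I understand them):

    # Toy model of the three activity rings; goal values are illustrative.
    GOALS = {"move_kcal": 500.0, "exercise_min": 30, "stand_hours": 12}

    def ring_progress(move_kcal, exercise_min, stand_hours):
        """Fraction of each ring closed today, capped at 100%."""
        today = {"move_kcal": move_kcal,
                 "exercise_min": exercise_min,
                 "stand_hours": stand_hours}
        return {ring: min(today[ring] / goal, 1.0)
                for ring, goal in GOALS.items()}

    # A middling day: 340 active calories, 22 brisk minutes, 9 stand hours.
    print(ring_progress(340, 22, 9))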

Whereas the Pebble Time offered only a step tracker, the Apple Watch adds GPS and heart rate tracking. (There’s even an ECG function, though that hasn’t been enabled in the software yet and may not be enabled outside the U.S.) Both GPS and heart rate tracking work well and consistently, and the battery life is good enough to use it as a sleep tracker one day out of two. One minor issue is that the glass back of the watch irritates the skin on my wrist a little—so it’s best not to wear it too tight or too consistently.

The Apple Watch also integrates well with whatever fitness apps you might be using. Not only can I activate Runkeeper from the watch, but the watch also pays attention to what I’m doing at any given moment and asks if I want to track my activity if I’ve been walking or running for ten minutes or more. As a GPS tracker, it’s great, but in Ireland the LTE version isn’t available yet, as no mobile providers support it. Which brings us to the next subject—the watch’s relationship with your phone.

Phone Companion

One of the reasons that I got the Pebble, and later the Pebble Time, in the first place was to reduce my habit of spending time looking at my phone and its notifications. That effort was … questionably successful, because while you could read the notifications on your wrist, you couldn’t respond to them. The Apple Watch actually allows that, within limits.

One of the big surprises with the Apple Watch for me is how well Siri works. Simply raise your wrist and talk, and it’ll respond. This makes simple actions like setting a timer or a reminder much quicker. You can even use Siri to dictate responses to messages, which again works much better than I expected. Certainly more quickly than the other Watch-specific option of drawing each letter out on the screen. The Apple Watch does provide canned responses to messages too, which are even quicker, if more limited (and easy to send accidentally).

The Apple Watch does a fine job of taking some basic phone functions onto your wrist. It’s not going to cure your Twitter addiction—thankfully for both you and its battery life, Twitter doesn’t work at all with the watch, beyond delivering notifications. However, if you’re looking for a way to reduce the number of times you take your phone from your pocket or bag, this could help a lot.

The Computer on Your Wrist

As for its most basic function, the Apple Watch is a fine watch. It’s not much to ask, and WatchOS doesn’t get in the way of that simplest of jobs. In fact, WatchOS is largely solid across the board, with some odd quirks that are the result of the device’s history. The field of icons that used to be how the Apple Watch’s apps were navigated is still there, just a press of the Digital Crown away, and it’s still hard to find the app you’re looking for in that grid.

For the most part though, WatchOS does a good job of easing you into using the Apple Watch, teaching you the basics of the interface in your first few minutes of use, then leaving you to play, as is standard with Apple devices. It’s not the free-standing computer on your wrist that you might want it to be, at least not in the LTE-lacking version, but it’s as close as you can get right now, and if you can forget that your phone is somewhere nearby, there’s little difference. I’ve even indulged in a few Dick Tracy moments of phone calls made through the Apple Watch, though the otherwise solid built-in speakers struggle to overcome traffic and crowd noise. My main regret is that my much-loved AirPods suffered a washing machine-related incident from which they’ve never recovered, as they seem very much designed to work with the Apple Watch.

In short, if you’ve just skipped to the end to find out: I’m pretty happy with my Apple Watch. It wasn’t cheap, but that’s why I have a piggy bank in the first place—and it’s a lot cheaper than replacing any of my existing Apple devices. A month in and I’m comfortable with having it on my wrist, with the fabric Sport Loop keeping it sitting snugly there. I haven’t even played with many of the apps and functions yet, and every few days I find another advantage or two. Thus far I’ve had few regrets buying Apple products, and while the Apple Watch might seem like the most frivolous of those purchases yet, it sees as much use in everyday life as any of them.

The Upgrade Urge—Apple’s October Event

I’m really not in a position to be buying new technology now.* I’m in the middle of a job hunt and I ought to be saving every penny while the employment market remains a fickle, teasing wretch. Why, then, did Apple run an event yesterday designed to remind me that my existing array of gadgetry is but a dusty heap of aluminium and silicon, no more than one careless step from the technological grave?

Yes, all of Apple’s announcement events are supposed to do this. But this one was personal. They specifically announced updates to (almost) every Apple product that I own, and if I find out that Tim Cook did this just to annoy me, I’m going to be … well, I’m going to be impressed. Impressed, but also annoyed.

I wish I was exaggerating. My mostly superannuated selection of Apple technology consists of my elderly Mac Mini, my much-used MacBook Air, and my relatively youthful iPad Pro (which, with accompanying Apple Pencil and Smart Keyboard case, has taken on many laptop-style duties over the past year). Everything mentioned in the last sentence saw an update at Apple’s October 30th event, and none of those updates were small ones.

Mac Mini

The oldest of my Macs is my Mac Mini, which has been sitting beside my TV since 2010. It’s done solid service over the years, and I’ve kept it youthful by feeding it as much RAM as it will take and performing some mild surgery to replace its hard drive guts with a solid-state alternative. Yet it’s been slow and clunky for some time now, its major services taken over by an Apple TV (along with my iPhone, the one Apple item I use so much that I often forget it exists), with only the lack of a viable upgrade from Apple keeping me from considering a replacement.

Well, yesterday Apple answered the prayers of us Mac Mini devotees and delivered unto us a new machine, sleek in its smallness, patterned in gunmetal grey, and pulsing with barely contained potency. In other words, it’s been so long since the Mac Mini had an upgrade that Apple was able to claim that the new one is five times faster than its predecessor. Which was a good bit faster than my much older version, so this one would blow the doors clean off if I were to opt for it.

Which I won’t. Not yet anyway. Not because I don’t want to—it’s a desirable little chunk of metal and silicon—nor because I can’t afford to—I can, I just know I shouldn’t—but because, as mentioned above, most of its main duties have now been shifted onto the much more suitable (and cheaper) shoulders of the Apple TV. While not a perfect machine in and of itself, the Apple TV is designed to work through a television and does so nicely. To the point where I’ve ditched cable TV in favour of broadband services. In the meantime, my Mac Mini remains as it is, quietly acting as a media server. It’s happy, I’m happy, and one day we shall part, but that is not this day.

I’m sure the new Mac Mini will sell well anyhow. Just not to me right now.

MacBook Air

My MacBook Air is a little younger than my Mac Mini, being of 2012 vintage. For all that, it’s still running well and speedily, courtesy of having an SSD from the start. I don’t ask too much of it these days, as the battery has long since left behind the days of offering multiple hours’ service, but when I just need to type something, it’s the go-to machine. It’s also survived an unfortunate encounter with a glass of breakfast orange juice, courtesy of a replacement keyboard and some repair guides from iFixit.com—living to suffer another day.

The MacBook Air has seen more regular updates than the Mac Mini, but in comparison to the Retina screen-enabled rest of the Mac laptop lineup, it’s been something of a red-headed stepchild for a while. Minor processor tweaks have bumped up its speed, but the budget Mac laptop was looking a little dated and cheap before yesterday. Now, though, it has been given the Retina screen users have been crying out for, as well as substantial processor and graphics speed bumps and the new butterfly keyboard that Apple is much enamoured of (though its users are more ambivalent). Available in multiple colour options and with a Touch ID fingerprint sensor, the MacBook Air finally feels like a modern laptop again.

Not that I’ll be upgrading though. For a start, all the upgrades have seen its price jump to €1,379 for the base model. Less than the ultralight MacBook or the MacBook Pro but no small beans. The price may well come down in time, but for the moment it’s not really a budget option. Second, my iPad Pro has usurped most of my MacBook Air’s functions in daily life. And that’s what I’ll get to below.

iPad Pro

Having saved up my pennies, I splashed out on an iPad Pro just over a year ago. This was very deliberately meant to be a laptop replacement—I added a Smart Keyboard and Apple Pencil to the purchase, and I made sure the device itself had plenty of storage onboard. In the year since, I’ve been more than happy with it. It’s light enough to tote anywhere, powerful enough to handle anything I care to throw at it, has a good enough battery to last for days at a time, and proved a serviceable enough typing machine for me to write a NaNoWriMo novel on it last year. All in all, I like it a lot.

This love wasn’t ended by Apple’s announcement of a new iPad Pro yesterday, but maybe it was dented a bit. The new machine is both an upgrade and a refinement: faster and better looking, with the physical Home/Touch ID button removed in favour of a Face ID camera system. The gadget ecosystem has been upgraded too, with a new Smart Keyboard case that provides a better typing experience and better protection, and a new Apple Pencil that locks magnetically to the iPad for safekeeping and charging. As is not unusual for an Apple upgrade, everything just feels a little better, a little fresher.

And I won’t be buying one. Of course I won’t—my iPad Pro is only a year old. I’m not crazy! Much though I may envy the improved gubbins that my newly aged device will never provide me with, it does everything I need it to do with aplomb, and nothing other than an excess of cash and a sudden break with reality could persuade me to spend that much on such (ultimately) minor improvements.

Technological envy is a real thing, but patience works as an antidote. The tech you’ve got will serve you well, and the longer you wait, the better the reward when you finally do decide to splash out. Whether it’s for one, six, or eight years, the upgrade urge can be resisted, no matter how well Apple targets its events at you.


*This hasn’t stopped me. I broke open my piggy bank** to add an Apple Watch to my Apple menagerie just three days ago. A review will be forthcoming once I’ve been using it for a while.

**Actually a real thing, but also actually a cow bank. It was a present from a friend. Don’t judge me.

Let’s Try This One More (Pebble) Time

[Image: an authentically scuffed Pebble Time (model’s own), wearing an optional watch face that mimics the Apple Watch.]

Depending on who you believe, the wearables revolution is underway, has not yet started, or has already failed. While I don’t wholly agree with any of those viewpoints, the fact that I’m now on my second smartwatch does suggest that I don’t agree with the last.

Those who follow this blog and are aware of my devotion to the Cult of Mac may be surprised to learn that my new toy isn’t an Apple Watch. Those who follow this blog in slightly closer detail might be able to guess what it is: a Pebble Time. Several years ago, in one of my early Kickstarter forays, I stumped up for a first-generation Pebble smartwatch, which adorned my wrist for all of a month or two.

Its short lifespan wasn’t down to the fact that it was a bad product. Beyond being a first-generation device (part of my reason for eschewing the Apple option this time) with somewhat dodgy Bluetooth wireless and limited functionality, its only major flaw was that it was uncomfortable to wear for long periods of time. Unfortunately, that’s also why it was in my pocket instead of on my wrist during a fateful bus journey, and the fact that it fell out of that pocket somewhere was entirely my fault, not the watch’s.

But enough of my ongoing habit of losing things (which seems to come in waves – whenever I lose one thing, I know that at least two more will vanish in the short-term future), what of the Pebble Time? Well, the first thing to note is that it’s much more comfortable to wear, with a slightly curved back to the watch body and a much improved rubber strap. While I’ve had to take it off for comfort reasons a few times, this has been the result of sweating due to exertion, not general day-to-day use.

So score one for the Pebble Time there. In addition, the straps now include easy-release clips that, while not as clever as the Apple Watch’s high-end version, allow access to a full range of standard-gauge watch straps.

In fact, the Pebble Time is a solid version 2.0 in a lot of ways. A colour e-ink screen replaces its monochrome predecessor, the Bluetooth connection seems much more solid, the vibration function is hard to miss, and the build quality overall seems much better, with buttons that avoid feeling too spongy despite being plastic. My gunmetal grey version has picked up a few scratches after a month or two of use, but stumping up a little more money for the Pebble Time Steel could gain you an even more polished experience.

In use, the Pebble Time is easy to figure out. Hit the single button on the left of the watch to navigate back to the home (watchface) screen. Hit the centre-right button to dive into and select menu options, and the buttons above and below it to navigate up and down through lists. Changing settings, accessing apps, changing watchfaces, and so on are trivial tasks. To actually install apps or watchfaces, you’ll need the companion smartphone app, but once installed, they can connect to the phone via Bluetooth for extra processing power. Pebble’s own app store, accessible via the smartphone app, makes installing new apps easy, but finding the app you want can be tricky, as the browsing experience feels a little haphazard.

As with any smartwatch, the Pebble Time is reliant on its connection to a smartphone. Lacking that, it can tell you the time and offer access to any standalone apps that you’ve installed, and not much more. A connected smartphone offers instant access to weather, music, and more advanced apps. For example, I’ve installed the Tripadvisor app, which can point me towards nearby restaurants or attractions, should I so desire. It’s limited, but the interface is responsive and fun to use.

In fact, fun is a good description of the Pebble Time overall. The Pebble team seem to have put a lot of thought into making their device as easy to use and enjoyable to own as possible. For example, there’s no need to install an app to use the Pebble Time as an external display for Runkeeper’s smartphone app—it happens automatically. This good user experience is further helped by the Pebble Time’s battery life, which will stretch out to five days. Changing your watchface to one that displays the battery level will help you keep an eye on that, but it’s far ahead of the Apple or Android watches in this regard at least.

Some commentators have raised concerns that transferring notifications from the phone to the wrist just makes us even more tied to that “always on” mentality. The reality has proven very different: an initial flood of notifications trained me to turn off any that weren’t vital. Moreover, even though vibrations on your wrist make notifications hard to miss, it’s far quicker and easier to dismiss them by glancing at summaries on a watchface than by digging your phone out of your pocket or purse to see what they are. Yes, notifications probably aren’t wholly a good thing, but if you’re going to opt in, this is the way to go.

I haven’t tested the limits of what I can do with the Pebble Time yet, having only installed a couple of apps and watchfaces. Though far less powerful than the Apple Watch or its Android competitors, the Pebble Time offers plenty of variety in terms of what it can already do. Everything I’ve experienced with it so far has made me pretty happy with my purchase. The cluttered app store is the only minor fly in the ointment; everything else is pleasingly smooth.

You Are The Product

[Image: our grim, multicolour, flashing future, with invisible close buttons.]

One of the quieter announcements that Apple made during its Worldwide Developers Conference recently was that its forthcoming OS X and iOS updates will give developers the ability to create “Content Blockers” for its mobile and desktop Safari browsers. While the exact content blocked is up to the developer (you could block the Daily Mail if you want—and you really should), no one is under any illusions as to the target of this feature: ads.
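From what Apple has said, and from the beta documentation doing the rounds, a Content Blocker isn’t extension code that runs on every page; it’s a declarative list of trigger/action rules in JSON that Safari compiles ahead of time. Here’s a sketch in Python that generates the sort of rule list involved (the filter patterns are made up for illustration):

    # A guess at a minimal Safari content-blocker rule list, based on the
    # trigger/action JSON format in Apple's beta documentation. The filter
    # patterns below are made-up examples, not real ad hosts.
    import json

    rules = [
        {   # block any request to a hypothetical ad server
            "trigger": {"url-filter": "ads\\.example\\.com"},
            "action": {"type": "block"},
        },
        {   # or hide matching page elements instead of blocking requests
            "trigger": {"url-filter": ".*"},
            "action": {"type": "css-display-none", "selector": ".banner-ad"},
        },
    ]

    print(json.dumps(rules, indent=2))  # the JSON handed over to Safari

The declarative approach matters: because Safari bakes the rules into its own filtering, the blocking is fast and the blocker never gets to watch your browsing, which fits neatly with Apple’s privacy pitch.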

Ad blockers already exist as extensions to most existing browsers, but the majority of users probably don’t avail of them, especially in the mobile sphere. Apple’s decision to push this feature and the reaction to its quiet announcement says a lot about the current state of the mobile web and the dominant role that ads have come to play in it.

Faster computers, mobile phones, and broadband have contributed to a smoother experience online, but this has helped to hide the fact that a huge proportion of what we download is ads and tracking scripts. Blocking out most of that would massively speed up browsing, with the added benefit of cutting down on the amount of activity tracked online.

As pleasant an outcome as this sounds though, there’s a downside: online advertising pays the bills for many of the sites we enjoy, including plenty of smaller content providers. Killing off that revenue stream would likely kill off a lot of the richness and the niche content that the web provides. It’s not like the people running these sites are preying on you through the ads either—they’re often stuck in a system that they don’t like, making do as well as they can.

The power blocs behind online advertising shape the nature of the system and the conflict that Apple is wading into. Facebook and Google ensure that web users have access to a wealth of content for free by harvesting their data and selling it, funnelling a portion of the proceeds back to the content creators in order to keep the whole thing going. This is the system that’s driven the growth of the web to date, but it’s far from perfect, and website bloat is just one of the symptoms.

Whatever your thoughts on Apple—you won’t look at the comments on any Apple article on a non-Apple site for long before you find mentions of ‘iSheep’ or ‘Apple fanboys’—the fact is that Apple’s profit model is completely different from that of the web titans. Apple’s iAd platform is distinctly small scale, and its money comes from the sale of hardware instead. Providing an improved user experience is vital to enhancing the sale of that hardware, and enabling the creation of content blockers achieves that.

Safari is a minority browser on desktop machines, but it’s a major player on the mobile web, and iOS users are a lucrative market. So while this initiative isn’t going to bring down the current financial underpinnings of the mobile web, it’s a definite shot across Google’s bows. (Less so Facebook, which has a closed ecosystem of its own that it can profit from.) For users, it raises the question: are we happy to receive things for free, with the understanding that we’re being sold in return, or are we going to accept that we should pay for the things we value?

There’s an interesting parallel to be seen in the U.K., where the Tory government is once again taking the knives to the public broadcaster, the BBC. U.K. TV viewers have long been accustomed to paying a TV licence, and in return they’ve gained a globally renowned service that entertains, educates, and informs. Of course, a publicly funded broadcaster has a head start over its private sector rivals, and in the view of the Tories, that’s unfair competition. Speaking as someone who was raised on a diet of BBC television, I’d call the licence fee a small price to pay, but the future that the private sector envisions could look a lot like YouTube: ostensibly free to use but scattered with pop-up ads and laden with user tracking.

When we’re offered something free, it’s convenient to overlook the fine print. We’re okay with being the product as long as it’s not thrown back in our face. It would be nice to think that users’ desire for speed and convenience would eventually find a balance with providers’ need for compensation. But while Apple’s intervention might be to our benefit, Apple doesn’t fund web content, only content provided through its own app store ecosystem. So if its ad blocking does gain traction, either we’re going to have to learn to pay for our content or some of those content providers are going to go under.

Watching the Apple Watch

[Image: the Apple Watch range. All these can be yours, for a (to be determined) price.]

I’ve been an Apple user long enough that the company’s regular keynote events are a recognisable form of entertainment. Unusually, I didn’t watch this week’s well-publicised event until the day after it happened. (Possibly a good thing given the problems that the live streaming coverage faced.) By that time I’d already read enough of the media reaction to know exactly what I’d be seeing. Spoilers aren’t really the point with an event like this.

The first part of the event, to be fair, had already been well spoiled by leaks. Enough prototype parts and schematics had trickled out from Apple’s supply chain that only a few details remained to be filled in about the new iPhones. The 6 and 6 Plus looked much as expected and neatly relegated last year’s 5s and 5c to the minor places in Apple’s product lineup. A one-year-old free (on contract) iPhone is a better trick than a two-year-old free offering, but the 6 and 6 Plus are now the stars. The former seems the better bet, though the 6 Plus has its own appeal if you can handle its unwieldy dimensions—in its case, battery life and an improved screen are bigger draws than the optical image stabilisation of its slightly protruding camera.

Next up after the phones was something only hinted at in pre-show leaks: Apple Pay. A solution to the hassle of everyday credit card payments, it positions Apple well in the race to make commercial life more convenient. It’s the biggest leveraging to date of Apple’s credit-card-enabled iTunes customers, bringing together a lot of pieces (iBeacon, Passbook) that Apple has been putting into place for some time now. However, given that it’s only usable with NFC-enabled devices (both the iPhone 6 and 6 Plus, as well as this article’s titular device) and is only to be deployed in the U.S. for the moment, its reach will initially be limited. Over the long term though, it could well be the most important announcement of the entire show.

Last up was the fabled “One More Thing,” returning to a very warm welcome from the crowd. This was, of course, the Apple Watch, likewise rumoured in the media but barely even glimpsed in advance of the show itself. It seems that Apple’s secrecy can still hold when they really need it to.

A handful of smartwatches have already hit the market. I owned one briefly, in the form of the Pebble, but most of those on sale now run Google’s Android in one form or another, and yet more are on the way. If the Apple Watch is going to be a success, Apple’s going to have to repeat a trick it’s already pulled with the iPod, iPhone and iPad: entering an existing but nascent market and turning it upside down. So has it done so?

Well no, not yet, if only because the Apple Watch won’t be released until early next year and many important facts about it remain uncertain. But at least one watch-industry watcher has been impressed by the unveiling, not just for its implications for the smartwatch industry but for watches in general.

Whereas its competitors seem to have focused primarily on providing an adjunct to their Android phones, Apple is coming from the other direction. The Apple Watch is tethered to the iPhone (or possibly the iPad too?), true, but it’s as much a fashion accessory as it is a computing accessory. That Apple paid attention to what people might actually want to put on their wrists shows in the simplest detail of the Apple Watch: it comes in two sizes, small and large. Just like non-smart watches do.

Physically, it’s arguably more attractive than any of the other smartwatches already out there. More importantly to potential buyers, it’s massively customisable, more so than any other Apple product before it. Between size, colour and strap type, as long as you fancy having an Apple Watch on your wrist, you’ll be able to make it look exactly the way you want it to. Moreover, Apple has gone to great lengths to design its straps so that you can fit and adjust them yourself, rather than heading to a jeweller to have it done for you, as is the case with several of the Android smartwatches.

As for the software, it certainly looks the part, with Apple once again tailoring an operating system to suit the device. The Apple Watch has a touch screen, but given that any touching finger would obscure a significant portion of the screen, it also has a “digital crown,” refashioning the traditional watch crown into a multifunction control wheel with an integrated home button. Another button devoted to bringing up a “favourite contacts” screen is a reminder that the Apple Watch, above all else, is meant to leverage the power of its linked iOS device, faster and with greater ease than ever, and preferably without needing to take it out of your pocket or bag.

As for whether I plan to get one or not, that depends. Depends on the battery life of the final device and the price of the various options. Depends on whether or not the eventual software manages to live up to the promises of the keynote speech. For, whatever else it may be good at, Apple is very good at selling its devices as objects of desire. I’ll be looking out for reasons not to break open the piggy bank come early 2015. It’s up to Apple to match its own hype.

Until then, I have iOS 8 (coming next week) and OS X Yosemite (coming a little later) to refresh my own devices, making them seem like new again. There’s a new U2 album as well, offered as an awkward freebie at the end of the keynote, but that can’t really compete as an attraction. After all, what we get for free, we never really appreciate as we should.

No More Internet Until You Learn to Behave

[Embedded video: John Oliver’s Last Week Tonight segment on Ferguson. Be warned – there’s a lot of unpleasant imagery in it.]

John Oliver’s rants on Last Week Tonight are becoming destination television for me. Or at least destination YouTube-ery. For his new show, the former Daily Show correspondent has replaced that show’s hit-and-miss interview segment with an extended single-topic rant, delivered as only a pissed-off English gentleman can and filled with truth bombs. The World Cup/Fifa rant is a classic already, but the more recent diatribe on the Ferguson affair had a particularly perfect closing line.

“If [the police] can make it through a whole month without killing a single unarmed black man, then, and only then, can they get their f**king toys back.”

Infantilising your opponents is no way to engage in a debate. But it’s so bloody hard to resist when they insist on acting like three-year-olds throwing a tantrum. Take the response to Anita Sarkeesian’s latest Tropes vs. Women video, in which she dissects the often extremely unpleasant treatment that video games have doled out to women over the years. I don’t agree with everything Sarkeesian says, but what I’d love is the chance to talk to someone about it and debate the issues she raises. Unfortunately, the people who have responded by hurling abuse and issuing threats of murder and rape are not interested in anything other than silencing a voice that annoys them.

Let’s be clear: there is no excuse for this. Anyone who did this in person, in a newspaper, on television, or in any other medium would be shunned, shut down and perhaps even arrested. So why does it happen so regularly on the Internet, and why does it seem to happen particularly often around video games? As for the Internet, the obvious answer is the anonymity that being online provides. The less obvious answer is that this anonymity facilitates communities of like-minded souls, just as the white hoods of the Ku Klux Klan did in the not-so-distant past.

Why video games, though? That’s harder to unpick. The industry, both producers and consumers, has been predominantly male for most of its history, and this has enabled attitudes towards women that are proving very hard to shake. Anita Sarkeesian’s videos may depict only some of the symptoms of this problem, but she has a huge amount of material to work with. To truly dig into the gender issues in video games (which are just an outgrowth of the issues in society as a whole) will take a lot more than a series of videos on YouTube.

I wonder, though, if there isn’t something to video games themselves that encourages this mindset. When Valve’s Half-Life 2 debuted its physics engine, giving us the ability to play with physical objects, it was just a more sophisticated version of what games had been allowing players to do for years: play with every interactive object in their arsenal. And in games, there’s no real difference between people and things. Both can be shot, thrown, punched and manipulated if the game designer allows it.
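To make that uniformity concrete, here’s a minimal sketch of how many game architectures model the world. It isn’t drawn from Half-Life 2 or any real engine, and every name in it is hypothetical, but it captures the shape of the thing: one base class, so a crate and a bystander answer to exactly the same calls.

```python
# A toy entity system, sketched only to illustrate the point above: people
# and props are instances of the same base class, and the physics and damage
# code treats them identically. All names here are hypothetical, not taken
# from any real engine's API.

class Entity:
    def __init__(self, health: int, mass: float):
        self.health = health
        self.mass = mass

    def apply_impulse(self, force: float) -> None:
        # The physics code doesn't care what it's shoving.
        acceleration = force / self.mass
        print(f"{type(self).__name__} knocked back at {acceleration:.1f} m/s^2")

    def take_damage(self, amount: int) -> None:
        # Nor does the damage code care what it's hurting.
        self.health = max(0, self.health - amount)


class Crate(Entity):
    pass


class Bystander(Entity):
    pass


# The same two calls, whether the target is a thing or a person.
for target in (Crate(health=10, mass=20.0), Bystander(health=100, mass=75.0)):
    target.apply_impulse(300.0)
    target.take_damage(25)
```

That design is convenient for the programmer, which is exactly the point: nothing in the code distinguishes a person from a prop unless a designer deliberately builds that distinction in.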

As games moved into the multiplayer era, this mindset didn’t change. The ranting, foul-mouthed Halo player, often teenage or younger, is something of a cliché. I’ve yelled at single-player games when things have gone badly for me, in a way I wouldn’t dream of doing to another human being. But if you’ve been trained to see your opponents as no more than sophisticated versions of computer-generated enemies rather than human beings, what’s to stop you from screaming abuse at them too?

One article on this topic nailed it for me: “There’s a fundamental lack of empathy or understanding for other human beings at play here.” I consider myself lucky to have played (and preferred) games where face-to-face contact with other human beings was a necessary part of the experience—roleplaying games and board games. How many of those who hurl the vilest kind of abuse at Anita Sarkeesian and anyone who dares to stand up for her make it a habit to engage with people who might challenge their point of view?

It’s a pointless argument to say that not all men are like this, not all gamers are like this, not all game creators are like this. In any community, from the global to the local, there are always those who take the opportunity to disrupt and destroy where they can. Every community has to figure out how to deal with this element. On the Internet, the goal of freedom of expression is colliding painfully with the notion that everyone ought to be free to make use of this new medium. In the corner devoted to video games, the howling mob is doing its best to ensure that the common space is shaped according to its preferences. I can’t imagine that it will win in the long term, but how much pain is going to be inflicted before humanity prevails?