Last updated: September 18, 2021 12:22 AM (All times are UTC.)

September 16, 2021

I watched this week’s Apple Event for a while, but there was nothing that interested me: I have a mobile phone that works fine for me, I don’t need a watch, and I can’t afford a new computer.

But here’s one Apple event speech I genuinely found energising. In 2007, Steve Jobs made a bold announcement at Apple’s developer conference that I still find inspiring today:

Transcript

Now, what about developers? We have been trying to come up with a solution to expand the capabilities of iPhone by allowing developers to write great apps for it, and yet keep the iPhone reliable and secure. And we’ve come up with a very sweet solution. Let me tell you about it.

So, we’ve got an innovative new way to create applications for mobile devices, really innovative, and it’s all based on the fact that iPhone has the full Safari inside. The full Safari engine is inside of iPhone and it gives us tremendous capability, more than there’s ever been in a mobile device to this date, and so you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone!

And these apps can integrate perfectly with iPhone services: they can make a call, they can send an email, they can look up a location on Google Maps. After you write them, you have instant distribution. You don’t have to worry about distribution: just put them on your internet server. And they’re really easy to update: just change the code on your own server, rather than having to go through this really complex update process. They’re secured with the same kind of security you’d use for transactions with Amazon, or a bank, and they run securely on the iPhone so they don’t compromise its reliability or security.

And guess what: there’s no SDK! You’ve got everything you need, if you know how to write apps using the most modern web standards, to write amazing apps for the iPhone today. You can go live on June 29.

On open Operating Systems, Progressive Web Apps live up to this promise: truly cross-platform code that can be responsive to any form factor, using a mature technology with great accessibility (assuming a competent developer), that is secure and sandboxed, and that requires no gatekeepers, developer licenses or expensive IDEs. They’ll work on Android, Windows, ChromeOS and Mac.

But 14 years after Jobs had this bold vision for the open web, iOS hasn’t caught up. Apple has imposed a browser-engine ban on iOS. Yes, there are “browsers” called Chrome, Edge and Firefox that can be downloaded from the App Store for iOS, but they share only branding and UI features with their fully-fledged counterparts on open Operating Systems. On iOS, they are all just differently-badged skins for the buggy, hamstrung version of WebKit that Apple ships and occasionally patches for security (often waiting long after WebKit itself has been fixed before pushing the patch to consumers).

Apple knows Safari is terrible. SVP of software Eddy Cue, who reports directly to Tim Cook, wrote in 2013:

The reason we lost Safari on Windows is the same reason we are losing Safari on Mac. We didn’t innovate or enhance Safari….We had an amazing start and then stopped innovating… Look at Chrome. They put out releases at least every month while we basically do it once a year.

Forcing other iOS “browsers” to skin Safari’s engine rather than use their own more capable engines is a deliberate policy decision. Apple’s App Store guidelines state:

Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.

Jobs’s 2007 speech felt like a turning point: a successful, future-facing company really betting on the open web. These days, Apple sells you hardware that it claims will “express your individuality” by choosing one of two brand-new colours. But for the web, choose any colour you want, as long as it’s webkit-black.

Some of us are trying to change this. Earlier this month I was part of a small group invited to brief the UK regulator, the Competition and Markets Authority, as part of its investigation into

Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store.

You can watch the video of my presentation, and see Stuart Langridge’s slides.

September 15, 2021

September 13, 2021

September 09, 2021

Shop cop by Andy Wootton (@WooTube)

Words interest me. As I’ve been getting interested in green woodworking and the appropriate tools, I’ve become more aware of the US term “shop”. A few days ago, in a discussion of why America is lagging behind on lower-level tech education, someone suggested schools needed to reintroduce ‘shop’, to get kids interested in designing and making things. I think young Brits would know this as ‘Design Technology, Resistant Materials’. I did ‘Woodwork’ and ‘Metalwork’.

America takes its cars ‘to the shop’ because their ‘garages’ are being used as ‘car parks’. Obviously, it means ‘workshop’.  A shop where you buy work, a service instead of a product, as you walk down main street. Not at all like a job-shop in Britain where we try to sell spare labour to employers, or shop for jobs, if you prefer.

I came across a meaning from the world of share trading. A stock may be ‘shopped’ – actively sold. I think we’re finally getting to the meaning. A shop is a place (real or virtual) where goods, services and labour are traded, in either direction. A market stall. I almost wrote “store”, but stores are where you keep things before selling, or when they haven’t sold.

I was slightly side-tracked by coppers, copping criminals and taking them to the cop shop, trading their liberty for their crimes. My government assures me that if I haven’t committed a crime I won’t cop it. I wish I believed them. That’s the cost of lying in a market that depends on trust and information. Their share price is down.

September 06, 2021

On 4 March this year, the UK Competition and Markets Authority announced that it is

investigating Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store.

Submissions from the public were invited, so I replied and was invited, with “Clark Kent” (a UK iOS Web-Apps developer who wishes to be anonymous) and Stuart Langridge to brief the CMA in more detail. We were asked to attend for 2 hours, so spoke for one hour, and then took questions.

The CMA are happy for our presentations to be published (they’re eager to be seen to be engaging with developers) but asked that the questions be kept private. Suffice it to say that the questioning went over the allocated time, and showed a level of technical understanding of the issues that surpasses that of many engineers. (The lawyers and economists knew, for example, that iOS browsers are all compelled to use WebKit under the hood, so Chrome/Firefox/Edge on iOS share only a name and a UI with their fully-fledged real versions on all other Operating Systems.)

Clark’s presentation

Clark talked of how Steve Jobs’ 2007 announcement of apps for iPhone inspired him to use his webdev skills to make iOS web-apps.

you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone. And these apps can integrate perfectly with iPhone services … And guess what? There’s no SDK that you need! You’ve got everything you need if you know how to write apps using the most modern web standards to write amazing apps for the iPhone today.

Unfortunately, Safari’s bugs made development much more expensive, by breaking IndexedDB, breaking scrolling, and so on. Also, Apple refuses to support Web Bluetooth, so his customers had to purchase expensive iOS-compatible printers for printing receipts, at 11 times the cost of Bluetooth printers; native iOS apps, however, can integrate with Bluetooth.

He can’t ask his customers to jettison their iPads and buy Android tablets. Neither can he ask them to use an alternative browser, because there isn’t any meaningful choice on iOS. Clark simply wants to be able to make “amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone”, as Jobs promised. He summed up:

All these business costs inevitably get passed on to consumers. Or worse, applications simply never get built, due to the increased cost.

From the perspective of software developers there is no browser competition on iOS.

Apple has banned all the other browser engines, and Safari’s bugs and missing features mean that web-apps struggle to compete with the App Store. If third-party browsers were allowed, with full access to all the APIs that Apple gives Safari, this would give Apple an incentive to keep pace with the other browsers, and give consumers a means of voting with their feet by moving to a competing browser if Apple failed to do so.

This would then lead to a universal open app development and distribution platform, which would substantially reduce development and maintenance costs for a wide range of business and consumer applications. The open web offers an alternative future, where the control from corporate intermediaries, along with their fees, is replaced with consumer choice and the freedom to easily shift from one platform to another. Instead of having to write multiple separate applications for every device, it allows developers to build their application once and have it work on all consumer devices, be it desktop, laptop, tablet or phone.

Write once, deploy everywhere, no gatekeepers, no fees. Just as Steve Jobs originally promised.

Stuart’s presentation

Stuart examined Apple’s assertion that the lack of real browser diversity on iOS is actually better for consumers’ privacy and security.

My presentation

Here’s my captioned video. Below is the marked-up transcript with links. Note: this submission is my personal opinion and does not represent the opinion of any of my past or present clients or employers.

Transcript and Links

Hi, I’m Bruce Lawson, and I’m here to talk about Progressive Web Apps and iOS. I’ve been mucking around on the web since about 2002, mostly as a consultant working on expanding people’s websites to be truly worldwide, so they’re performant and work with constrained devices and networks in emerging markets. I’m also involved with making websites that are accessible for people with disabilities. And I was on the committee with the British Standards Institution writing BS8878, the British standard for commissioning accessible websites, which last year got rolled into the ISO. And before that, I was deputy to the Chief Technical Officer at Opera Software, which makes a popular web browser used by about 250 million people worldwide, predominantly in emerging markets.

Most of my customers are in the UK, USA, or Israel, and they’re largely focused on UK and US markets. In the month of June this year, StatCounter showed that 30% of all US web traffic was on iOS, 28% Windows, 22% Android. And drilling down into just mobile browsers, 57% were iOS and 42% were Android.

In the UK, 30% of people use Windows to browse the web, 27% iOS, 25% Android. And that jells pretty well with stats from GDS, the Government Digital Service. Their head of front-end, Matt Hobbs, tweets some stats every month. He said that in June, iOS accounted for 44% of traffic, Android 28%. (Although he does note that because you have to opt in to be tracked, the stats tend to over-represent mobile browsers.) But we can see that in the UK, 50% of mobile traffic is on iOS and 49% is Android.

So, we obviously have to be concerned with iOS as well as Android, Windows, et cetera, and all the different browsers that are available on every non-iOS operating system. And so, I always recommend to my customers that they use the web. After all, that’s what it’s for.

The principle of universality allows the web to work no matter what hardware, software, network connection or language you use, and to handle information of all types and qualities. This principle guides Web technology design

And that was said by Sir Tim Berners-Lee, and his parents were both from Birmingham, as am I. So he’s practically a relative. And with my work in web standards, those principles still hold true for the web standards we make today. So, I wanted to use the web, and I wanted to use what’s called a Progressive Web App, which is basically a website-plus. It’s a website, and its web pages each have a URL. It’s built with web technology, but in modern, conforming browsers it can look and feel more app-like.

In fact, the UK government itself recommends these. Our government writes,

advances in browser technology have made it possible to do things on the mobile web that you could previously only do with native apps. Services that take advantage of these modern browser enhancements are often called mobile web apps or Progressive Web Apps. The user experience of PWAs on a handset is almost identical to a native app, but unlike native apps, PWAs have a single code base that works on the web, like any normal website. A benefit of PWAs over native apps for the government is that there is no need to maintain lots of different operating system versions of the same app. This makes them relatively inexpensive to develop compared to native apps.

And just like government, business likes to reduce costs, and therefore can reduce costs to the end user. Our government goes on to say,

users can choose to install PWAs on any device that supports the technology. If a user chooses to install a PWA, they typically take up less space than the equivalent native app.

And this is very important in emerging markets where lower priced phones have lower specifications and less storage space. The government continues,

PWAs cost less than native apps since they need to be only developed once for different platforms. They are also forward compatible with future upgrades to operating systems

And then it lists browsers that do support PWAs, including Apple Safari on iOS, but this is only partially true. Apple themselves make this claim in their February 2021 submission to the Australian investigation into monopolies. Apple said of PWAs,

Web browsers are used not only as a distribution portal, but also as platforms themselves, hosting progressive web applications that eliminate the need to download a developer’s app through the app store or other means at all

Therefore, Apple is suggesting that PWAs are a viable alternative to the app store. Apple continues,

PWAs are increasingly available for and through mobile based browsers and devices, including on iOS [my emphasis]. PWAs are apps that are built using common web technology like HTML5, but have the look, feel and functionality of a native app

But, as we will see, this isn’t true. PWAs do not have the same functionality as a native app.

Let’s recap what a PWA actually is. It’s a bit of an umbrella term, but fundamentally it’s a website that can install to the home screen of your mobile device, just like a native app. It can have its own icon defined by the web developer, and launch full screen, portrait or landscape, depending upon what the developer chooses. But it lives on the web.
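
All of that home-screen behaviour (the icon, the full-screen display, the orientation) is described in a web app manifest: a small JSON file linked from the page with <link rel="manifest" href="/manifest.webmanifest">. A minimal sketch, with hypothetical name, paths and colour:

{
  "name": "Bruce's Blog",
  "short_name": "Bruce",
  "start_url": "/",
  "display": "standalone",
  "orientation": "portrait",
  "theme_color": "#663399",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}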

And that last point, that it lives on the web, is really important, for several reasons. Firstly, because it lives on the web, you’re not downloading everything in one go. A native app is like downloading a whole website all at once. With a PWA, you download just the shell, and as you navigate around, it fetches any content it doesn’t already have from the web server. Another advantage is instant publication: the moment you press the button and it goes on your web server, the next person who opens your app on their device will see the new version. Great for security updates and also great for time-sensitive material.

Another great thing, of course, is that because there’s no app store, there’s no gatekeeper. Smaller shops don’t have to pay the 15 to 30% tax to Google and Apple. And for industries that are not compatible with the app stores, such as adult entertainment, gambling, et cetera, it’s a mechanism for offering an app when the app store is closed to them. And PWAs can work offline.
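
That offline ability comes from a service worker: a script the browser runs in the background that can cache the app shell and answer requests from that cache when the network is slow or absent. A minimal sketch (the file names are hypothetical):

// sw.js: cache the app shell on install, then serve cache-first
const CACHE = 'shell-v1';
const SHELL = ['/', '/styles.css', '/app.js'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache if we have it; fall back to the network if not
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});

The page registers it with navigator.serviceWorker.register('/sw.js'), typically guarded by an if ('serviceWorker' in navigator) check, so older browsers just carry on as a normal website.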

Let’s briefly look at one. I’m here speaking in a personal capacity. I don’t represent any of my clients or employers, so I can’t show you one of their apps, but I took 15 minutes to convert my WordPress blog into a PWA.

Okay, here, I’m using Firefox on Android. Android, unlike iOS, allows not only differently-badged browsers, but the full browsers themselves, so Firefox on Android is using Firefox’s own Gecko engine. Here, Firefox and Android are collaborating: they can tell when a user goes to my website that it is a PWA, and they offer the visitor the opportunity to add it to the home screen. You can see it says “you can easily add this website to your device’s home screen to have instant access and browse faster with an app-like experience”. As a developer, I didn’t make this, and the user doesn’t have to know anything, because the operating system and the browser just show it. If the user chooses to add it to the home screen, they can click on the icon and it will be added automatically, or else they can drag it themselves to where they want it.

And here, as you can see, it’s now on my Android home screen. There’s a little Firefox icon at the bottom of my logo to tell the user it’s a PWA. Chrome doesn’t do this, but it does also detect that it’s a PWA and offer to add it to the home screen. And as you can see from the screenshot on the right, when they tickle that icon into life, up pops my website full screen, without an address bar, looking like a native app. If, however, a user goes to my website in a browser that doesn’t support PWAs, such as UC Web here, they don’t get offered the add-to-homescreen thing; they just see my website and navigate it as normal. So people with capable browsers get an app-like experience, everybody else just gets the website, and nobody loses out.
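
(Incidentally, on Chromium-based browsers a site can take over this install flow itself, via the non-standard beforeinstallprompt event, and offer its own install button. A rough sketch, where installButton is a hypothetical element on the page:)

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();       // suppress the browser's automatic prompt
  deferredPrompt = event;       // stash the event so we can trigger it later
  installButton.hidden = false; // reveal our own install button
});

installButton.addEventListener('click', async () => {
  deferredPrompt.prompt();                             // show the install dialog
  const { outcome } = await deferredPrompt.userChoice; // 'accepted' or 'dismissed'
  installButton.hidden = true;
});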

On iOS it’s subtly, but importantly, different.

So on iOS, if you go to my website, you’re not told anything about this being installable. You have to know to click that icon in the bottom bar that I’ve circled in red. And once you’ve done that, you get the screen on the right. Then you just have to know, as a user, to scroll down, whereupon you’ll see “Add to Home Screen”, with no instructions or explanation of what’s going to happen. Anyway, once you click that, then you confirm, and finally it’s on the homescreen. And when you press it, it launches full screen without an address bar. But even though it’s much harder to discover, it still can’t do everything that a native app can do.

This is Echo Pharmacy. It’s a service that I’ve been using during the pandemic. It’s nothing to do with my employers or clients; I’m just a customer. I use it to get my repeat prescriptions, and it has worked really well. And on the confirmation screen, after I’d ordered some more medication, it said, “Hey, why not use our app?” And I thought, well, why would I need an app, when this website has worked perfectly well for me? So I asked on Twitter, “Why does it need me to download the app?” The website allows me to place and track my meds; why clog up my device with a download? Is it for push notifications? Because on iOS, websites and PWAs cannot send push notifications. And for a pharmacy like this, it’s important to be able to say, “It’s 8:00 PM, have you taken this tablet?” Or, “We notice that in four days’ time you’re going to run out of this medication. Please come back and order.”

And I got a reply from the CTO of Echo Pharmacy saying,

Hey Bruce, you’re right about push notifications, which for some people are preferable to email or SMS reminders. Beyond that our app and website have feature parity, so it’s really just about what you prefer.

In other words, these people at Echo Pharmacy not only have a really great website, but they also have to build an app for iOS just because they want to send push notifications. And perhaps ironically, given Apple’s insistence that they do all of this for security and privacy, if I did choose to install this app, I would also be giving it permission to access my health and fitness data, my contact info, my identifiers, sensitive info, financial info, user content, user data and diagnostics. Whereas if I had push notifications and I were using a PWA, I’d be leaking none of this data.
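
For context, here’s roughly what the push machinery looks like in a browser that does support the Push API (which no browser on iOS can, because they’re all Safari underneath). The server key and endpoint below are placeholders:

// Ask permission, then subscribe for pushes via the service worker
const VAPID_PUBLIC_KEY = new Uint8Array([/* your push server's public key bytes */]);

async function enableReminders() {
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return;

  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: VAPID_PUBLIC_KEY
  });

  // The server stores this subscription and uses it to push the reminders
  await fetch('/api/save-subscription', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
}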

So, we can see that despite Apple’s claims, I cannot recommend a PWA as an equal experience on iOS, simply because of push notifications. But it’s not just hurting current business; it’s also holding back future business. A lot of my clients are looking to expand into new markets. The UN has said that by the end of this century, 50% of the population of the world will live in these 10 countries. And as you can see, only one of them is a traditional market for a UK business: the US. And it’s obvious why people want to do this. While the population of the “west” is not increasing, it’s increasing a lot in new markets.

“The Southeast Asian internet economy is expected to reach $200 billion by 2025”, for example, but all countries are not equal in the technological stakes. Here, for example, is the percentage of monthly income that it takes to buy one gigabyte of mobile data. In the UK, a gigabyte (not actually that much data) costs me 0.16% of my monthly income. In India, it’s a little less, at 0.05%. In the States, 0.1%. In Rwanda, where one of my customers is doing great business, it’s 0.5% of monthly income. In Yemen, where I was born, it’s very nearly 16% of monthly income. So, big downloads cost a lot of money, particularly for something like a medication tracking system, which I might use only once every quarter. It’s a heck of a lot to download at these prices.

VisualCapitalist.com says that, worldwide, the highest average cost for one gigabyte of data is 30,000% more than the cheapest average price. And it’s not just about data costs either; it’s about data speed. Here in the UK, for example, the median speed is 28.51 megabits per second. In the States, it’s 55. In Hong Kong, it’s 112. In Rwanda, it’s 0.81. In Cambodia, it’s 1.29. In India, it’s 4. In Indonesia, it’s 1.88.

So, a big download not only costs potential customers more, but it takes a lot longer. Facebook recognized this when they released their Facebook Lite app. Facebook said

downloading a typical app with a 20 megabyte APK can take more than 30 minutes on a 2G network, and the download is likely to fail before completion due to the flaky nature of the network

Twitter released something called Twitter Lite, which is now just everybody’s Twitter. Twitter Lite is a PWA, and Twitter said

Twitter Lite is network resilient. To reach every person on the planet, we need to reach people on slow and unreliable networks. Twitter Lite is interactive in under five seconds over 3G on most devices. Most of the world is using 2G or 3G networks. A fast initial experience is essential

So, if we could offer PWAs, we would have a competitive advantage over other people, with much faster and much cheaper installation. Penny McLachlan, who works as a product manager at Google, said,

A typical Android app size [a native app, in other words] is about 11 megabytes. Web APKs, TWAs [this is technical speak for a Progressive Web App], are usually in the range of 700 to 900 kilobytes

So, only about 10% of the size of an Android app. We can see that PWAs are a competitive advantage for people trying to do business in the wider world: post-Brexit, global Britain, et cetera.

It’s no coincidence that the majority of the early PWAs that we saw coming out in 2016, 2017 came out of Asia and Africa, and that’s because when people who make websites and web apps use the same devices and the same networks as their consumers, they absolutely understand why PWAs are great.

But, we can’t make a PWA because we have to care about Safari, which is more than 50% of our traffic in our home market. So, what can I advise my clients to do? Well, a lot of them have settled on something called React Native. They can’t afford to have one developer team for Android, one developer team for iOS, and one developer team for web.

If you’re a massive organization such as Google or Amazon, of course you can afford that, but developers ain’t cheap, and three teams is prohibitively expensive for a smaller organization. React Native, which is open source (but stewarded by Facebook), promises that you write once and it works across platforms: you write in a JavaScript framework called React, and it compiles down to native applications. So you write your React code, press a button, and it squirts out a native Android app and a native iOS app. And it’s pretty good, I guess.

There are pros and cons, of course. The pros: we write just once. Great. The cons: well, it still emits a native app, so we still have to submit to app stores, and the timing of that is at the whim of whoever’s moderating the app store, et cetera. It’s still a big download because it’s a native app.

It’s a more complex build process. There’s not a great deal of build to deploying on the web, but React Native requires lots of moving parts internally. It’s harder for us to test the functionality, because although we write it once, we have to test it on iOS and we have to test it on Android, and Android testing isn’t as simple as iOS, because there’s a massive difference between a top-of-the-range Google Pixel device or the expensive Samsung devices, and the kind of no-name cheap Android devices you can get in street markets in Bangkok and Jakarta and Mumbai.

And a problem for me, and for one of my clients, is accessibility. Accessibility is making sure that your app or your site can be used by people with disabilities. Apple and Google have done great jobs making iOS and Android accessible, but React Native has some quirks under the hood, which make it much trickier.

One of the reasons for this is that React Native is quite a new technology, whereas the web is a mature technology, and accessibility on the web is pretty much a solved problem. It isn’t solved if developers don’t do the right thing, but Chrome is 98.5% compliant with the W3C guidelines for accessibility. Microsoft Edge is 100%. Firefox is 94%. Safari is 97%.

But because we’re writing in React and it’s doing magic tricks under the hood, there are holes in its accessibility. Facebook themselves say,

we found that React Native APIs provide strong support for accessibility. However, we also found many core components do not yet fully utilize platform accessibility APIs. Support is missing for some platform specific features.

It’s true that support is missing. For example, suppose you have an external keyboard plugged into an Android device, perhaps because you have Parkinson’s, or (like me) you have multiple sclerosis and you prefer a real keyboard to the tiny little software keyboards. With an external keyboard, you cannot focus into any input field made with React Native [on Android] to enter information. This, of course, is a huge problem.

Facebook has put some effort into writing up all of their accessibility problems. Here, for example, is a GitHub board with all the issues, and I can scroll, scroll, scroll, scroll, scroll through them. There is apparently a fix for the inability to focus into input fields with Android and a keyboard, but we are at the mercy of Facebook’s release schedules here.

So, what could I advise my clients to do? I tend to advise them, go PWA and let’s hope that the CMA make Apple do the right thing. What I would really like to do is say to them, “Okay, just put this notice on your web app. Say: ‘we’d like to send you push notifications, if that’s okay by you. Please download Microsoft Edge, Mozilla Firefox, or Google Chrome on iOS’.”

However, if they do download Edge for iOS, Firefox for iOS, or Chrome for iOS, that’s no help at all because Apple prohibits those browsers using their own more capable engines and they have to use the Safari engine under the hood. So, consumers get no choice at all.

September 05, 2021

Last week I was part of a meeting with the UK’s Competition and Markets Authority, the regulator, to talk about Apple devices and the browser choice (or lack of it) on them. They’re doing a big study into Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store, and part of that involves looking at browsers on iOS, and part of that involves talking to people who work on the web. So myself and Bruce Lawson and another UK developer of iOS and web apps put together some thoughts and had a useful long meeting with the CMA on the topic.

They asked that we keep confidential the exact details of what was discussed and asked, which I think is reasonable, but I did put together a slide deck to summarise my thoughts which I presented to them, and you can certainly see that. It’s at kryogenix.org/code/cma-apple and shows everything that I presented to the CMA along with my detailed notes on what it all means.

A slide from the presentation, showing a graph of how far behind Safari is and indicating that all other browsers on iOS are equally far behind, because they're all also Safari

Bruce had a similar slide deck, and you can read his slides on iOS’s browser monopoly and progressive web apps. Bruce has also summarised our other colleague’s presentation, which is what we led off with. The discussion that we then went into was really interesting; they asked some very sensible questions, and showed every sign of properly understanding the problem already and wanting to understand it better. This was good: honestly, I was a bit worried that we might be trying to explain the difference between a browser and a rendering engine to a bunch of retired colonel types who find technology to be baffling and perhaps a little unmanly, and this was emphatically not the case; I found the committee engaging and knowledgeable, and this is encouraging.

In the last few weeks we’ve seen quite a few different governments and regulatory authorities begin to take a stand against tech companies generally and Apple’s control over your devices more specifically. These are baby steps — video and music apps are now permitted to add a link to their own website, saints preserve us, after the Japan Fair Trade Commission’s investigation; developers are now allowed to send emails to their own users which mention payments, which is being hailed as “flexibility” although it doesn’t allow app devs to tell their users about other payment options in the app itself, and there are still court cases and regulatory investigations going on all around the world. Still, the tide may be changing here.

What I would like is that I can give users the best experience on the web, on the best mobile hardware. That best mobile hardware is Apple’s, but at the moment if I want to choose Apple hardware I have to choose a sub-par web experience. Nobody can fix this other than Apple, and there are a bunch of approaches that they could take — they could make Safari be a best-in-class experience for the web, or they could allow other people to collaborate on making the browser best-in-class, or they could stop blocking other browsers from their hardware. People have lots of opinions about which of these, or what else, could and should be done about this; I think pretty much everyone thinks that something should be done about it, though. Even if your goal is to slow the web down and to think that it shouldn’t compete with native apps, there’s no real reason why flexbox and grid and transforms should be worse in Safari, right? Anyway, go and read the talk for more detail on all that. And I’m interested in what you think. Do please hit me up on Twitter about this, or anything else; what do you think should be done, and how?

September 03, 2021

Reading List 281 by Bruce Lawson (@brucel)

September 02, 2021

There’s been a bit of a thing about software user experience going off the rails lately. Some people don’t like cross-platform software, and think that it isn’t as consistent, as well-integrated, or as empathetic as native software. Some people don’t like native software, thinking that changes in the design of the browser (Apple), the start menu (Microsoft), or everything (GNOME) herald the end of days.

So what’s going on? Why did those people make that thing that you didn’t like? Here are some possibilities.

My cheese was moved

Plenty of people have spent plenty of time using plenty of computers. Some short-sighted individual promised “a computer on every desktop”, made it happen, and this made a lot of people rather angry.

All of these people have learned a way of using these computers that works for them. Not necessarily the one that you or anybody else expects, but one that’s basically good enough. This is called satisficing: finding a good enough way to achieve your goal.

Now removing this satisficing path, or moving it a few pixels over to the left, might make something that’s supposedly better than what was there before, but is actually worse, because the learned behaviour of the people trying to use the thing no longer achieves what they want. It may even be that the original thing is really bad. But because we know how to use it, we don’t want it to change.

Consider the File menu. In About Face 3: The Essentials of Interaction Design, written in 2007, Alan Cooper described all of the problems with the File menu and its operations: New, Open, Save, Save As…. Those operations are implementation focused. They tell you what the computer will do, which is something the computer should take care of.

He described a different model, based on what people think about how their documents work. Anything you type in gets saved (that’s true of the computer I’m typing this into, which works a lot like a Canon Cat). You can rename it if you want to give it a different name, and you can duplicate it if you want to give it a different name while keeping the version at the original name.

This should be better, because it makes the computer expose operations that people want to do, not operations that the computer needs to do. It’s like having a helicopter with an “up” control instead of cyclic and collective controls.

Only, replacing the Open/Save/Save As… stuff with the “better” stuff is like removing the cyclic and collective controls and giving a trained helicopter pilot with years of experience the “up” button. It doesn’t work the way they expect, they have to think about it which they didn’t have to do with the cyclic/collective controls (any more), therefore it’s worse (for them).

Users are more experienced and adaptable now

But let’s look at this a different way. More people have used more computers now than at any earlier point in history, because that’s literally how history works. And while they might not like having their cheese moved, they’re probably OK with learning how a different piece of cheese works because they’ve been doing that over and over each time they visit a new website, play a new game, or use a new app.

Maybe “platform consistency” and “conform with the human interface/platform style guidelines” was a thing that made sense in 1984, when nobody who bought a computer with a GUI had ever used one before and would have to learn how literally everything worked. But now people are more sophisticated in their use of computers, regularly flit between desktop applications, mobile apps, and websites, across different platforms, and so are more flexible and adaptable in using different software with different interactions than they were in the 1980s when you first read the Amiga User Interface Style Guide.

We asked users; they don’t care

At first glance, this explanation seems related to the previous one. We’re doing the agile thing, and talking to our customers, and they’ve never mentioned that the UI framework or the slightly inconsistent controls are an issue.

But it’s actually quite different. The reason users don’t mention that there’s extra cognitive load is that these kinds of mental operations are tacit knowledge. If you’re asked about “how can we improve your experience filing taxes”, you’ll start thinking tax-related questions, before you think “I couldn’t press Ctrl-A to get to the beginning of that text field”. I mean, unless you’re a developer who goes out of their way to look for that sort of inconsistency in software.

The trick here is to stop asking, and start watching. Users may well care, even if they don’t vocalise that caring. They may well suffer, even if they don’t realise it hard enough to care.

We didn’t ask users

Yes, that happens. I’ve probably gone into enough depth on why it happens in various places, but here’s the summary: the company has a customer proxy who doesn’t proxy customers.

September 01, 2021

August 24, 2021

Respectable persons of this parish of Internet have been, shall we say, critical of Scrum and its ability to help makers (particularly software developers) to make things (particularly software). Ron Jeffries and GeePaw Hill have both deployed the bullshit word.

My own view, which I have described before, is that Scrum is a baseline product delivery process and a process improvement framework. Unfortunately, not many of us are trained or experienced in process improvement, not many of us know what we should be measuring to get a better process, so the process never improves.

At best, then, many Scrum organisations stick with the baseline process. That is still scadloads better than what many software organisations were trying before they had heard of Agile and Scrum, or after they had heard of it and before their CIO’s in-flight magazine made it acceptable. As a quick recap for those who were recruited into the post-Agile software industry, we used to spend two months working out all the tasks that would go into our six month project. Then we’d spend 15 months trying to implement them, then we’d all point at everybody else to explain why it hadn’t worked. Then, if there was any money left, we’d finally remember to ship some software to the customer.

Scrum has improved on that, by distributing both the shipping and the blaming over the whole life span of the project. This was sort of a necessary condition for software development to survive the dot-com crash, when “I have an idea” no longer instantly led to office space in SF, a foosball table, and thirty Herman Miller chairs. You have to deliver something of value early because you haven’t got enough money any more to put it off.

So when these respectable Internet parishioners say that Scrum is bad, this is how bad it has to be to reach that mark. Both of the authors I cite have been drinking from this trough way longer than I have, so have much more experience of the before times.

In this article I wanted to look at some of the things I, and software engineering peers around me, have done to make Scrum this bad. Because, as Ron says, Scrum is systemically bad, and we are part of the system.

Focus on Velocity

One thing it’s possible to measure is how much stuff gets shovelled in per iteration. This can be used to guide a couple of process-improvement ideas. First up is whether our ability to foresee problems arising in implementation is improving: this is “do estimates match actuals” but acknowledging that they never do, and the reason they never do is that we’re too optimistic all the time.

Second is whether the amount of stuff you’re shovelling is sustainable. This has to come second, because you have to have a stable idea of how much stuff there is in “some stuff” before you can determine whether it’s the same stuff you shovelled last time. If you occasionally shovel tons of stuff, then everybody goes off sick and you shovel no stuff, that’s not sustainable pace. If you occasionally shovel tons of stuff, then have to fix a load of production bugs and outages, and do a load of refactoring before you can shovel any more stuff, that’s not a sustainable pace, even if the amount of work in the shovelling phase and the fixing phase is equivalent.

This all makes working out what you can do, and what you can tell other people about what you can do, much easier. If you’re good at working out where the problems lie, and you’re good at knowing how many problems you can encounter and resolve per time, then it’s easier to make plans. Yes, we value responding to change over following a plan, but we also value planning our response to change, and we value communicating the impact of that response.

Even all of that relies on believing that a lot of things that could change, won’t change. Prediction and forecasting aren’t so chaotic that nobody should ever attempt them, but they certainly are sensitive enough to “all things being equal” that it’s important to bear in mind what all the things are and whether they have indeed remained equal.

And so it’s a terrible mistake to assume that this sprint’s velocity should be the same as, or (worse) greater than, the last one. You’re taking a descriptive statistic that should be handled with lots of caveats and using it as a goal. You end up doing the wrong thing.

Let’s say you decide you want 100 bushels of software over the next two weeks, because you shovelled 95 bushels of software last week. The easiest way to achieve that is to take things that you think add up to 100 bushels of software, and do less work so that they contain only, say, 78 bushels, and try to get those 78 bushels done in the time.

Which bits do you cut off? The buttons that the customer presses? No, they’ll notice that. The actions that happen when the customer presses the buttons? They’ll probably notice that, too. How about all the refinement and improvement that will make it possible to add another 100 bushels next time round? Nobody needs that to shovel this 100 bushels in!

But now, next time, shovelling software is harder, and you need to get 110 bushels in to “build on the momentum”. So more corners get cut, and more evenings get worked. And now it’s even harder to get the 120 bushels in that are needed the next time. Actual rate of software is going down, claimed rate of software is going up: soon everything will explode.

Separating “technical” and “non-technical”

Sometimes also “engineering” and “the business”. Particularly in a software company, this is weird, because “the business” is software engineering, but it’s a problem in other lines of business enabled by software too.

Often, domain experts in companies have quite a lot of technical knowledge and experience. In one fintech where I used to work, the “non-technical” people had a wealth (pun intended) of technical know-how when it came to financial planning and advice. That knowledge is as important to the purpose of making financial technology software as is software knowledge. Well, at least as important.

So why do so many teams delineate “engineering” and “the business”, or “techies” and “non-technical” people? In other words, why do software teams decide that only the software typists get a say in what software gets made using what practices? Why do people divide the world into pigs and chickens (though, in fairness to the Scrum folks, they abandoned that metaphor)?

I think the answer may be defensiveness. We’ve taken our ability to shovel 50 bushels of software and our commitment (it used to be an estimate, but now it’s a commitment) to shovel 120 bushels, and it’s evident that we can’t realistically do that. Why not? Oh, it must be those pesky product owners, who keep bringing their demands from the VP of marketing when all they’re going to do is say “build more”, “shovel more software”, and they aren’t actually doing any of it. If the customer rep didn’t keep repping the customer, we’d have time to actually do the software properly, and that would fix everything.

Note that while the Scrum guide no longer mentions chickens and pigs, it “still makes a distinction between members of the Scrum team and those individuals who are part of the process, but not responsible for delivering products”. This is an important distinction in places, but irrelevant and harmful in others. But it’s at its most harmful when it’s too narrowly drawn. When people who are part of the process are excluded from the delivering-products cabal. You still need to hear from them and use their expertise, even if you pretend that you don’t.

The related problem I have seen, and been part of, is the software-expertise people not even gaining passing knowledge of the problem domain. I’ve even seen it working on a developer tools team where the engineers building the tools didn’t have cause to use, or particularly even understand, the tool during our day-to-day work. All of those good technical practices, like automated acceptance tests, ubiquitous language, domain-driven design; they only mean shit if they’re helping to make the software compatible with the things people want the software for.

And that means a little bit of give and take on the old job boundaries. Let the rest of the company in on some info about how the software development is going, and learn a little about what the other folks you work with do. Be ready to adopt a more nuanced position on a new feature request than “Jen from sales with another crazy demand”.

Undriven process optimisation

As I said up top, the biggest problem Scrum often encounters in the wild is that it’s a process improvement framework run by people who don’t have any expertise at process improvement, but have read a book on Scrum (if that). This is where the agile consultant / retrospective facilitators usually come in: they do know something about process improvement, and even if they know nothing about your process it’s probably failing in similar enough ways to similar processes on similar teams that they can quickly identify the patterns and dysfunctions that apply in your context.

Without that guidance, or without that expertise on the team, retrospectives are the regular talking shop in which everybody knows that something is wrong, nobody knows what, and anybody who has a pet thing to try can propose that thing because there are no cogent arguments against giving it a go (nor are there any for it, except that we’ve got to try something!).

Thus we get resume-driven development: I bet that last sprint was bad because we aren’t functionally reactive enough, so we should spend the next sprint rewriting to this library I read about on medium. Or arbitrary process changes: this sprint we should add a column to the board, because we removed a column last time and look what happened. Or process gaming: I know we didn’t actually finish anything this month, but that’s because we didn’t deploy so if we call something “done” when it’s been typed into the IDE, we’ll achieve more next month. Or more pigs and chickens: the problem we had was that sales kept coming up with things customers needed, so we should stop the sales people talking either to the customers, to us, or to both.

Work out what it is you’re trying to do (not shovelling bushels of software, but the thing you’re trying to do for which software is potentially a solution) and measure that. Do you need to do more of that? Or less of it? Or the same amount for different people? Or for less money? What, about the way you’re shovelling software, could you change that would make that happen?

We’re back at connecting the technical and non-technical parts of the work, of course. To understand how the software work affects the non-software goals we need to understand both, and their interactions. Always have, always will.

August 23, 2021

August 19, 2021

August 17, 2021

August 16, 2021

Sleep on it by Graham Lee

In my experience, the best way to get a high-quality software product is to take your time, not crunch to some deadline. On one project I led, after a couple of months we realised that the feature goals (ALL of them) and the performance goals (also ALL of them, in this case ALL of the iPads back to the first-gen, single core 2010 one) were incompatible.

We talked to the customer, worked out achievable goals with a new timeline, and set to work with a new idea based on what we had found was possible. We went from infrequently running Instruments (the Apple performance analysis tool) to daily, then every merge, then every proposed change. If a change led to a regression, find another change. Customers were disappointed that the thing came out later than they originally thought, but it was still well-received and well-reviewed.

At the last release candidate, we had two known problems. Very infrequently sound would stop playing back, which with Apple’s DTS we isolated to an OS bug. After around 12 hours of uptime (in an iPad app, remember!) there was a chance of a crash, which with DTS we found to be due to calling an animation API in a way consistent with the documentation but inconsistent with the API’s actual expectations. We managed to fix that one before going live on the App Store.

On the other hand, projects I’ve worked on that had crunch times, weekend/evening working, and increased pressure to deliver from management ended up in a worse state. They were still late, but they tended to be even later and lower quality as developers who were under pressure to fix their bugs introduced other bugs by cutting corners. And everybody got upset and angry with everybody else, desperate to find a reason why the late project wasn’t my fault alone.

In one death march project, my first software project which was planned as a three month waterfall and took two years to deliver, we spent a lot of time shovelling bugs in at a faster rate than we were fixing them. On another, an angry product manager demanded that I fix bugs live in a handover meeting to another developer, without access to any test devices, then gave the resulting build to a customer…who of course discovered another, worse bug that had been introduced.

If you want good software, you have to sleep on it, and let everybody else sleep on it.

August 04, 2021

August 01, 2021

July 30, 2021

Reading List 280 by Bruce Lawson (@brucel)

  • Link ‘o the week: Safari isn’t protecting the web, it’s killing it – also featuring “Safari is killing the web through show-stopping bugs”, e.g. IndexedDB, localStorage, 100vh, Fetch requests skipping the service worker, and all your other favourites. Sort it out, NotSteve.
  • Do App Store Rules Matter? – “half of the revenue comes from just 0.5% of the users … last year, Apple made $7-8bn in commission revenue from perhaps 5m people spending over $450 a quarter on in-app purchases in games. Apple has started talking a lot about helping you use your iPhone in more healthy ways, but that doesn’t seem to extend to the app store.”
  • URL protocol handler registration for PWAs – Chromium 92 will let installed PWAs handle links that use a specific protocol for a more integrated experience.
  • Xiaomi knocks Apple off #2 spot – but will it become the next Huawei? – “Chinese manufacturer Xiaomi is now the second largest smartphone maker in the world. In last quarter’s results it had a 17% share of global smartphone shipments, ahead of Apple’s 14% and behind Samsung’s 19%”
  • One-offs and low-expectations with Safari – “Jen Simmons solicited some open feedback about WebKit, asking what they might “need to add / change / fix / invent to help you?” … If I could ask for anything, it’d be that Apple loosen the purse strings and let Webkit be that warehouse for web innovation that it was a decade ago.” by Dave Rupert
  • Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy – “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data.”
  • Intent to Ship: CSS Overflow scrollbar-gutter – Tab Atkins explains “the space that a scrollbar *would* take up is always reserved regardless, so it doesn’t matter whether the scrollbar pops in or not. You can also get the same space reserved on the other side for visual symmetry if you want.”
  • Homepage UX – 8 Common Pitfalls – starring: 75% of sites with carousels implement them badly (TL;DR: don’t bother); 22% hide the Search field; 35% don’t implement country and language selection correctly; 43% don’t style clickable interface elements effectively
  • WVG – Web Vector Graphics – a proof of concept spec from yer man @Hixie on a vector graphics format for Flutter (whatever that is). Interesting discussions on design decisions and trade offs.

July 21, 2021

July 19, 2021

July 13, 2021

$ docker container list -a

I noticed I had a large number of Docker containers that weren’t being used. Fortunately, Docker makes it easy to single out a container from this list and remove all containers that were created before it.

docker ps -aq -f "before=widgets_web_run_32cb6f5d2d71" | xargs docker container rm

There are a bunch of handy filters that can be applied to docker ps. Nice for a bit of spring cleaning.
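
For example (the widgets_web image name below is hypothetical):

# Show only exited containers
docker ps -a -f "status=exited"

# Show containers created from a given image
docker ps -a -f "ancestor=widgets_web"

# Or just remove every stopped container in one go
docker container prune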

July 09, 2021

Reading List 279 by Bruce Lawson (@brucel)

I’ve had a number of conversations about what “we” in the “free software community” “need” to do to combat the growth in proprietary, user-hostile and customer-hostile business models like cloud user-generated content hosts, social media platforms, hosted payment platforms, videoconferencing services etc. Questions can often be summarised as “what can we do to get everyone off of Facebook groups”, “how do we get businesses to adopt Jitsi Meet instead of Teams” or “how do we convince everyone that Mattermost is better for community chat than Slack”.

My answer is “we don’t”, which is very different from “we do nothing about those things”. Scaled software platforms introduce all sorts of problems that are only caused by trying to operate the software at scale, and the reason the big Silicon Valley companies are that big is that they have to spend a load of resources just to tread water because they’ve made everything so complex for themselves.

This scale problem has two related effects: firstly the companies are hyper-concerned about “growth” because when you’ve got a billion users, your shareholders want to know where the next hundred million are coming from, not the next twenty. Secondly the companies are overly-focused on lowest common denominator solutions, because Jennifer Miggins from South Shields is a rounding error and anything that’s good enough for Scott Zablowski from Los Angeles will have to be good enough for her too, and the millions of people on the flight path between them.

Growth hacking and lowest common denominator experiences are their problems, so we should avoid making them our problems, too. We already have various tools for enabling growth: the freedom to use the software for any purpose being one of the most powerful. We can go the other way and provide deeply-specific experiences that solve a small collection of problems incredibly well for a small number of people. Then those people become super-committed fans because no other thing works as well for them as our thing, and they tell their small number of friends, who can not only use this great thing but have the freedom to study how the program works, and change it so it does their computing as they wish—or to get someone to change it for them. Thus the snowball turns into an avalanche.

Each of these massive corporations with their non-free platforms that we’re trying to displace started as a small corporation solving a small problem for a small number of people. Facebook was internal to one university. Apple sold 500 computers to a single reseller. Google was a research project for one supervisor. This is a view of the world that’s been heavily skewed by the seemingly ready access to millions of dollars in venture capital for disruptive platforms, but many endeavours don’t have access to that capital and many that do don’t succeed. It is ludicrous to try and compete on the same terms without the same resources, so throw Marc Andreessen’s rulebook away and write a different one.

We get freedom to a billion people a handful at a time. That reddit-killing distributed self-hosted tool you’re building probably won’t kill reddit, sorry. Design for that one farmer’s cooperative in Skåne, and other farmers and other cooperatives will notice. Design for that one town government in Nordrhein-Westfalen, and other towns and other governments will notice. Design for that one biochemistry research group in Brasilia, and other biochemists and other researchers will notice. Make something personal for a dozen people, because that’s the one thing those massive vendors will never do and never even understand that they could do.

July 02, 2021


All the omens were there: a comet in the sky; birds fell silent; dogs howled; unquiet spirits walked the forests and the dark, empty places. Yes, it was a meeting request from Marketing to discuss a new product page with animations that are triggered on scroll.

Much as a priest grasps his crucifix when facing a vampire, I immediately reached for Intersection Observer to avoid the browser grinding to a halt when watching to see if something is scrolled into view. And, like an exorcist sprinkling holy water on a demon, I also cleansed the code with a prefers-reduced-motion media query.

prefers-reduced-motion looks at the user’s operating system settings to see if they wish to suppress animations (for performance reasons, or because animations make them nauseous–perhaps because they have a vestibular disorder). A responsible developer will check to see if the user has indicated that they prefer reduced motion and use or avoid animations accordingly.
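
Something like this is all it takes (a minimal sketch; the class names are invented): check the media query in JavaScript, and only wire up the scroll-triggered animation when motion is acceptable.

const prefersReducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)').matches;

if (!prefersReducedMotion) {
  // Watch for elements scrolling into view without hammering the main thread
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add('is-animating'); // CSS does the animating
        observer.unobserve(entry.target);           // trigger once, then stop
      }
    }
  });
  document.querySelectorAll('.animate-on-scroll')
    .forEach((el) => observer.observe(el));
}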

Content images can be made to switch between animated or static using the <picture> element (so brilliantly brought to you by a team of gorgeous hunks). Bradbury Frosticle gives this example:

<picture>
  <source srcset="no-motion.jpg" media="(prefers-reduced-motion: reduce)">
  <img srcset="animated.gif" alt="brick wall">
</picture>

Now, obviously you are a responsible developer. You use alt text (and don’t call it “alt tags”). But unfortunately, not everyone is like you. There are also 639 squillion websites out there that were made before prefers-reduced-motion was a thing, and only 3 of them are actively maintained.

It seems to me that browsers could do more to protect their users. Browsers are, after all, user agents that protect the visitor from pop-ups, malicious sites, autoplaying videos and other denizens of the underworld. They should also protect users against nausea and migraines, regardless of whether the developer thought to (or had the tools available to).

So, I propose that browsers should never respect scroll-behavior: smooth if a user prefers reduced motion, regardless of whether a developer has set the media query.

Animated GIFs (and their animated counterparts in more modern image formats) are somewhat more complex. Since 2018, the Chromium-based Vivaldi browser has allowed users to deactivate animated GIFs right from the Status Bar, but this isn’t useful in all circumstances.

My initial thought was that the browser should only show the first five seconds because WCAG Success Criterion 2.2.2 says

For any moving, blinking or scrolling information that (1) starts automatically, (2) lasts more than five seconds, and (3) is presented in parallel with other content, there is a mechanism for the user to pause, stop, or hide it

But this wouldn’t allow a user to choose to see more than five seconds. The essential aspect of this success criterion is, I think, control. Melanie Richards, who works on accessibility for the Microsoft Edge browser, noted

IE had a neat keyboard shortcut that would pause any GIFs on command. I’d be curious to get your take on the relative usefulness of a browser setting such as you described, and/or a more contextual command.

I’m doubtful about discoverability of keyboard commands, but Travis Leithead (also Microsoft) suggested

We can use ML + your mic (with permission of course) to detect your frustration at forever looping Gifs and stop just the right one in real time. Or maybe just a play/pause control would help… +1 customer request!

And yes, a play/pause mechanism gives maximum control to the user. This would also solve the problem of looping GIFs in marketing emails, which are commonly displayed in browsers. There is an added complexity of what to do if the animated image is a link, but I’m not paid a massive salary to make web browsers, so I leave that to mightier brains than I possess.
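
To illustrate (a rough sketch, not a browser implementation, and the function name is made up): a page author can offer click-to-pause today, because drawing an animated image to a canvas yields its first frame. This assumes same-origin images that have already loaded; cross-origin GIFs would taint the canvas.

// Swap an animated GIF for a static snapshot of its first frame, and back.
function makeGifPausable(img) {
  img.addEventListener('click', () => {
    if (img.dataset.playingSrc) {
      img.src = img.dataset.playingSrc;   // resume the animation
      delete img.dataset.playingSrc;
      return;
    }
    const canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0); // first frame only
    img.dataset.playingSrc = img.src;     // remember the animated version
    img.src = canvas.toDataURL();         // show the static frame
  });
}

document.querySelectorAll('img[src$=".gif"]').forEach(makeGifPausable);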

In the meantime, until all browser vendors do my bidding (it only took 5 years for the <picture> element), please consider adding this snippet by Thomas Steiner to your CSS:


@media (prefers-reduced-motion: reduce) {
  *, ::before, ::after {
    animation-delay: -1ms !important;
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    background-attachment: initial !important;
    scroll-behavior: auto !important;
    transition-duration: 0s !important;
    transition-delay: 0s !important;
  }
}

I am not an MBA, but it seems to me that making your users puke isn’t the best beginning to a successful business relationship.

July 01, 2021

June 25, 2021

June 24, 2021

June 23, 2021

June 21, 2021

June 18, 2021

Reading List 278 by Bruce Lawson (@brucel)

June 12, 2021

The latest in a series of posts on getting Unity Test Framework tests running in Jenkins in the cloud, because this seems entirely undocumented.

To recap, we had got as far as running our playmode tests on the cloud machine, but we had to use the -batchmode -nographics command line parameters. If we don’t, we get tons of errors about non-interactive window sessions. But if we do, we can no longer rely on animation, physics, or some coroutines during our tests! This limits us to basic lifecycle and validation tests, which isn’t great.
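
For reference, a playmode test invocation looks something like this (the editor version and paths here are placeholders for whatever your setup uses); it’s the last two flags we need to be able to drop:

"C:\Program Files\Unity\Hub\Editor\<version>\Editor\Unity.exe" ^
  -projectPath C:\Workspace\MyGame ^
  -runTests -testPlatform PlayMode ^
  -testResults C:\Workspace\results.xml ^
  -batchmode -nographics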

We need our cloud machine to pretend there’s a monitor attached, so Unity can run its renderer and physics.

First, we’re going to need to make sure we have enough grunt in our cloud machine to run the game at a solid framerate. We use EC2, with the g4dn.xlarge machine (which has a decent GPU) and this AMI (https://aws.amazon.com/marketplace/pp/prodview-xrrke4dwueqv6?ref=cns_srchrow#pdp-overview), which pre-installs the right GPU drivers.

To fake the monitor, we’re going to set up a non-admin Windows account on our cloud machine (because that’s just good practice), get it to auto-login on boot and ask it to connect to Jenkins under this account. Read on for more details.

First, set up your new Windows account by remoting into the admin account of the cloud machine:

  • Type “add user” in the Windows Start menu to get started adding your user. I call mine simply “jenkins”. Remember to save the password somewhere safe!
  • We need to be able to remote into the new user, so go to System Properties and, on the Remote tab, click Select Users and add your jenkins user.
  • If Jenkins has already run on this machine, you’ll want to give the new jenkins user rights to modify the C:\Workspace folder.
  • You’ll also want to go into the Services app, find the Jenkins service, and disable it.
  • Next, download Autologon (https://docs.microsoft.com/en-us/sysinternals/downloads/autologon), uncompress it somewhere sensible, then run it:
    • enter your new jenkins account details
    • click Enable
    • close the dialog

Now, log out of the admin account, and you should be able to remote desktop into the new account using the credentials you saved.

Now we need to make this new account register the computer with your Jenkins server once it comes online. There are more details here (https://wiki.jenkins.io/display/JENKINS/Distributed+builds#Distributedbuilds-Agenttomasterconnections), and it may be a bit different for you depending on setup, but here’s what we do:

  • From the remote desktop of the jenkins user account, open a browser and log into your Jenkins server
  • Go to the node page for your new machine, and configure the Launch Type to be Launch Agent By Connecting It To The Master
  • Switch to the node’s status tab and you should have an orange button to download the agent jnlp file
  • Put this file in the %userprofile%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup folder
  • Change the Launch Type back to whatever you need (we use the Slave Setup plugin, despite the icky name: https://plugins.jenkins.io/slave-setup/) — it doesn’t need to stay as Launch Agent By Connecting It To The Master.

We’re done. Log out of Remote Desktop and reboot the machine. You should see it come alive in the Jenkins server after a few minutes. If you remove the -batchmode and -nographics options from your Unity commands, you should see the tests start to run with full physics and animation!

June 11, 2021

June 10, 2021

I’ve talked before about the non-team team dynamic that is “one person per task”, where management and engineers collude to push the organisation beyond a sustainable pace by making sure that, at all times, each individual is kept busy and collaboration is minimised.

I talked about the deleterious effect on collaboration, particularly that code review becomes a burden resolved with a quick “LGTM”. People quickly develop specialisations and fiefdoms: oh there’s a CUDA story, better give it to Yevgeny as he worked on the last CUDA story.

The organisation quickly adapts to this balkanisation and optimises for it. Is there a CUDA story in the next sprint? We need something for Yevgeny to do. This is Conway’s Corollary: the most efficient way to develop software is when the structure of the software matches the org chart.

Basically all forms of collaboration become a slog when there’s no “us” in team. Unfortunately, the contradiction at the heart of this 19th century approach to division of labour is that, when applied to knowledge work, the value to each participant of being in meetings is minimised, while the necessity for each participant to be in a meeting is maximised.

The value is minimised because each person has their personal task within their personal fiefdom to work on. Attending a meeting takes away from the individual productivity that the process is optimising for. Additionally, it increases the likelihood that the meeting content will be mostly irrelevant: why should I want to discuss backend work when Sophie takes all the backend tasks?

The meetings are necessary, though, because nobody owns the whole widget. No-one can see the impact of any workflow change, or dependency adoption, or clean up task, because nobody understands more than a 1/N part of the whole system. Every little thing needs to be run by Sophie and Yevgeny and all the others because no-one is in a position to make a decision without their input.

This might sound radically democratic, and not the sort of thing you’d expect from a business: nobody can make a decision without consulting all the workers! Power to the people! In fact it’s just entirely progress-destroying: nobody can make a decision at all until they’ve got every single person on board, and that’s so much work that a lot of decisions will be defaulted. Nothing changes.

And there’s no way within this paradigm to avoid that. Have fewer meetings, and each individual is happier because they get to maximise progress time spent on their individual tasks. But the work will eventually grind to a halt, as the architecture reflects N different opinions, and the N! different interfaces (which have each fallen into the unowned gaps between the individual contributors) become harder to work with.

Have more meetings, and people will grumble that there are too many meetings. And that Piotr is trying to land-grab from other fiefdoms by pushing for decisions that cross into Sophie’s domain.

The answer is to reconstitute the team – preferably along self-organising principles – into a cybernetic organism that makes use of its constituent individuals as they can best be applied, but in pursuit of the team’s goals, not N individual goals. This means radical democracy for some issues, (agreed) tyranny for others, and collective ignorance of yet others.

It means in some cases giving anyone the autonomy to make some choices, but giving someone with more expertise the autonomy to override those choices. In some cases, all decisions get made locally; in others, they must be run past an agreed arbiter. In some cases it means having one task per team, or even no tasks per team, if the team needs to do something more important before it can take on another task.

June 04, 2021

…and this time it’s online! Back in 2019, we toured ‘A Moment of Madness’ to a number of UK festivals. AMoM is an immersive experience with live actors and Escape Game style puzzles where the players are on a stake-out in a multi-storey car park tracking politician Michael Makerson (just another politician with a closet full […]

Random Expo.io tips by Bruce Lawson (@brucel)

I’m doing some accessibility testing on a React Native codebase that uses Expo during development. If you’ve done something seriously wrong in a previous life and karma has condemned you to using React Native rather than making a Progressive Web App, Expo is jolly useful. It gives you a QR code that you can scan with Android or iOS to ‘install’ the app on your device and you get live reload of any changes. It’s like sinking in a quagmire of shit but someone nice is bringing you beer and playing Abba while it happens. (Beats me why that isn’t their corporate strapline.)

Anyway, I struggled a bit to set it up so here are some random tips that I learned the hard way:

  • If your terminal yells “Error: EMFILE: too many open files, watch at FSEvent.FSWatcher._handle.onchange (internal/fs/watchers.js:178:28) error Command failed with exit code 1” when you start Expo, stop it, do brew install watchman and re-start Expo. Why? No idea. Someone on StackOverflow told me. Most npm stuff is voodoo magic–just install all the things, hope none of them were made in Moscow by Vladimir Evilovich of KGB Enterprises, ignore all the deprecation warnings and cross your fingers.
  • If you want to open your app in an iOS simulator, you need Xcode and you need to install the Xcode command line tools or it’ll just hang.
  • Scrolling in the iOS simulator is weird. Press the trackpad with one hand and scroll with the other hand. Or enable three finger drag and have one hand free for coffee, smoking or whatever other filthy habits you’ve developed while working from home.
  • If you don’t want it, you can turn off the debugging menu overlay.
  • If you like CSS, under no circumstances view source of the app running in the browser. It is, however, full of lots of ARIA nourishment thanks to React Native for Web.

Who knows? One day, Apple may decide not to hamstring PWAs on iOS and we can all use the web to run on any device and any browser, just as Sir Uncle Timbo intended.

June 03, 2021

AMA by Graham Lee

It was requested on twitter that I start answering community questions on the podcast. I’ve got a few to get the ball rolling, but what would you like to ask? Comment here, or reach me wherever you know I hang out.

June 02, 2021

June 01, 2021

Having looked at hopefully modern views on Object-Oriented analysis and design, it’s time to look at what happened to Object-Oriented Programming. This is an opinionated, ideologically-motivated history that in no way reflects reality: a real history of OOP would require time and skills that I lack, and would undoubtedly be almost as inaccurate. But in telling this version we get to learn more about the subject, and almost certainly more about the author too.
They always say that history is written by the victors, and it’s hard to argue that OOP was anything other than victorious. When people explain about how they prefer to write functional programs because it helps them to “reason about” code, all the reasoning that was done about the code on which they depend was done in terms of objects. The ramda or lodash or Elm-using functional programmer writes functions in JavaScript on an engine written in C++. Swift Combine uses a functional pipeline to glue UIKit objects to Core Data objects, all in an Objective-C IDE and – again – a C++ compiler.

Maybe there’s something in that. Maybe the functional thing works well if you’re transforming data from one system to another, and our current phase of VC-backed disruption needs that. Perhaps we’re at the expansion phase, applying the existing systems to broader domains, and a later consolidation or contraction phase will demand yet another paradigm.
Anyway, Object-Oriented Programming famously (and incorrectly, remember) grew out of the first phase of functional programming: the one that arose when it wasn’t even clear whether computers existed, or if they did whether they could be made fast enough or complex enough to evaluate a function. Smalltalk may have borrowed a few ideas from Simula, but it spoke with a distinct Lisp.

We’ll fast forward through that interesting bit of history when all the research happened, to that boring bit where all the development happened. The Xerox team famously diluted the purity of their vision in the hot-air balloon issue of Byte magazine, and a whole complex of consultants, trainers and methodologists jumped in to turn a system learnable by children into one that couldn’t be mastered by professional programmers.
Actually, that’s not fair: the system already couldn’t be mastered by professional programmers, a breed famous for assuming that they are correct and that any evidence to the contrary is flawed. It was designed to be learnable by children, not by those who think they already know better.

The result was the ramda/lodash/Elm/Clojure/F# of OOP: tools that let you tell your local user group that you’ve adopted this whole Objects thing without, y’know, having to do that. Languages called Object-*, Object*, Objective-*, O* added keywords like “class” to existing programming languages so you could carry on writing software as you already had been, but maybe change the word you used to declare modules.

Eventually, the jig was up, and people cottoned on to the observation that Object-Intercal is just Intercal no matter how you spell come.from(). So the next step was to change the naming scheme to make it a little more opaque. C++ is just C with Classes. So is Java, so is C#. Visual BASIC.net is little better.

Meanwhile, some people who had been using Smalltalk and getting on well with fast development of prototypes that they could edit while running into a deployable system had an idea. Why not tell everyone else how great it is to develop prototypes fast and edit them while running into the deployable system? The full story of that will have to wait for the Imagined History of Agile, but the TL;DR is that whatever they said, everybody heard “carry on doing what we’re already doing but plus Jira”.

Well, that’s what they heard about the practices. What they heard about the principles was “principles? We don’t need no stinking principles, that sounds like Big Thinking Up Front urgh” so decided to stop thinking about anything as long as the next two weeks of work would be paid for. Yes, iterative, incremental programming introduced the idea of a project the same length as the gap between pay checks, thus paving the way for fire and rehire.

And thus we arrive at the ideological void of today’s computering. A phase in which what you don’t do is more important than what you do: #NoEstimates, #NoSQL, #NoProject, #NoManagers…#NoAdvances.

Something will fill that void. It won’t be soon. Functional programming is a loose collection of academic ideas with negation principles – #NoSideEffects and #NoMutableState – but doesn’t encourage anything new. As I said earlier, it may be that we don’t need anything new at the moment: there’s enough money in applying the old things to new businesses and funnelling more money to the cloud providers.

But presumably that will end soon. The promised and perpetually interrupted parallel computing paradigm we were guaranteed upon the death of Moore’s Law in the (1990s, 2000s, 2010s) will eventually meet the observation that every object larger than a grain of rice has a decent ARM CPU in, leading to a revolution in distributed consensus computing. Watch out for the blockchain folks saying that means they were right all along: in a very specific way, they were.

Or maybe the impressive capability but limited applicability of AI will meet the limited capability but impressive applicability of intentional programming in some hybrid paradigm. Or maybe if we wait long enough, quantum computing will bring both its new ideas and some reason to use them.

But that’s the imagined future of programming, and we’re not in that article yet.

May 31, 2021

We left off in the last post with an idea of how Object-Oriented Analysis works: if you’re thinking that it used around a thousand words to depict the idea “turn nouns from the problem domain into objects and verbs into methods” then you’re right, and the only reason to go into so much detail is that the idea still seems to confuse people to this day.

Similarly, Object-Oriented Design – refining the overall problem description found during analysis into a network of objects that can simulate the problem and provide solutions – can be glibly summed up in a single sentence. Treat any object uncovered in the analysis as a whole, standalone computer program (this is called encapsulation), and do the object-oriented analysis stuff again at this level of abstraction.

You could treat it as turtles all the way down, but just as Physics becomes quantised once you zoom in far enough, the things you need an object to do become small handfuls of machine instructions and there’s no point going any further. Once again, the simulation has become the deployment: this time because the small standalone computer program you’re pretending is the heart of any object is a small standalone computer program.

I mean, that’s literally it. Just as the behaviour of the overall system could be specified in a script and used to test the implementation, so the behaviour of these units can be specified in a script and used to test the implementation. Indeed some people only do this at the unit level, even though the unit level is identical to the levels above and below it.

Though up and down are not the only directions we can move in, and it sometimes makes more sense to think about in and out. Given our idea of a User who can put things in a Cart, we might ask questions like “how does a User remember what they’ve bought in the past” to move in towards a Ledger or PurchaseHistory, from where we move out (of our problem) into the realm of data storage and retrieval.

Or we can move out directly from the User, asking “how do we show our real User out there in the world the activities of this simulated User” and again leave our simulation behind to enter the realm of the user interface or network API. In each case, we find a need to adapt from our simulation of our problem to someone’s (probably not ours, in 2021) simulation of what any problem in data storage or user interfaces is; this idea sits behind Cockburn’s Ports and Adapters.
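
As a minimal sketch of that idea (all the names here are invented for illustration): the simulation defines a port in its own vocabulary, and an adapter implements that port in terms of somebody else’s idea of storage.

// The port: an interface our simulation owns, named in problem-domain terms.
class PurchaseHistoryPort {
  recordPurchase(user, product) { throw new Error('not implemented'); }
  purchasesFor(user) { throw new Error('not implemented'); }
}

// An adapter: implements the port in terms of a storage realm we moved
// "out" into. Here it's just an in-memory Map; a SQL or HTTP adapter
// would have the same shape.
class InMemoryPurchaseHistory extends PurchaseHistoryPort {
  #purchases = new Map();
  recordPurchase(user, product) {
    const list = this.#purchases.get(user.id) ?? [];
    list.push(product);
    this.#purchases.set(user.id, list);
  }
  purchasesFor(user) {
    return this.#purchases.get(user.id) ?? [];
  }
}

The User only ever talks to the port, so swapping the in-memory adapter for a database-backed one never disturbs the simulation.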

Moving in either direction, we are likely to encounter problems that have been solved before. The trick is knowing that they have been solved before, which means being able to identify the commonalities between what we’re trying to achieve and previous solutions, which may be solutions to entirely different problems but which nonetheless have a common shape.

The trick object-oriented designers came up with to address this discovery is the Pattern Language (OK, it was architect Christopher Alexander’s idea: great artists steal and all that), in which shared solutions are given common names and descriptions so that you can explore whether your unique problem can be cast in terms of this shared description. In practice, the idea of a pattern language has fared incredibly well in software development: whenever someone says “we use container orchestration” or “my user interface is a functional reactive program” or “we deploy microservices brokered by a message queue” they are relying on the success of the pattern language idea introduced by object-oriented designers.

Meanwhile, in theory, the concept of a pattern language failed, and if you ask a random programmer about design patterns they will list Singleton and maybe a couple of other 1990s-era implementation patterns before telling you that they don’t use patterns.

And that, pretty much, is all they wrote. You can ask your questions about what each object needs to do spatially (“who else does this object need to talk to in order to answer this question?”), temporally (“what will this object do when it has received this message?”), or physically (“what executable running on what computer will this object be stored in?”). But really, we’re done, at least for OOD.

Because the remaining thing to do (which isn’t to say the last thing in a sequence, just the last thing we have yet to talk about) is to build the methods that respond to those messages and pass those tests, and that finally is Object-Oriented Programming. If we start with OOP then we lose out, because we try to build software without an idea of what our software is trying to be. If we finish with OOP then we lose out, because we designed our software without using knowledge of what that software would turn into.

May 30, 2021

I’ve made a lot over the years, including in the book Object-Oriented Programming the Easy Way, of my assertion that one reason people are turned off from Object-Oriented Programming is that they weren’t doing Object-Oriented Design. Smalltalk was conceived as a system for letting children explore computers by writing simulations of other problems, and if you haven’t got the bit where you’re creating simulations of other problems, then you’re just writing with no goal in mind.

Taken on its own, Object-Oriented Programming is a bit of a weird approach to modularity where a package’s code is accessed through references to that package’s record structures. You’ve got polymorphism, but no guidance as to what any of the individual morphs are supposed to be, let alone a strategy for combining them effectively. You’ve got inheritance, but inheritance of what, by what? If you don’t know what the modules are supposed to look like, then deciding which of them is a submodule of other modules is definitely tricky. Similarly with encapsulation: knowing that you treat the “inside” and “outside” of modules differently doesn’t help when you don’t know where that boundary ought to go.

So let’s put OOP back where it belongs: embedded in an object-oriented process for simulating a problem. The process will produce as its output an executable model of the problem that produces solutions desired by…

Well, desired by whom? The answer that Extreme Programming and Agile Software Development, approaches to thinking about how to create software that were born out of a time when OOP was ascendant, would say “by the customer”: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, they say.

Yes, if we’re working for money then we have to satisfy the customer. They’re the ones with the money in the first place, and if they aren’t then we have bigger problems than whether our software is satisfactory. But they aren’t the only people we have to satisfy, and Object-Oriented Design helps us to understand that.

If you’ve ever seen someone who was fully bought into the Unified Modelling Language (this is true for other ways of capturing the results of Object-Oriented Analysis but let’s stick to thinking about the UML for now), then you’ll have seen a Use Case diagram. This shows you “the system” as an abstract square, with stick figures for actors connected to “the system” via arrows annotated with use cases – descriptions of what the people are trying to do with the system.

We’ve already moved past the idea that the software and the customer are the only two things in the world, indeed the customer may not be represented in this use case diagram at all! What would that mean? That the customer is paying for the software so that they can charge somebody else for use of the software: that satisfying the customer is an indirect outcome, achieved by the action of satisfying the identified actors.

We’ve also got a great idea of what the scope will be. We know who the actors are, and what they’re going to try to do; therefore we know what information they bring to the system and what sort of information they will want to get out. We also see that some of the actors are other computer systems, and we get an idea of what we have to build versus what we can outsource to others.

In principle, it doesn’t really matter how this information is gathered: the fashion used to be for prolix use case documents but these days, shorter user stories (designed to act as placeholders for a conversation where the details can be worked out) are preferred. In practice the latter is better, because it takes into account that the act of doing some work changes the understanding of the work to be done. The later decisions can be made, the more you know at the point of deciding.

On the other hand, the benefit of that UML approach where it’s all stuck on one diagram is that it makes commonalities and patterns clearer. It’s too easy to take six user stories, write them on six GitHub issues, then get six engineers to write six implementations, and hope that some kind of convergent design will assert itself during code review: or worse, that you’ll decide what the common design should have been in the retrospective, and add an “engineering story” to the backlog to be deferred forever more.

The system, at this level of abstraction, is a black box. It’s the opaque “context” of the context diagram at the highest level of the C4 model. But that needn’t stop us thinking about how to code it! Indeed, Behaviour-Driven Development has us do exactly that. Once we’ve come to agreement over what some user story or use case means, we can encapsulate (there’s that word again) our understanding in executable form.

And now, the system-as-simulation nature of OOP finally becomes important. Because we can use those specifications-as-scripts to talk about what we need to do with the customer (“so you’re saying that given a new customer, when they put a product in their cart, they are shown a subtotal that is equal to the cost of that product?”), refining the understanding by demonstrating that understanding in executable form and inviting them to use a simulation of the final product. But because the output is an executable program that does what they want, that simulation can be the final product itself.
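
As a minimal sketch of a spec that is also the simulation (class names invented, no framework implied):

class Product {
  constructor(name, price) { this.name = name; this.price = price; }
}

class Cart {
  #items = [];
  add(product) { this.#items.push(product); }
  subtotal() { return this.#items.reduce((sum, p) => sum + p.price, 0); }
}

class Customer {
  cart = new Cart();
}

// "Given a new customer, when they put a product in their cart,
// they are shown a subtotal equal to the cost of that product."
const customer = new Customer();
const tea = new Product('Assam tea', 3.5);
customer.cart.add(tea);
console.assert(customer.cart.subtotal() === 3.5,
  'subtotal should equal the price of the only product');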

A common step when implementing Behaviour-Driven Development is to introduce a translation layer between “the specs” and “the production code”. So “a new user” turns into User.new.save!, and someone has to come along and write those methods (or at least inherit from ActiveRecord). Or worse, “a new user” turns into PUT /api/v0/internal/users, and someone has to both implement that and the system that backs it.

This translation step isn’t strictly necessary. “A new user” can be a statement in your software implementation; your specs can be both a simulation of what the system does and the system itself, and you save yourself a lot of typing and a potentially faulty translation.

There’s still a lot to Object-Oriented Analysis, but it roughly follows the well-worn path of Domain-Driven Design. Where we said “a new user”, everybody on the team should agree on what a “user” is, and there should be a concept (i.e. class) in the software domain that encapsulates (that word again!) this shared understanding and models it in executable form. Where we said the user “puts a product in their cart”, we all understand that products and carts are things both in the problem and in the software, and that a user can do a thing called “putting” which involves products, and carts, in a particular way.

If it’s not clear what all those things are or how they go together, we may wish to roleplay the objects in the program (which, because the program is a simulation of the things in the problem, means roleplaying those things). Someone is a user, someone is a product, someone is a cart. What does it mean for the user to add a product to their cart? What is “their” cart? How do these things communicate, and how are they changed by the experience?

We’re starting to peer inside the black box of the “system”, so in a future post we’ll take a proper look at Object-Oriented Design.

May 26, 2021

May 25, 2021

Make Good Grow Volunteering Platform

I was involved in designing concepts for the MGG volunteering platform; the project ranged from initial wireframes to full UI/UX prototyping for investors and stakeholders to review.

Make Good Grow was a complex platform built as a web app for multiple platforms. We worked hard on the user flow and on making all volunteering opportunities easily accessible. We wanted the design to have character and be full of colour and excitement.

The platform also featured a fully embedded event builder where clients could set up their volunteering opportunities. The event builder would walk you step by step through location, categories, skills required and much more. It was designed to be simple, flexible and powerful.

The web version of the site was designed to work with a WordPress theme. If you like this design, please share it, and if you’re in the market to book a freelance UI designer, don’t hesitate to contact me.


May 23, 2021

Kangen water is a trademark for machine-electrolysed water, owned by Enagic. Similar water is also variously known as Electrolyzed Reduced Alkaline Water, Alkaline Water, Structured Water, or Hexagonal Water. It’s popular among new-age influencers and their followers, because it raises the vibrational frequency of their chakras and kundalinis, thereby energising the Qi so that Gemini shines in their quantum auras, or something. Of course, none of these are verifiable – because they’re all fictional.

Enagic itself is careful not to over-promise; their website says that their water can be used for preparing food, making coffee, watering plants, washing your pets, face, hair – who knew?! It rather confusingly tells you “Take your medicine with this water” (Archived web page), while their FAQ tells you

Kangen Water® is intended for everyday drinking and cooking rather than for drinking with medication. The Japanese Ministry of Health, Labor and Welfare gives the following directions: Do not take medication with machine-produced water. (Archived page)

Alkaline water’s health benefits

As for health benefits, Enagic simply claims that Kangen water is

perfect for drinking and healthy cooking. This electrolytically-reduced, hydrogen-rich water works to restore your body to a more alkaline state, which is optimal for good health.

The mechanism by which it makes your body more alkaline is not explained, nor why a more alkaline state should be “optimal for good health”. In its article Is alkaline water a miracle cure – or BS? The science is in, The Guardian cites Dr Tanis Fenton, an adjunct professor at the University of Calgary and an evidence analyst for Dietitians of Canada:

Fenton stresses, you simply can’t change the pH of your body by drinking alkaline water. “Your body regulates its [blood] pH in a very narrow range because all our enzymes are designed to work at pH 7.4. If our pH varied too much we wouldn’t survive.”

Masaru Emoto and water memory

The swamps of social media are full of people selling magic water to each other, making all sorts of outlandish unscientific claims, such as water has “memory”:

water holds memory

This hokum was popularised by Masaru Emoto, a graduate in International Relations who became a “Doctor” of Alternative Medicine at the Open International University for Alternative Medicine in India, a diploma mill which targeted quacks to sell its degrees and was later shut down.

(The idea that water holds memory is often cited by fans of homeopathy. They agree: there is none of the actual substance remaining in this water because it’s been diluted out of existence, but it does its magical healing because the water *remembers*. Cosmic, maaan.)

Mr Emoto’s results have never been reproduced; his methodology was unscientific, sloppy and subjective. Mr Emoto was invited to participate in the James Randi Educational Foundation’s Million Dollar Challenge, in which pseudo-scientists are invited to replicate their results under properly controlled scientific conditions and get paid $1 million if they succeed. For some inexplicable reason, he didn’t take up the challenge.

Hexagonal water hydrates better

It’s also claimed that this magical water forms hexagonal “clusters” that somehow make it more hydrating because it’s more readily absorbed by cells. This is nonsense: although water clusters have been observed experimentally, they have a very short lifetime: the hydrogen bonds are continually breaking and reforming at timescales shorter than 200 femtoseconds.

In any case, water molecules enter cells through a structure called an aquaporin, in single file. If they were clustered, they would be too big to enter the cell.

Electrolyzed Reduced Alkaline Water and science

Let’s look at what real scientists say. From Systematic review of the association between dietary acid load, alkaline water and cancer:

Despite the promotion of the alkaline diet and alkaline water by the media and salespeople, there is almost no actual research to either support or disprove these ideas. This systematic review of the literature revealed a lack of evidence for or against diet acid load and/or alkaline water for the initiation or treatment of cancer. Promotion of alkaline diet and alkaline water to the public for cancer prevention or treatment is not justified.

In their paper Physico-Chemical, Biological and Therapeutic Characteristics of Electrolyzed Reduced Alkaline Water (ERAW), Marc Henry and Jacques Chambron of the University of Strasbourg wrote

It was demonstrated that degradation of the electrodes during functioning of the device releases very reactive nanoparticles of platinum, the toxicity of which has not yet been clearly proven. This report recommends alerting health authorities of the uncontrolled availability of these devices used as health products, but which generate drug substances and should therefore be sold according to regulatory requirements.

In short: machine-produced magic water has no scientifically demonstrable health benefits, and may actually be harmful.

Living structured water?

So why do people make such nonsensical claims about magic water? Some do it for profit, of course. Enagic sells via a multi-level marketing scheme, similar to Amway but for Spiritual People, Lightworkers, Starseeds and Truth-Seakers. (Typo intentional; new-age bliss ninnies love to misspell “sister” as “SeaStar”, for example, because the Cosmos reveals itself through weak puns, and —of course— the Cosmos only speaks English. You sea?)

Doubtless many customers and sellers actually genuinely believe in magic water. It would be tempting to think of this as just some harmless fad for airheads who enjoy sunning their perineums (it was Metaphysical Meagan’s Instagram that first introduced me to magic water machines; of course, she has a “water business”).

But it’s not harmless. The same people who promote magic water also bang on about “sacred earth”, claim that Trump won the 2020 election and that Covid is a “plandemic”, and show a disturbing tendency towards far right politics. The excellent article The New Age Racket and the Left (from 2004 but still worth reading in full) sums it up brilliantly:

At best, then, New Age is a lucrative side venture of neoliberalism, lining the pockets of those crafty enough to package spiritual fulfillment as a marketable product while leaving the spiritually hungry as unsated as ever. At worst, though, it is the expression of something altogether more sinister. Rootedness in the earth, a return to pure and authentic folkways, the embrace of irrationalism, the conviction that there is an authentic way of being beyond politics, the uncritical substitution of group- identification for self-knowledge, are all of them basic features of right-wing ideology…

Many New Agers seem to feel not just secure in but altogether self-righteous about the benevolence of their world-view, pointing to the fact, for example, that it ‘celebrates’ the native cultures that global capitalism would plow over. To this one might respond, first of all, that celebration of native cultures is itself big business. Starbucks does it. So, in its rhetoric, does the Southeast Asian sex-tourism industry. Second, the simple fact that New Age is by its own lights multicultural and syncretistic is by no means a guarantee that it is safe from the accusation of being, at best, permissive of, and, at worst, itself an expression of, right-wing ideology. The Nazis, to return to a tried and true example, were no less obsessed with Indian spirituality than was George Harrison.

Conclusion

Kangen water and its non-branded magical siblings are useless nonsense that wastes resources, as it requires electricity. Enagic gives contradictory advice about whether it’s safe to take (real) medicines with it. Water machines might make the source water worse, because the device releases very reactive nanoparticles of platinum. Most people who extol its virtues want your money, and they might also be anti-scientific quasi-fascists.

May 21, 2021

Reading List 277 by Bruce Lawson (@brucel)

May 20, 2021


TL;DR: I’ve forked the splendid but dated Tota11y accessibility visualisation toolkit, added some extra stuff and corrected a bug, and my employer has let me release it for Global Accessibility Awareness Day.

screenshot of Tota11y on this blog, showing heading structure

A while ago, my very nice client (and now employer), Babylon Health, asked me to help them with their accessibility testing. One plan of attack is an automated scan integrated with the CI system to catch errors during development (more about this when it’s finished). But for small content changes made by marketing folks, this isn’t appropriate.

We tried lots of things like Wave, which are great but rather overwhelming for non-technical people because they tend to cover the page being analysed with arcane symbols, many of which are beyond a CMS content editor’s control anyway. There are lots of excellent accessibility checks in Microsoft Edge devtools, but to a non-technical user, this is how devtools look:

an incredibly elaborate 1980s-style control panel for a nuclear power station

Then I remembered something I’d used a while ago to demo heading structures, a tool called Tota11y from Khan Academy, which is MIT licensed. Note, this is not designed to check everything. Tota11y is a simple tool to visualise the most widespread web accessibility errors in a way that isn’t overwhelmingly techy. It aims to give content authors and editors insights into things they can control. It’s not a cure-all.

There were a few things I wanted to change, specifically for Babylon’s web sites. I wanted the contrast analyser to ignore content that was visually hidden using the common clip pattern, and to correct a bug whereby it didn’t calculate contrast properly on large text and reported an error where there wasn’t one. False positives encourage people to ignore the output, and this erodes trust in tools. The fix uses code from Firefox devtools; thank you to the Mozilla people who helped me find it. There are loads of other small changes.
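
For the curious, the rule the bug tripped over is easy to state: WCAG AA asks for a contrast ratio of 4.5:1 for normal text but only 3:1 for large text (at least 24 CSS pixels, or around 18.66px if bold). A rough sketch of the check, not Tota11y’s actual code:

// Relative luminance of an sRGB colour, per the WCAG definition.
function luminance([r, g, b]) {
  const channel = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Large text (>= 24px, or >= 18.66px and bold) only needs 3:1, not 4.5:1.
function passesAA(fg, bg, fontSizePx, bold) {
  const isLarge = fontSizePx >= 24 || (bold && fontSizePx >= 18.66);
  return contrastRatio(fg, bg) >= (isLarge ? 3 : 4.5);
}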

Khan Academy seemed to have abandoned the tool, so I forked it. Here it is, if you want to try it for Global Accessibility Awareness Day. Drag the attractive link to your bookmarks bar, then activate it and inspect your page. The code is on GitHub–don’t laugh at my crap JavaScript. It was also an “interesting” experience learning about npm, LESS, handlebars and all the stuff I’d managed to avoid so far.

Feel free to use it if it helps you. Pull requests will be gratefully received (as long as they don’t unnecessarily rewrite it in React and Tailwind), and I’ll be making a few enhancements too. Thanks to Khan Academy for releasing the initial project, to my colleagues for testing it, to Jack Roles for making it look pretty, and to Babylon for letting me release the work they were nice enough to pay me for.

Update: Version 1.1 released

17 June 2021: I’ve made some tweaks to the Tota11y UI. A new naming convention replaces “adjective + animal I’ve never eaten”: versions 1 and up are “adjective + musical instrument I’ve never tried to play”. V1.1 is “Rusty Trombone”.

May 19, 2021

May 15, 2021

Graham Lee Uses This by Graham Lee

I’ve never been famous enough in tech circles to warrant a post on uses this, but the joy of running your own blog is that you get to indulge any narcissistic tendencies with no filter. So here we go!

The current setup

photo of the desktop setup

This is the desktop setup. The idea behind this is that it’s a semi-permanent configuration so I can get really comfortable, using the best components I have access to, to provide a setup I’ll enjoy using. The main features are a Herman Miller Aeron chair, which I got at a steep discount by going for a second-generation fire sale chair, and an M1 Mac Mini coupled to a 24″ Samsung curved monitor, a Matias Tactile Pro 4 keyboard and a Logitech G502 mouse. There’s a Sandberg USB camera (which isn’t great, but works well enough if I use it via OBS’s virtual camera) and a Blue Yeti mic too. The headphones are Marshall Major III, and the Philips FX-10 is used as a Bluetooth stereo.

I do all my streaming (both Dos Amigans and [objc retain];) from this desk too, so all the other hardware you see is related to that. There are two Intel NUC devices (though one is mounted behind one of the monitors), one running FreeBSD (for GNUstep) and one Windows 10 (with WinUAE/Amiga Forever). The Ducky Shine 6 keyboard and Glorious Model O mouse are used to drive whichever box I’m streaming from, which connects to the other Samsung monitor via an AVerMedia HDMI capture device.

photo of the laptop setup

The laptop setup is on a variable-height desk (Ikea SKARSTA), and this laptop is actually provided by my employer. It’s a 12″ MacBook Pro (Intel). The idea is that it should be possible to work here, and in fact at the moment I spend most of my work time at it; but it should also be very easy to grab the laptop and take it away. To that end, the stuff plugged into the USB hub is mostly charge cables, and the peripheral hardware is mostly wireless: Apple Magic Mouse and Keyboard, and a Corsair headset. A desk-mounted stand and a music-style stand hold the tablets I need for developing a cross-platform app at work.

And it happens that there’s an Amiga CD32 with its own mouse, keyboard, and joypad alongside: that mostly gets used for casual gaming.

The general principle

Believe it or not, the pattern I’m trying to conform to here is “one desktop, one laptop”. All those streaming and gaming things are appliances for specific tasks, they aren’t a part of my regular computering setup. I’ve been lucky to be able to keep to the “one desktop, one laptop” pattern since around 2004, usually using a combination of personal and work-supplied equipment, or purchased and handed-down. For example, the 2004-2006 setup was a “rescued from the trash” PowerMac 9600 and a handed-down G3 Wallstreet; both very old computers at that time, but readily affordable to a fresh graduate on an academic support staff salary.

The concept is that the desktop setup should be the one that is most immediate and comfortable, that if I need to spend a few hours computering I will be able to get on very well with. The laptop setup should make it possible to work, and I should be able to easily pick it up and take it with me when I need to do so.

For a long time, this meant something like “I can put my current Xcode project and a conference presentation on a USB stick, copy it to the laptop, then go to a conference to deliver my talk and hack on a project in the hotel room”. These days, ubiquitous wi-fi and cloud sync products remove some of the friction, and I can usually rely on my projects being available on the laptop at time of use (or being a small number of steps away).

I’ve never been a single-platform person. Sometimes “my desktop” is a Linux PC, sometimes a Mac, it’s even been a NeXT Turbo Station and a Sun Ultra workstation before. Sometimes “my laptop” is a Linux PC, sometimes a Mac, the most outré was that G3 which ran OpenDarwin for a time. The biggest ramification of that is that I’ve never got particularly deep into configuring my tools. It’s better for me to be able to find my way around a new vanilla system than it is to have a deep custom configuration that I understand really well but is difficult to port.

When Mac OS X had the csh shell as default, I used that. Then with 10.3 I switched to bash. Then with 10.15 I switched to zsh. My dotfiles repo has a git config, and a little .emacs that enables some org-mode plugins. But that’s it.

Since Stuart Langridge and I released Which Three Birdies, it has taken the web by storm, and we’ve been inundated with requests from prestigious institutions to give lectures on how we accomplished this paradigm shift in non-arbitrary co-ordinates-to-mnemonic mapping. Unfortunately, the global pandemic and the terms of Stuart’s parole prevent us from travelling, so we’re writing it here instead.

The name

A significant advance on its predecessors was achievable because Bruce has a proper degree (English Language and Literature with Drama) and has trained as an English Language teacher. “Which” is an interrogative pronoun, used in questions about alternatives. This might sound pedantic, but if a service can’t make the right choice from a very limited set of interrogative pronouns, how can you trust it to choose the correct three mnemonics? Establishing trust is vital when launching a tool that is destined to become an essential part of the very infrastructure of cartography.

The APIs

The mechanics of how the service locates and maps to three birds is extensively documented. Further documentation has been provided at the request of the Nobel Prize committee and will be published in due course.

Accessibility

The Web is for everyone and anyone who makes sites that are inaccessible is, quite simply, not a proper developer and quite possibly a criminal or even a fascist. Therefore, W3B offers users the chance to hear the calls of the most prevalent birds in their location, and also provides a transcript of those calls.

screenshot of transcripts

Given that there are 18,043 species of birds worldwide, transcribing each one by hand was impractical, so we decided to utilise –nay, leverage– the power of Machine Learning.

Birdsong to Text via Machine Learning

Stuart is, by choice, a Python programmer. Unfortunately, we learned that pythons eat birds and, out of a sense of solidarity with our feathered friends, we decided not to progress with such a barbaric language so we sought an alternative.

Stuart hit upon the answer, due to a fortuitous coincidence. He has a prison tattoo of a puffin on his left buttock (don’t ask) and we remembered that the trendy “R” language is named after the call of a puffin (usually transcribed as “arr-arr-arr”).

Stuart set about learning R, but then we hit another snag: we couldn’t use the actual sounds of birds to train our AI, for copyright reasons.

Luckily, Bruce is also a musician with an extensive collection of instruments, including the actual kazoo that John Cale used to record the weird bits of Venus In Furs. Here it is in its Sotheby’s presentation case:

a kazoo in a presentation box

Whereas the kazoo is ideal for duplicating the mellifluous squawk of a corncrake, it is less suitable for mimicking the euphonious peep of an osprey. Stuart listens to AC/DC and therefore has no musical sense at all, so he wasn’t given an instrument. Instead he took on the task of inhaling helium out of children’s balloons in order to replicate the higher registers of birdsong. Here’s a photo of the flame-haired Adonis preparing to imitate the melodious lament of the screech owl:

Pennywise the clown from IT, with a red balloon

After a few evenings re-creating a representative sample of birdsong, we had enough avian phonemes in the bag to run a rigorous programme of principal components analysis, cluster analysis and (of course) multilinear subspace learning algorithms to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors.

All known birds can now reliably be transcribed with 94% accuracy, except for the Crested Anatolian Otter-Catcher. We suspect that the reason for this is the confusion introduced by the Turkish vowel harmony and final-obstruent devoicing. In practice, however, this exception doesn’t affect the utility of the system, because the Crested Anatolian Otter-Catcher is now very rare due to its being extensively hunted in the 19th century. (Fun fact, the bird was once so famous and prevalent that the whole region was known as the Otter-munch Empire.)

Hopefully, this in-depth breakdown of how Which Three Birdies? works will encourage other authors of revolutionary new utilities to open-source their work as we have done for the betterment of all humanity. We’d like to thank the nice people at 51Degrees for commissioning Which Three Birdies?, giving us free rein, and paying us for it. The nutters.

May 12, 2021

Saturday 8 May was the 18th birthday of the famous CSS Zen Garden. To quote the Web Design Museum

The project offered a simple HTML template to be downloaded, the graphic design of which could be customized by any web designer, but only with the help of cascading styles and one’s own pictures. The goal of the project was to demonstrate the various possibilities of CSS in creating visual web design. The CSS Zen Garden gallery exhibited hundreds of examples of diverse web design, all based on a single template containing the same HTML code.

I too designed a theme, which never made it to creator Dave Shea’s official list, but did the rounds on blogs in its day. For a long time, it languished rather broken, because Dave had rejigged the HTML to use HTML5 elements and changed the names of the classes and IDs that I had used as selectors for styling. To mark the occasion, I spent a while reconstructing it. You can enjoy its glory at Geocities 1996 (seizure warning!). (You might want to use the Vivaldi browser, which allows you to turn off animated GIFs.)

After I tweeted the link on Saturday to celebrate CSS Zen Garden’s birthday, a number of people noted that the site isn’t responsive, so looks broken on mobile. This is because there weren’t any mobile devices when I wrote it in 2003! Sure, I could rewrite the CSS and use Grid and all the modern cool stuff, but that wasn’t my intent. Apart from class and ID names, the only thing I changed was the mechanism for hiding text that’s replaced by images. It’s now color: transparent, as opposed to floating h3 span off screen, as Dave got rid of the spans. The * html hacks for IE6 are still there. (If you don’t know what that means, lucky you. It was my preferred way to target IE6, which believed there was an unnamed selector above the html element in the tree, so * html #whatever would select that ID only in IE6. I liked it because it was valid CSS, albeit nonsensical.)

It’s there as a working artefact of web development in the early 2000s, in the same way as its “exuberant” design is a fond homage to the early web aesthetic that I first discovered in 1996. And what better accolade can there be than this:

Dave’s project opened the eyes of many designers and sent a message across our then-small community of Web Standards wonks that CSS was ready for prime-time. I’m told by many that the Zen Garden is still used by educators, 18 years later. Thank you, Dave Shea!

May 11, 2021

May 07, 2021

Reading List 276 by Bruce Lawson (@brucel)


May 06, 2021

I’m signed up to the e-mail newsletter of a local walkway.

Exciting, I know. This monthly newsletter boasts a circulation of roughly 250 keen readers and it comes full of pictures of local flowers, book reviews, and that sort of thing.

This newsletter comes to me as a PDF from the personal email account of the group secretary. My email address is tucked away in the BCC field. As it happens, Gmail lets you send an email to up to 500 people at once.

The point of this is that there’s no obsession over things like…

  • what happens when I have ten million subscribers?
  • why hasn’t jsrn opened one of my emails in a while?
  • am I using the right newsletter provider? Is it one of the cool ones?

I can only imagine the list of subscribers is kept in a spreadsheet somewhere on the secretary’s computer.

I hope he has a backup.
