Last updated: November 29, 2020 09:22 PM (All times are UTC.)

November 27, 2020

November 23, 2020

Reading List 267 by Bruce Lawson (@brucel)

November 22, 2020

We built a one-hour Zoom-based Escape Game experience that allowed large groups of players (50+) to work in small teams to solve puzzles related to the Ballantine House and its history. The game has a single host (playing the character of a maid or butler) who manages the entire experience.

November 20, 2020

I was thinking about the different ways I work with museums (free email support, consultancy, group workshops, and bespoke experiences) and wanted to compare them by taking Museum Tour Guides as an example.

November 19, 2020

(This is really part 2 of a blog post about the ‘different ways I work with museums’. You can read this on its own or start with part 1 first.) For this project Outside Studios found me (I think via the free tutorials). They were doing a large redevelopment over the entire Workhouse site and […]

November 17, 2020

The Silent Network by Graham Lee

People say that the internet, or maybe specifically the web, holds the world’s information and makes it accessible. Maybe there was a time when that was true. But currently it’s not: probably not because the information is missing, but because the search engines think they know better than you what you want.

I recently had cause to look up an event that I know happened: at an early point in the iPod’s development, Steve Jobs disparaged MP3 players using NAND Flash storage. What were his exact words?

Jobs also disparaged the Adobe (formerly Macromedia) Flash Player media platform, in a widely-discussed blog post on his company website many years later. I knew that this would be a closely-connected story, so I crafted my search terms to exclude it.

Steve Jobs NAND Flash iPod. Steve Jobs Flash MP3 player. Steve Jobs NAND Flash -Adobe. Did any of these work? No, on multiple search engines. Having to try multiple search engines and getting the wrong results on all of them is 1990s-era web experience. All of these search terms return lists of “Thoughts on Flash” (the Adobe player), reports on that article, later news about Flash Player linking subsequent outcomes to that article, hot takes on why Jobs was wrong in that article, and so on. None of them show me what I asked for.

Eventually I decided to search the archives of one particular blog, which didn’t make the search engines prefer relevant results but did reduce the quantity of irrelevant ones. Finally, on the second page of articles from Daring Fireball about “Steve Jobs NAND flash storage iPod”, I found Flash Gordon. I still don’t have the quote; I have an article about a later development, citing a now-dead link to a story that was itself interpreting the quote.

That’s the closest modern web searching tools would let me get.

November 13, 2020

The new M1 chip in the new Macs has 8-16GB of DRAM on the package, just like many mobile phones or single-board computers, but unlike most desktop, laptop or workstation computers (there are exceptions). In the first tranche of Macs using the chip, that’s all the addressable RAM they have (i.e. ignoring caches), just like many mobile phones or single-board computers. But what happens when Apple moves the Apple Silicon chips up the scale, to computers like the iMac or Mac Pro?

It’s possible that these models would have a few GB of memory on-package and access to memory modules connected via a conventional controller, for example DDR4 RAM. They almost certainly would if you could deploy multiple M1 (or successor) packages on a single system. Such a Mac would be a non-uniform memory access architecture (NUMA), which (depending on how it’s configured) has implications for how software can be designed to best make use of the memory.

NUMA computing is of course not new. If you have a computer with a CPU and a discrete graphics processor, you have a NUMA computer: the GPU has access to RAM that the CPU doesn’t, and vice versa. Running GPU code involves copying data from CPU-memory to GPU-memory, doing GPU stuff, then copying the result from GPU-memory to CPU-memory.

A hypothetical NUMA-because-Apple-Silicon Mac would not be like that. The GPU shares access to the integrated RAM with the CPU, a little like an Amiga. The situation on Amiga was that there was “chip RAM” (which both the CPU and graphics and other peripheral chips could access), and “fast RAM” (only available to the CPU). The fast RAM was faster because the CPU didn’t have to wait for the coprocessors to use it, whereas they had to take turns accessing the chip RAM. Nonetheless, the CPU had access to all the RAM, and programmers had to tell `AllocMem` whether they wanted to use chip RAM, fast RAM, or didn’t care.

A NUMA Mac would not be like that, either. It would share the property that there’s a subset of the RAM available for sharing with the GPU, but this memory would be faster than the off-chip memory because of the closer integration and the lack of a (relatively) long communication bus. Apple has described the integrated RAM as “high bandwidth”, which probably means multiple access channels.

A better and more recent analogy for this setup is Intel’s discontinued supercomputer chip, Knights Landing (marketed as Xeon Phi). Like the M1, this chip has 16GB of on-package high bandwidth memory. Like my hypothetical Mac Pro, it can also access external memory modules. Unlike the M1, it has 64 or 72 identical cores rather than 4 big and 4 little cores.

There are three ways to configure a Xeon Phi computer. You can use no external memory at all, in which case the CPU uses only its on-package RAM. You can use a cache mode, where the software only “sees” the external memory and the high-bandwidth RAM is used as a cache. Or you can go full NUMA, where programmers have to explicitly request memory in the high-bandwidth region to access it, like with the Amiga allocator.

People rarely go full NUMA. It’s hard to work out what split of allocations between the high-bandwidth and regular RAM yields best performance, so people tend to just run with cached mode and hope that’s faster than not having any on-package memory at all.
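
For a flavour of what going full NUMA involves: on a Linux Xeon Phi system in that configuration, the high-bandwidth MCDRAM typically appears as its own NUMA node, and you can steer a whole process’s allocations with the standard numactl tool. A sketch, assuming the MCDRAM shows up as node 1 (it varies by machine), with ./my_program standing in for your own binary:

numactl --hardware                   # list NUMA nodes; MCDRAM usually appears as node 1
numactl --membind=1 ./my_program     # allocate only from MCDRAM, fail rather than spill to DDR4
numactl --preferred=1 ./my_program   # prefer MCDRAM, fall back to DDR4 when it runs out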

And that makes me think that a Mac would either not go full NUMA, or would not have public API for it. Maybe Apple would let the kernel and some OS processes have exclusive access to the on-package RAM, but even that seems overly complex (particularly where you have more than one M1 in a computer, so you need to specify core affinity for your memory allocations in addition to memory type). My guess is that an early workstation Mac with 16GB of M1 RAM and 64GB of DDR4 RAM would look like it has 64GB of RAM, with the on-package memory used for the GPU and as cache. NUMA APIs, if they come at all, would come later.

November 12, 2020

November 10, 2020

November 03, 2020

In case you ever need it. If you’re searching for something like “deleted login shell Mac can’t open terminal”, this is the post for you.

I just deleted my login shell (because it was installed with homebrew, and I removed homebrew without remembering that I would lose my shell). That stopped me from opening a Terminal window, because it would immediately bomb out as it was unable to open the shell.

Unable to open a normal Terminal window, anyway. In the Shell menu, the “New Command…” item let me run /bin/bash -l, from which I got to a login-like bash shell. Then I could run this command:

chsh -s /bin/zsh

Enter my password, and then I have a normal shell again.

(So I could then install MacPorts, and then change my shell to /opt/local/bin/bash)
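
In case it’s useful, here’s roughly what those follow-up steps look like (a sketch assuming MacPorts is already installed; note that chsh will only accept a shell that’s listed in /etc/shells):

sudo port install bash
sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
chsh -s /opt/local/bin/bash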

November 02, 2020

Thinking back over the last couple of years, I’ve had to know quite a bit about a few different topics to be able to write good software. Those topics:

  • Epidemiology
  • Architecture
  • Plant Sciences
  • Histology
  • Education

Not much knowledge in each field, though definitely expert-level knowledge: enough to have conversations as a peer with experts in those fields, to be able to follow the jargon, and to be able to make reasoned assessments of the experts’ suggestions in software designed for use by those experts. And, where I’ve wanted to propose alternate suggestions, enough expert-level knowledge to identify and justify a different design.

Going back over the rest of my career in software:

  • Pensions, investments, and saving
  • Mobile Telecommunications
  • Terry Pratchett’s Discworld
  • Macromolecular crystallography
  • Synchrotron accelerators
  • Home automation
  • Yoga
  • Soda drinks dispensers
  • Banks
  • Quite a bit of physics

In fact I’d estimate that I’ve spent less than 40% of my “professional career” in jobs where knowing software or even computers in general was the whole thing.

Working in software is about understanding a system to the point where you can use a computer to make a meaningful and beneficial contribution to that system. While the systems thinking community have great tools for mapping out the behaviour of systems, they are only really good for making initial models. In order to get good at it, we have to actually understand the system on its own terms, with the same ideas and biases that the people who interact regularly with the system use.

But of course, because we’re hired for our computering skills, we get to experience and work with all of these different systems. It’s perhaps one of the best domains in which to be a polymath. To navigate it effectively, we need to accept that we are not the experts. We’re visitors, who get to explore other people’s worlds.

We should take them up on that offer, though, if we’re going to be effective. If we maintain the distinction between “technical” and “non-technical” people, or between “engineering” and “the business”, then we deny ourselves the ability to learn about the problem we’re trying to solve, and to make a good solution.

October 31, 2020

My dad’s got a Brother DCP-7055W printer/scanner, and he wanted to be able to set it up as a network scanner to his Ubuntu machine. This was more fiddly than it should be, and involved a bunch of annoying terminal work, so I’m documenting it here so I don’t lose track of how to do it should I have to do it again. It would be nice if Brother made this easier, but I suppose that it working at all under Ubuntu is an improvement on nothing.

Anyway. First, go off to the Brother website and download the scanner software. At time of writing, https://www.brother.co.uk/support/dcp7055/downloads has the software, but if that’s not there when you read this, search the Brother site for DCP-7055 and choose Downloads, then Linux and Linux (deb), and get the Driver Installer Tool. That’ll get you a shell script; run it. This should give you two new commands in the Terminal: brsaneconfig4 and brscan-skey.

Next, teach the computer about the scanner. This is what brsaneconfig4 is for, and is all done in the Terminal. You need to know the scanner’s IP address; you can find this out from the scanner itself, or you can use avahi-resolve -v -a -r to search your network for it. This will dump out a whole load of stuff, some of which should look like this:

=  wlan0 IPv4 Brother DCP-7055W                             UNIX Printer         local
   hostname = [BRN008092CCEE10.local]
   address = [192.168.1.21]
   port = [515]
   txt = ["TBCP=F" "Transparent=T" "Binary=T" "PaperCustom=T" "Duplex=F" "Copies=T" "Color=F" "usb_MDL=DCP-7055W" "usb_MFG=Brother" "priority=75" "adminurl=http://BRN008092CCEE10.local./" "product=(Brother DCP-7055W)" "ty=Brother DCP-7055W" "rp=duerqxesz5090" "pdl=application/vnd.brother-hbp" "qtotal=1" "txtvers=1"]

That’s your Brother scanner. The thing you want from that is address, which in this case is 192.168.1.21.

Run brsaneconfig4 -a name="My7055WScanner" model="DCP-7055" ip=192.168.1.21. This should teach the computer about the scanner. You can test this with brsaneconfig4 -p which will ping the scanner, and brsaneconfig4 -q which will list all the scanner types it knows about and then list your added scanner at the end under Devices on network. (If your Brother scanner isn’t a DCP-7055W, you can find the other codenames for types it knows about with brsaneconfig4 -q and then use one of those as the model with brsaneconfig4 -a.)
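
Putting that together with the example address from above, the whole register-and-check sequence looks like this:

brsaneconfig4 -a name="My7055WScanner" model="DCP-7055" ip=192.168.1.21
brsaneconfig4 -p    # ping the configured scanner to check it responds
brsaneconfig4 -q    # list supported models and your added network devices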

You only need to add the scanner once, but you also need to have brscan-skey running always, because that’s what listens for network scan requests from the scanner itself. The easiest way to do this is to run it as a Startup Application; open Startup Applications from your launcher by searching for it, add a new application which runs the command brscan-skey, and restart the machine so that it’s running.

If you don’t have the GIMP1 installed, you’ll need to install it.

On the scanner, you should now be able to press the Scan button and choose Scan to PC and then Scan Image, and it should work. What will happen is that your machine will pop up the GIMP with the image, which you will then need to export to a format of your choice.

This is quite annoying if you need to scan more than one thing, though, so there’s an optional extra step: change things so that it doesn’t pop up the GIMP and instead just saves the scanned photo, which is much nicer. To do this, first install imagemagick, and then edit the file /opt/brother/scanner/brscan-skey/script/scantoimage-0.2.4-1.sh with sudo. Change the last line from

echo gimp -n $output_file 2>/dev/null \;rm -f $output_file | sh &

to

echo convert $output_file $output_file.jpg 2>/dev/null \;rm -f $output_file | sh &

Now, when you hit the Scan button on the scanner, it will quietly create a file named something like brscan.Hd83Kd.ppm.jpg in the brscan folder in your home folder and not show anything on screen, and this means that it’s a lot easier to scan a bunch of photos one after the other.

  1. I hate this name. It makes us look like sniggering schoolboys. GNU Imp, maybe, or the new Glimpse fork, but the upstream developers don’t want to change it.

Rake Is Awesome

I sat down to start compiling these notes, and of course got sidetracked putting together the Rakefile which can be found in the root of this repo.

Hacktoberfest

I was all gung-ho about getting started with Hacktoberfest this year, but I’m not sure I can muster the energy. I absolutely do not need any more t-shirts, and the negative energy around the event’s growing spam problem just turns me off participating entirely.

Regardless, I’m enjoying stewarding How Old Is It? the only project I’ve had that’s gotten more than 10 stars on GitHub. I hope I can at least help some other non-spammy participants get their t-shirts.

What Even Is Typing?

Trying to get better at navigating around my code editor without using a mouse. This is motivated by the audiobook of The Pragmatic Programmer that I’m listening to, in which they discuss how efficiency can be improved by reducing the friction between your brain and your computer.

The issue isn’t that taking your hand off the keyboard, placing it on the mouse, clicking some stuff and then moving your hand back to the keyboard takes too much time; realistically the extra time added by mouse usage is going to be dwarfed by a the time spent in meetings or making seven or eight cups of tea.

The issue is that it’s a distraction that takes your mind off what you’re writing.

Rails Test Assistant

There was some functionality in RubyMine that I missed, and wanted to replicate inside VSCode. Rather than use one of the existing plugins that more than adequately solve the problem, I decided to write my own. Because, y’know. Of course I would.

Hello, Rails Test Assistant.

Things I Read

October 26, 2020

October 21, 2020



Digital Here to Stay 

At 1AM on Friday, October 9th, I got a group text from a playwright friend: Broadway was once again pushing back its reopening date to June 1st, 2021. 

I wish I could say it came as a surprise. Only a couple weeks earlier the Metropolitan Opera cancelled its 20/21 season, and anyone who’s remotely considered event logistics in the past 6-7 months could rattle off a hundred hurdles involved in reopening any kind of venue. That list grows exponentially when the venue houses a minimum of 500 people in a city where you’re lucky to get 6 inches let alone 6 feet. Rather than shocking the community, this announcement reiterated once again that the performing arts industry will be one of the last to return. So it’s time for organizations to ensure their digital solutions are going to service long-term needs, and seize underlying opportunities.

One Step at a Time

Performing arts’ biggest strength has been the live in live theater. It’s a double-edged sword: creating economic barriers and gatekeeping for many, but also generating a unique high for those that are able to experience it even on a small scale such as school recitals, or indie theater. At the beginning of the pandemic, artists at every level of the industry rapidly pivoted to produce content without its biggest strength. 

Zoom became prevalent not just for teleconferencing, but for digital events and classes. One of the most common requests I’ve gotten in the past few months is “can we embed a Zoom link in our order confirmation?” (the answer is “yes”). But while artists explored radio dramas, wrote plays specifically for digital forums, held digital festivals, and tried to make do, the focus was always on what it would look like to return home. As often as I’ve been asked about embedding links in confirmation emails, many organizations have opted to hold off on radical changes because of the ever shifting landscape and the knowledge that the current state of affairs is temporary. 

Last Friday’s announcement makes it clear that “temporary” is going to last a lot longer than we hoped. Even regions that are beginning to reopen for outdoor or distanced performances need to find a way to cater to patrons that cannot safely return to venues, and contend with the impending arrival of winter. This is no longer about creating a safety checklist, and measuring the width of seats. Top to bottom, organizations must consider digital viability in all programming for the next year. We can no longer say “hold, please” to infrastructure changes that will support the performing arts during the rest of this pandemic, particularly when it has the potential to increase accessibility across the board.

Plan Ahead

Distance makes planning even more important than when leadership was able to collaborate in person. In a normal year, every production season is planned to suit a wide variety of audiences, tastes, and artistic messages. Once shows are selected, resources are allocated and accounted for. A formal delay in the return to venues means these conversations now need to include digital logistics. Content curation must consider which pieces will work best when performers must be distanced or in completely different locations. Think carefully before slating a piece that requires physical intimacy or confrontation, and consider smaller pieces over large ensemble productions.

No matter what content is selected, organizations should get in the habit of identifying opportunities for content creation. For many organizations, the Watch & Listen section was a repository for the passionate user, and good for SEO, but not a priority. Other organizations were only able to record content for archival purposes rather than public consumption. The early days of the pandemic quickly revealed this produced a gap between the potential of the digital space, and the available content. As we continue to oscillate between digital and physical spaces, and reach out to patrons who cannot safely attend performances, generating assets and high quality recordings will be a priority.

A 5-Star Hotel, not a Bates Motel

Digital content is a great way to continue to foster relationships with patrons while in-person interaction is limited or impossible, so organizations must ensure that the experience is a positive one. If you don’t have a natural home for videos already, start having conversations with hosting platforms to see which might be the right fit for you and your website. Consider whether you want to make the jump to OTT platforms, which allow users to access content on other devices rather than being tethered to a phone or computer. Whether you’re new or a seasoned digital veteran, keep an eye on analytics to identify pain points in the digital path. Ensure that the journey through your content is a curated and welcoming experience for patrons. Just as you would provide users with additional event information before expecting them to book, avoid abandoning users in a vast sea of videos with no context. Your event pages likely don’t bury the link to purchasing tickets; similarly, your video landing pages should make it easy for users to choose what they want to watch and navigate to other recommended content.

Nice to E-Meet You

While it is tempting to stick with known, and familiar faces during uncertain times, the pandemic has also raised a fear that the performing arts will become even more exclusive. Organizations must take advantage of the opportunity to diversify and expand their network of collaborators and audience members. With no additional travel or housing costs, organizations can now reach and collaborate with people that would have been inaccessible before. Use this to your advantage - increase the diversity of the artists you work with and collaborate with organizations around the world that are succeeding in producing theater that represents their audience. Revel in the fact you can now compete for audiences that are outside a one hour drive of your venue. Expand your community and make your art more accessible to everyone so that when we are able to return, you’ll have an even wider community. 

White Noise or Unique Contribution

The performing arts doesn’t have to beat Netflix at its own game; we need to stand apart. Before the pandemic, every million-dollar media company in entertainment was already entering the “streaming wars.” Ten years ago there was Netflix and YouTube; now there is a specialized streaming service for every channel and category of content clamoring for people’s cash and attention. It can be daunting for nonprofits to enter the ring without the massive production budgets available to cinema and television, unless you remember what makes live art precious.

A live event is a unique blend of elements that can be recorded, but never be replicated. Even shows with extended runs will never have the same performance twice whether it’s a stubborn wig, a backstage prank, or the crash of thunder outside. Live art is real. There’s no CGI, no second take, it is all happening before your eyes. Hollywood spends millions of dollars every year trying to replicate reality by using extended takes and marketing multi-class actors doing their own stunts. I’ve binged more Netflix than I care to admit during the past 7 months, I’ve cried and cheered at my local AMC, but theater brings an audience together down to its pulse. And I get the same buzz of nerves before a digital performance that I did putting on my makeup in a utility closet turned green room, because live art done right is lightning in a bottle. 

Nothing can be accomplished overnight. Everything I’ve mentioned is a long term commitment to the digital sphere, and there will be many trials before we reach tribulations. No matter how successful, none of this will replace live theater. It never could. Under the current timeline, Broadway will be shuttered for a total of 14 months, and smaller theaters are unlikely to lead a charge that Broadway won’t. An entire year, both creatively and financially, will be gone and many organizations with it. Broadway’s announcement sent only the most recent national wave of grief through the performing arts industry. Audiences are hurting over the loss of these shared experiences that made up their community. Hundreds of thousands of artists are yearning not just to perform, but to create and play without endangering ourselves. We miss creating with our friends and colleagues. We miss watching their performative joy, pain, and skill. But while we grieve, reality waits, and it is your responsibility to make sure that if you can survive, you do everything possible to thrive.

October 19, 2020

If programmers were just more disciplined, more professional, they’d write better software. All they need is a code of conduct telling them how to work like those of us who’ve worked it out.

The above statement is true, which is a good thing for those of us interested in improving the state of software and in helping our fellow professionals to improve their craft. However, it’s also very difficult and inefficient to apply, in addition to being entirely unnecessary. In the common parlance of our industry, “discipline doesn’t scale”.

Consider the trajectory of object lifecycle management in the Objective-C programming language, particularly the NeXT dialect. Between 1989 and 1995, the dominant way to deal with the lifecycle of objects was to use the +new and -free methods, which work much like malloc/free in C or new/delete in C++. Of course it’s possible to design a complex object graph using this ownership model, it just needs discipline, that’s all. Learn the heuristics that the experts use, and the techniques to ensure correctness, and get it correct.

But you know what’s better? Not having to get that right. So around 1994 people introduced new tools to do it an easier way: reference counting. With NeXTSTEP Mach Kit’s NXReference protocol and OpenStep’s NSObject, developers no longer need to know when everybody in an app is done with an object to destroy it. They can indicate when a reference is taken and when it’s relinquished, and the object itself will see when it’s no longer used and free itself. Learn the heuristics and techniques around auto releasing and unretained references, and get it correct.

But you know what’s better? Not having to get that right. So a couple of other tools were introduced, so close together that they were probably developed in parallel[*]: Objective-C 2.0 garbage collection (2006) and Automatic Reference Counting (2008). ARC “won” in popular adoption so let’s focus there: developers no longer need to know exactly when to retain, release, or autorelease objects. Instead of describing the edges of the relationships, they describe the meanings of the relationships and the compiler will automatically take care of ownership tracking. Learn the heuristics and techniques around weak references and the “weak self” dance, and get it correct.

[*] I’m ignoring here the significantly earlier integration of the Boehm conservative GC with Objective-C, because so did everybody else. That in itself is an important part of the technology adoption story.

But you know what’s better? You get the idea. You see similar things happen in other contexts: for example C++’s move from new/delete to smart pointers follows a similar trajectory over a similar time. The reliance on an entire programming community getting some difficult rules right, when faced with the alternative of using different technology on the same computer that follows the rules for you, is a tough sell.

It seems so simple: computers exist to automate repetitive information-processing tasks. Requiring programmers who have access to computers to recall and follow repetitive information processes is wasteful, when the computer can do that. So give those tasks to the computers.

And yet, for some people the problem with software isn’t a lack of automation but a lack of discipline. Software would be better if only people knew the rules, honoured them, and slowed themselves down so that instead of cutting corners they just chose to ignore important business milestones. Back in my day, everybody knew “no Markdown around town” and “don’t code in an IDE after Labour Day”, but now the kids do whatever they want. The motivations seem different, and I’d like to sort them out.

Let’s start with hazing. A lot of the software industry suffers from “I had to go through this, you should too”. Look at software engineering interviews, for example. I’m not sure whether anybody actually believes “I had to deal with carefully ensuring NUL-termination to avoid buffer overrun errors so you should too”, but I do occasionally still hear people telling less-experienced developers that they should learn C to learn more about how their computer works. Your computer is not a fast PDP-11, all you will learn is how the C virtual machine works.

Just as Real Men Don’t Eat Quiche, so real programmers don’t use Pascal. Real Programmers use FORTRAN. This motivation for sorting discipline from rabble is based on the idea that if it isn’t at least as hard as it was when I did this, it isn’t hard enough. And that means that the goalposts are movable, based on the orator’s experience.

This is often related to the term of their experience: you don’t need TypeScript to write good React Native code, just JavaScript and some discipline. You don’t need React Native to write good front-end code, just jQuery and some discipline. You don’t need jQuery…

But along with the term of experience goes the breadth. You see, the person who learned reference counting in 1995 and thinks that you can only really understand programming if you manually type out your own reference-changing events, presumably didn’t go on to use garbage collection in Java in 1996. The person who thinks you can only really write correct software if every case is accompanied by a unit test presumably didn’t learn Eiffel. The person who thinks that you can only really design systems if you use the Haskell type system may not have tried OCaml. And so on.

The conclusion is that for this variety of disciplinarian, the appropriate character and quantity of discipline is whatever they had to deal with at some specific point in their career. Probably a high point: after they’d got over the tricky bits and got productive, and after you kids came along and ruined everything.

Sometimes the reason for suggesting the disciplined approach is entomological in nature, as in the case of the eusocial insect the “performant” which, while not a real word, exists in greater quantities in older software than in newer software, apparently. The performant is capable of making software faster, or use less memory, or more concurrent, or less dependent on I/O: the specific characteristics of the performant depend heavily on context.

The performant is often not talked about in the same sentences as its usual companion species, the irrelevant. Yes, there may be opportunities to shave a few percent off the runtime of that algorithm by switching from the automatic tool to the manual, disciplined approach, but does that matter (yet, or at all)? There are software-construction domains where specific performance characteristics are desirable, indeed that’s true across a lot of software. But it’s typical to focus performance-enhancing techniques on the bits where they enhance performance that needs enhancing, not to adopt them across the whole system on the basis that it was better when everyone worked this way. You might save a few hundred cycles writing native software instead of using a VM for that UI method, but if it’s going to run after a network request completes over EDGE then trigger a 1/3s animation, nobody will notice the improvement.

Anyway, whatever the source, the problem with calls for discipline is that there’s no strong motivation to become more disciplined. I can use these tools, and my customer is this much satisfied, and my employer pays me this much. Or I can learn from you how I’m supposed to be doing it, which will slow me down, for…your satisfaction? So you know I’m doing it the way it’s supposed to be done? Or so that I can tell everyone else that they’re doing it wrong, too? Sounds like a great deal.

Therefore discipline doesn’t scale. Whenever you ask some people to slow down and think harder about what they’re doing, some fraction of them will. Some will wonder whether there’s some other way to get what you’re peddling, and may find it. Some more will not pay any attention. The dangerous ones are the ones who thought they were paying attention and yet still end up not doing the disciplined thing you asked for: they either torpedo your whole idea or turn it into not doing the thing (see OOP, Agile, Functional Programming). And still more people, by far the vast majority, just weren’t listening at all, and you’ll never reach them.

Let’s flip this around. Let’s look at where we need to be disciplined, and ask if there are gaps in the tool support for software engineers. Some people want us to always write a failing test and make it pass before adding any code (or want us to write a passing test and revert our changes if it accidentally fails): does that mean our tools should not let us write code for which there’s no test? Does the same apply for acceptance tests? Some want us to refactor mercilessly; does that mean our design tools should always propose more parsimonious alternatives for passing the same tests? Some say we should get into the discipline of writing code that always reveals its intent: should the tools make a crack at interpreting the intention of the code-as-prose?

October 16, 2020



Google Analytics 4 - What does it mean for you?

On Wednesday the folks at Google announced that the new version of their analytics platform is now available for all. It’s called Google Analytics 4 and, after being available to selected partners in beta, this new property type replaces and expands on the features of the Web+App property type which was launched last year. This is the biggest update to Google Analytics since Universal Analytics launched many years ago.

We know many of you will have questions about these major updates, so we’ve tried to tackle some of them here.

What does this new version offer compared to the tried and tested version?

Here are new features and changes you can expect from Analytics Version 4:  

Smarter Insights
Although Google uses machine learning in the current version of Analytics, it has always felt a little like a bolt-on, tucked away and only used when called for. With version 4, Google has built the AI into the heart of the product, enabling it to fill gaps in the data, predict future customer behavior and identify trends. Google’s hope is that this intelligence will help businesses make smarter decisions in this currently changing and volatile landscape.

Deeper integrations with Google Marketing platform
This deep integration with other applications in the marketing suite, especially Google Ads, enables greater granularity in audience segments. The AI can now identify audiences on your behalf, such as audiences based on predicted spend or lifetime value.

“With new integrations across Google’s marketing products, it’s easy to use what you learn to improve the ROI of your marketing.”

Customer Centric Analytics
This new version of Analytics aims to help give you a more complete view of how customers are interacting with your business or organisation by bringing web and app metrics together in reports, along with the acquisition channels.  The result is to produce reports that enable you to drill down to understand every aspect of the customer journey. This also allows you to easily combine data from multiple websites into a single property, in order to get a more complete picture of how users are interacting with your sites if they are crossing multiple domains and websites.

“For example, you can see if customers first discover your business from an ad on the web, then later install your app and make purchases there.”

User Privacy
Google has introduced a new approach to the data controls within Analytics. With users and regulatory bodies demanding more control over how organisations use their personal data, the aim of these new controls is to make it easier to collect and manage this data. This makes it simpler to identify, for example, those users who have given permission to collect Analytics data but have opted out of personalised ads on a site or app.

Future focused
With the restrictions many browsers are adding on third-party cookies, Google understands that there may be gaps in data, and their ambition is that machine learning and modelling will be able to accurately fill the void.

“Because the technology landscape continues to evolve, the new Analytics is designed to adapt to a future with or without cookies or identifiers. It uses a flexible approach to measurement, and in the future, will include modeling to fill in the gaps where the data may be incomplete”

Google understands that there may be tools and capabilities that users need from their current analytics setup, so they recommend running the new version in parallel with the current Universal Analytics for the time being.

How do I set up Google Analytics 4?

To get going you’ll need to set up a new property within your Google Analytics account and either add the tracking code to your site manually or through Google Tag Manager. There is also a Setup Assistant to help you through the process. Once complete, the new property will start to gather user data from your site or app. If you currently use ecommerce tracking, you will need to double-check that the transaction data is coming through correctly once you see data flowing.

Do I need to do anything right now?

We’d recommend setting up the new property straight away, so you start collecting data. You won’t be able to analyse existing data within the new property, so it really is a case of the sooner the better. Remember, this will not replace your Universal Analytics setup, so you can set up Google Analytics 4 in confidence that no historical data will be lost, enabling you to learn and evaluate the latest update at your own pace.

Don’t forget if you need any help in implementing the change or would like to know how to get the most out of Google Analytics 4 we are here to help. Just drop us an email hello@made.media or if you’re an existing Made client, contact us via our Support Centre or your Digital Producer.

Reading List 266 by Bruce Lawson (@brucel)

It’s been a while; since the last Reading List! Since then, Vadim Makeev and I recorded episode 6 of The F-Word, our podcast, on Mozilla layoffs, modals and focus, AVIF, AdBlock Plus lawsuit. We also chatted with co-inventor of CSS, Håkon Wium Lie, and Brian Kardell of Igalia about the health of the web ecosystem. Anyway, enough about me. Here’s what I’ve been reading about the web since the last mail.

October 13, 2020

I had an item in OmniFocus to “write on why I wish I was still using my 2006 iBook”, and then Tim Sneath’s tweet on unboxing a G4 iMac sealed the deal. I wish I was still using my 2006 iBook. I had been using NeXTSTEP for a while, and Mac OS X for a short amount of time, by this point, but on borrowed hardware, mostly spares from the University computing lab.

My “up-to-date” setup was my then-girlfriend’s PowerBook G3 “Wall Street” model, which upon being handed down to me usually ran OpenDarwin, Rhapsody, or Mac OS X 10.2 Jaguar, which was the last release to boot properly on it. When I went to WWDC for the first time in 2005 I set up X Post Facto, a tool that would let me (precariously) install and run 10.3 Panther on it, so that I could ask about Cocoa Bindings in the labs. I didn’t get to run the Tiger developer seed we were given.

When the dizzying salary of my entry-level sysadmin job in the Uni finally made a dent in my graduate-level debts, I scraped together enough money for the entry-level 12” iBook G4 (which did run Tiger, and Leopard). I think it lasted four years until I finally switched to Intel, with an equivalent white acrylic 13” MacBook model. Not because I needed an upgrade, but because Apple forced my hand by making Snow Leopard (OS X 10.6) Intel-only. By this time I was working as a Mac developer so had bought in to the platform lock-in, to some extent.

The treadmill turns: the white MacBook was replaced by a mid-decade MacBook Air (for 64-bit support), which developed a case of “fruit juice on the GPU” so finally got replaced by the 2018 15” MacBook Pro I use to this day. Along the way, a couple of iMacs (both Intel, both aluminium, the second being an opportunistic upgrade: another hand-me-down) came and went, though the second is still used by a friend.

Had it not been for the CPU changes and my need to keep up, could I still use that iBook in 2020? Yes, absolutely. Its replaceable battery could be improved, its browser could be the modern TenFourFox, the hard drive could be replaced with an SSD, and then I’d have a fast, quiet computer that can compile my code and browse the modern Web.

Would that be a great 2020 computer? Not really. As Steven Baker pointed out when we discussed this, computers have got better in incremental ways that eventually add up: hardware AES support for transparent disk encryption. Better memory controllers and more RAM. HiDPI displays. If I replaced the 2018 MBP with the 2006 iBook today, I’d notice those things get worse way before I noticed that the software lacked features I needed.

On the other hand, the hardware lacks a certain emotional playfulness: the backlight shining through the Apple logo. The sighing LED indicating that the laptop is asleep. The reassuring clack of the keys.

Are those the reasons this 2006 computer speaks to me through the decades? They’re charming, but they aren’t the whole reason. Most of it comes down to an impression that that computer was mine and I understood it, whereas the MBP is Apple’s and I get to use it.

A significant input into that is my own mental health. Around 2014 I got into a big burnout, and stopped paying attention to the updates. As a developer, that was a bad time because it was when Apple introduced, and started rapidly iterating on, the Swift programming language. As an Objective-C and Python expert (I’ve published books on both), with limited emotional capacity, I didn’t feel the need to become an expert on yet another language. To this day, I feel like a foreign tourist in Swift and SwiftUI, able to communicate intent but not to fully immerse in the culture and understand its nuances.

A significant part of that is the change in Apple’s stance from “this is how these things work” to “this is how you use these things”. I don’t begrudge them that at all (I did in the Dark Times), because they are selling useful things that people want to use. But there is decidedly a change in tone, from the “Come in it’s open” logo on the front page of the developer website of yore to the limited, late open source drops of today. From the knowledge oriented programming guides of the “blue and white” documentation archive to the task oriented articles of today.

Again, I don’t begrudge this. Developers have work to do, and so want to complete their tasks. Task-oriented support is entirely expected and desirable. I might formulate an argument that it hinders “solutions architects” who need to understand the system in depth to design a sympathetic system for their clients’ needs, but modern software teams don’t have solutions architects. They have their choice of UI framework and a race to an MVP.

Of course, Apple’s adoption of machine learning and cloud systems also means that in many cases, the thing isn’t available to learn. What used to be an open source software component is now an XPC service that calls into a black box that makes a network request. If I wanted to understand why the spell checker on modern macOS or iOS is so weird, Apple would wave their figurative hands and say “neural engine”.

And a massive contribution is the increase in scale of Apple’s products in the intervening time. Bear in mind that at the time of the 2006 iBook, I had one of Apple’s four Mac models, access to an XServe and Airport base station, and a friend who had an iPod, and felt like I knew the whole widget. Now, I have the MBP (one of six models), an iPhone (not the latest model), an iPad (not latest, not Pro), the TV doohickey, no watch, no speaker, no home doohickey, no auto-unlock car, and I’m barely treading water.

Understanding a G4-vintage Mac meant understanding PPC, Mach, BSD Unix, launchd, a couple of directory services, Objective-C, Cocoa, I/O Kit, Carbon, AppleScript, the GNU tool chain and Jam, sqlite3, WebKit, and a few ancillary things like the Keychain and HFS+. You could throw in Perl, Python, and the server stuff like XSAN and XGrid, because why not?

Understanding a modern Mac means understanding that, minus PPC, plus x86_64, the LLVM tool chain, sandbox/seatbelt, Scheme, Swift, SwiftUI, UIKit, “modern” AppKit (with its combination of layer-backed, layer-hosting, cell-based and view-based views), APFS, JavaScript and its hellscape of ancillary tools, geocoding, machine learning, the T2, BridgeOS…

I’m trying to trust a computer I can’t mentally lift.

I was shoulder-surfing my coworker the other day when he did something that I imagine is common knowledge to everyone except me.

When I’m trying to do something like monitor how quickly a file is growing, it’s not uncommon to see a terminal window on my screen that looks like this:

➜ du -hs index.html
4.0K	index.html
➜ du -hs index.html
4.0K	index.html
➜ du -hs index.html
5.0K	index.html
➜ du -hs index.html
6.0K	index.html
➜ du -hs index.html
8.0K	index.html
➜ du -hs index.html
12.0K	index.html

Not only is this untidy, you hardly look impressive, sitting there jabbing wildly at your up and return keys.

This is why I found it somewhat revelatory when my coworker entered the command watch du -hs index.html and I saw something like the following:

Every 2.0s: du -hs index.html

4.0K    index.html

From the man pages:

NAME
watch - execute a program periodically, showing output fullscreen

SYNOPSIS
watch [options] command

DESCRIPTION
watch runs command repeatedly, displaying its output and errors (the first screenfull). This allows you to watch the program output change over time. By default, command is run every 2 seconds and watch will run until interrupted.

If you’re a macOS user like myself, this command is available via the Homebrew package watch.

October 12, 2020

I had need to test an application built for Linux, and didn’t want to run a whole desktop in a window using Virtualbox. I found the bits I needed online in various forums, but nowhere was it all in one place. It is now!

Prerequisites: Docker and XQuartz. Both can be installed via Homebrew.

Create a Dockerfile:

FROM debian:latest


RUN apt-get update && apt-get install -y iceweasel


RUN export uid=501 gid=20 && \
    mkdir -p /home/user && \
    echo "user:x:${uid}:${gid}:User,,,:/home/user:/bin/bash" >> /etc/passwd && \
    echo "staff:x:${uid}:" >> /etc/group && \
    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    chmod 0440 /etc/sudoers && \
    chown ${uid}:${gid} -R /home/user


USER user
ENV HOME /home/user
CMD /usr/bin/iceweasel

It’s good to mount the Downloads folder within /home/user, or your Documents, or whatever. On Catalina or later you’ll get warnings asking whether you want to give Docker access to those folders.

First time through, open XQuartz, go to Preferences > Security and check the option to allow connections from network clients, then quit XQuartz.

Now open XQuartz, and in the xterm type:

$ xhost + $YOUR_IP
$ docker build -f Dockerfile -t firefox .
$ docker run -it -e DISPLAY=$YOUR_IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/Downloads:/home/user/Downloads firefox
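
($YOUR_IP above is your Mac’s LAN address, which nothing else here sets for you. Assuming your active interface is en0, something like this should populate it:)

$ export YOUR_IP=$(ipconfig getifaddr en0)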

Enjoy firefox (or more likely, your custom app that you’re testing under Linux)!

Iceweasel on Debian on macOS

October 09, 2020

October 06, 2020

September 30, 2020

kubectl explain

This tweet by @ianmiell put me on to kubectl explain. It’s awesome. It tells you right there in your terminal what the stuff in your Kubernetes resources actually means.
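
For example, to see what the fields of a Deployment’s update strategy or a container’s liveness probe actually mean:

kubectl explain deployment.spec.strategy
kubectl explain pods.spec.containers.livenessProbe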

Things I Read

September 28, 2020

September 27, 2020

(Read part 1, about creating parametric level smoke tests, here)

Now we have some useful tests, we want our Jenkins install to run them frequently. That way, we get really quick feedback on failures. Here’s how we do this at Roll7.

Running tests from the command line

Let’s assume you already have a server to build your game. I won’t cover how to set up a unity Jenkins build server here, except to say that the Unity3d plugin is very helpful when you have lots of unity versions. (Although we couldn’t get the log parameter to be picked up unless we specified the path fully in our command line arguments!)

The command line to run a Unity Test Framework job is as follows:

-batchmode -projectPath "." -runTests -testResults playmodetests.xml -testPlatform playmode -logFile ${WORKSPACE}\playtestlog.txt

You can read more about command line params here, but let’s break that down:

  • -batchmode tells unity not to open a GUI, which is vital for a build server. We don’t want it to hang on a dialog! You can also use Application.isBatchMode to test for this flag in your code.
  • -projectPath "." just tells unity to load the project in our working directory
  • -runTests starts a Unity Test Framework job as soon as the editor loads. It’ll run some specified tests, spit out some output, and make sure test failures cause a non-zero return code.
  • -testPlatform playmode tells the UTF to run our playmode tests, which are the ones we care about for this blog post. You can also use editmode.
  • -testResults playmodetests.xml states where to spit out a report of the run, which will include failures as well as logs. The report is formatted as an nunit test result XML. Jenkins has a plugin that can fail a job based on this file, and present a nice reporting UI.
  • -logFile ${WORKSPACE}\playtestlog.txt specifies where to write the editor log – by default it won’t stream into the console. ${WORKSPACE} is a Jenkins environment variable, and we found specifying it was the only way to get the unity3d plugin to find the log.

…And that’s enough to do a test run.

Playmode tests in batch

This page of the docs mentions that the WaitForEndOfFrame coroutine is a bad idea in batch mode. This is because “systems like animation, physics and timeline might not work correctly in the Editor”.

There’s not much more detail than this!

In practice, we’ve found any playmode tests that depend on movement, positioning or animation fails pretty reliably. We get around this by explicitly marking tests we know can run on the server with the [Category("BuildServer")] attribute. We can then use the -testCategory "BuildServer" parameter to only run these tests.

This is pretty limiting! But there’s still plenty of value in just making sure your levels, enemies and weapons work without firing errors or warnings.

In the near future we’ll be experimenting with an EC2 instance that has a GPU, to let us run without batchmode, and also to allow us to run playmode tests on finished builds more easily.

Gotchas and practicalities

  • Currently, we find a full playmode test run is exceedingly slow, and we don’t yet know why. What takes a minute or two locally takes tens of minutes on the EC2 instance. It’s not a small instance, either! So we’re only scheduling a test run for our twice-daily Steam builds, instead of for every push.
  • WaitForEndOfFrame doesn’t work in batch mode, so beware of using that in your tests, or anything your tests depend on.
  • The vagaries of Unity’s compile step mean that sometimes you can get out-of-date OnValidate calls running on new data as you open the editor. Maybe this is fixable with properly defined assemblies, but we hackily get around it by doing a dummy build right at the start of the Jenkins job. It goes as far as setting the correct platform, waits for all the compiles, and quits. Compile errors still cause failures, which is good.
  • If you want editmode and playmode tests in the same job, just run the editor twice. We do this with two different testResults xmls, and we can use a wildcard in the nunit jenkins plugin to pick up both.
  • To test your command line in dos, use the start command: start /wait "" "path to unity.exe" -batchmode ... The extra empty spaces are important if your unity path has spaces in it too. To see the last command’s return code in dos, use echo %ERRORLEVEL%.
  • These are running playmode tests in the editor on the build server. We haven’t yet got around to making playmode tests work in a build. That might end up as a follow-up post!

September 15, 2020

Self-organising teams by Graham Lee

In The Manifesto for Anarchic Software Development I noted that one of the agile manifesto principles is for self-organising teams, and that those tend not to exist in software development. What would a self-organising software team look like?

  1. Management hire a gang and set them a goal, and delegate all decisions on how to run the gang and who is in the gang to members of the gang.
  2. The “team lead” is not in charge of decision-making, but a consultant who can advise gang members on their work. The team lead probably works directly on the gang’s product too, unless the gang is too large.
  3. One member of the gang acts as a go-between for management and communicates openly with the rest of the gang.
  4. Any and all members of the gang are permitted to criticise the gang’s work and the management’s direction.
  5. The lead, the management rep, and the union rep are all posts, not people. The gang can recall any of them at any time and elect a replacement.
  6. Management track the outcomes of the gang, not the “productivity” of individuals.
  7. Management pay performance-related benefits like bonuses to the gang for the gang’s collective output, not to individuals.
  8. The gang hold meetings when they need, and organise their work how they want.

September 14, 2020

Someone has been trolling Apple’s Siri team hard on how they think numbers are pronounced. Today is the second day running that I’ve missed a turn because of it: the first time because I didn’t understand the direction, the second because the pronunciation was so confusing I lost focus and drove straight on when I should have turned.

The disembodied voice doesn’t even use a recognisable dialect or regional accent, it just gets road numbers wrong. In the UK, there’s a hierarchy of motorways (M roads, like M42), A roads (e.g. A34), B roads (e.g. B3400), and unclassified roads. It’s a little fluid around the edges, but generally you’d give someone the number of an M or A road if you’re giving them directions, and the name of a B road.

Apple Maps has always been a bit weird about this, mostly preferring classifications but using the transcontinental E route numbers which aren’t on signs in the UK and aren’t used colloquially, or even necessarily known. But now its voice directions pronounce the numbers incomprehensibly. That’s ok if you’re in a car and the situation is calm enough that you can study the CarPlay screen to work out what it meant. But on a motorbike, or if you’re concentrating on the road, it’s a problem.

“A” is pronounced “uh”, as if it’s saying “a forty-six” rather than “A46”. Except it also says “forrysix”. Today I got a bit lost going from the “uh foreforryfore” to the “bee forryaytoo” and ended up going in, not around, Coventry.

Entering Coventry should always be consensual.

I’ve been using Apple Maps since the first version which didn’t even know what my town was called, and showed a little village at the other end of the county if you searched for it by name. But with the successive apologies, replatformings, rewrites, and rereleases, it always seems like you take one step forward and then at the roundabout use the fourth exit to take two steps back.

September 12, 2020

Go on, read the manifesto again. You’ll see that it’s a manifesto for anarchism, for people coming together and contributing equally toward solving problems. From each according to their ability, to each according to their need.

The best architectures, requirements, and designs
emerge from self-organizing teams.

While new to software developers at the beginning of this millennium, this would not have been news to architects, who noticed the same thing in 1962. A digression: this was more than a decade before architects made their other big contribution to software engineering, the design pattern. The RIBA report noticed two organisations of teams:

One was characterised by a procedure which began by the invention of a building shape and was followed by a moulding of the client’s needs to fit inside this three-dimensional preconception. The other began with an attempt to understand fully the needs of the people who were to use the building, around which, when they were clarified, the building would be fitted.

There were trade-offs between these styles, but the writers of the RIBA report clearly found some reason “to value individuals and interactions over processes and tools”:

The work takes longer and is often unprofitable to the architect, although the client may end up with a much cheaper building put up more quickly than he had expected. Many offices working in this way had found themselves better suited by a dispersed type of work organisation which can promote an informal atmosphere of free-flowing ideas.

Staff retention was higher in the dispersed culture, even though the self-organising nature of the teams meant that sometimes the senior architect was not the project lead, but found themselves reporting to a junior because ideas trumped length of tenure.

This description of self-organising teams in architecture makes me realise that I haven’t knowingly experienced a self-organising team in software, even when working on a team that claimed self-organisation. The idea of a “platform shop” is prevalent in software: a company that builds Rails websites, or Java microservices, or Swift native apps. This is software’s equivalent of beginning “by the invention of a building shape”, only more so: begin by the application of an existing building shape, no invention required.

As the RIBA report notes, this approach “clearly goes with rather autocratic forms of control”. By centralising the technology in the solution design, people can argue that experience with that technology stack (and more specifically, with the way it’s applied in this organisation) is the measure of success, and use that to impose or reinforce a hierarchy.

Clearly, length of tenure becomes a proxy measure for authority in such an organisation. The longer you’ve been in the company, the more experience you have contorting their one chosen solution to attempt to address a client’s problem. Never mind that there are other skills needed in designing a software product (not least of which is actually understanding the problem), and never mind that this “experience” is in repeated application of an unsuitable template: one year of experience ten times over, rather than ten years of experience.

You may be familiar with Unity’s Test Runner window, where you can execute tests and see results. This is the user-facing part of the Unity Test Framework, which is a very extensible system for running tests of any kind. At Roll7 I recently set up the test runner to automatically run simple smoketests on every level of our (unannounced) game, and made Jenkins report on failures. In this post I’ll outline how I did the former, and in part two I’ll cover the latter.

Play mode tests, automatically generated for every level
Some of our [redacted] playmode and editmode tests, running on Jenkins

(I’m going to assume you have passing knowledge of how to write tests for the test runner)

(a lot of this post is based on this interesting talk about UTF from its creators at Unite 2019)

The UTF is built upon NUnit, a .NET testing framework. That’s what provides all those [TestFixture] and [Test] attributes. One feature of NUnit that UTF also supports is [TestFixtureSource]. This attribute allows you to make a sort of “meta testfixture”, a template for how to make test fixtures for specific resources. If you’re familiar with parameterized tests, it’s like that but on a fixture level.

We’re going to make a TestFixtureSource provider that finds all level scenes in our project, and then the TestFixtureSource itself that loads a specific level and runs some generic smoke tests on it. The end result is that adding a new level will automatically add an entry for it to the play mode tests list.

There are a few options for different source providers (see the NUnit docs for more), but we’re going to make an IEnumerable that finds all our level scenes. The results of this IEnumerable are what get passed to our constructor – you could use any type here.

using System.Collections;
using System.Collections.Generic;
using UnityEditor;

class AllRequiredLevelsProvider : IEnumerable<string>
{
    IEnumerator<string> IEnumerable<string>.GetEnumerator()
    {
        // Find every scene asset under the levels folder and yield its path
        var allLevelGUIDs = AssetDatabase.FindAssets("t:Scene", new[] {"Assets/Scenes/Levels"} );
        foreach(var levelGUID in allLevelGUIDs)
        {
            var levelPath = AssetDatabase.GUIDToAssetPath(levelGUID);
            yield return levelPath;
        }
    }
    // The non-generic IEnumerable just forwards to the generic implementation above
    public IEnumerator GetEnumerator() => (this as IEnumerable<string>).GetEnumerator();
}

Our TestFixture looks like a regular fixture, except also with the source attribute linking to our provider. Its constructor takes a string that defines which level to load.

[TestFixtureSource(typeof(AllRequiredLevelsProvider))]
public class LevelSmokeTests
{
    private string m_levelToSmoke;
    public LevelSmokeTests(string levelToSmoke)
    {
        m_levelToSmoke = levelToSmoke;
    }

Now our fixture knows which level to test, but not how to load it. TestFixtures have a [SetUp] attribute which runs before each test, but loading the level fresh for each test would be slow and wasteful. Instead let’s use [OneTimeSetUp] (👀 at the inconsistent capitalisation) to load and unload our level once per fixture. This depends somewhat on your game implementation, but for now let’s go with UnityEngine.SceneManagement:

// class LevelSmokeTests {
    // assumption: m_logTracker is the LogSeverityTracker utility sketched below
    private LogSeverityTracker m_logTracker = new LogSeverityTracker();

    [OneTimeSetUp]
    public void LoadScene()
    {
        m_logTracker.StartTracking(); // begin watching the console before the level loads
        SceneManager.LoadScene(m_levelToSmoke);
    }

Finally, we need some tests that would work on any level we throw at it. The simplest approach is probably to just watch the console for errors as we load in, sit in the level, and then as we load out. Any console errors at any of these stages should fail the test.

UTF provides LogAssert to validate the output of the log, but at this time it only lets you prescribe what should appear. We don’t care about Debug.Log() output, but want to know if there was anything worse than that. Particularly, in our case, we’d like to fail for warnings as well as errors. Too many “benign” warnings can hide serious issues! So, here’s a little utility class called LogSeverityTracker that helps check for clean consoles. Check the comments for usage.
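The class was originally embedded here as a gist, which hasn’t survived; below is a rough reconstruction of the idea (the class name comes from the post, everything else is my assumption). It hooks Unity’s Application.logMessageReceived callback and records anything worse than a plain Debug.Log:

using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;

public class LogSeverityTracker
{
    private readonly List<string> m_problems = new List<string>();

    // Call from [OneTimeSetUp], before the scene starts loading
    public void StartTracking()
    {
        Application.logMessageReceived += HandleLog;
    }

    // Call from [OneTimeTearDown]
    public void StopTracking()
    {
        Application.logMessageReceived -= HandleLog;
    }

    private void HandleLog(string message, string stackTrace, LogType type)
    {
        // LogType.Log is plain Debug.Log output; anything else
        // (Warning, Assert, Error, Exception) counts as a problem
        if (type != LogType.Log)
        {
            m_problems.Add(string.Format("[{0}] {1}", type, message));
        }
    }

    // Fails the current test if any warnings or errors were recorded,
    // then clears the list so each stage is checked independently
    public void AssertCleanLog()
    {
        CollectionAssert.IsEmpty(m_problems, string.Join("\n", m_problems));
        m_problems.Clear();
    }
}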

Our tests can use the [Order] attribute to ensure they happen in sequence:

// class LevelSmokeTests {
    [Test, Order(1)]
    public void LoadsCleanly()
    {
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(2)]
    public IEnumerator RunsCleanly()
    {
        // wait some arbitrary time
        yield return new WaitForSeconds(5);
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(3)]
    public IEnumerator UnloadsCleanly()
    {
        // how you unload is game-dependent 
        yield return SceneManager.LoadSceneAsync("mainmenu");
        m_logTracker.AssertCleanLog();
    }

Now we’re at the point where you can hit Run All in the Test Runner and see each of your levels load in turn, wait a while, then unload. You’ll get failed tests for console warnings or errors, and newly-added levels will get automatically-generated test fixtures.

More tests are undoubtedly more useful than fewer. Depending on the complexity and setup of your game, the next steps might be to get the player to dumbly walk around for a little bit. You can get a surprising amount of info from a dumb walk!

In part 2, I’ll outline how I added all this to Jenkins. It’s not egregiously hard, but it can be a bit cryptic at times.

September 11, 2020

Reading List 265 by Bruce Lawson (@brucel)

September 09, 2020

Dos Amigans by Graham Lee

Tomorrow evening (for me; 1800UTC on 10th Sept) Steven R. Baker and I will twitch-stream our journey learning how to write Amiga software. Check out dosamigans.tv!

September 06, 2020

I finally got around to reading Cal Newport’s latest book: Digital Minimalism. Newport’s previous book, Deep Work, is one of my favourites so I had high expectations – and it delivered. Go read it if you haven’t already.

Newport makes the case that much of the technology that we use – in particular smartphones and social media – has a detrimental impact on our ability to live a deep life. Newport describes the deep life as “focusing with energetic intention on things that really matter – in work, at home, and in your soul – and not wasting too much attention on things that don’t.”

The antidote to the addiction that many of us have to our devices is to become a digital minimalist. Newport defines Digital Minimalism as, “a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.”

The first step to becoming a digital minimalist is to do a thirty-day digital declutter, where you take a break from optional technologies in your life to rediscover more satisfying and meaningful pursuits.

There are three steps to the digital declutter process:

  1. Define your technology rules. Decide which technologies fall into the “optional” category. The heuristic Newport recommends is: “consider the technology optional unless its temporary removal would harm or significantly disrupt the daily operation of your professional or personal life”.
  2. Take a thirty-day break. During this break, explore and rediscover activities and behaviours that you find satisfying and meaningful.
  3. Reintroduce technology. Starting from a blank slate, slowly reintroduce technologies that add value to your life and determine how you will use them to maximise this value.

This was a timely read for me. I’ve slipped back into bad habits despite knowing full well the toll that social media and my smartphone can have on me.

September felt like a good time to hit reset and do my own digital declutter experiment so for the next thirty days I’ve committed to:

  • No Twitter use (deleted TweetBot from my phone and iPad, blocked access on Mac)
  • No Instagram use (deleted app from my phone)
  • No email on phone
  • No Trading 212 on my phone
  • Not wearing my Apple Watch
  • No news consumption (RSS and a brief check of the news in the morning is okay)

I’ve introduced a few rules that I’m doing my best to follow:

  • No screens in the bedroom
  • Leave my phone in another room while working
  • Run Focus on my Mac while doing 40 minute work sessions (this blocks email, Slack, and a slew of distracting websites)

I’m also tracking a few habits and metrics every day using the Theme System Journal:

  • 10 minutes+ of meditation
  • 30 minutes+ reading
  • 10k steps
  • No alcohol
  • Journaling
  • Whether I’ve completed my daily highlight
  • The number of sessions of deep work I’ve completed (I aim for 4 40-minute sessions per day)
  • Hours on my phone/pickups via iOS’s ScreenTime feature

I am convinced that a reduction in the time I spend on Twitter and Instagram will be beneficial. The Apple Watch is more interesting: I use the health and fitness tracking features which I find useful, but I am also convinced that it creates a low-level anxiety (have I closed my rings? what’s my heart rate? etc.). It’ll be interesting to see how I feel about the Apple Watch at the end of the month.

September 04, 2020

Free as in Water by Graham Lee

The whole “Free as in beer versus free as in freedom” thing confuses people. Or maybe it doesn’t, and it allows detractors to sow fear, uncertainty and doubt over free software by feigning confusion. Either way, people express confusion.

What is “free as in beer”? Beer is never free, it costs money. Oh, you mean when someone gives me free beer. So, like a round-ordering system, where there’s an expectation that I’ll reciprocate later? Or a promotional beer, where there’s a future expectation that I’ll buy more beer?

No, we mean the beer that a friend buys you when you’re out together and they say “let’s get a couple of beers”. There’s no financial tally kept, no expectation to reciprocate, because then it wouldn’t be friendship: it would be some exchange-mediated relationship that can be nullified by balancing the books. There’s no strings attached, just beer (or coffee, or orange squash, whatever you drink). You get the beer, you don’t pay: but you don’t get to make your own beer, or improve that beer. Gratuity, but no liberty.

Various extensions have been offered to the gratis-vs-libre discussions of freedom. One of the funniest, from a proprietary software vendor’s then-CEO, was Scott McNealy’s “free as in puppies”: implying that while the product may be gratis, there’s work to come afterwards.

I think another extension to help software producers like you and me understand the point of the rights conferred by free software is “free as in water”. In so-called developed societies, most of us pay for water, and most of us have a reasonable expectation of a right of access to water. In fact, we often don’t pay for water; we pay for the infrastructure that gets clean, fresh water to our houses and returns soiled water to the treatment centres. If we’re out of our houses, there are public water fountains in urban areas, and a requirement for refreshment businesses to supply fresh water at no cost.

Of course, none of this is to say that you can’t run a for-profit water business. Here in the UK, that infrastructure that gets the main water supply to our houses, offices and other buildings is run for profit, though there are certain expectations placed on the operators in line with the idea that access to water is a right to be enjoyed by all. And nothing stops you from paying directly for the product: you can of course go out and buy a bottle of Dasani. You’ll end up with water that’s indistinguishable from anybody else’s water, but you’ll pay for the marketing message that this water changes your life in satisfying ways.

When the expectation of the “freedom to use the water, for any purpose” is violated, people are justifiably incensed. You can claim that water isn’t a human right, and you can expect that view to be considered dehumanising.

Just as water is necessary to our biological life, so software has become necessary to our social and civic lives due to its eating the world. It’s entirely reasonable to want insight and control into that process, and to want software that’s free as in water.

In the spring of 2020, the GNOME project ran their Community Engagement Challenge in which teams proposed ideas that would “engage beginning coders with the free and open-source software community [and] connect the next generation of coders to the FOSS community and keep them involved for years to come.” I have a few thoughts on this topic, and so does Alan Pope, and so we got chatting and put together a proposal for a programming environment for making simple apps in a way that new developers could easily grasp. We were quite pleased with it as a concept, but: it didn’t get selected for further development. Oh well, never mind. But the ideas still seem good to us, so I think it’s worth publishing the proposal anyway so that someone else has the chance to be inspired by it, or decide they want it to happen. Here:

Cabin: Creating simple apps for Linux, by Stuart Langridge and Alan Pope

I’d be interested in your thoughts.

August 28, 2020

A surprising new creative hobby – Painting Portraits

I started painting at the start of lockdown and here’s my story.

It’s May, Boris Johnson has just told us all we’re not to leave the house and the nation stays indoors. Like many, I turned to surprising new areas of interests to keep me sane – for me it was painting portraits of famous people.

I’ve never picked up a paintbrush for artistic reasons, and if I’m honest I’ve never really liked art. Yet as of August 2020 I have over 80 paintings to my name. So what happened?

Well, it all started with Grayson Perry’s Art Club – I actually missed the portrait episode, but when I saw my wife was watching it I sat and watched with interest.

It was actually seeing Joe Lycett (from my home town of Solihull) painting Chris Whitty that made me think “Oh, I’d like to have a go at this, he looks like he’s having fun regardless of the finished product”.

My early acrylic portrait paintings. First paintings I loved.

Thankfully for me the start-up was cheap, as my mother-in-law Rosy had loads of acrylic paints and an easel I could have. So I got painting, and although the first few were very rough I posted them to Facebook. People really reacted to them, which encouraged me to keep painting.

It was the Richard Ayoade and Frank N’ Furter (Rocky Horror Show) paintings that made me feel I could actually take this further.

A whole load of my acrylic portrait paintings. My early works.

Setting up Portmates

At the time I was posting these paintings to my personal Instagram and Facebook pages which had limited reach as my Facebook is pretty locked down so it was natural to create a new page for my artwork.

I decided to call my brand Portmates, which is a mixture of Portraits and ‘Your mates’ as I was calling my paintings mates at the time. It’s a bit cheesy but it’s stuck.

I did a self portrait which my Facebook page followers LOVED, and that became my brand identity. I’ve even done versions with 80s-style glasses for other art sets, but more on that later.

My branding based on my self portrait. My self portrait became my brand ID.

Page Growth

I was loving it. For once in my career I was in control. I didn’t have to wait for developers or beg, borrow and steal time from people so I could launch products of my own. I just painted and used my background in product design to release things I could sell.

In the early days I was doing lots of ‘Win a portrait’ competitions and free commissions, which really helped get my art in front of new people. I have a whole gallery of people with my paintings. It’s ace! I love seeing them out in the wild. One painting is even hanging in a local hairdresser’s.

Some of my art sales and commissions. Out in the wild!

Facebook groups

Part of my enjoyment is sharing to various Facebook groups. I can’t thank Staceylee at the Portrait Artists UK group enough for the shares and kind words. It’s been a huge part of my journey and will continue to be.

Stacey has often shared some of my live painting sessions and even purchased a set of cards from me. She has really pushed me forward, even if she’s unaware she’s been doing it.

The other group I had early success with was the Grayson Perry Art Club group; they also fed me some very lovely feedback in the early days.

Me, Mulder and Scully painting. I joined the FBI.

Developing my style

One of the things that struck me early on was how many of the established artists in these groups were praising my style and how I’d managed to find my artistic voice really quickly.

This led me to think about the paintings I’d done before and how I could expand on them, make them feel more like original pieces of art rather than fairly crude paintings.

A weekend of painting portraits. A weekend’s work.

I had already been using fairly heavy lines in my paintings, and once I upgraded some equipment (better paintbrushes, moving from paper to canvases) I was able to be more experimental. (It’s far easier to recover from mistakes on canvas, as you can paint over them; reworking paper tends to destroy it.)

I’m now adding far more colour and being braver with my art. Because I have a design background I’m leaning into that – I think my art is somewhere between graphic art and portrait art.

More of my portrait art

Selling art

To my surprise I was getting enquiries about buying some of my paintings. To date I’ve sold three paintings (roughly one a month), which for a hobby ain’t bad at all!

I also started creating my own products such as Art Cards and Prints, which, if I’m honest, haven’t sold very well, though I have sold a few… As it’s still early days I’m happy to have some stock, so when I do find new audiences they’ll have something to buy from me.

My 80’s action Art Cards.

I’ve experimented with Shopify and Etsy, but settled on having a Big Cartel store. You can of course buy them here.

I’m going to be focusing on selling originals for the next few months before they take over my office and I have to use them as fuel.

I’ve been approached by an online gallery, which is exciting; my work should be available to buy through them very soon.

Oh, by the way: if you’re interested in the process of getting from a painting to printed items I can sell, I did a tweet thread about how I did it (TL;DR: scan, touch up, print) – read it here.

Art Cards Volume 1, available for sale.

Benefits of painting

My skin has always sucked. I scratch myself senseless and it affects every element of my life. I don’t sleep well and it affects my mental health. Thankfully, since I’ve started painting my skin has recovered MASSIVELY.

Although I still have a way to go before my skin is healthy, painting has really helped me destress and focus my efforts on something other than work, fitness or TV.

When I paint there’s nothing else on my mind, just focusing on the art and getting it how I want it to look (which can take a while!)

(Fitness is still really important to me I’m just less obsessed with how I look in a mirror)

The future

The future is really exciting. I’m planning on expanding my collection of portraits, perhaps even aiming for a self-funded exhibition, or even a show at an established gallery.

I plan on releasing more prints in time for the Black Friday / Christmas rush. I know there’s demand for some of the legends, especially Elton and Freddie, so I’ll probably get some of those made ready for Christmas.

Some legends

I’m also going to be exploring other mediums, I’ve ordered two blank skateboards which I’m going to be painting some designs on!

But if I’m honest with myself I don’t see art being my full time focus any time soon as I still really love my UI / UX app design work so I’m mainly going to keep enjoying the new creative outlet I’ve found for myself.

I also need to remind myself I’m very new to this and if any good things are on my horizon with my art it will happen in time. ❤

So TL;DR – I paint now.

Please follow me on all the socials, my Instagram is buzzing at the moment and my Facebook page is a real lovely community of people. Find all my links here.

I’m fairly confident nobody is reading this far down the page but feel free to tweet me with your own lockdown creative stories.

Paintings I did on holiday.


August 21, 2020

August 18, 2020

One of the projects I’m working on involves creating a little device which you talk to from your phone. So, I thought, I’ll do this properly. No “cloud service” that you don’t need; no native app that you don’t need; you’ll just send data from your phone to it, locally, and if the owners go bust it won’t brick all your devices. I think a lot of people want their devices to live on beyond the company that sold them, they want their devices to be under their own control, and they want to be able to do all this from any device of their choosing: their phone, their laptop, whatever. An awful lot of devices don’t do some or all of that, and perhaps we can do better. So here’s a summary of that as a sort of guiding principle, which we’re going to try to follow:

You should be able to communicate a few hundred KB of data to the device locally, without needing a cloud service, by using a web app rather than a native app, from an Android phone.

Here’s why that doesn’t work. Android and Chrome, I am very disappointed in you.

Bluetooth LE

The first reaction here is to use Bluetooth LE. This is what it’s for; it’s easy to use, phones support it, Chrome on Android has Web Bluetooth, everything’s gravy, right?

No, sadly. Because of the “a few hundred KB of data” requirement. This is, honestly, not a lot of data; a few hundred kilobytes at most. However… that’s too much for poor old Bluetooth LE. An excellent article from AIM Consulting goes into this in a little detail and there’s a much more detailed article from Novelbits, but transferring tens or hundreds of KB of data over BLE just isn’t practical. Maybe you can get speeds of a few hundred kilobits per second in theory, but in practice it’s nothing like that; I was getting speeds of twenty bytes per second, which is utterly unhelpful. Sure, maybe it can be more efficient than that, but it’s just never going to be fast enough: nobody’s going to want to send a 40KB image and wait three minutes for it to do so. BLE’s good for small amounts of data, not for even medium amounts.

WiFi to your local AP

The next idea, therefore, is to connect the device to the wifi router in your house. This is how most IoT devices work; you teach them about your wifi network and they connect to it. But… how do you teach them that? Normally, you put them in some sort of “setup” mode and the device creates its own wifi network, and then you connect your phone to that, teach it about your wifi network, and then it stops its own AP and connects to yours instead. This is maybe OK if the device never moves from your house and it only has one wifi network to connect to; it’s terrible if it’s something that moves around to different places. But you still need to connect to its private AP first to do that setup, and so let’s talk about that.

WiFi to the device

The device creates its own WiFi network; it becomes a wifi router. You then connect your phone to it, and then you can talk to it. The device can even be a web server, so you can load the controlling web app from the device itself. This is ideal; exactly what I planned.

Except it doesn’t work, and as far as I can tell it’s Android’s fault. Bah humbug.

You see, the device’s wifi network obviously doesn’t have a route to the internet. So, when you connect your phone to it, Android says “hey! there’s no route to the internet here! this wifi network sucks and clearly you don’t want to be connected to it!” and, after ten seconds or so, disconnects you. Boom. You have no chance to use the web app on the device to configure the device, because Android (10, at least) disconnects you from the device’s wifi network before you can do so.

Now, there is the concept of a “captive portal”. This is the thing you get in hotels and airports and so on, where you have to fill in some details or pay some money or something to be able to use the wifi; what happens is that all web accesses get redirected to the captive portal page where you do or pay whatever’s necessary and then the network suddenly becomes able to access the internet. Android will helpfully detect these networks and show you that captive portal login page so you can sign in. Can we have our device be a captive portal?

No. Well, we can, but it doesn’t help.

You see, Android shows you the captive portal login page in a special cut-down “browser”. This captive portal browser (Apple calls it a CNA, for Captive Network Assistant, so I shall too… but we’re not talking about iOS here, which is an entirely different kettle of fish for a different article), this CNA isn’t really a browser. Obviously, our IoT device can’t provide a route to the internet; it’s not that it has one but won’t let you see it, like a hotel; it doesn’t have one at all. So you can’t fill anything into the CNA that will make that happen. If you try to switch back to the real browser in order to access the website being served from the device, Android says “aha, you closed the CNA and there’s still no route to the internet!” and disconnects you from the device wifi. That doesn’t work.

You can’t open a page in the real browser from the CNA, either. You used to be able to do some shenanigans with a link pointing to an intent:// URL but that doesn’t work any more.

Maybe we can run the whole web app inside the CNA? I mean, it’s a web browser, right? Not an ideal user experience, but it might be OK.

Nope. The CNA is a browser, but half of the features are turned off. There are a bunch of JavaScript APIs you don’t have access to, but the key thing for our purposes is that <input type="file"> elements don’t work; you can’t open a file picker to allow someone to choose a file to upload to the device. So that’s a non-starter too.

So, what do we do?

Unfortunately, it seems that the plan:

communicate a few hundred KB of data to the device locally, without needing a cloud service, by using a web app rather than a native app, from an Android phone

isn’t possible. It could be, but it isn’t; there are roadblocks in the way. So building the sort of IoT device which ought to exist isn’t actually possible, thanks very much Android. Thandroid. We have to compromise on one of the key points.

If you’re only communicating small amounts of data, then you can use Bluetooth LE for this. Sadly, this is not something you can really choose to compromise on; if your device plan only needs small volumes, great, but if it needs more then likely you can’t say “we just won’t send that data”. So that’s a no-go.

You can use a cloud service. That is: you teach the device about the local wifi network and then it talks to your cloud servers, and so does your phone; all data is round-tripped through those cloud servers. This is stupid: if the cloud servers go away, the device is a brick. Yes, lots of companies do this, but part of the reason they do it is that they want to be able to control whether you can access a device you’ve bought by running the connection via their own servers, so they can charge you subscription money for it. If you’re not doing that, then the servers are a constant ongoing cost and you can’t ever shut them down. And it’s a poor model, and aggressively consumer-hostile, to require someone to continue paying you to use a thing they purchased. Not doing that. Communication should be local; the device is in my house, I’m in my house, why the hell should talking to it require going via a server on the other side of the world?

You can use a native app. Native apps can avoid the whole “this wifi network has no internet access so I will disconnect you from it for your own good” approach by calling various native APIs in the connectivity manager. A web app can’t do this. So you’re somewhat forced into using a native app even though you really shouldn’t have to.

Or you can use something other than Android; iOS, it seems, has a workaround although it’s a bit dodgy.

None of these are good answers. Currently I’m looking at building native apps, which I really don’t think I should have to do; this is exactly the sort of thing that the web should be good at, and is available on every platform and to everyone, and I can’t use the web for it because a bunch of decisions have been taken to prevent that. There are good reasons for those decisions, certainly; I want my phone to be helpful when I’m on some stupid hotel wifi with a signin. But it’s also breaking a perfectly legitimate use case and forcing me to use native apps rather than the web.

Unless I’m wrong? If I am… this is where you tell me how to do it. Something with a pleasant user experience, that non-technical people can easily do. If it doesn’t match that, I ain’t doin’ it, just to warn you. But if you know how this can be done to meet my list of criteria, I’m happy to listen.

Six Colours by Graham Lee

Apple has, in my opinion, some of the best general-purpose computing technology on the market right now, and has had some of the best for all of this millennium. However, their business practices are increasingly punitive, designed to extract greater amounts of rental income from their existing customers (“want to, um, read the newspaper? $9.99/mo, and we get to choose which newspaper!”), with rules that punish those who aren’t interested in helping them extract that rent.

Throughout the iPhone era, Apple has dealt arbitrary hands to people who try to work with them: removing the I Am Rich app without explanation; giving News Corp. a special arrangement to allow in-app subscriptions when nobody else could do it; allowing Netflix to carry on operating rent-free while disallowing others.

People put up with this for the justifiable reason that the Apple technology platform is pleasant and easy to use, well-integrated across multiple contexts including desktop, mobile, wearable and home. None of Apple’s competitors are even playing the same game: you could build some passable simulacrum using multiple vendor technology (for example Windows, Android, Dropbox; or Ubuntu, Replicant, Nextcloud) but no single outlet is going to sell you the “it just works” version of that setup. Not even any vendor consortium works together to provide the same ease and integration: you can’t buy a Windows PC from Samsung, for example, that’ll work out of the box with your Galaxy phone. Even if you get your Chromebook and your Pixel phone from Google, you’ve got some work ahead of you to get everything synced up.

And then, of course, since the failure of their banner ad business, Apple have successfully positioned themselves as the non-ad, non-data-gathering company. Sure, you could get everything we’re doing cheaper elsewhere: but at what cost?

My view is that the one fact—the high-quality technology—doesn’t excuse the other—the rent-extracting business model and capricious heavy-handed application of “the rules” with anyone who tries to work with them. People try to work with them because of the good technology, and get frustrated, enervated, or shut down because of the power imbalance in business. It is OK to criticise Apple for those things they’re not doing well or fairly; it’s a grown-up company worth trillions of dollars, it’ll probably weather the storm. If enough people on the inside learn about and address the criticisms, they may even get better, which will be good for a massive global network of Apple’s employees, suppliers, and customers.

It seems to me that some people (and I’m explicitly talking about people outside Apple now, obviously employees are expected to abide by whatever internal rules the company has and it takes a special kind of person to blow the whistle) will brook none of that. There are people who will defend the two-trillion dollar corporation blocking some small indie’s business; “they’re just applying their rules” (the rules that say I’ll know it when I see it, indicating explicitly that capricious application is to be expected).

It seems weird that a Person On The Internet would feel the need to rush to the defence of The World’s Biggest Company, and so to my mind it seems like they aren’t. It seems like they’re rushing to the defence of 1990s Beleaguered Apple, the company with three weeks of salary money in the bank that’s running on the memory of having the best computers and the hope that maybe the twenty-first model we release this month will be the one that sells. The Apple with its six-coloured logo, where you have to explain that actually the one-button mouse doesn’t make it a toy and you can do real work with it, but could you please send that document in Word 6 format as my Word can’t open Word 97 files thank you. The Apple where actually if you specced up a PC to match this it would probably cost about the same, it’s just that PCs also cover the lower end. The Apple where every friend or colleague you convinced to at least try it out meant a blow to the evil monolith megacorporation bringing computing to the dark side with its nefarious, monopolistic practices and arbitrary treatment of its partners.

That company no longer needs defending. It would be glib to say “that Apple ceased trading on February 7, 1997”, the date that NeXT, Inc. finally disappeared as an independent business. But that’s not what happened. That company slowly boiled as the temperature around it rose. The iMac, iBook, iPod, Mac OS X, iPhone, iPad: all of these things came out of that company. Admittedly, so did iTools, .Mac, and Mobile Me, but eventually along came iCloud. Obviously 2020 Apple is a continuation of the spirit and culture of 1997 Apple, 1984 Apple, 1976 Apple. It has some of the same people, and plenty of people who learned from the people who are and were the same people. But it is also entirely different. Through a continuum of changes, but no deliberate “OK, time to rip off the mask” conversion, Apple is now the IBM that fans booed in 1984, or the Microsoft that fans booed in 1997.

It’s OK to not like that, to not defend it, but to still want something good to come out of their great technology. We have to let go of this notion that for Apple to win, everyone else has to lose.

August 14, 2020

Reading List 264 by Bruce Lawson (@brucel)

August 09, 2020

Nvidia and ARM by Graham Lee

Nvidia’s ambitions are scarcely hidden. Once it owns Arm it will withdraw its licensing agreements from its competitors, notably Intel and Huawei, and after July next year take the rump of Arm to Silicon Valley

This tech giant up for sale is a homegrown miracle – it must be saved for Britain

August 08, 2020

Fairness by Graham Lee

There are two different questions of fairness when it comes to the App Store rules. Apple always spin it to mean “these rules are applied fairly”, which is certainly not true. Putting aside questions of why Netflix get to do things Hey don’t, it’s pretty obvious that the rules include “don’t make apps in these spaces where Apple has apps”, rules that don’t apply to Apple itself. It’s also clear that nobody in the App Store team is rules-lawyering Apple’s own apps against the rest of the rules, either.

But all of that ignores the larger question, “are these rules fair?”

August 07, 2020

grotag by Graham Lee

Lots of Amiga documentation was in the AmigaGuide format. These are simple ASCII documents with some rudimentary markup to turn them into hypertext, working something like Texinfo manuals. Think more like a markdown-enabled Gopher than the web though: you can link out to an image, video, or any other media (you could once you had AmigaOS 3, anyway) but you can’t display it inline.
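For a flavour of the markup, here’s a minimal AmigaGuide document sketched from memory (so treat the details as approximate):

@database example.guide
@node Main "Welcome"
Ordinary text, with a @{"link" link OtherNode} you can follow.
@endnode

@node OtherNode "Another page"
More hypertext here.
@endnode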

Unfortunately choices for modern readers are somewhat limited. Many links are only now found on the Internet Archive, and many of those don’t go to downloads you can actually download. I found a link to an old grotag binary, but it was PowerPC-only.

…and it was on Sourceforge, so I cloned the project and updated the build. I haven’t created a new package yet, but it runs well enough out of IDEA. I need to work out how you package Java Swing apps, then do that. It’ll be worth fixing a couple of deprecations, putting assets like the CSS file in the JAR, and maybe learning enough JavaFX to port the UI:

grotag AmigaGuide viewer

To use it, alongside your Guide file you also need a grotag.xml that maps Amiga volume links onto local filesystem paths, so that grotag can find nodes linked to other files. There’s an example of one in the git repo.

Concrete freedoms by Graham Lee

Discussions about free software or open source software can always seem a bit abstract. Who cares if I’ve got the source code, if I’m never going to read it or change it? Why would I want “free” versions of my apps when there are already plenty of zero-cost (i.e. as free as I need) alternatives in my App Store?

This has been the year in which Apple tightened the screws on the App Store, making it clear that anyone who isn’t giving them their 30% or 15% cut or isn’t Netflix or Spotify is on thin ice. In an Orwellian move, they remote-killed Charlie Monroe’s apps and told users that they couldn’t run apps they’d paid for, because the apps would damage their computers.

At the base of the definition of free and open source software are the four freedoms. The first:

The freedom to run the program as you wish, for any purpose (freedom 0).

This is freedom “zero” not just because of the C-style zero indexing pun, but because it was added into the space preceding freedom one after the other three were written. The Free Software Foundation didn’t think it needed explicitly stating, but apparently it does.

In a free software world, YOU are free to run software as you wish, for any purpose. Some trillion-dollar online company pretending to be a bricks-and-mortar retailer equivalent isn’t going to come along and say “sorry, we’ve decided we don’t want you running that, and rather than explain why we’re just going to say it’s for your own good.” They aren’t going to stop developers from sharing or selling their software, on the basis that they haven’t paid enough of a tithe to the mothership.

These four freedoms may seem abstract, but they have real and material consequences. So does their absence.

August 03, 2020

NeXT marketed their workstations by letting Sun convince people they wanted a workstation, then trying to convince customers (who were already impressed by Sun) that their workstation was better.

As part of this, they showed how much better the development tools were, in this very long reality TV infomercial.

If you don’t know the name Igalia, you’ve still certainly used their code. Igalia is “an open source consultancy specialized in the development of innovative projects and solutions”, which tells you very little, but they’ve been involved in adding many features to the open-source browsers (which is now all browsers) such as MathML and CSS Grid.

One of their new initiatives is very exciting, called Open Prioritisation (I refuse to mis-spell it with a “z”). The successful campaign to support Yoav Weiss adding the <picture> element and friends to Blink and WebKit showed that web developers would contribute towards crowdfunding new browser features, so Igalia are running an experiment to get the diverse interests and needs of the web development community to prioritise which new features should be added to the web platform.

They’ve identified some possible new features that are “shovel-ready”—that is, they’re fully specified and ready to be worked on, and the Powers That Be who decide what gets upstreamed and shipped are supportive of their inclusion. Igalia says,

Igalia is offering 6 possible items which we could do to improve the commons with open community support, as well as what that would cost. All you need to do is pledge to the ‘pledged collective’ stating that if we ran such a campaign you’re likely to contribute some dollars. If one of these reaches its goal in pledges, we will announce that and begin the second stage. The second stage is about actually running the funding campaign as a collective investment.

I think this is a brilliant idea and will be pledging some of my pounds (if they are worth anything after Brexit). Can I humbly suggest that you consider doing so, too? If you can’t, don’t fret (these are uncertain times) but please spread the word. Or if your employer has some sort of Corporate Social Responsibility program, perhaps you might ask them to contribute? After all, the web is a common resource and could use some nurturing by the businesses it enables.

If you’d like to know more, Uncle Brian Kardell explains in a short video. Or (shameless plug!) you can hear Brian discuss it with Vadim Makeev and me in the fifth episode of our podcast, The F-Word (transcript available, naturally). All the information on the potential features, some FAQs and no photos of Brian are to be found on Igalia’s Open Prioritization page.


August 02, 2020

Apollo Accelerators make the Vampire, the fastest Motorola 680x0-compatible accelerators for the Amiga around. Actually, they claim that with the Sheepsaver emulator to trap ROM calls, it’s the fastest m68k-compatible Mac around too.

The Vampire Standalone V4 is basically that accelerator, without the hassle of attaching it to an Amiga. They replicated the whole chipset in FPGA, and ship with the AROS ROMs and OS for an open-source equivalent to the real Amiga experience.

I had a little bit of trouble setting mine up (this is not surprising, as they’re very early in development of the standalone variant and are iterating quickly). Here’s what I found, much of it from advice gratefully received from the team in the Apollo Slack. I replicate it here to make it easier to discover.

You absolutely want to stick to a supported keyboard and mouse; I ended up getting the cheapest compatible ones from Amazon for about £20. You need to connect the mouse to the USB port next to the DB-9 sockets, and the keyboard to the other one. On boot, you’ll need to unplug and replug the mouse to get the pointer to work.

The Vampire wiki has a very scary-looking page about SD cards. You don’t need to worry about any of that with the AROS image shipped on the V4. Insert your SD card, then in the CLI type:

mount sd0:

When you’re done:

assign sd0: dismount
assign sd0: remove

The last two are the commands to unmount and eject the disk in AmigaDOS. Unfortunately I currently find that while dismounting works, removing doesn’t; and then subsequent attempts to re-mount sd0: also fail. I don’t know if this is a bug or if I’m holding it wrong.

The CF card with the bootable AROS image has two partitions, System: and Work:. These take up around 200MB, which means you’ve got a lot of unused space on the CF card. To access it, you should get the fixhddsize tool. UnLHA it, run it, enter ata.device as your device, and let it fix things for you.

Now launch System:Tools/HDToolBox and click “Add Entry”. In the Devices dialog, enter ata.device. Now click that device in the “Changed Name” list, then double-click on the entry that appears (for me, it’s SDCFXS-0 32G...). You’ll see two entries, UDH0: (that’s your System: partition) and UDH1: (Work:). Add an entry here, selecting the unused space. When you’ve done that, save changes, close HDToolBox, and reboot. You’ll see your new drive appear in Workbench, as something like UDH2:NDOS. Right-click that, choose Format, then Quick Format. Boom.

My last tip is that AROS doesn’t launch the IP stack by default. If you want networking, go to System:Prefs/Network, choose net0 and click Save. Optionally, enable “Start networking during system boot”.

August 01, 2020

6502 by Graham Lee

On the topic of the Apple II, remember that MOS was owned by Commodore Business Machines, a competitor of Apple’s, throughout the lifetime of the computer. Something to bear in mind while waiting to see where ARM Holdings lands.

Obsolescence by Graham Lee

An eight-year-old model of iPad is now considered vintage and obsolete. For comparison, the Apple ][ was made from 1977-1993 (16 years) and the January 1983 Apple //e would’ve had exactly the same software support as the final model sold in November 1993, or the compatibility cards for Macintosh sold from 1991 onwards.

July 31, 2020

Some programming languages have a final keyword, making types closed for extension and open for modification.

July 30, 2020

The Nineteen Nineties by Graham Lee

I’ve been playing a lot of CD32, and would just like to mention how gloriously 90s it is. This is the startup chime. For comparison, the Interstellar News chime from Babylon 5.

Sure beats these.

App Structure 2020 by Luke Lanchester (@Dachande663)

Five years ago I wrote about how I structured applications. A lot has changed in five years. An old saying states you should be able to look back on your work of yesterday and wonder what you were thinking. So here is how I structure apps in 2020, what’s changed and what’s stayed the same.

Without further ado, here are the most common classes created for a fictional Movie type.

App\Http\Controllers\ApiMovieController
App\Modules\Movie\Entities\Movie
App\Modules\Movie\Commands\CreateMovieCommand
App\Modules\Movie\CommandHandlers\CreateMovieCommandHandler
App\Modules\Movie\Policies\MoviePolicy
App\Modules\Movie\Readers\MovieReader
App\Modules\Movie\Repositories\MovieRepository
App\Modules\Movie\Repositories\MovieRepositoryInterface
App\Modules\Movie\Resources\MovieResource

So what’s changed? Well, the first thing is that entities are now split into modules, with each module containing its own repositories, commands, resources et al. This makes it much simpler to identify all the parts of a specific module, even if there do tend to be strong links between modules (for instance the User module is referenced elsewhere).

The other big change is the move to Command Query Responsibility Segregation (CQRS). In 2015, I was handling a lot of the logic within the Controller. This was fine when the controller was the sole owner of an action. But as systems have grown, there are more and more events. Bundling authentication, logging, retries etc. into the controller caused controllers to become unwieldy, especially in classes with many methods.

With CQRS, read requests go through a Reader. This is responsible for accepting query information, validating the input (and the user), and then returning data. All data is transformed through one or more Resources.

Any actions that may be performed, from creating a movie to submitting a vote on that entity, are now encapsulated as Commands. Each command contains all of the data needed for it to work including the user performing the action, the entity under control and any inputs. Looking at a command, you can see exactly what’s needed!

Commands are then routed through an event bus. This allows for logging of all actions, the addition of transactions and retry controls, and authentication, all without needing to touch the actual Command Handler that does the final work!
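The project above is PHP, but the shape translates to any language. Here’s a minimal, illustrative sketch in C# (the names follow the classes listed earlier; the wiring is my assumption, not the original implementation):

using System;

public sealed class Movie
{
    public string Title { get; }
    public Movie(string title) { Title = title; }
}

public interface IMovieRepository
{
    void Add(Movie movie);
}

// A command carries everything needed to perform the action:
// the acting user, the entity under control, and the inputs
public sealed class CreateMovieCommand
{
    public Guid UserId { get; }
    public string Title { get; }

    public CreateMovieCommand(Guid userId, string title)
    {
        UserId = userId;
        Title = title;
    }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public sealed class CreateMovieCommandHandler : ICommandHandler<CreateMovieCommand>
{
    private readonly IMovieRepository m_movies;

    public CreateMovieCommandHandler(IMovieRepository movies)
    {
        m_movies = movies;
    }

    public void Handle(CreateMovieCommand command)
    {
        // Only the final work lives here; logging, transactions, retries
        // and authentication are layered on by the bus around the handler
        m_movies.Add(new Movie(command.Title));
    }
}

A bus that decorates every ICommandHandler<TCommand> is then free to add the logging and retry behaviour without this class ever knowing.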

The system isn’t perfect, but it strikes a good balance between being self-documenting and self-protecting, and being fast for rapid development.
