When I was growing up, I was obsessed with watching TV. I would rush home after school and wake up early on weekends, just to soak up the magic of storytelling on screen. But as a child with partial deafness, I could only catch about 70% of the dialogue; the rest was guesswork. Like being in a foreign country, winging it with limited vocabulary, not having full access is tiring, and everything is tinged with a sense of alienation.
One day in the early 80s my parents brought home a new TV set. Up flicked a page of blocky coloured digital text – Teletext. They pressed page 888 and subtitles suddenly appeared. It was a revolution, my own personal moon landing. The half stories were unlocked. I had full access.
TV inclusion was later extended to visually impaired people with the arrival of audio description, and in the mid-90s the government legislated that a proportion of UK terrestrial TV would be offered with British Sign Language interpretation. But since then, there has been little innovation in TV access.
As I devised my own children’s series, Mixmups, and started writing stories for Pockets, Giggle and Spin and their magical wooden spoon, I wondered how deaf children, who were too young to read English subtitles, would access my work. I thought about how much of the stories they would understand and how much their brains would be left to guess. At the time, my godson, who is visually impaired, was learning braille and another friend’s two children were diagnosed with autism and ADHD. I realised they too were being left to fill in the blanks.
Anyone with a special needs child knows the time spent navigating an inaccessible world, and adapting to it. I began observing the work teachers and parents do to bring stories to life using props, emotional regulation cues and social stories to embed concepts in a bespoke way.
I wondered whether TV access could be personalised too. What if you could turn down the background sound so a deaf child could focus on dialogue alone? What if you could strip out background colour, allowing a child’s eye to be drawn to the characters and essential action only? What if you could choose between British Sign Language or Makaton signs, or learn signs for key concepts at the start of an episode? What if you could watch a shortened storyline to pre-embed understanding, or provide a list of sensory props to place in the hands of a child to bring an episode to life in a tactile way? Could Mixmups revolutionise TV access for the next generation by devising a way to allow viewers to pick from a menu based on their own access needs?
The Netflix series You vs. Wild used interactive TV technology to allow viewers to make decisions about the narrative, choosing to send presenter Bear Grylls up the mountain or down the valley. I wondered if this tech could extend to personalising “how” we view, rather than altering the narrative itself. I contacted Stornaway, the Bristol-based interactive TV technology firm, and together with the Mixmups team we devised Ultra Access.
The launch of Mixmups with Ultra Access marks an important milestone in broadcasting. With a choice of 14 access features – from low background sound to Makaton and big subtitles – the permutations of viewing Mixmups with Ultra Access are mind-boggling. There are now thousands of ways to watch and to meet every child’s unique needs.
Beyond Mixmups, Ultra Access could enable all streaming platforms to offer an optional BSL signer (just as they offer subtitles and audio description), so that all viewers can start from the same landing page (at present, BSL content is often buried and hard to find), or enable global streamers to offer country-specific sign language at the click of a button. Advances in AI will probably streamline Ultra Access even further. Whatever the future holds, Ultra Access remains the biggest development in TV access for decades. As one parent of a disabled child at our user-testing focus group said, “Finally, someone gets it!”