Lots of dancers, civic society organisations, artists, musicians and clowns are only on Instagram. Unfortunately, my mind finds the platform distracting and overwhelming – for my own accessibility I need to tame it.
It has a bunch of features to help you take control of Instagram, but the critical one is stopping AI recommendations, so you can just view the accounts you’re following without being pulled into something you didn’t want to spend time on.
Please leave a review, and/or give me feedback here if you try it out!
I’d love to know what other people need it to do. It’s all open source, repository here.
One of the projects in my week-long hackathon was to make a system to sample my own mind with a random alarm.
This was crudely based on Hurlburt’s Descriptive Experience Sampling – there are lots of papers about that method; for an introduction see this overview BBC article or my own post about a book on the topic. For a few years now Hurlburt has been releasing videos of the method in progress, which are long and excellent – I’ve watched a lot of the first batch sampling Lena.
There are substantial differences in what I did from the academic method, notably:
I didn’t use a sharp alarm sound, but a distinctive slightly musical one.
My sound stopped after a few seconds, so I didn’t have to turn it off.
My alarm was not direct into my ear via an earpiece, but just ambient from my phone. The directness apparently drills into the mind, interrupting more cleanly (source: introductory Lena video).
I didn’t use a paper notebook but a note taking app on my phone, and that distracted me while I was trying to capture my mental experience.
Hurlburt is very clear that the discussion with an expert within 24 hours of taking samples is critical, and leads to better samples after the first couple of days. I didn’t discuss my samples with anyone, and I only reflected on them immediately after taking them.
I tried to immediately categorise my thoughts with Hurlburt’s Codebook for DES. That’s meant to be a separate process, done by researchers after open-mindedly investigating the actual phenomena that are happening.
All that said, I still found this amateur exercise useful. I took 46 samples, 9 of which had more than one category. This is the percentage of samples which had each category in them – so the total is more than 100% (a sketch of the tally follows the table):
Unsymbolised thinking – 30.4%
Just reading – 26.1%
Just doing – 19.6%
Just listening – 13.0%
Inner speech – 8.7%
Just watching – 8.7%
Feeling – 6.5%
Sensory awareness – 4.3%
Uncategorized – 2.2%
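For the curious, the tally itself is trivial – a minimal sketch of how I computed these percentages (the samples list here is illustrative, not my real data):

```python
# Sketch of the percentage calculation: each sample can carry more than one
# category, which is why the column sums to more than 100%.
from collections import Counter

# Illustrative data only - each inner list holds the categories of one sample
samples = [
    ["Unsymbolised thinking"],
    ["Just reading"],
    ["Just doing", "Inner speech"],  # a multi-category sample
    # ... 46 in total
]

counts = Counter(cat for sample in samples for cat in sample)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / len(samples):.1f}%")
```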
For comparison, Hurlburt’s 2008 paper gives how common the different categories are in a population of psychology students.
Some thoughts:
Of the top 5 most common phenomena (imagination / unsymbolised thinking / inner speech / emotion / sensory awareness – as per the 2008 paper by Hurlburt), I mainly use unsymbolised thinking, then inner speech, then feeling emotion, then very little sensory awareness. No imagination. This wasn’t a big surprise to me – I got into this topic by realising I’m mostly aphantasic.
A lot of the time I am taking in content, whether text, audio or video, and not thinking conceptually or feeling emotion about it. That feels like a bit too much – maybe I could be more conscious of when I’m just surfing on incoming content, and reduce content that doesn’t make me think. Or maybe I am thinking more than the samples show, and it is just subconscious.
They talk in one of the Lena videos about “scrolling” with nothing else going on being a common experience now, and imply it is bad (although of course they are not meant to have opinions about that).
19.6% of the time “just doing” feels like a good amount, and I like that I do that a lot. Feels connected, present, focussed.
My categories are probably wrong. I read the codebook quite a lot while doing this, but I don’t think I’ve learnt it in the detailed way the researchers using it have.
I was between jobs in this period, reading and doing side projects a lot, so I’d get different results if working. Although, I don’t think I could do it in a sociable work context – I didn’t like taking samples when people were around, and ignored those alarms.
A key observation I had while doing the exercise was that something much richer is happening in my mind than is captured by the categories – it often felt subtle and complex. In the Lena videos, her inner experiences are similarly rich, complicated and multifaceted to dig into. This is why categorisation should happen later – the qualitative analysis of the samples is really important.
Qualitatively I found doing this revealing, although I can’t really articulate what it specifically revealed. The attention feels useful for understanding myself and improving how I relate to the world – in terms of the content of the samples, and what I spend my time doing.
I’m starting a meetup / online community to share tips on skillful use of social media.
Things like browser plugins that let you customise YouTube, tips on settings, social practices like how to form a healthy active WhatsApp group for a particular purpose. And so on.
As a challenge, last week I made five things, one each day. Each had to be finished in some sense, and preferably published. This is what I made and what I learnt!
Monday – Godot game
My goal was to learn Godot enough to write some kind of video game and publish it, all in one day. Incredibly this was fairly straightforward. Things I learnt:
Physics engines are really good and easy to use compared to when I last coded games with one, back in 2002.
Open source game engines are genuinely very mature
It’s satisfying making something that just runs locally and is very visual
Itch.io is extremely generous with letting you just make a page for your game. It’s pretty liberating – no servers or DNS to think about like with a website, and no complicated signing mechanism like an iOS app. Although, I’m not confident the Windows build I made worked, only the Mac one…
I had to compromise quite a lot on the game, it isn’t great. But it has a character, and objects you manipulate, and a goal, and one level.
Tuesday – LLM solver
For a job interview, I wrote a program using an LLM to perform an algorithmic task, one involving some aesthetic judgement. For obvious reasons I can’t say more about it! The new thing here was using OpenAI functions, which I hadn’t done before. Things I learnt / remembered:
OpenAI function calls are clunky, as the structure of the response still isn’t guaranteed (see the sketch after this list)
Rate limits on a personal OpenAI account are quite low and easy to hit
It’s fun making things with LLMs – it feels powerful and surprising, and fresh
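To give a feel for that clunkiness, here’s roughly the defensive shape the code needs – a sketch using the 2023-era openai Python SDK (pre-1.0), with a made-up score_layout function standing in for the real task, which I can’t describe:

```python
# Defensive handling of an OpenAI function call (openai<1.0 SDK; assumes
# OPENAI_API_KEY is set in the environment). "score_layout" is a hypothetical
# stand-in function, not the real interview task.
import json
import openai

functions = [{
    "name": "score_layout",
    "description": "Rate the aesthetic quality of a candidate layout",
    "parameters": {
        "type": "object",
        "properties": {
            "score": {"type": "number", "description": "0 (ugly) to 10 (beautiful)"},
            "reason": {"type": "string"},
        },
        "required": ["score"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Rate this layout: ..."}],
    functions=functions,
    function_call={"name": "score_layout"},  # ask for this function explicitly
)

message = response["choices"][0]["message"]
call = message.get("function_call")
if call is None:
    # the model can still reply with plain text instead of a function call
    raise ValueError("no function call in response")
try:
    args = json.loads(call["arguments"])  # arguments is a string, maybe bad JSON
except json.JSONDecodeError:
    args = None  # in practice: retry the request
```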
Wednesday – Browser extension
I started coding Instalamb last year, an extension to customise Instagram, for example by removing recommendations. Today my goal was to finish off a first version, package it, and submit it to the Mozilla addons site. Things I learnt:
When modifying the DOM of dynamic React applications, it is best to only alter the styling on individual elements. Removing elements creates strange crashes. I moved a few off the screen to an absolute position, or hid them behind other things, accordingly.
At least for Firefox, extension packaging is crazy simple. You just zip up the manifest and Javascript files and so on. That’s it. It makes publishing to other platforms a bit embarrassing. My first version was just 2219 bytes long.
It’s very hard to manipulate infinite scrolling. The main Instagram feed has a small number of post DOM entries which it rotates through. Common failure cases of manipulating this were breaking the whole page, or it endlessly loading invisible posts.
Get in touch if you want to try it out – it isn’t quite at public release stage yet. A couple of users who want to customise Instagram in some way would be great.
Thursday – Mind sampler
I’m a big fan of Hurlburt’s Descriptive Experience Sampling, which uses a random alarm to remind someone to take note of how they were thinking just before it went off. My goal was to write a mobile app to help me do this for myself. Things I learnt:
Local notification alarm APIs either don’t exist or are different for iOS and Android in both Flutter and React Native.
In Tasker, it’s not too hard to schedule a task every 2 hours in the day which generates a random variable from 0 to 119 minutes and waits for it – see someone’s post about this (the same logic is sketched in code below).
That can then trigger a notification, with a button to open a text file. The sound and icon can be customised. It’s important to put a mime type in the file open command so it finds the app without an extra prompt.
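If you don’t use Tasker, the scheduling logic itself is tiny – a sketch of the same idea in Python (the notify() body is a stand-in for whatever makes the sound and opens your notes):

```python
# Sketch of the sampling schedule: once per two-hour window, wait a uniformly
# random 0-119 minutes, then fire the alarm.
import random
import time

def notify():
    print("Beep! What was in your mind just now? Write it down.")

while True:
    wait_minutes = random.randint(0, 119)   # random point in this 2h window
    time.sleep(wait_minutes * 60)
    notify()
    time.sleep((120 - wait_minutes) * 60)   # sleep out the rest of the window
```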
This looks like it is going to work, so hopefully I’ll now find out what percentage of my time I’m paying attention to my senses, what percentage I’m doing unsymbolised conceptual thinking and so on. I’ll report back somehow.
Friday – Vox pop video
My podcast “Imagine an apple”, about what it’s like to be in our different minds, has just got going. I’ve always wanted to interview a bunch of people about how they use their imagination and compare them, and also to use video so in the end I can animate what they see too.
Today I did a prototype, filming people on the streets of London and editing it into a finished video. Lessons learnt:
It feels hard getting strangers to talk to you in London, but when you do, they love it.
Phone battery drains pretty quickly for a relatively short amount of video, so plan for that.
Editing in iMovie is good enough, but I’d look for something else next time. For example, it doesn’t really seem designed to do portrait video, which is a bizarre limit these days.
I could spend forever learning to get better at video filming and editing – it’s lovely getting a feel, in practice, for why that is.
I think with enough footage to cherrypick the good surprising bits, and careful editing, this format could work really well. Would need to be denser. Jump cuts contrasting people saying different things about the same aspect of their imaginations worked best.
Doing one thing every day is pretty tiring, like an endless loop of hackdays. However, the pressure and creative diversity of doing that made it worth it.
Practically, the projects accumulate. You can end up with several things to follow up on – in my case fixing a couple of things in Instalamb, and analysing all the data I’m sampling about my own mind.
It’s pretty remarkable that the Internet, software and AI combined let me get all the above done in one week. None of it would be a surprising amount if I were doing it all the time – but in each case I was doing something very new to me.
This post has two sections, one about gas prices and energy savings, the other about Android automation and the Netatmo API.
1. What a smart thermostat is like and how much money I saved
Last year, with gas prices going up, I decided to get smart radiator valves. I’d thought the saving from these would be by only heating rooms when I’m in them. I was lazy about going round and adjusting the manual radiator valves several times a day!
Having had them for over a year, I now think the saving comes from the thermostat being modulating. This means it adjusts the boiler strength continuously, so it works more efficiently (the old thermostat could only turn the boiler on or off).
My new heating also feels really good. The rooms have a more balanced and even temperature. I tried having the temperature plummet to below 10°C in unused rooms at night, but it was less effective, in both energy use and pleasure, than keeping a minimum of about 15°C.
I’ve got the Netatmo thermostat. It’s great. I particularly like that it has some kind of eink-style powerless-when-not-changing LEDs, which show the current temperature all the time. Overall the hardware, fitting it myself, and the app are all quite good, though the software side is a bit ropey – no web application (still), and a major missing feature (see the next section).
It saved me money! It’ll take 2.5 years to pay back the £480 total cost of the radiator thermostats. That’s if gas prices are as high as last winter, and not charging myself interest. Not too shabby. Have a look at the rough spreadsheet if you’re interested in details.
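The arithmetic behind that payback claim is simple (figures from above; the implied annual saving is my own inference – the spreadsheet has the real workings):

```python
# Rough payback arithmetic from the figures above. The annual saving is
# inferred from the cost and payback time, not a directly measured number.
cost = 480            # pounds, total for the radiator thermostats
payback_years = 2.5   # at last winter's gas prices, ignoring interest
annual_saving = cost / payback_years
print(f"implied saving: about {annual_saving:.0f} pounds per year")  # ~192
```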
2. How to automate it to turn off when you go out
Unfortunately, one downside hit at the start of this year. Netatmo suspended their IFTTT integration. This means there is no official way to make the heating turn off when you go out, and turn on when you get back home. This is quite important for saving energy!
I’ve hacked together my own method, using my new Fairphone. It is very bespoke, and involves programming. However, in these days of AI coding assistants, maybe more people than ever can get this sort of thing working.
Netatmo’s API is fantastic, and can still set my thermostat to away / at home modes. So I did the following steps – you’ll need an Android phone:
Wrote a Python script netatmo-fai.py which can, from the command line, set the mode on the thermostat (a sketch of its core follows these steps). There are instructions in the script – you need to register an app as a developer at Netatmo, and make an initial token on their website.
Installed the incredible Termux which is a Linux distribution that runs entirely inside an Android app, without root. Copied the script over (you can use git to do that) and got it working inside Android.
Installed the power-user Tasker app and crucially Termux:Tasker which connects it to Termux. Tasker is a bit like iOS’s Shortcuts feature, only both more powerful and harder to configure.
Set up a profile based on the “Wifi Connected” status. I called it “Home – Wifi”, and set it to run when connected to the SSID (name) of my home Wi-Fi network. I found using Wi-Fi events for this is very reliable, and doesn’t need a foreground notification window (see below).
Created a “Tasker Function” task which runs the Python script with appropriate parameters to turn on the heating, and set the profile to execute that function.
Created the opposite task which turns off the heating, then long-pressed the profile and added an “Exit Task” to run it.
Now you can test it – by turning Wi-Fi on and off! Be aware that if your router breaks, your heating will turn off… If that isn’t suitable for your phone / Wi-Fi setup, it works well with a Location profile, you just can’t turn off the permanent notification.
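To give a flavour of step 1, here’s a minimal sketch of the script’s core – a sketch only, with endpoint names as I understand Netatmo’s Energy API and credentials as placeholders; netatmo-fai.py itself has the real instructions:

```python
# Sketch of what a minimal thermostat-mode script does. CLIENT_ID etc. come
# from your registered app at dev.netatmo.com; the real script also has to
# store the rotated refresh token that Netatmo hands back.
import sys
import requests

CLIENT_ID = "..."      # placeholder, from dev.netatmo.com
CLIENT_SECRET = "..."
REFRESH_TOKEN = "..."  # the initial token made on their website
HOME_ID = "..."

def access_token():
    # Access tokens are short-lived, so refresh on every run for simplicity
    r = requests.post("https://api.netatmo.com/oauth2/token", data={
        "grant_type": "refresh_token",
        "refresh_token": REFRESH_TOKEN,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    r.raise_for_status()
    return r.json()["access_token"]

def set_mode(mode):
    # mode is "away" or "schedule" (back to the normal heating timetable)
    r = requests.post(
        "https://api.netatmo.com/api/setthermmode",
        headers={"Authorization": "Bearer " + access_token()},
        data={"home_id": HOME_ID, "mode": mode},
    )
    r.raise_for_status()

if __name__ == "__main__":
    set_mode(sys.argv[1])  # e.g. run with argument: away
```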
Some secondary tips:
Wrap the Python script in a shell script so you can log its output to a file. It’s hard to debug otherwise.
Install Termux:API and then add error handling to the shell script so it triggers an Android notification when the Python script fails.
“Wifi Connected” seems to work fine without the permanent Tasker notification. I turned off the Monitor notification for Tasker in the normal Android settings.
If you’re trying to get this to work, and have questions I might be able to answer, please leave a comment below!
These days I alternate mobile operating systems. Partly because as an app-making professional I strongly feel I need to understand both, and partly because it slightly irritates people who are die-hard fans of one or the other.
I don’t particularly love either – a plague on both their houses. I’d rather we all used a fully open operating system, or there was a lot more competition and a standard application platform. Still, they work, and both have lots of delights.
This time I jumped in order to have a Fairphone 5, which came out a month or so ago. It’s lovely in terms of hardware – conflict-free materials, fair pay, all parts replaceable with just a screwdriver (yes, even the USB port), feels and works beautifully. Highly recommended!
Of course all this fair hardware is only possible with Android, so chalk one advantage up to an at least partially open ecosystem.
This time I took notes on everything interesting I noticed while switching from iOS to Android. They’re deliberately rough notes – I haven’t researched each one in detail. They’re impressionistic. Every sentence is my instinctive opinion.
The winner for each section is represented by 🍎, 🤖 or a neutral ⚖️.
Installation ⚖️
Google/Fairphone screens felt more slick to me than Apple setup screens
Android didn’t offer a QR code scan for the Wi-Fi password – I had to type it in
It got me to plug my old iPhone into the Fairphone via a cable, and tried to copy various things across including WhatsApp message history, but it didn’t work for me. I didn’t need this so didn’t try too hard.
Prompts me to choose my search engine – lovely, I guess Google are forced to do this? I picked DuckDuckGo
Face unlock was very very fast and easy to register, and seems to work really well. Presumably though it is less secure than on iOS due to missing imaging hardware – all the banking apps and so on use the fingerprint recogniser on the power button, which works really well too.
UX Details 🤖
Overall the user interface feels faster and slicker than my old iPhone 11
Actions in notifications feel more comprehensive and easier to use on Android
Android routinely has separate settings for different kinds of notification in one app, so you can configure them separately – if iOS does this, I never noticed it, or apps weren’t adopting it as often
Timer has more features, including a lovely one to make the sound come in gradually. And of course multiple timers.
Pull down shade keeps audio players in it for longer, and you can swipe between them, e.g. music vs podcasts. Was frustrating how quickly this would disappear on iOS. When I’d just paused to go to an appointment, it was gone by the time I came out.
Auto rotation is considerably better – when you rotate the phone sideways a little icon appears, and you tap it to make the phone switch between landscape and portrait. This is just much better for me than a lock/unlock setting, where you have to unlock rotation in the pull-down shade, rotate the phone, and then lock it again when you’re done.
Can choose the default mapping software (e.g. if you open a map link in one app). Wild that capitalism gets Apple to not offer this. Not clear why Google offers it!
Bedtime mode has a cute option – my phone goes black and white from 11pm to 7am.
SMS app has spam detection, and it works.
When something is annoying, there is more likely to be a way to fiddle with it on Android. As an example, for some reason it showed an NFC icon in the top bar by default, which is useless as there is no reason to turn NFC off. In the end I switched to developer mode and typed things like “adb shell settings get secure icon_blacklist” and turned off the icon permanently.
Voice Assistant 🍎
Apple’s better privacy encouraged me to start using Siri when I got my iPhone. Mainly for my own professional development in the era of AI, I ignored that concern and for the first time used Google Assistant on my Fairphone.
It can’t listen in the background when the screen is off. This is a hardware limitation – only top-end Android phones have that feature. At first this annoyed me, but now I’ve just stopped using voice assistants as they aren’t that good. It is set to listen while the screen is on – so worst case I tap the power button once then talk to it.
When I first started using it it felt fast, but now it seems often really slow, I’ve no idea why.
It has the world’s most awful branded wake word. No, I don’t want to name a trillion-dollar corporation every time I use a user interface.
It only uses Google Calendar, not my local synced calendar. I mean what, seriously?
In theory it lets you enable access to personal things, and detect your voice. Because of the above problems I haven’t played with this much.
Overall I’m very disappointed – I thought the company that wrote the first transformer paper 6 years ago would have a better voice assistant in 2023. I guess I’ll have to buy whatever Jony Ive is designing with OpenAI, or some startup’s pin badge, or hack my hearing aids, or just leave ChatGPT Plus on like a phone call.
Email / Calendar / Contacts / Phone / Browser 🤖
To my surprise, much more choice of email clients for me on Android. On iOS most of them required expensive subscriptions and funnelled all your email via their server, so I just used the standard client. It has a very dated interface. This time I picked the open source K-9 Mail which is both better than Apple’s offering, and I can make a PR to improve it if I like.
Similarly, more choice of calendar app. My old favourite aCalendar+ means I have a weekly view that I like again (one page, 4 days on the left, 3 days on the right), something I couldn’t find on iOS at all.
The contacts and phone apps have a better UI on Android. Partly this is just that Material Design is a bit more thought through and clear than Apple’s strange blue outlines for buttons. Mainly it is because the worst team at Apple works on the phone app (example 1, example 2). For example, I had to search for how to reject a call when I first got my iPhone. While it was angrily ringing at me. They’re not trying, honestly.
Apple monopolistically don’t let you change the browser engine on iPhones – all the other browsers are mere skins around the same rendering/JavaScript engine. On Android I’m using actual Firefox again, with plugins! For some reason only some plugins are allowed right now – they’re bringing back the rest. Oh Mozilla, what are you doing!
Git / Files Support ⚖️
I keep my personal documents in a git repository. One of my favourite apps on iOS is Working Copy, an excellent git client. I even scripted it with Shortcuts to auto-commit everything whenever I plugged in my phone. There’s nothing like Working Copy on Android. One thing iPhones excel at is software designed mainly for tablet users, as due to strategic Google errors there isn’t the same market for Android tablets as for iPads.
So what do I do on Android? To my shock the answer is to run an entire Linux command line environment inside an app. You can install any package. This is called Termux and it is a wonder. I run the same script as on desktop to merge / add / commit / push all my documents automatically. It runs in the background on my phone. I was sure this would either not run reliably or drain my battery. It just works.
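The script is nothing special – here’s a sketch of the same shape in Python (repository path and interval are placeholders; my actual script differs in detail):

```python
# Sketch of the auto-sync loop described above: pull, commit everything,
# push, repeat. The repository path and interval are placeholders.
import subprocess
import time

REPO = "/data/data/com.termux/files/home/documents"  # placeholder Termux path

def git(*args):
    # check=False: a failed pull/push shouldn't kill the loop; retry next round
    return subprocess.run(["git", "-C", REPO, *args], check=False)

while True:
    git("pull")                                   # merge in remote changes
    git("add", "--all")
    git("commit", "-m", "auto-commit from phone")
    git("push")
    time.sleep(15 * 60)                           # placeholder interval
```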
I arranged it so other apps can access those files in git – much more flexible permissions system than iOS, but still controllable. Unfortunately I haven’t found a great text editor on Android for my needs – just a decent one.
Syncing ⚖️
I keep my own photos on my own server; on iOS I would sync new camera photos over SSH with Photosync. This again could only be triggered when the power was plugged in – frustratingly, the only time you can automate something.
For reasons that escape me now, Photosync just didn’t work on Android. So I use the two-way command-line file syncing tool unison, configured as part of the Termux sessions I describe above. It’s great! And syncs my photos frequently.
I use Fastmail (highly recommend) for my email, calendar and address book. Setting up syncing for that on iOS was a breeze – there’s a standard config file format which Fastmail provide, and it happens in a second. On Android… The assumption is you’re using GMail. So I had to buy a CardDAV/CalDAV sync app, and manually copy and paste all the server names and passwords over. Yeuch.
Miscellaneous ⚖️
Google Pay works, it’s as good as Apple Pay. I really like that you don’t have to double click the power button to use it – just unlock the phone and hold the NFC reader in the right place and it pays, no other action required. Fairphone has a hardware downside here – the NFC reader is in the middle of the phone, so it is harder to activate than if it was at the end like a wand.
Google Fit measures cycling automatically. I cycle casually as part of my day-to-day life, and like to measure the WHO heart points I naturally earn each day. On iOS, it kinda measured cycling as if it were a similar amount of walking – I tested it and it came out fine. But Google Fit does this better. It knows it is cycling. Otherwise the apps are similar – Apple Health is more flexible if anything. Google Fit is more of a taskmaster, demanding I don’t just walk but walk quickly.
Overcast is one of the best apps there is, an indie podcasting iOS app which I got really used to. Luckily plenty of competitors have cloned its key features of speed adjustment and automatic removal of silence. I’ve gone with Pocket Casts in the end.
Home / Lock / Desktop Customisation 🤖
Android has gone backwards! For some reason these days (2023) it has little to no lock screen customisation, just as iOS has gained some. I don’t really use my lock screen – on both devices it face unlocks before I can anyway. And on iOS it is just kinda annoying that you then have to swipe to get to the last app you were on.
In contrast, Android has a plethora of home screen apps. These let you do shocking things like move the icons where you like on the page. Radical I know! It’s wild to me that iOS doesn’t allow this. Android even has a standard for customising app icons, with multiple cheap packs to get your phone looking how you want. Like crayon icons or neon icons. It’s a joy.
It gets better. A few weeks into using my Fairphone I found the delightful, minimalist Niagara Launcher. It’s incredibly well polished, a thought-through UI, utterly fresh and new. I feel like I’m choosing how I use my phone, rather than it choosing me. My home page is the screenshot to the right – get me to show it to you.
A final wild card for Android… Turns out you can just plug it into a monitor. I took the USB-C cable I use with my laptop and plugged it into my phone. Ping! Android switches to a desktop mode, where the apps are all windows and there’s a start menu. And I can type into the Termux terminal window. This was very useful – mainly for setting up terminal commands. It’s a bit unloved – it looks like nobody has done much with this desktop mode for a few years. But it works, and is unlike anything an iPhone can do.
Conclusion
Smart phones are a commodity now. It doesn’t matter which you have. And yet, they are different, and you don’t have to use the same one forever.
Every stereotype I had about the two mobile operating systems was wrong – it’s Android that has better user experience polish, and iOS that has the better AI voice assistant.
I like my Fairphone 5. It feels fresh for my mind to learn something new.
Our brains have 100 trillion connections. Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does.
The recent surge in interest in generative AI was sparked by neural networks trained on high quality public, human culture.
Their use of this culture is extremely focussed – they only saw good quality inputs, and only saw each input once (see the paper One Epoch Is All You Need for why). If you show them lots of bad quality stuff, they’re not adaptive enough to tell and ignore it.
So what exactly makes the data they’re trained on “high quality”? I decided to dig into two examples — GPT-3 for text and Stable Diffusion for images — to find out. (I’m not an expert at training networks, so please leave comments about anything you see that I get wrong)
GPT-3 — good articles that Reddit linked to
GPT-3 feels a bit like it saw all the internet as input — actually they started by training another curator network to pick out the “high quality” parts of the internet (see 2.2 Training Dataset and Appendix A in the GPT-3 paper).
What is high quality? Well, the curator network for GPT-3 was taught its concept of high quality by looking at an internet corpus called WebText. That was made by grabbing a copy of all of Reddit, and looking at the outbound links in moderately upvoted posts/comments.
(Details: Read 2.2 Training Dataset in the GPT-2 paper— that says “at least 3 karma”, which doesn’t make clear sense to me. As far as I can tell it is Reddit users who have karma, not links or posts. OpenWebText2, a recreation of this dataset, used URLs from submissions which have a score of 3 or higher — see their documentation — which seems a good assumption for what GPT-3 did. Possibly they took links from posts by users with a karma greater than 3. Posts are also separate from comments, and users have a separate karma score for each.)
GPT-3 was also given lots of books and Wikipedia — but most (82%) of what it took in was the good pages that Reddit linked to, and other pages that “feel” similarly high quality.
Stable Diffusion — photos that a competition site rated highly
Once again, this begins with a copy of the whole Internet, this time keeping only the images that have alt-text attributes in the HTML, and using the alt-text as captions. This is already a filter — well made sites which care about accessibility will have more and better alt-text. A project called LAION then classifies those images using AI so they can be filtered by language, resolution, chance of having a watermark, and their aesthetic score.
Stable Diffusion’s training is in a complicated series of checkpoints, which starts off with a bit of general LAION data, but ends up mostly trained on highly aesthetic images. This is much the same as for GPT-3 — a curator AI (the LAION-Aesthetic Predictor V2) learns to judge high quality images. Those images from LAION that the predictor scores 5 or higher were used to train Stable Diffusion.
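Mechanically, that threshold is just a filter over image metadata – a sketch of the idea (column and file names here are illustrative, not the real LAION schema):

```python
# Illustrative sketch of the aesthetic-score filter: keep only images that
# the predictor scores 5 or higher. Column and file names are made up.
import pandas as pd

rows = pd.read_parquet("laion_metadata_shard.parquet")  # hypothetical shard
keep = rows[rows["aesthetic_score"] >= 5.0]
print(f"kept {len(keep)} of {len(rows)} images for training")
```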
But what does an aesthetic score of 5 mean? For a quick feel, this page shows increasingly aesthetic buckets of images from the full LAION dataset as you go down the page. Digging into the training data used to make the aesthetic predictor, there are two main sources:
1. Simulacra Aesthetic Captions – manually rated images made from an earlier (non-aesthetically trained) checkpoint of Stable Diffusion. Humans, I assume from the Stable Diffusion community, rated a couple of hundred thousand images by hand.
2. Aesthetic Visual Analysis – this is a download of all the images from DPChallenge, a 20-year-old digital photography competition site. For decades it has run competitions a few times a week, like “photograph bananas, any composition, must have at least one banana”. While they only get tens of entries to each competition these days, they used to get hundreds.
There’s a bit more complexity — a small number of specially aesthetically rated logos were thrown in, I think to improve typography and font quality. You can browse all the images used to train Stable Diffusion (browser by Andy and Simon; those links go to their blog posts about it).
Conclusion
The notable common properties are:
Each piece of training data is only shown once to the AI during training
Both have a core dataset with some human-curated metric for “high quality”
Both extended that core dataset by training a curator AI to pick out similar high quality items
It’s quite an odd situation, especially given the emergent properties of reasoning, coding and so on that these kinds of models have. The training mechanism isn’t particularly smart, the smart stuff emerges inside the neural networks so trained.
The AIs have learnt to think extremely well about a large chunk of human knowledge in symbolic form. Right now, they are heavily reliant on humans — every upvote you made on Reddit, and an old-time niche digital photography competition site.
This was ultimately quite inspiring – it’s led me to ask maybe a hundred people about their own inner lives. The answers so varied, I’m left in wonder at this hidden world that we barely talk about.
My favourite source about this is “The phenomena of inner experience” (a paper by Heavey and Hurlburt, 2008). It uses a method (Descriptive Experience Sampling, or DES) to randomly beep a bunch of volunteers in their everyday lives, and get them to then capture their current mental phenomena.
The kicker is this beautiful table 2. It lists the top 5 most common forms of inner experience. For each one there were participants who never experience it and other participants who experience it more than 75% of the time.
Just pause a moment to absorb that.
Take something that you feel is fundamental to life, to your experience of being conscious. For example, that you have an inner voice, or that you’re aware of your senses, or that you imagine visual imagery. For that thing, which you might be doing more than 75% of the time, there are substantial numbers of otherwise ordinary people who never do it at all.
Its framing account is a dispute between the psychologist and philosopher authors, but honestly that is a bit of a side show. It feels like it mainly consists of Schwitzgebel assuming other people have inner experiences like his, which they don’t. Hurlburt is very patient, and the discussions reveal a lot.
No, the important part is the individual experiences of their subject, Melanie. She’s a philosophy and psychology graduate, and you get the sense that in many ways she knows more than the older men writing the book.
The book consists of detailed dialogues in which Melanie recounts her experiences at the moment of a random beep, and Hurlburt quizzes her openly and intelligently to unpack and improve the quality of that description.
Melanie’s world is not like mine. It’s not so different, but it is not the same. Just as in user testing, the first sample is worth a fortune compared to no samples at all. These specific, concrete details of how someone else experiences being alive are inspiring and enlivening. They made reading the book worthwhile.
I’ll give two striking examples.
On the first day, Melanie sees emotion as a colour. She’s laughing at something to do with the documentation for a piece of furniture she’s assembling. Along with a verbal thought that it is funny, she gets a “kind of rosy yellow glow” in her head, all over like a “wash of color”.
Melanie says “It was a feeling that was very familiar to me, or I guess, the sight, you could say, of this colour that is really familiar to me and is one that I commonly associate with laughing at a joke or something that involves humour”.
I struggle to associate emotion, as is fashionable, with part of my body. It feels conceptual to me, simply raw emotion. To get a colour for an emotion is even more striking to me.
Schwitzgebel doubts she really sees the colour, mainly because he never does and because the literature never mentions it. Even when Schwitzgebel digs into that literature in the end section, I’m unconvinced. Unlike DES, previous methods never attempted to get decent, normal accounts of inner experience.
They seem to mainly consist of philosophers who assume everyone has the same experience as them, introspecting in their armchairs using ineffective techniques… Or of lower-level set experiments trying to catch subtle details, like the experience of the third soft resonance tone when playing two notes.
Much later, Melanie has an echo of her inner voice. She’s tidying up some dead flower petals in the sink, and thinks “They lasted for a nice long time” in her inner speech voice. That’s called “articulatory” – it is the inner voice that feels like you’re almost speaking.
Then, overlapping with that and repeating on top of itself several times like an echo, she inner hears her own voice saying “nice long time” “nice long time” “nice long time”…
Schwitzgebel doubts this one too. At first that it happened at all, and then focussing on the timing. Melanie reports the echo happening in a very small amount of real time. She felt all of the words in full, yet little actual time passed.
Doubting this seems bizarre and excessive to me. We know for sure, from examining the strategic planning our minds must be doing, that we can run reasonably high-fidelity simulations of almost anything very rapidly. We don’t experience them, and we don’t know what form they’re in, but a compressed, detailed modelling of some kind must be happening rapidly.
In dreams time is often odd – they can feel like a long time when it’s only ten minutes since you pressed snooze on the alarm. My instinct is the opposite of Schwitzgebel’s – of course time isn’t always real within our conscious experience! So I find it difficult to take him seriously.
The book ends with summary notes from each author, reflecting and responding. Neither changes their view. Hurlburt is happy with his life’s work developing DES, and Schwitzgebel is happy with his life’s work being cynical about what we know about our inner experiences.
They bring in a bunch of interesting research history. At the end of the 19th century there was a critical argument between Titchener, who believed all thought consisted primarily of images, and the Würzburg school, who believed in intangible mental activities. Both did research which apparently crashed and collapsed, and the quick summary is that everyone then ran to behaviourism and stopped thinking about inner experience.
I’m being an armchair introspector, which the book dislikes, but I really do think I don’t have very much visual imagery. It’s barely tangible, and usually just spatial without colour or texture. This makes it hard for me to take Titchener, or any of his research, or anyone who references him, very seriously. Especially now there are MRI scans to show there really are radical differences in visual imagination.
Another fun reference was to Flavell, who in the 1990s researched the inner experiences of 5-year-olds. It came out that they aren’t aware of their thoughts, even quite socially visible and important ones that their behaviour showed they were having. Flavell concluded that they must have been thinking, and that therefore their reports were wrong. When actually, perhaps 5-year-olds have a less developed form of consciousness, and “just do” more, without specific conceptual, visual or verbal awareness. This definitely feels like it needs more investigation, and we’d learn a lot.
Hurlburt ends by describing the difference between research that aims to explore and discover, versus research that tries to prove a theory. He says that introspection philosophy and psychology keep trying to jump ahead and test theories. I agree with him that it is too soon to do that – we don’t understand how the mind works at all.
We seem to be missing basic information about how different our inner worlds are from each other. We should use tools like DES, and develop more like it, and get many more people to introspect. We can grow our language and capabilities as a society.
Then perhaps we’ll have the tools and information to make theories, and understand more about that mystical experience of being a conscious being.
To my surprise this list is television heavy – I didn’t find any incredible new board games, and I was disappointed in most video games. It’s somewhat in order – my favourite is roughly last.
Thanks everyone who recommended these to me – you know who you are! I’m not going to link to where to watch things – for TV and films I use JustWatch to find a suitable source.
Community – Seasons 1-3
Rick and Morty is dense, witty, and often smart hard science fiction, at least for the first couple of seasons. Lots of people have recommended Dan Harmon’s earlier hit, Community, but its premise, set in a local US college, never seemed very appealing to me.
It’s brilliant – each bundle of twenty happy minutes is laugh out loud funny, while at the same time building up the characters, universe and connection. That is even before you get to the clever-clever high concept episodes, often based on films.
Not really worth watching after season 3, as Dan Harmon was fired as showrunner. He comes back later like Steve Jobs, but alas doesn’t create the iPhone.
Undone – Season 2
It seemed hard to make a second season of this beautiful, rotoscoped, ambiguous story about reality and the mind (article on the creator Kate Purdy’s own schizophrenia – she also worked on Bojack Horseman), yet they managed it.
The trick of having warm, rich, real acting, cast into a cartoon form, so that visual memory and hallucination feel real, continues to work (video on how they do it).
I fell again for the emotions of Alma’s family, watching the rainbow song on repeat afterwards. The seventh episode had me bawling, howling at the grandmother’s story and its subconscious connections to my own family.
It blurs fantasy and who we really are, in a way that is utterly relevant and bright.
The Hidden Life of Trees – Peter Wohlleben
Each snappy chapter is an astonishing insight into the complex, social and diverse way trees live.
At first it is simple things – that leaves partly vanish in winter to reduce the surface area exposed to storms. That some species are pioneers in empty ground, while others work only in existing forests.
Later it gets more shocking – individual trees vary genetically between each other as much as species of animals vary between each other. Our human heart rate measurably changes according to the health of a forest, probably by reading the chemicals the trees signal to each other with.
There is a whole world here, hitherto hidden from me, and its scientific detail barely studied and understood.
No need for an alien planet, look closer at ours.
Better Call Saul – Season 6
Somehow, this spin-off ended up being better than Breaking Bad. The first season didn’t seem much when I first watched it, but by season three the reviews were so good that I went back.
It’s now one of my favourite shows ever. This final season has more astounding cinematography, and a cathartic and earned ending.
The subtle detail in the expressions, tone and mood of Jimmy and Kim’s relationship has been the heart of the show for years.
There’s a peacefulness, a humanness and adultness to it. A few years ago it was extremely valuable to me – the only art that truly connected to the complexity of emotions and depth of relationships in my life.
Breaking Bad – Seasons 1-5
After watching Better Call Saul, I felt parched for high quality television, and decided to rewatch this ten years after it finished. I don’t normally do this at all.
Incredibly well made – beautiful and interesting cinematography, compelling acting, and plot-wise just so, so clever. Everything ties up and resonates well. It doesn’t have a single bad episode.
Even aspects I didn’t like the first time – notably Marie’s kleptomania – were utterly on point now that I understand mental health better.
This show’s themes don’t especially resonate with me personally, but its quality is ludicrously high, and it is engaging and authentic. It deserves all the praise.
Dirty Dancing – Secret Cinema
A friend unexpectedly took a group of us to see this classic 80s film at a kind of festival in a park in the west of London. I hadn’t seen the film before!
The whole experience was delightful. Bars and a funfair in the style of a 60s upstate New York holiday camp, including feeling like you had illicitly got into an actual backstage staff party. Dancing!
Then the film itself turns out to be really, really joyous, full of energy and love. Morals and ethics that are subtle and powerful – who can refuse a main character who pours water on somebody reading The Fountainhead! Dancing that was hot without cliché, so confident it is simply powerful.
Best of all – on entry to the festival everyone’s mobile phone was sealed in a locked bag and given back to us, so we couldn’t use them. This added a tangible presence to the whole experience. I hope more events do this!
Rise (En Corps) – Hofesh Shechter
Not that I’ve many to compare with, but Hofesh Shechter’s is by far my favourite dance troupe. Their mailing list led me to go and see this at the Institut Français’s cinema in London.
A beautiful film told in a straightforward yet neat way – suitable jumps in time and setting which are clear and add to the feeling.
I cried when the woman running the retreat centre in Brittany conveyed to the injured protagonist, a classical dancer, how falling lets you rise in a new way. Directly personal – I’m stuck in a local low due to fear of injury, just as she was.
Care and support from her sisters and her friends are shown lovingly, such as introducing her to just the new friend she needed at just the right moment.
Then Hofesh’s company – his style of dance takes her in her weakened state. It doesn’t just accept that she isn’t hiding it, it relies on it, not trying to make perfection. This warmed me to the core.
I can’t be perfect and I shouldn’t be. I should live each of my lives that I have.
The world of Stonehenge – British Museum
I’m still a member here years later because the exhibitions are shockingly well curated. This one (exhibition tour video, book) wasn’t really about Stonehenge. It was about a Northern European civilisation that lasted a couple of thousand years and yet doesn’t really even have a name.
Many intricate carved stone balls, almost mathematical in form and regularity, that, unwarned, you would say were made last week.
Preserved wood randomly unrotted in peat for millennia, revealing glimpses of wood henges and cross-marsh walkways we will never know.
Gold mined in Wales making a gorgeous glimmering shoulder garment for a woman, her source of social power a mystery.
A peat grave with the items so tangible to you she is as real as a modern girl – her woven basket, her wooden earrings, her bear-skin coat, her valuable beads.
This civilisation had no written language and most of its treasures have dissolved away. It was clearly incredibly sophisticated, it is just all hidden from us. Fragments of information accidentally preserved, or forensically deduced by modern material origin tracing.
A three-year-old draft blog post I just found. It feels worth publishing – the improvements in AI since then if anything make it clearer, and all the “right now” caveats justified.
It’s better to start by thinking of us as pattern matching devices first.
Not simple ones – such as modern deep learning AI that essentially does layered functions to measure correlation. Complex ones, that model causality in a sophisticated way we don’t remotely understand yet.
That’s intuition. Which is an odd word for it, that you’d only invent if you were overly focussed on language, not what we actually are.
So to language. One function of language is to attempt to describe what we pattern matched, our model of the world, to others. To influence them, to train them, to explain yourself to them.
If you’ve learnt a foreign language, or even just lots of varied words for similar concepts in your native language, you’ll know this is hazardous and inaccurate at best. No surprise – a few thousand words, even in combination, can’t convey the exact logic of hundreds of trillions of synapses.
Rationality and science are attempts to improve our thoroughness and willpower to agree truth, and make explicit our working.
Sometimes this goes reasonably well – academic Maths rarely ends up with persistent mistakes. But it only does this by intense training of people with a very specific ability, and by picking the easy use cases. By definition, Maths is about the cases where logic prevails – and even then there’s Gödel’s theorem to confound that simplistic view.
But generally, it is going to be inaccurate.
You can’t work out everything with data or current AI. You can use it to check whether you have made mistakes. You can feed your brain insights with it. If you’ve got loads of data and lots of resources and you do it really carefully, you can run randomised controlled trials (A/B testing) and at least be sure about the causal direction of what you learnt.
However hard you try, this won’t ever be a model as sophisticated as a human mind’s intuition is right now. Intuition is difficult to use, though, because our minds can just as easily be wrong. Good cultural practices of training and validation can hone our minds and insight.
Once we’ve gained our truth, there is only the limited bandwidth of language to try to help others gain it too.