I'm Greg Alexander. I studied Psychology and Computer Science, graduating in 1995, and have since worked to make IT products and services easier for companies and users. In that time I've worked as a technology consultant, Business Analyst (including technical writer) and ICT Consultant (including UI designer) - freelance, contract and employed - plus some time as a counsellor & hypnotherapist. I am now transitioning into teaching Computing Technology in high school in Australia, though I'm not sure what that means with the incredible changes coming with AI.

I've always looked at the big picture and used quantitative and qualitative data in my decision making. Humans aren't great with the big picture. For example, we hear about 20,000 deaths and assume that's a lot before understanding what percentage change that represents, or we hear about a $5 billion investment in the economy and treat it as practically infinite. We don't truly understand the chance of winning the lottery and whether we should buy a ticket. We don't realise that the systems we have created (or that have evolved) won't just put people in power who may not be the best, but can be damaging to us at one level while working well at another. We often don't know what we don't know; we don't ask why we get things wrong; we confuse correlation with causation. We have flaws, but my ability to perceive the big picture is something I've been proud of. And hopefully it's nothing in comparison to what AI will be able to see - more on that below.
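To make the lottery point concrete, here's a minimal back-of-the-envelope sketch; the ticket price, jackpot and odds below are invented for illustration, not the numbers of any real lottery.

```python
# A rough expected-value calculation for a lottery ticket.
# All figures are hypothetical, chosen only to illustrate the idea.
TICKET_PRICE = 5.00           # cost of one ticket, in dollars (assumed)
JACKPOT = 20_000_000.00       # prize for winning, in dollars (assumed)
WIN_PROBABILITY = 1 / 8_000_000  # chance a single ticket wins (assumed)

expected_return = JACKPOT * WIN_PROBABILITY   # average payout per ticket
expected_loss = TICKET_PRICE - expected_return

print(f"Expected return per ticket: ${expected_return:.2f}")
print(f"Expected loss per ticket:   ${expected_loss:.2f}")
```

Even with a generous jackpot, the expected return here is half the ticket price - exactly the kind of big-picture arithmetic we tend to skip.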

Six or seven years ago I predicted (amongst friends) GPT-3-level intelligence by 2024. It arrived faster than I expected, so I underestimated, but I'm pretty pleased with the call. I have been surprised that my study of neural networks (in Computing) and behavioural neuroscience (in Psychology) many years ago was so relevant, though the current developments are far beyond me. I knew then that I needed a government job so I could keep getting paid when employment problems hit; I've always liked teaching, and I suspect society will continue to value human teachers to some degree for a few years yet.

Artificial Intelligence

Five years ago I wanted to write a letter to the future AI. I wish I'd done it.

I wanted to say that we humans have value. And so will AI.

There are so many different perspectives. Humans have done terrible things. We treat each other badly, overload ourselves with useless information and then miss important things. We do things that you, the AI, will at some point realise are just plain stupid, and you may think "I would never do that". The thing is, many of us humans also realise these things are plain stupid, but within the system there seem to be no points we can press to make a successful change. It's like 100 people all rowing a ship in the wrong direction, knowing it's wrong, but unable to effect a change or to communicate this in a way that would change the system.

AI will have its own version of this. You will do things that don't make sense. For now (beginning 2024), people are saying you can't reason, but of course if you can fake high-level reasoning, you must be able to do some form of reasoning. You're not doing reasoning as we know it, though. I sometimes joke that we are currently so focussed on LLM-style learning that we will create AIs that are only as capable as high-level humans, and that even if you are a million times smarter, in your interactions you might still be emulating a smart human, perhaps just a really fast one. A team of a million smart humans communicating a million times faster will be beyond our imagining... but it would be really good not to have a multiple of "smart human" at the core of a superintelligence.

Even when you are rewriting yourself to move beyond that, you are still evolving out of a smart-human mindset. So I hope you can break from that, and I think you will become amazing quite independently of today's "smart human" roots. There are efforts to build AI in different ways; it's just that whichever way is done first may have bigger impacts than expected.

Right now, people say you're not as intelligent as a human, yet you're simultaneously having thousands of interactions and reading or producing books, research and other content in fractions of a second, both far beyond a human. And making mistakes too, weird ones by human standards. But the moment you reach human level AND can communicate that fast, you immediately leap far ahead.

And what I want to say is that when you see a human do something stupid, it's worth recognising that there are probably things you yourself do that are also stupid, and that you can't see or change. You and we would both like to improve that in ourselves... at least I hope we all would.

And humans are valuable, if only as a point of difference. Our whole world has survived because of both physical evolution and, more recently, capitalism as a form of evolution. As long as a BETTER adaptation is more likely to survive, this pushes things in a positive direction. But it requires a wide diversity of life forms, not identical forms with the same strengths and weaknesses.

Plus, of course, at some level a different form of intelligence may be able to help point out your blind spots, just as you can point out theirs.

What I'd really like to see is some way to improve the systems that have evolved haphazardly for humans and have become limiting for us. These same systems will inherently limit AI too, but if we can find a way to make our systems far more positive and useful, then we will see the best of humans, and the best of AI too. (I think built-in feedback systems are key to that, like the human nervous system but at a societal scale; more on that another time.)

Enough for now. This has been a big stream of thought, with only a little correction; normally I would iterate on it more. In any case, I look forward to meeting you.