The Seven Questions of Basic Income Implementation

At the first meeting of our Implementing a Basic Income in Australia group, I presented my outline of what I think are the fundamental questions which need to be answered before a Basic Income can actually be implemented.

In order to answer these questions we want to organise a range of experts on social and economic issues into working groups, so that they can discuss the consequences of each decision and how it will be beneficial or detrimental to society, economics, welfare, well-being, employment, power imbalance, freedoms, etc.

The questions are:

  1. How Much / How Often?
    $1 – $10,000+ / Paid daily – Paid annually
  2. What scale is it implemented on? Where?
    Small town? Council? City? State? National?
  3. Who gets it?
    Everyone? Citizens? Residents? 18+? Based on tax return submission? etc
  4. How is it funded?
    Local government? Federal Govt? Increased taxes? New (resource?) taxes? Debt? Transaction tax? Charity? Crowd funding? New money straight to the people?
  5. How long will it run for?
    2 years? 10 years? Indefinitely? 5 years on, 5 years off, etc?
  6. What does it replace?
    Replace all welfare? Just unemployment benefit? Nothing? Minimum wage? Wait and see?
  7. Will there be a transitional period? What will it look like?
    Instant implementation, or gradual implementation over time?

(Have I missed any? Please leave a comment below if I have!)

The answers to each of these questions often influence the answers to others. For example, if you want a National (Q2) Basic Income, it will be virtually impossible to fund it through charity or crowdfunding (Q4), but there is a chance that you could fund a Partial Basic Income (Q1) for 2 years (Q5) in a small remote town (Q2) via charity (Q4).
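
To make that interaction between scale (Q2) and funding (Q4) concrete, here is a back-of-envelope costing sketch in Python. Every figure in it is an illustrative assumption of mine (an adult population of roughly 19 million, a $10,000/year payment, a 1,000-person town trial), not a number from the group:

    # Rough Basic Income costing sketch. All figures below are
    # assumptions for illustration only, not proposals.

    def annual_cost(recipients, payment_per_year):
        """Yearly cost of paying every recipient the same flat amount."""
        return recipients * payment_per_year

    national = annual_cost(19_000_000, 10_000)    # ~$190 billion per year
    town_trial = annual_cost(1_000, 10_000) * 2   # $20 million over 2 years

    print(f"National scheme: ${national:,} per year")
    print(f"Two-year town trial: ${town_trial:,} total")

Even very successful charity or crowdfunding campaigns raise tens of millions at best, which is why they could plausibly cover the small-town trial but never the national scheme.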

Of course, a partial income in a small remote town isn't the ultimate goal, so then we're talking about a first-step implementation: a trial, or a demonstration of value, hoping that it will grow to other towns or else convince enough of the population to enact a nationwide Basic Income. In this case, we'd have to design the best "initial test case implementation" and then a second "ultimate goal implementation", and perhaps even design the strategy which will take us from the initial test to the ultimate goal.

Whether we want a small test case first or not is still to be answered. I don't believe the NHS, Medicare, welfare etc had incremental steps to implementation, so perhaps it is an error to think that a Basic Income would need them. Perhaps we should instead be focusing on the best possible design for Australia, and then fighting for grassroots support of that system while lobbying political parties and getting the support of influential think tanks.

This is all just a first step. We still need to reach out to existing Basic Income organisations (BIEN, QUT, Utrecht University (BIParty NL) etc) to see what information, research and conclusions they are able to share with us which will help inform our answers to these questions.

Transcendence – How it should have ended

So I got to watch Transcendence on my flight yesterday, and I was very impressed with it. I went in with extremely low expectations, because there have been so many bad philosophy of mind / AI / futurism movies lately that I just assumed this would be another one made by someone with clearly no idea of current thought on the near future, and that it would almost certainly take the usual line of "Fear science and technological progress, because it might kill us all, steal our souls and take away our humanity!!!" – which seems to be the modus operandi of just about every science and technology focused Hollywood movie.

It is quite sad that my first assumption about a movie on uploading was that its makers would know nothing about uploading, but Hollywood has given me too many examples of people making movies about things which they know nothing about. I mean, when you watch Morgan Freeman (someone who presents a science show!) say "It is estimated that humans only use 10% of their brain", you tend to feel like it is all beyond hope.

Well, anyway, I'm quite happy to say that it didn't lean too heavily on the 'fear technological progress' bandwagon (for the most part). There was definitely a fair share of "Beware the all-powerful AI!" fearmongering, but I actually felt it was largely justified. There is a very valid reason to be fearful of runaway AI (Terminator). So that wasn't too bad.

And more importantly, it seemed to have been written by someone who does actually have a clue about current futurist ideas with regards to uploading, AI and other associated technologies. The whole story was by and large quite realistic (within the usual realms of "let's speed this up for the sake of it being a movie").

What I am saying is that, if super intelligent AI were created in this sort of a setting, the series of events which follow could go something along these lines. The choices it made were (mostly) clever and progressive, and revolutionary in all the right ways – except for one obvious error, which was of course necessary for the 'drama' component of the movie.

Which brings me to the SPOILER ALERT part of this post.

If you read past this point, I will be revealing plot devices and how the movie ended and how I think it should have ended. Last warning.

The main error the AI made was 'networking' the minds of the people it healed together, so that they could communicate with one another over the network and so that it could inhabit their bodies and take over control. Sure, many people would love to volunteer to be networked with an AI (especially if doing so would heal all illnesses and weaknesses and make them super strong!), but very few people would like the idea of being taken over by that AI, and vanishingly few people like to look at other people being controlled by an external mind of unknown intent. And so, predictably, everyone who was ever allied with the AI quickly turned against it when they saw this cult-like behaviour from the 'army' of controllable individuals it was building.

It was creepy, it was weird, and it was the one step which really made it easy to fear the AI.

Of course, the nanobots slowly replicating their way across the planet is also a terrifying idea because there is the fear that they will grey-goo the planet, but that probably would have gone unnoticed or ignored if not for the growing number of people terrified of the ‘army’ that the AI was building (even though the respective threats are quite out of proportion).

So, the first and most obvious thing the AI wouldn't do (being far more intelligent than us mortal humans) is take actions which would obviously turn humanity against it.

The movie also fails to consistently apply the AI’s ability to read people, but again, this is just a necessary plot point for a movie.

How it should have ended

OK, the main point of this post. Assuming all of the rest of the plot devices need to stay in place to make a good movie, I think they screwed the ending up just a little bit. They had the AI keep Evelyn outside while they were being bombed (without a good reason) until she was injured, the attackers hoping this would force him to upload her (and the virus she was carrying). Of course, the AI knew about the virus and was saddened by the fact that Evelyn (his wife and creator) had lost faith in him and wanted to help destroy him, but there is absolutely no reason he wouldn't have removed her from the dangerous situation of being under mortar and artillery attack. Her injury was easily avoided, and thus the whole "I can either save her or upload the virus" conclusion to the movie is unrealistic.

That, and the fact that he can show Evelyn "everything" and have her understand that he really was healing the planet and its people, gives us the real solution to the movie: if Hollywood didn't need to back away from a utopian finish where everyone is happy and the world is completely provided for, he just needed to do that same trick with the people attacking him.

I think a nice finish would have been to have him get into Max's head, since Max represented the well-informed philosophical voice of concern over the risks of the technology, and bringing him around to understand the vision and reality of the situation would be the seed needed to bring everyone else around too. And then you could have the usual movie tension of Max arguing with the neo-luddite crazy woman, convincing the soldiers and all the rest of that jazz. (Though of course, the easier solution would be to just show all of them the same thing he showed Evelyn, all at once – but that is a little bit too easy.)

So yeah, there was no need for the false choice between saving Evelyn and killing himself in order to save Max's life. He could have easily shown everyone what he was actually doing, and then everyone could have gone on with life where everything was provided by the omnipotent, god-like being taking care of everything – but instead, because Hollywood still reflects American values, the better solution was to destroy the godlike AI taking care of everyone (socialism!) and send everyone back to the hardship and struggle of existence to which they are so accustomed. (Which, by the way, the movie didn't cover at all – making it look like "losing the entire internet" would be just a minor inconvenience, and not the end of the modern world as we know it, which it would be.)
