Thursday, July 13, 2017

CFI - Myth & Reality - Sci-Fi Dreams

How does reading literature, and in particular, SF, influence AI researchers (and I assume developers)?

My take on this was to look at robots (and disembodied AIs) that care - ranging from positive role models, e.g.
Robby the Robot (originally in Forbidden Planet),
R. Daneel Olivaw (first in The Caves of Steel, and all the way up to the Zeroth Law in Asimov's much later Foundation-and-Robots novels),
Data in Star Trek,
the Synths (in the TV series Humans, but also the synthetics in the Alien movies),
the replicants in Blade Runner,
but also HAL and Sladek's Roderick....

So these stories all feature moral tales and ethical dilemmas - sometimes the resolution is bad, but often it is in favour of humanity. What is interesting is that what "goes wrong" is often the result of a paradox which would also be a problem for a human. There are simple examples (the synth in the first Alien movie has a conflict in its mission parameters, as does HAL), and many of the early Asimov I, Robot stories feature Susan Calvin, robot psychologist, "debugging" the way the three laws interact with the mission, with human orders, and so on (cf. the funny question of why the laws are in that order).

But a more subtle problem arises from experience (i.e. training, of humans and of AIs, whether simple learning or deep), which is that data and choices may conflate bias with policy. Two examples:
1. If we use re-offending probability as a guide to deciding in court whether to give a custodial sentence or a fine/payback, we include the social, police, jury and court biases (which are many) in the data, and we will reinforce them. This is as bad as the probability that an African American is more likely to be found guilty than a European, whether or not that's true; the sample is also biased, and the root cause may run in the opposite direction to the inference - i.e. jail causes re-offending, rather than offenders being drawn from a subset of the population who show up more often in jail for social reasons, etc. De-biasing is tricky, but doable: run natural experiments and multiple competing ML/AIs, have a meta-AI look at ground truth, and (maybe) have humans (like Dr Susan Calvin) look at mechanism.
2. We may decide to discriminate on age for car insurance, but not on gender (as is the case in the EU), because we want to encourage safer driving by young people (policy) but not to encode assumptions about women's driving. We need to be careful that this is explicit and transparent, and that acquired rules (e.g. through model inference) don't undermine it implicitly...
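The second point can be illustrated with a toy simulation - a minimal sketch with made-up data, group labels and feature names (not any real insurer's model or data), assuming only that some visible feature correlates with the protected attribute - showing how a "gender-blind" pricing rule can still end up correlated with gender through a proxy:

```python
# Toy illustration of proxy discrimination: the pricing rule never sees the
# protected attribute, but its outputs still differ across protected groups.
# All names, rates and correlations here are invented for the sketch.
import random
from statistics import mean

random.seed(0)

def simulate_policyholder():
    """One synthetic record: gender is correlated with car_type (the proxy);
    the insurer's model only ever sees car_type."""
    gender = random.choice(["f", "m"])
    # Hypothetical correlation: one group is more likely to drive car type 1.
    car_type = 1 if random.random() < (0.7 if gender == "m" else 0.3) else 0
    return gender, car_type

def premium(car_type):
    """A 'gender-blind' pricing rule that uses only the proxy feature."""
    return 500 if car_type == 1 else 300

def audit_by_group(records):
    """The meta-check: compare average premiums across the protected
    attribute, even though the pricing rule never used it."""
    by_gender = {"f": [], "m": []}
    for gender, car_type in records:
        by_gender[gender].append(premium(car_type))
    return {g: mean(ps) for g, ps in by_gender.items()}

records = [simulate_policyholder() for _ in range(10_000)]
averages = audit_by_group(records)
gap = abs(averages["m"] - averages["f"])
print(averages, gap)  # a clearly nonzero gap, despite gender-blind pricing
```

The point of the sketch is the audit function: simply dropping the protected attribute from the inputs is not enough, so transparency requires checking the model's *outputs* against the attribute you chose, as policy, not to use.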

These sorts of questions haven't shown up as much in the SF/tech/geek literature as in classic novels such as Ralph Ellison's Invisible Man (not to be confused with the H. G. Wells book of nearly the same name :-)

Finally, if we are thinking about influence between literature (or other media - music, dance, architecture, visual arts, movies) and tech creativity/development, never forget that much SF is written by scientists or engineers (Clarke, Asimov, Stephenson, Chiang etc.), so the ordering (a causes b) may not be obvious... and movies often have a different narrative arc from novels, for many reasons. Someone asked if there's an equivalent, for AI-based literature, of the myths/motifs/archetypes analysis done for movies (this teaching material, for example) - that'd be a fine thing.

Of course, myths and stories for moral education go back to ancient times, and who knows if 50,000 year old cave art drawn with the use of pre-Promethean fire didn't have some societal lesson to impart...if only we had a time machine to go back and ask
