Friday, September 30, 2022

Explainable AI & Quantum

 Much "AI" in use today is basic machine learning, which is frequently simply statistics, which have been in use since the first actuarial and ordnance tables were devised to predict risk or targets...so several hundred years (or longer if the Greeks or Phoencians or Minoans used same math they had for astronomy for landing greek fire on other ships accurately, or insure their ships' cargo against storm damage...


As for autonomy, that arrived as soon as someone built a feedback loop. My fave paper on this is James Clerk Maxwell's Royal Society paper, On Governors, from 1868.
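
To make the feedback-loop point concrete, here's a tiny sketch (mine, not Maxwell's analysis - the engine model and all the constants are made-up assumptions) of a proportional governor steering a toy engine towards a target speed:

# A toy proportional governor: the kind of feedback loop Maxwell analysed.
# The engine model and constants are illustrative assumptions only.

def simulate_governor(target=100.0, kp=0.5, dt=0.01, steps=2000):
    speed = 0.0
    for _ in range(steps):
        error = target - speed                   # what the governor senses
        throttle = kp * error                    # proportional correction
        speed += dt * (throttle - 0.1 * speed)   # crude engine plus friction
    return speed

print(simulate_governor())  # settles autonomously, no human in the loop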



For "black box" (as in "inexplicable" AI)- this is true of any system so complex that few or no single human understands all of it - so pretty much any smart phone (without even getting into what goes on in the camera s/w) - nobody could both design the chip and write

the OS (actually i think i know one person, but he's probably the last).



A "deep learning" (aka neural net) is usually explainable if someone just spends the time and energy (it is computationally pricey) - two techniques


1/ Shapley values - for example here



2/ energy landscapes - as in this

https://www.pnas.org/doi/full/10.1073/pnas.1919995117


Roughly, you can think of it as computing significant changes in the entropy of the net at each step in training, and then, using Shapley values on the input features, identifying what caused that change in the neural network (analogous to change-point detection, with thresholds etc.), and then exporting the feature list + decision as a new branch in a decision tree or random forest (for example).
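
As a very rough, self-contained sketch of that pipeline (my illustration, not the method of either paper): track a crude entropy proxy over training checkpoints, flag steps where it jumps, and at those steps compute exact Shapley values of the input features for a prediction. The entropy proxy, thresholds and toy model below are all assumptions.

# Sketch of the "entropy change-point + Shapley values" idea described above.
# All functions, thresholds and the toy model are illustrative assumptions.

import itertools
import math
import numpy as np

def weight_entropy(weights):
    """Shannon entropy of the normalised absolute weights, a rough proxy
    for how 'spread out' the network's parameters are at a checkpoint."""
    w = np.abs(np.asarray(weights, dtype=float)).ravel()
    p = w / w.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def change_points(entropies, threshold=0.5):
    """Flag training steps where the entropy proxy jumps by more than threshold."""
    return [t for t in range(1, len(entropies))
            if abs(entropies[t] - entropies[t - 1]) > threshold]

def shapley_values(predict, x, baseline):
    """Exact Shapley values of each feature of x for the scalar function predict,
    using baseline values for 'absent' features. Exponential in the number of
    features - exactly the expense complained about later in the post."""
    n = len(x)
    phi = np.zeros(n)
    def value(subset):
        z = baseline.copy()
        z[list(subset)] = x[list(subset)]
        return predict(z)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for s in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(s + (i,)) - value(s))
    return phi

# Hypothetical usage: entropies collected at checkpoints, then attribution
entropies = [weight_entropy(np.random.randn(100)) for _ in range(10)]
steps = change_points(entropies, threshold=0.05)
phi = shapley_values(lambda z: float(z @ np.array([1.0, -2.0, 0.5])),
                     x=np.array([1.0, 1.0, 1.0]),
                     baseline=np.zeros(3))
print(steps, phi)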


Interestingly, once you have built an explanation for a neural net, you can often replace the thing with a random forest or some other directly (i.e. self-) explaining approach. This was sort of obvious once people realised you could massively compress neural nets (even using lossy compression algorithms, as for video), suggesting most of the links and weights were redundant.
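
Here's a hedged sketch of that replacement step using scikit-learn: fit a small net, then train a random forest to mimic the net's own predictions (often called distillation or model extraction). The dataset, model sizes and scores are illustrative, not anyone's reported results.

# Sketch: distil a neural net into a self-explaining random forest.
# Dataset and hyperparameters are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_train, y_train)

# The forest is trained on what the net *says*, not on the original labels.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, net.predict(X_train))

print("net accuracy:   ", net.score(X_test, y_test))
print("forest fidelity:", forest.score(X_test, net.predict(X_test)))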


So here's the quantum bit (pun intended) - the problem with computing Shapley values and energy landscapes at every step in the training iteration is that it is very expensive (compared to the training itself), so if we have to do it often, this is unsustainable.


However, these (especially the energy landscape) might be amenable to computation by an analog quantum computer (described herein), perhaps making this affordable. Analog quantum computers are available, and indeed have been applied to expensive problems like computing the transfer function of a 3D space to multiple radios (see the Princeton work on this).

Tuesday, June 21, 2022

the philosophical problems of matter transportation and time travel

There's a great book by Harry Harrison called One Step From Earth, where he goes through pretty much every possible philosophical interpretation of how matter transporters could/might work, including that they might just copy a person rather than "move" them...


There's also at least one short story by Philip K. Dick where every time someone time "travels", they are actually sending a copy of themselves to the next time - the story ends in havoc as the population of the world ends up being dominated by millions of different-aged copies of the first time traveller...


The problem with this is that there's a point where philosophy meets physics or chemistry or biology.


As in the play, of course, your cells are replaced over time, so that you are not the same set of components that you were (say) 7 years ago - but this is also true at the level of particles, only far faster....


And also at the level of cognition (given brain plasticity, and how it is believed memory really works, we are continually modifying who we "are" as we experience new things, even just recalling old things)...



For a more interesting take on paradox, Behold the Man by Michael Moorcock is pretty ingenious... and for paradox-busting par excellence, Robert Heinlein's classic By His Bootstraps is the business...


So, taking the memory model as a template for the philosophical challenge of continuous identity, it seems to me that there are really several problems:


  • Fidelity

if something is a high-fidelity copy of the previous instance of a person, and the previous version is replaced, then from everyone else's point of view, this is the same person.


  • Memory

if the next instance of a person has the same (or very similar) memories to the previous instance, then they can delude themselves that they are the same person, as they will have the illusion of continuous existence - this is actually no different from how vision works, where your visual cortex has to make an apparently coherent, spatially and temporally continuous visual space out of what your eyes/retina detect, despite the fact that that input is intermittent and imperfect...


  • Consciousness

this is very tricky, since the locus of attention moves ahead of (anticipation etc.) as well as behind (memory) the current moment... however, if the brain is just a machine, then it is reasonable for the model it runs of the world to include prediction, and that model itself is copied from instance to instance of the person, providing the illusion of continued consciousness...

Thursday, April 14, 2022

onboard, board, offboard, outboard & knowledge base

 There was an interesting internal tech talk recently at the Turing Institute by a fairly recent addition to the research engineering group, who had a lot of previous experience with knowledge-base technologies in various different organisations, and was mildly critical of the system that had evolved here.


One thing that struck me about this was that however you construct such a system, much of it (like an iceberg) is not in the visible components, but reflects how people use/navigate/update the knowledge, which is a shared delusion (like William Gibson's depiction of cyberspace in Neuromancer) - not in a bad way, but the longer the system exists, the harder it is for new people to acclimatise to it. Large parts of the structural information people use to work with it are in their heads, not online.

So the system could automatically document how different kinds of users use it, by keeping breadcrumb/paper trails (you can of course do this in a wiki) and then doing some kind of statistical analysis to surface common, distinct modes/patterns explicitly. This could even be done in a privacy-preserving way by combining federated learning (e.g. in client-side tools, browsers etc.) with differential privacy, perhaps...
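
A rough sketch of what that could look like, under the entirely assumed setup that each client keeps its own breadcrumb trail, turns it into transition counts locally, adds Laplace noise for differential privacy, and only ships the noisy counts for aggregation:

# Sketch: privacy-preserving mining of navigation patterns from breadcrumb trails.
# Page names, epsilon and the aggregation scheme are all assumptions.

import random
from collections import Counter

def local_transition_counts(trail):
    """Count page->page transitions in one user's breadcrumb trail (client side)."""
    return Counter(zip(trail, trail[1:]))

def add_laplace_noise(counts, epsilon=1.0):
    """Add Laplace(1/epsilon) noise to each count before it leaves the client
    (difference of two exponentials is a Laplace sample)."""
    return {k: v + random.expovariate(epsilon) - random.expovariate(epsilon)
            for k, v in counts.items()}

def aggregate(noisy_reports):
    """Server side: sum the noisy per-client counts to surface common paths."""
    total = Counter()
    for report in noisy_reports:
        for k, v in report.items():
            total[k] += v
    return total.most_common(5)

trails = [
    ["home", "projects", "wiki", "howto"],
    ["home", "wiki", "howto", "faq"],
    ["home", "projects", "wiki", "faq"],
]
reports = [add_laplace_noise(local_transition_counts(t)) for t in trails]
print(aggregate(reports))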


a project for an intern?

Tuesday, April 12, 2022

metaphorical computing considered lazy

 There's a story that ECT was originally discovered as a way to treat manic people, after someone observed that chimps in captivity who became that way, but who also had epileptic fits, were calmer after a fit. Realising you could induce something that looked like a fit in chimps, and therefore likely in people, the treatment was born, and many people suffered from this ludicrous idea for decades. I heard that, more recently, ECT has been somewhat rehabilitated: it isn't used as a means to control unruly patients but actually has therapeutic value - yet the origin tale is still alarming.


so what about other ideas that are based in leaky reasoning, for example...

Artificial neural networks as a way to build classifiers? They have nothing like the same node degree distribution or mechanism for firing, so how did we come to build so many ANNs, almost none of which bear any resemblance to what goes on in our heads?

Evolutionary programming (e.g. GP/GA) as a way to do optimisation? But note that evolution is about natural selection of anything that fits the niche in the environment - that doesn't make it optimisation at all, just choice.

Bio-inspired search, e.g. based on ants trailing pheromones? As with evolution, this is a blind process that assumes nothing about the setup, and is mind-bogglingly wasteful.


Are there actually any vaguely sustainable ways of tackling these tasks (classifiers, optimisation, search)? Of course there are...

Monday, December 20, 2021

very good graduate school opportunities

 Strongly recommend Max Planck's CS@max: a well-resourced, world-leading research institute in a pleasant setting.

Monday, November 29, 2021

Principles of Communications Week 9 L16@LT2 11am, 30th Nov 2021 - Systems

 Finally, some systems principles... put into practice.

and the Course Summary 

In general, everything lectured is examinable. Any additional details (e.g. protocol specifics like packet headers/fields etc) or equations should be available as part of any exam questions. 

Thursday, November 25, 2021

Principles of Communications Week 8 L15@LT2 11am, 25th Nov 2021 - Traffic Management

 One other dimension along which to think about Traffic Management: signaling.

congestion is a signal, whether conveyed by packet loss or by ECN

price is a signal, whether per session/flow, or per day/month/year

user demand is a signal - whether sending VBR video or rate-adaptive TCP or QUIC traffic, the rates and durations (or file sizes) tell the network provider what to provision for...

aggregate demand (e.g. peering traffic or customer-provider traffic) is a signal


So even without deploying an explicit signaling protocol (e.g. RSVP), there is a lot of metadata telling stakeholders what is affordable and what should be afforded.
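
As a toy illustration of the first of those signals (my own example, not part of the lectured material): a sender doing additive increase / multiplicative decrease, treating a loss or an ECN mark simply as "slow down".

# Toy AIMD rate adaptation driven by an implicit congestion signal.
# The link capacity and constants are made up for the example.

def aimd(steps=50, capacity=100.0, increase=5.0, decrease=0.5):
    rate, history = 10.0, []
    for _ in range(steps):
        congested = rate > capacity               # loss or ECN mark: "slow down"
        rate = rate * decrease if congested else rate + increase
        history.append(rate)
    return history

print(aimd()[-10:])   # the rate oscillates around what the path can afford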