Thursday, April 14, 2022

onboard, board, offboard, outboard & knowledge base

there was an interesting internal tech talk recently at the Turing Institute by a fairly recent addition to the research engineering group, who had a lot of previous experience with knowledge bases built on various technologies in different organisations, and was mildly critical of the system that has evolved here.


One thing that struck me about this is that, however you construct such a system, much of it (like an iceberg) is not in the visible components but in how people use/navigate/update the knowledge, which is a shared delusion (like William Gibson's depiction of cyberspace in Neuromancer) - not in a bad way, but the longer the system exists, the harder it is for new people to acclimatise to it. Large parts of the structural information people use to work with it are in their heads, not online.

so the system could automatically document how different kinds of users use it, by keeping breadcrumb/paper trails (you can of course do this in a wiki) and then running some kind of statistical analysis to surface common, distinct modes/patterns explicitly. This could even be done in a privacy-preserving way by combining federated learning (e.g. in client-side tools, or browsers etc.) with differential privacy, perhaps....
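As a toy sketch of how the privacy-preserving part might work (everything here is hypothetical and illustrative, not from any existing system): each user's client perturbs its own page-visit vector with Laplace noise before sharing, in the style of local differential privacy, so the server only ever sees noisy reports, yet popular pages still stand out once enough users contribute.

```python
import math
import random
from collections import Counter

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def local_dp_report(pages_visited, all_pages, epsilon=1.0):
    """A user perturbs their own 0/1 visit vector before sharing.

    One visit changes the vector by at most 1, so sensitivity is 1
    and Laplace noise with scale 1/epsilon gives epsilon-DP per page.
    """
    return {p: (1.0 if p in pages_visited else 0.0) + laplace_noise(1.0 / epsilon)
            for p in all_pages}

def aggregate(reports, all_pages):
    """Server sums noisy reports; the noise averages out as users accumulate."""
    totals = Counter()
    for r in reports:
        for p in all_pages:
            totals[p] += r[p]
    return totals
```

With a few hundred users, the noisy totals still rank genuinely popular pages above untouched ones, which is all the pattern-mining needs.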


a project for an intern?

Tuesday, April 12, 2022

metaphorical computing considered lazy

there's a story that ECT was originally discovered as a way to treat manic people, after someone observed that chimps in captivity who got that way, but also had epileptic fits, were calmer after a fit. Realising you could induce something that looked like a fit in chimps, and therefore likely in people, the treatment was born, and many people suffered from this ludicrous idea for decades. I heard more recently that ECT has been somewhat rehabilitated: it isn't used as a means to control unruly patients but actually has therapeutic value. Still, the origin tale is alarming.


so what about other ideas that are based in leaky reasoning, for example...

artificial neural networks as a way to build classifiers? they have nothing like the same node degree distribution or mechanism for firing, so why do we build so many ANNs, almost none of which bear any resemblance to what goes on in our heads?

evolutionary programming (e.g. GP/GA) as a way to do optimisation? but note that evolution is natural selection of anything that fits a niche in the environment - that doesn't make it optimisation at all, just a choice.

bio-inspired search, e.g. based on ants laying pheromone trails? as with evolution, this is a blind process that assumes nothing about the setup, and is mind-bogglingly wasteful.
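The selection-versus-optimisation point above can be made concrete with a toy sketch (entirely illustrative, nothing here comes from any real library): niche selection keeps *everything* that clears a viability bar, while an optimiser ranks the population and picks a single winner.

```python
import random

def fits_niche(genome, threshold=0.5):
    """'Fitness' here is just viability: does the genome clear the bar?"""
    return sum(genome) / len(genome) >= threshold

def select_generation(population, threshold=0.5):
    """Natural-selection-style filter: keep anything viable, no ranking."""
    return [g for g in population if fits_niche(g, threshold)]

def optimise_best(population):
    """Optimisation-style choice: rank everything and take the single argmax."""
    return max(population, key=lambda g: sum(g))
```

Run on a random population of bit-strings, the selection filter returns a large, varied set of merely-adequate survivors, while the optimiser collapses to one "best" genome - which is the gap the evolutionary metaphor glosses over.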


Are there actually any vaguely sustainable ways of tackling these tasks (classifiers, optimisation, search)? Of course there are...