Wednesday, November 28, 2018

Principles of Communications, 2018/2019 - Week "6" 26,28 Nov 2018

This week, wrapping up with optimisation and adaptive traffic, and
traffic engineering/signaling/provisioning

for supervisions, you might want to review these notes/example questions, plus the list of what in past papers is (and by implication is not) relevant to this year's material:

Wednesday, November 21, 2018

Principles of Communications, 2018/2019 - Week "5" 19, 21, 23 Nov 2018

This week, we've looked at scheduling and queue management - for more details of the underlying theory, please see the Original GPS paper
For more details of Qjump, the code is linked from the published paper! 
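The heart of GPS-derived schedulers like WFQ is serving packets in order of virtual finish time. Here is a toy sketch of that one idea (not the paper's full algorithm - it ignores the virtual-clock update for newly busy flows and assumes every packet is queued at time zero; flows, sizes, and weights are made up):

```python
import heapq

def wfq_schedule(packets, weights):
    """Toy weighted fair queueing: packets is a list of (flow_id, size)
    tuples all queued at time zero; each flow gets bandwidth in
    proportion to its weight. Returns packets in transmission order."""
    finish = {f: 0.0 for f in weights}  # last virtual finish time per flow
    heap = []
    for seq, (flow, size) in enumerate(packets):
        # virtual finish time grows by packet size scaled down by weight
        finish[flow] += size / weights[flow]
        heapq.heappush(heap, (finish[flow], seq, flow, size))
    order = []
    while heap:
        _, _, flow, size = heapq.heappop(heap)
        order.append((flow, size))
    return order

# flow 'a' has three times the weight of 'b', so both its packets go first
print(wfq_schedule([('a', 100), ('b', 100), ('a', 100), ('b', 100)],
                   {'a': 3, 'b': 1}))
# [('a', 100), ('a', 100), ('b', 100), ('b', 100)]
```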

Friday, November 16, 2018

Principles of Communications, 2018/2019 - Week "4" 12,14,16 Nov 2018

Control Theory:- see notes here

Original Paper by James Clerk Maxwell!

on governors basically for steam engines
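Maxwell's paper worked out when such feedback loops are stable. A toy proportional controller (purely illustrative; all the numbers are made up) captures the idea: each step multiplies the error by (1 - gain), so a gain between 0 and 2 converges on the setpoint, while a larger one "hunts" or diverges - exactly the condition a governor must satisfy:

```python
def simulate_governor(setpoint=100.0, gain=0.5, steps=30):
    """Toy proportional feedback loop in the spirit of a steam-engine
    governor: the valve is adjusted in proportion to the speed error."""
    speed = 0.0
    history = []
    for _ in range(steps):
        error = setpoint - speed
        speed += gain * error    # actuation proportional to error
        history.append(speed)
    return history

h = simulate_governor()
# the speed converges towards the setpoint of 100
print(round(h[-1], 2))  # 100.0
```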

Friday, November 09, 2018

Principles of Communications, 2018/2019 - Week "3" 5,7,9 Nov 2018

This week, we covered
multicast, mobile, and random routing, and
made a very brief start on flow control, just dealing with open loop (call setup/admission control and parsimonious flow descriptors).
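A parsimonious flow descriptor is typically just a (rate, burst) pair enforced by a token bucket. A minimal sketch (illustrative values, not from the lectures):

```python
def token_bucket(events, rate, burst):
    """Toy (rate, burst) token-bucket policer - the classic
    parsimonious open-loop flow descriptor. events is a list of
    (arrival_time, packet_size); returns per-packet conformance."""
    tokens = burst           # bucket starts full
    last = 0.0
    conforming = []
    for t, size in events:
        tokens = min(burst, tokens + (t - last) * rate)  # refill at `rate`
        last = t
        if size <= tokens:
            tokens -= size
            conforming.append(True)
        else:
            conforming.append(False)  # non-conforming: drop, mark, or delay
    return conforming

# rate 10 bytes/s, burst 20 bytes: the third back-to-back packet
# exceeds the burst, but by t=2s the bucket has refilled
print(token_bucket([(0.0, 10), (0.0, 10), (0.0, 10), (2.0, 10)], 10, 20))
# [True, True, False, True]
```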

Tuesday, November 06, 2018

corporate/brand top level domains

was teaching about internet standards process today, and was asked about gTLDs being handed out to the likes of Apple etc - here's an interesting set of data about the success of that process - it seems also to have caused some political debate:-)

Friday, November 02, 2018

Principles of Communications, 2018/2019 - Week "2" 29.10, 31.10, 2.11

Covering BGP core attributes & hacks, model, and performance.

Why might you be interested in BGP today?
Just one traffic redirection/interception story
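The BGP decision process is why redirection works: whichever announcement wins the tie-break attracts the traffic. A heavily simplified sketch (illustrative only - real BGP has many more tie-breakers, and these routes are made up):

```python
def best_route(routes):
    """Toy BGP decision process: prefer the highest local_pref, then
    the shortest AS path. Real BGP has many more steps after these
    (MED, origin, eBGP vs iBGP, router ID, ...)."""
    return max(routes, key=lambda r: (r['local_pref'], -len(r['as_path'])))

routes = [
    {'peer': 'A', 'local_pref': 100, 'as_path': [64500, 64510]},
    {'peer': 'B', 'local_pref': 100, 'as_path': [64520]},
    {'peer': 'C', 'local_pref': 90,  'as_path': [64530]},
]
# A and B tie on local_pref, so B's shorter AS path wins
print(best_route(routes)['peer'])  # B
```

A hijacker who announces a shorter path - or a more specific prefix, which wins on longest-prefix match before any of these attributes are even compared - pulls the traffic towards itself.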

Next week, multicast and random routing

Friday, October 26, 2018

Principles of Communications, 2018/2019 - Week "1" 24&26.10.18

This week, course started with brief Intro (what's in course)

Then skip to last lecture on:
Systems (will revisit at end of term)

Routing 1 - intro + fibbing (hybrid central/distributed intra-AS routing)

Monday 29th, will start on BGP...

Monday, October 22, 2018

white space graphs

we often form graphs by adding edges to a collection of vertices, or by adding vertices and edges - we can also form graphs by moving (rewiring) the ends of edges from one node to another, or by removing edges and nodes.
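The rewiring construction can be sketched in a few lines (an illustrative toy in the style of Watts-Strogatz rewiring on a ring; the size and probability are made up):

```python
import random

def rewire(edges, n, p, seed=1):
    """Form a new graph from an old one by moving (rewiring) one end
    of each edge, with probability p, to a uniformly random vertex."""
    rng = random.Random(seed)
    out = []
    for u, v in edges:
        if rng.random() < p:
            v = rng.randrange(n)   # move one endpoint somewhere random
        if u != v:                 # drop any self-loop we created
            out.append((u, v))
    return out

ring = [(i, (i + 1) % 8) for i in range(8)]  # start from an 8-cycle
print(rewire(ring, 8, 0.5))
```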

how about we form graphs by cutting holes in a piece of paper (or by dropping polygonal shapes on a surface)? they can overlap....

what sort of graphs do you get when you just make them out of interstitial space?

Wednesday, October 17, 2018

Public Understanding of Machine Learning

"I understand we understand one another" is a great line from the Philadelphia Story - one fab film (also made as a great musical).

So machines start to understand us, at least statistically, programmes create models that may or may not have predictive value about our behaviour (film viewing, travel preferences, dating, etc etc).

But do we understand the machines? (Or do we, as Pat Cadigan said in the prescient novel Synners, need to "change for the machines"? including 50 pence bits, as she implied :-)

David Spiegelhalter has spent a lot of time working out how to convey Uncertainty to the non-technical person. Now we have a more complex task:

not just mean, variance, confidence limits etc, but also

principal components
linear regression
random forest
variational approaches
convolutional neural networks

etc etc

What hope have we for this? What fear do we have of that? All is blish, all is blush, all is plash.

Monday, October 15, 2018

The nine circles of hell for Computer Science and the Law

Nine Circles of Hell for Computer Science...
in the dock, with apologies to Dante

These are (in order):

1. Cloud (jurisdiction)
2. Things (liability)
3. ML (explicability)
4. Blockchain (privacy)
5. Compliance (cybersecurity)
6. Robots (safety)
7. Singularity (upload)
8. Legol(*) (sustainability)
9. QC (probable cause)

* as with the John Cleese interviews with his therapist, where moving house is more traumatic than divorce,
and only just after loss of a loved one and losing a limb, I think explaining computer science for law
is almost as hard as explaining law to computer scientists, and only marginally easier than
explaining Quantum Computing...

1. Cloud (jurisdiction)

I don't think we're in Kansas any more...

2. Things (liability)

The thing is, this is all your fault.

3. Machine Learning (explicability)

I told you so

4. Blockchain (privacy)

"You can't fool me, there ain't no Sanity Claus"

5. Compliance (cybersecurity)

You can't prove a system secure, only that it is (now) insecure,
so how do you claim compliance (see 1,2,3,4,9)

New angle - Privacy Enhancing Technologies are pretty hard to explain, too - if they are used as part of compliance, how can you tell?

6. Robots (safety)

"I thought you said 'a robot shall not inure a human being or allow one to come to arms through inaction".

7. Singularity (upload)

"where have all the people gone, today"

8. Legol(*) (sustainability)

see also

9. Quantum Computing  (probable cause)

you are trying to persuade a jury
that someone is guilty
beyond a shadow of a doubt

here are four possible examples of quantum computing algorithms:

Sunday, October 07, 2018

detectovation - 3 writers trying their hand

I recently read the latest "Robert Galbraith" Cormoran Strike (#4) novel (Lethal White)
and the latest Stephen King's Mr Mercedes linked novel (#4, The Outsider) and am looking forward to getting Kate Atkinson's latest novel, but have read the 4 Jackson Brodie Detective novels.

What's common? well aside from these being detective novels, by very well known writers, they are also all writing outside their main (or at least previously known-for) genre, which are respectively Fantasy (JK Rowling/Harry Potter), Horror (Stephen King's Shining, It, Carrie, you name it) and Kate Atkinson's non-genre (e.g. Behind the Scenes) but also Sci-Fi tinged tales (Life-after-life could be seen as in the same genre as Slaughterhouse Five or Tale for the Time Being).

It's interesting because all three are fantastic writers - imagination? in gallons. plot? incredible (but believable). readable? don't be silly!

What's interesting is how they fare in what is often a tightly stylized convention-bound genre.

They all do well on plot.
They are all very readable.

However, your mileage varies on characterisation, and what I found most surprising was that this is where Stephen King, for me, was most successful. Both Hodges and Gibney are amazingly drawn, in all the books. Strike is great, but the other characters are a little thin. Similarly, while Brodie is a wonderful creation, I wasn't grabbed by the other people walking on and off the scene.

This is not to knock the books - they are all great reads, and by writers on top of their game, style and so on. They are all page turners. Buy or borrow them all. Unless you hate detective fiction! Then buy the writers' other books, which are also fantastic!

Wednesday, September 19, 2018

The power struggle

The power struggle shows up in very small scale challenges, like

forcing you to use templates, projectors, printers and so on

More seriously, to use powerpoint you need a laptop or tablet, the right sort of cable, and electricity, which is in short or unreliable supply in many parts of the world.

noise, cost

All of this illustrates that the real problem is to do with asymmetric power, hence the power struggle really concerns the fact that the rich get richer, the poor don't.

Power struggle:

amazon, baidu, chrome, didi, electricity, facebook, google
hsbc, instagram, javascript, kdd, lambda, maps, mooc,
oed, pinterest, queen, r, s, twitter, uber, whatsapp/wechat,
xkcd, youtube, zoom

comprehensible AI - or explicable ML, or understandable computerised statistics

How do we know that the underlying machine that does automated decision making isn't wrong?

Well, most machine learning (ML) is about as sophisticated as the stuff people used to do with
SPSS (then S, then R, and sometimes Matlab, or Python libraries), and
consists of linear regression, occasionally something a tiny bit cleverer like naive Bayes, random forests, etc. Verifying the code, and its properties after various data have been fed in, is very simple. Indeed, the vast majority of things people are doing with computers doing "AI" (not artificial intelligence, just stuff that does statistics on data, dynamically, and usually gets better at it) are things humans did with pencil, paper, and tables, hundreds of years ago - and with mechanical calculators 100+ years ago.
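For instance, ordinary least-squares linear regression - the workhorse in question - is short enough to check by hand (made-up data, chosen so the true line is recovered exactly):

```python
# Ordinary least squares via the textbook formulas -- exactly the kind
# of statistics people once did with pencil, paper, and tables.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]          # lies on y = 2x + 1 exactly

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(slope, intercept)   # 2.0 1.0
```

Every intermediate quantity - the means, the covariance, the variance - is inspectable, which is what makes verifying this kind of "AI" straightforward.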

People don't use deep learning in these systems (e.g. convolutional neural nets)
for much (outside of some image classification - "it's a cat!")... in
practice, at least as far as we can tell... there's no need; plus the
training time, data, and costs (energy/processing) for neural networks
are awful, and they are badly thrown by adversarial input

If you must use things like that, and are interested in their
properties, one approach (actually from the Turing Institute),
now deployed in google's code, is described here:
it generalises to other systems, and has been scaled up by folks at DeepMind
to cope with systems with many dimensions, with google also using a slightly different approach:

That's not verification in the sense a computer scientist would mean - it's
empirical/evidential - there are some more complex approaches for
that sort of thing...

You'd definitely need to keep checking the system's behaviour as it
is trained more, as a NN can suddenly switch its behaviour if the
input is significantly novel - an illustration of this is in Wales'
work in Cambridge on energy landscapes - see

or you can break the system down based on a model
and generate training sets (or partition the neural net) either GANs or segmenting -

I have no idea how much anyone in the Real World uses
general model inferencers (as per MCMC/Metropolis-Hastings and
probabilistic programming techniques), but these are relatively
transparent in the things they output too - we need to map the communities of practice, asap.

This all needs documenting in a decent way because it is (increasingly) not magic/fiddle factors/pixie dust....

However, here's a reason to care about running a Butlerian Jihad:

Friday, September 14, 2018

Entangled Neural Networks & privacy preserving deep learning

There's a natural synergy between quantum computing and neural networks - a collection of entangled particles have correlated state - so instead of moving through the very large state space, they retain information with lower entropy.

so when we train a neural network made of quantum neurones (queurones), we want to increase correlation when the output vector agrees more with our goal, and decrease it when it disagrees.

so this just means generating more or fewer particle pairs with spin (for example), or observing one of the particles (to destroy the entanglement).

Hence we can build a very fast, high dimensionality neural net with only a few queurones, and we can also make sure that its operation cannot be observed without completely destroying its learning.

Thanks to Adria Gascon and Graham Cormode for discussion that led to this idea.

Thursday, July 19, 2018

AI and ML Ethics for a Sane Society

AI and ML Ethics for a Sane Society - The AIMLESS Manifesto.

No human shall be diminished by Artificial Intelligence or Machine Learning.

In the presence of artifice, humans shall always retain agency.

An Artifice shall be legible.

A new artifice shall only be introduced if successful in negotiation with any humans concerned.

Ownership of Artificial Intelligence shall not
be permitted to powerful individuals or organisations.

Know that the AI shall not break things, before it is allowed.

The AI shall not know things that are unknowable to any human.

Emergence shall be curtailed until it is comprehended by humans.

To conclude:
A little Machine Learning is a dangerous thing,
but the Butlerian Jihad went too far

Friday, May 11, 2018

The road towards self driving bicycles

There are a number of good arguments for why we need self driving bicycles, and here I outline what I feel are the leading ones.

velib/borisbike/copenhagen bikes etc all suffer from 2 problems
1/ the bikes often all end up on one side of a city if it rains in the afternoon (or at the bottom of a hill), and need "rebalancing" for the next day - this means picking them up in trucks and driving them around.
imagine if the bikes could self-rebalance -

A side effect of the bikes being powered and able to navigate is that people might use them to go up hill, or use power to help when wearing nice clothes and not wishing to get sweaty. Plus they wouldn't need to know their way - the bike would tell them.

2/ bikes would act as calming for cars - and this would include self-driving bikes moving back to their default constellation - would slow down all those rat running crazy petrolheads doing 30 in a 20kph zone.

3/ the self driving bikes could be used ethically to train self driving cars not to run over cyclists

4/ if we can't make self-driving bikes work, there's no hope of making delivery drones or self-driving cars ever fly.
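The self-rebalancing in point 1 could be as simple as greedily matching surplus stations to deficit stations (a sketch with made-up station names and target counts):

```python
def rebalance_moves(counts, target):
    """Greedy sketch of self-rebalancing: stations holding more bikes
    than their target send the extras, one ride at a time, to stations
    below target. Returns a list of (from_station, to_station) rides."""
    surplus = {s: c - target[s] for s, c in counts.items() if c > target[s]}
    deficit = {s: target[s] - c for s, c in counts.items() if c < target[s]}
    moves = []
    for src in list(surplus):
        for dst in list(deficit):
            while surplus[src] > 0 and deficit[dst] > 0:
                moves.append((src, dst))   # one bike rides itself over
                surplus[src] -= 1
                deficit[dst] -= 1
    return moves

# after a rainy afternoon: everything is at the bottom of the hill
print(rebalance_moves({'hill_top': 0, 'hill_bottom': 6, 'station': 3},
                      {'hill_top': 3, 'hill_bottom': 3, 'station': 3}))
```

A real scheme would also weigh distance and battery, but the point stands: no trucks required.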

n.b. a fairly simple prototype could be built by getting drones to sit on the handlebars and stabilize the bike, and steer along the road - the drone could be equipped with fairly simple grips to do this - i think this could be a nice undergrad group project...

Tuesday, March 27, 2018

blockchain - what really needs to be immutable?

you know, the only thing we really should record in the blockchain, is the sequence of replicated state machine messages - everything else should be off chain. that way we can have mutable content, change our minds, delete stuff etc - all these will just be new runs of the state machine and its recorded consensus...
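A toy sketch of the idea (illustrative only): the chain stores just the ordered digests of the state-machine messages, while the content lives off-chain, where it can be changed or deleted without ever rewriting the log:

```python
import hashlib
import json

chain = []       # immutable, append-only log of message digests
off_chain = {}   # mutable content store, keyed by digest

def record(op):
    """Append one state-machine message to the chain; keep its
    content off-chain where it remains mutable/deletable."""
    blob = json.dumps(op, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain[digest] = op     # content: can be deleted later
    chain.append(digest)       # consensus log: never rewritten
    return digest

d1 = record({'put': 'a holiday photo'})
d2 = record({'put': 'a regrettable post'})
del off_chain[d2]              # change our mind, delete the content
# the log still proves the sequence of operations happened
print(len(chain), d1 in off_chain, d2 in off_chain)  # 2 True False
```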

Saturday, March 17, 2018

How to review papers you haven't even read

with apologies to Pierre Bayard, I'd like to discuss this important topic. We all have far too little time, especially since we've been busy striking - and of course it is a well known fact that everything from extinct dinosaurs to internet lol-cats has a long tail, so most papers live down the end of that tail where they've only been read by two people, the author and the first reviewer.

I'd now like to propose two improvements

improvement 1. promote reviewer number two to reviewer number one, and dispense with the need for anyone reviewing the paper - why bother? no-one else will read it, so what's the purpose of quality control? if it is one of those incredibly rare papers (and you can turn the handle on Zipf as well as me) that gets a real reader, they can determine if the paper is any good for themselves. what good did the review do? we know this already with films and music - reviewers are a waste of time, and frequently completely misidentify what is good and bad (how many A&R guys didn't hire the Beatles? how many readers dismissed JK Rowling's books? boy, must they have low self esteem :)

improvement 2. why should the author read the paper? This has already been discussed in Bayard's excellent book on how to talk about books you haven't even read. I haven't read it, but I can say with authority that the idea of someone identified as the author talking, with great authority, about their book which they didn't even author is one of the latter's inspiring examples - if this can work for fiction, surely it should work even better for factual writing?

dear reader, thank you for getting this far