Friday, November 29, 2013

principles of communications, 2013/2014, end of week #7 to L22

This week, we covered shared media & ad hoc capacity, and started on traffic engineering.

A sharp question on proportional fairness in earlier material prompted me to notice that it isn't well contrasted with max-min fair sharing -- it turns out (as often with technical areas) Wikipedia has a nice explanation - see the article on
proportional fairness w.r.t. weighted (max-min) fair queueing
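To make the contrast concrete, here is a minimal sketch in Python (the two-link topology and capacities are my own toy example, not from the lectures): one long flow crosses both unit-capacity links, and each link also carries one short flow. Max-min fairness gives every flow 1/2, while proportional fairness (maximising the sum of log rates) gives the long flow 1/3 and each short flow 2/3, trading a little fairness to the long flow for higher total throughput.

from math import log

# Toy comparison of max-min vs proportional fairness.
# Hypothetical topology: two links, capacity 1 each.
# The "long" flow uses both links; flows "a" and "b" use one link each.
# If the long flow gets rate x, each short flow can get at most 1 - x.

def log_utility(x):
    # proportional fairness maximises the sum of log rates: log(x) + 2*log(1 - x)
    return log(x) + 2 * log(1 - x)

# Max-min fair allocation: no flow's rate can be raised without lowering
# a flow that already has a smaller or equal rate -> everyone gets 1/2.
max_min = {"long": 0.5, "a": 0.5, "b": 0.5}

# Proportional fair allocation: simple grid search (the analytic optimum is x = 1/3).
best_x = max((i / 1000 for i in range(1, 1000)), key=log_utility)
prop_fair = {"long": round(best_x, 3), "a": round(1 - best_x, 3), "b": round(1 - best_x, 3)}

print("max-min:          ", max_min, "total =", sum(max_min.values()))
print("proportional fair:", prop_fair, "total =", round(sum(prop_fair.values()), 3))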

Next week we will finish traffic engineering and wrap up with a summary of the course.

Friday, November 22, 2013

principles of communications, 2013/2014, end of week #6 to L19

This week we have done scheduling, queue management and switching,
and are just about to start on shared media.

One interesting point historically - the Colossus computer at Bletchley Park, built for code breaking, was not a classical von Neumann architecture computer but a "switched programme" machine - this made it incredibly fast (for a 1940s design) although incredibly inflexible -- and it took a very long time (about 50 years) for a standard desktop machine to catch up - amusingly, about as long as the Dr Who series has run on BBC TV:)

Friday, November 15, 2013

principles of communications, 2013/2014, end of week #5 to L16

This week, control theory and optimization...

Some minor inaccuracies in slides have been corrected in the online copies linked from the course materials page... [or will be as soon as I can get powerpoint with the right fonts:)] - the key error is in the calculation of the steady state error of the proportional controller - for some reason, there's a subtraction of the two terms in U(s) where it should be a +:
U(s) = (K*Us + Rc) / (s(s + K))
I think (need to check this:) it kind of makes sense - if the completion rate increases, the admission rate should increase....

then when we take the limit of s*U(s) as s -> 0, we get Us + Rc/K
so ess (the error in steady state) is Us - (Us + Rc/K), which gives us -Rc/K
(i.e. the answer is right, but the system response wasn't... will check and correct soon...)
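For anyone checking the algebra, the step above is just the final value theorem applied to the (corrected) U(s); written out in LaTeX:

U(s) = \frac{K U_s + R_c}{s(s+K)}
\quad\Longrightarrow\quad
\lim_{t\to\infty} u(t) = \lim_{s\to 0} s\,U(s) = \frac{K U_s + R_c}{K} = U_s + \frac{R_c}{K},
\qquad
e_{ss} = U_s - \left(U_s + \frac{R_c}{K}\right) = -\frac{R_c}{K}.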

again, to note: the chapter on control theory in Keshav's book is very clear if you want an alternative source, plus some nice example problems.

Friday, November 08, 2013

principles of communications, 2013/2014, end of week #4 to L13

Error, Flow and Congestion Control done (99.9%)

further reading - maybe - on Network Coding (see Digital Fountains),
on what's in Linux (CUBIC) and Windows (Compound TCP) for congestion control, and on what real traffic actually looks like - see CAIDA:
http://www.caida.org/home/

next week: control theory...and optimization:)

Friday, November 01, 2013

principles of communications, 2013/2014, end of week #3 to L10

This week we covered routing -

there's one egregious error on the slide explaining Dijkstra's algorithm in Link State where the sign on the comparison is the wrong way round (well spotted students!) - I leave it as an exercise for you to find, as it makes for careful reading:-)
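If you want to check your reading of that slide against working code, here is a minimal sketch of Dijkstra's algorithm in Python (my own illustration, not the slide's pseudocode) - the relaxation step is the line to look at: a neighbour's distance is only updated when the new distance is strictly less than the one recorded so far.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbour, link_cost) pairs
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left over from an earlier update
        for v, cost in graph.get(u, []):
            new_dist = d + cost
            # relaxation: the comparison must be "less than", or we never improve a route
            if new_dist < dist.get(v, float("inf")):
                dist[v] = new_dist
                heapq.heappush(heap, (new_dist, v))
    return dist

# tiny hypothetical topology
g = {"A": [("B", 1), ("C", 5)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 1, 'C': 3}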

In Sparse Mode, we use Rendezvous Points to coordinate a single RPF tree around a designated/configured router (maybe one for each different block or subset of multicast addresses). There's no guarantee the RP is in a sensible place, although the switch from the RP-centric tree to a source-based tree once traffic flows helps reduce latency. Automatic placement of an RP in the "centre" of the group would amount to solving the Steiner Tree problem (a generalisation of the minimum spanning tree problem), which is NP-hard, although there are polynomial time approximation algorithms for it (you probably wouldn't deploy them in routers, but in a network management system for e.g. a gamer or trader network this might be sensible - see the sketch below).
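As an illustration of what such an approximation looks like, here is a minimal sketch of the classic metric-closure heuristic (build a complete graph on the group members weighted by shortest-path cost, take its minimum spanning tree, then expand each tree edge back into a real path) - the topology and group are made up for the example, and a real management system would of course work on the measured network graph. It uses the networkx library:

import itertools
import networkx as nx

def approx_steiner_tree(g, terminals):
    # Metric-closure approximation for the Steiner tree problem.
    # 1. Complete graph on the terminals, weighted by shortest-path cost.
    closure = nx.Graph()
    for u, v in itertools.combinations(terminals, 2):
        closure.add_edge(u, v, weight=nx.shortest_path_length(g, u, v, weight="weight"))
    # 2. Minimum spanning tree of that complete graph.
    mst = nx.minimum_spanning_tree(closure, weight="weight")
    # 3. Expand each MST edge back into an actual shortest path in g.
    tree = nx.Graph()
    for u, v in mst.edges():
        path = nx.shortest_path(g, u, v, weight="weight")
        tree.add_edges_from(zip(path, path[1:]))
    return tree

# hypothetical topology and multicast group
g = nx.Graph()
g.add_weighted_edges_from([("r1", "r2", 1), ("r2", "r3", 1), ("r2", "r4", 2),
                           ("r3", "r5", 1), ("r4", "r5", 1), ("r1", "r4", 4)])
group = ["r1", "r3", "r5"]
print(sorted(approx_steiner_tree(g, group).edges()))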

One other note - consistency, symmetry of routes, and so on: IP and IP routing make no guarantees about this at all! BGP (inter-AS) routes are often asymmetric... recent computer science work on building new protocols that provide global consistency during route update and computation does exist, but it is still largely research... although the techniques are promising!

Next week, errors, then flow and congestion control.

Monday, October 21, 2013

principles of communications, 2013/2014, end of week #2, to L7

To note for today - the slide on graphs, with the Edge and Node lists, has a list of all edges, alongside, on the right, a list of nodes - the list of nodes isn't meant to line up with the list on the left - it's just a list showing, for each node i = 1..5, which other nodes in the directed graph are adjacent (look at the arrows on the edges - note that in 2 cases, 1<->2 and 5<->4, they are bi-directional).... A small sketch of the two representations follows below.
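For concreteness, here is a tiny sketch of the two representations (the edge set is a made-up 5-node example with the same two bi-directional pairs, not the exact graph on the slide):

# Hypothetical directed graph on nodes 1..5; 1<->2 and 5<->4 are bi-directional,
# so each of those pairs appears as two directed edges in the edge list.
edge_list = [(1, 2), (2, 1), (2, 3), (3, 4), (4, 5), (5, 4), (1, 5)]

# Adjacency (node) list: for each node, the nodes it has an edge *to*.
adjacency = {i: [] for i in range(1, 6)}
for u, v in edge_list:
    adjacency[u].append(v)

for i in range(1, 6):
    print(i, "->", adjacency[i])
# The two lists describe the same graph but have different lengths,
# so the rows were never meant to line up on the slide.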



fun references today:-
Ghost Maps
Collatz
Kirchhoff
Erdős
Small Worlds...
DDoS visualised

A couple more corrigenda/errata:
1. in the alpha/beta models of random graphs, k is used for the average degree of the net (e.g. pN in the alpha model), but also for the total number of possible edges (N*(N-1)/2) - so take care with k
2. there's an expression in the slides about max-flow in DAR (the "Sticky Random Routing" for the telephone net) which uses Erlang's call blocking probability for a given link, then works out what the total carried capacity will be for 1-hop and 2-hop/tandem routes - this has n, which is the number of calls you get through - and then mentions a technique called LP to solve the maximisation problem, given in terms of the sum of calls that get through (or are blocked) over all direct and tandem routes. We aren't covering that technique this year, but LP stands for Linear Programming, and is fairly straightforward if you want to look it up - it is commonly used in optimisation and shows up in Operations Research/Logistics (freight etc) and so on all the time. (There's a small sketch of the Erlang blocking calculation below.)
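For the blocking probability itself, here is a minimal sketch using the standard Erlang B recursion (the offered load and circuit counts are made-up numbers, just to show the shape of the calculation; the tandem figure ignores any correlation between the two links):

def erlang_b(load, circuits):
    # Erlang B blocking probability for `load` erlangs offered to `circuits` circuits,
    # via the usual numerically stable recursion: B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))
    b = 1.0
    for n in range(1, circuits + 1):
        b = load * b / (n + load * b)
    return b

# hypothetical link: 40 erlangs offered to 50 circuits
a, c = 40.0, 50
b_direct = erlang_b(a, c)
print("direct-route blocking:", round(b_direct, 4))

# a 2-hop tandem route only succeeds if *both* links accept the call, so
# (roughly, treating the links as independent) its blocking probability is:
b_tandem = 1 - (1 - b_direct) ** 2
print("tandem-route blocking:", round(b_tandem, 4))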

Friday, October 18, 2013

principles of communications, 2013/2014, end of week #1, to L4

We have now covered Systems, and Layers (lawyers)...

Next week, Graphs, and Routes!

Friday, October 11, 2013

principles of communications, 2013/2014, L1

Today this course starts: principles of communications

This blog will be where I put errata, answers to questions, and just generally track progress of where we've got to for students and supervisors....

Today [11.10.13] we got as far as 1/2 way through the Systems lecture - on Monday I
intend to finish that and get about 1/2 way through the Layers material:

slides

Friday, March 01, 2013

Publication Culture in Computing Research Design for impact: Rethinking academic institutions from the ground up


Why do we pretend that a publication is an event, rather than a part of an ongoing
process?
Computer Science is a Soft Subject. We create artificial systems/artefacts, and explore
their behaviours. We then report on this by talking about the behaviours at workshops
and conferences, and writing about the systems in papers for web pages, online
archives or even traditional print journals.
People assume that the artificial dichotomy between social events (workshops,
conferences) and archival repositories (journals and the like) is right. And some of the
debate about CS publication culture is oriented around trying to get people to use
these two modalities  more like other disciplines.
I think this is fundamentally wrong, and flies in the face of real scientific method.
Science does not deliver truth. It delivers things that work, and explanations that are
the best, current, simplest ones (c.f. Popper on Objective Knowledge, and of course
Occam’s Razor).
This means that a work is not the final word. It is just the current word. A goal of this
proposal is to reduce the “slice and dice” culture present today due to various perverse
incentives.
So the notion that an “archival paper” has been thoroughly checked and is infinitely
more “correct” than a “rapidly” reviewed conference submission is not tenable. There
is every chance that during the necessarily longer process to create an archival version
of a work, subsequent work has improved over the results. Hence much archived
material is actually less accurate because it is less timely.
The solution, for me, is to remove the notion of immutable publications, and admit
that we should update work continuously.
This can apply to the entire process of socialising our work, hence a dialogue (or
multilogue) between authors, reviewers and readers continually adds accuracy or
timeliness (or invalidates a work). The same can apply to citations (which should, by
the way, have a "sign bit" to indicate whether the citation is building on from a work,
or citing it as the thing the new work invalidates).
Recognising this mutable publication model would allow work to be presented at any
point along the “production line”, perhaps merely by “acclaim” - some work has
reached a point where it is mature enough and timely and interesting enough to merit
presentation at a social event (workshop or conference) - this could happen before or
after some notional point when it is recognized that an archival version is the current
best knowledge we have (a rare event).

Alongside this continual process, I think one would have to abandon ideas of
anonymity in both authorship of work and reviews/critiques (viz, the "dialogues"
mentioned above could only work in that open way). It goes without saying that code
and data associated with a system's behaviour should also be openly available as part
of this ongoing process (after all, since when did we correctly declare code "bug
free"? Why, therefore, do we declare journal papers "correct"?).
Finally, this isn't exclusive to Computer Science, but we built the tools that would
make the new approach viable, so we should use them first.
In fact we also have the next generation tools for this - we just need to combine Arxiv
with Github (versioning repositories) [1].
Causes of paper count inflation.
CS is notable (in most branches at least) for submitting to conferences more than
journals. There are two pressures to do this:
1. Urgency
2. Promotion
CS is a young discipline, and the young are noted for being impatient and impetuous -
our slogan might even be said to be "Publish Early and Publish Often" [2].
Urgency
We live in a nanosecond world. More than other disciplines, partly because we built
it.
We supplied the tools and tool chains (the net, e-mail, the web, PDF, bibtex/latex,
databases, HotCRP/EDAS, etc)  that let us cooperate to develop ideas, systems,
results, and write papers faster, and deliver them for review, editing, and presentation
more quickly than any previous generation. Surely, other disciplines use the tools, but
we live and breathe them.
As a result, there's a feedback loop: publication of hot new work brings instant
gratification, which leads to an increase in the rate of submission.
Our profession also has a tendency (at least anecdotally) to attract a share of people
with OCD/Attention Deficit problems, who maybe (amateur psychologist's hand
waving here) seek instant rather than deferred gratification.
                                             
[1] Github, because we want distributed repositories to avoid re-concentrating power in
one place all over again.
[2] I could speculate here about whether these factors also contribute to the gender
imbalance in Computer Science as a profession and academic career (whether
directly, or simply as proxies for a root cause).


Promotion
Our academic research culture is funded largely by taxpayers' money (NSF, DARPA,
EU), and the taxpayers seek metrics to see their money is well spent, and they seek
such feedback on an annual basis. Paper counts (and to a lesser extent, citation
counts) serve this. The same problem (inflation) has hit the industry research and
development world, where patents are a proxy for real work, and are rewarded.
The amount rather than the significance of work is measured - hence the aforesaid slice
and dice approach to work, producing minimal publishable units, and multiplying the
number of venues and publishable units year on year.
Because CS is young and vigorous, we have in the past been able to keep up with this
inflation. We are close to the limits though.
In the UK, we have a national Research Excellence Framework, for which researchers
in universities do not return all their work. Instead, every 5 years, up to 4 “outputs”
(e.g. papers) are returned. Secondly, and in addition, impact stories (pieces of work
10-20 years old, that have had a long term effect on the world, economically, socially,
or in terms of further developments in a discipline) are employed.
It will be interesting to see the outcome of this process, but for me, it is probably a
better basis for looking at a person's, or a group's, progress, so if we were to use
these sorts of indicators for tenure or similar, this would remove the aforesaid
perverse incentive to maximise the number of publications.
Acknowledgements
Thanks to Richard Clegg and Ioannis Avramopoulos for comments on this draft.