strongly recommend Max Planck CS@max - a well resourced, world-leading research institute in a pleasant setting.
Monday, December 20, 2021
Monday, November 29, 2021
Finally, some systems principles... put into practice.
and the Course Summary
In general, everything lectured is examinable. Any additional details (e.g. protocol specifics like packet headers/fields, etc.) or equations should be available as part of any exam questions.
Thursday, November 25, 2021
One other dimension of Traffic Management to think about - Signaling
congestion is a signal, whether packet loss or ECN
price is a signal, whether per session/flow, or per day/month/year
user demand is a signal - whether sending VBR video or rate-adaptive TCP or QUIC traffic, the rates and durations (or file sizes) tell the network provider what to provision for...
aggregate demand (e.g. peering traffic or customer-provider traffic) is a signal
so even without deploying an explicit signaling protocol (e.g. RSVP), there is a lot of meta-data telling stakeholders what is affordable and what should be afforded.
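To make the first of these concrete, here is a minimal sketch of how ECN marks can be turned into a quantitative congestion signal, in the style of DCTCP's moving-average estimate. The function names and the smoothing gain g are assumptions for the example, not any particular implementation.

```python
# Illustrative sketch (not a real protocol implementation): estimating a
# congestion signal from the fraction of ECN-marked packets, DCTCP-style.

def update_congestion_estimate(alpha, marked, total, g=1.0 / 16):
    """EWMA of the fraction of ECN-marked packets seen in the last window."""
    if total == 0:
        return alpha
    frac = marked / total
    return (1 - g) * alpha + g * frac

def next_rate(rate, alpha):
    """A sender might back off in proportion to the estimate (DCTCP-style)."""
    return rate * (1 - alpha / 2)

alpha = update_congestion_estimate(0.0, marked=5, total=100)
```

The point is that the mark rate, not just the presence of loss, carries information the sender (and provider) can act on.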
Tuesday, November 23, 2021
Principles of Communications Week 8 L14@LT2 11am, 23rd Nov 2021 - Optimisation - transport/end-to-end
So we looked at optimisation for routes, including multipath, and optimisation for flow rates/congestion pricing. What scope is there for improving the end-to-end protocols themselves in detail? For the first, we have MPTCP, which enables a single source-destination pair to take advantage of multipath routes; for the second, QUIC, which reduces the latency in the application/transport/network pipeline through several mechanism improvements. Of course, there are also now people working on multipath QUIC, since we seem to be about to deprecate TCP in various forms (at least for web apps).
One question in class today was about QUIC migration (how does a session survive a change of IP address?) - the details are discussed in this draft
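The core idea behind migration can be sketched very simply: QUIC demultiplexes packets on a connection ID carried in each packet, not on the IP/port 4-tuple as TCP does, so the session state is untouched when the peer's address changes. The code below is an invented toy, not the actual protocol machinery (path validation etc. is omitted).

```python
# Toy sketch of connection-ID demultiplexing: the session is keyed by the
# connection ID, so a new source address maps to the same session state.

sessions = {}  # connection_id -> session state

def deliver(packet):
    cid = packet["connection_id"]
    session = sessions.setdefault(cid, {"peer": None, "received": 0})
    # The peer address is updated opportunistically; the session itself
    # survives the change (real QUIC also validates the new path).
    session["peer"] = packet["source_addr"]
    session["received"] += 1
    return session

s1 = deliver({"connection_id": "abc", "source_addr": ("10.0.0.1", 54321)})
s2 = deliver({"connection_id": "abc", "source_addr": ("192.168.1.5", 40000)})
assert s1 is s2  # same session despite the address change
```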
Thursday, November 18, 2021
Why is utility cast in terms of the log of rate? Risk aversion - see Bernoulli's argument for this from a long, long time ago!
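A quick numerical sketch of why log utility matters: two flows sharing a link of capacity C, where we maximise log(x) + log(C - x) by brute force. The optimum is the equal split, because a small percentage gain to one flow costs the other a larger percentage loss - exactly the risk-aversion point, and the basis of proportional fairness.

```python
# Sketch: log utility on a shared link of capacity C yields the equal
# split. Brute-force search over allocations x to flow 1 (flow 2 gets C-x).
import math

C = 10.0
best_x, best_u = None, -math.inf
for i in range(1, 1000):
    x = C * i / 1000
    u = math.log(x) + math.log(C - x)
    if u > best_u:
        best_x, best_u = x, u

assert abs(best_x - C / 2) < C / 1000  # optimum is the equal split
```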
In working out the function that gives us delay cost in terms of rate allocated to a link of a given capacity, the headroom (load / (capacity - load)) is the simplest function, but queuing theory may give more complex results, and multipath assignments of traffic may multiply the impact...
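As a sketch of the comparison, here is the headroom cost alongside the classic M/M/1 mean-delay result 1/(capacity - load) (a standard queuing-theory formula; the function names are invented for the example). Both blow up as load approaches capacity, which is the behaviour the cost function is meant to capture.

```python
# Sketch: simple "headroom" delay cost vs. the M/M/1 mean delay.
# Both diverge as load -> capacity; only the low-load shapes differ.

def headroom_cost(load, capacity):
    return load / (capacity - load)

def mm1_delay(load, capacity):
    return 1.0 / (capacity - load)

C = 10.0
for rho in (1.0, 5.0, 9.0, 9.9):
    print(rho, headroom_cost(rho, C), mm1_delay(rho, C))
```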
a useful explanation of proportional fairness
Tuesday, November 16, 2021
Principles of Communications Week 7 L12@LT2 11am, 16th Nov 2021 - Data Center networks & Application Needs & Solutions
basically, this paper covers the topic of QJump!
Next, we'll go back to the wide area and look at joint optimisation (of the network and the end system protocols)
Thursday, November 11, 2021
Principles of Communications Week 6 L11@LT2 11am, 11th Nov 2021 - From Schedules to Data Center networks
The main reason Generalised Processor Sharing is useful as a model is that it makes clear the idea of a round. In work-conserving schedules, the time spent by a given packet in a given flow in the router depends on the number of packets from other flows ahead of it, waiting to be transmitted to the next hop. The round advances, but it isn't a measure of elapsed real time: it depends on the number of flows currently in the round, which varies as sources start (and stop) sending, so flows join (or leave) the system.
Of course, if all flows were of constant packet size and constant packet rate, and we only admitted exactly as many flows as would fit in the capacity along the path, then the round would map to a wall clock (real time) - but flows are often variable in number, in packet rate and in packet size.
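The round can be sketched as a "virtual time" that advances at a rate inversely proportional to the number of backlogged flows - which is why it is not wall-clock time. This is an illustrative toy (one common convention; real schedulers also weight flows), not any particular implementation.

```python
# Sketch of the GPS "round": virtual time advances at rate 1/N(t), where
# N(t) is the number of currently backlogged flows, not at wall-clock rate.

def advance_round(virtual_time, real_dt, backlogged_flows):
    if backlogged_flows == 0:
        return virtual_time  # idle server: round does not advance
    return virtual_time + real_dt / backlogged_flows

v = 0.0
v = advance_round(v, real_dt=1.0, backlogged_flows=2)  # two flows active
v = advance_round(v, real_dt=1.0, backlogged_flows=4)  # two more join
# After 2 units of real time, the round has advanced only 0.5 + 0.25 = 0.75
```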
What the model lets us do is
a) reason about how fair and accurate a real scheduler is
b) work out admission control (can we accept a flow, in the call-setup/open-loop flow control paradigm, or not) - i.e. for variable flows as above, described using some simple parameters like a linear bounded arrival process with a peak rate and burst size, that want some sort of minimum capacity guarantee (even if just roughly): will they fit (without excessively disrupting the existing/established set of flows of packets)?
c) what latency we might get for that flow, due to queuing and scheduling inaccuracies (on top of the basic latency due to simple propagation delay over the whole end-to-end path).
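Point b) can be sketched as a simple admission test for flows described by a linear bounded arrival process (each flow offers at most burst + rate * t bytes in any interval t): admit a new flow only if the sustained rates still fit the link and the bursts still fit the buffer. All names and the specific test are assumptions for illustration - real admission control would also account for the delay bound in c).

```python
# Toy admission-control check for LBAP-described flows: sum of sustained
# rates must fit the link, sum of bursts must fit the buffer.

def admit(flows, new_flow, link_capacity, buffer_size):
    rates = sum(f["rate"] for f in flows) + new_flow["rate"]
    bursts = sum(f["burst"] for f in flows) + new_flow["burst"]
    return rates <= link_capacity and bursts <= buffer_size

established = [{"rate": 3.0, "burst": 10.0}, {"rate": 4.0, "burst": 20.0}]
ok = admit(established, {"rate": 2.0, "burst": 5.0},
           link_capacity=10.0, buffer_size=64.0)   # 9 <= 10, 35 <= 64
bad = admit(established, {"rate": 5.0, "burst": 5.0},
            link_capacity=10.0, buffer_size=64.0)  # 12 > 10: refuse
```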
In the next section, we'll look at the exceptional setup & requirements in data center networks, where we may be able to make simplifying assumptions about the traffic and topology, and hence employ a simpler approach to schedulers.