Tuesday, November 28, 2023

Principles of Communications Final Week to Nov 28th/L16

 Wrap up - you should now be able to see the supervisor pages, which summarise the course and have links to questions and relevant/in-scope past exam questions.

Tuesday, November 21, 2023

Principles of Communications Week 8 to Nov 23rd/L15

 This week covers two Big Pictures.


On Nov 21, we do:

1/ Traffic Management over multiple time scales - linking together several ideas across routing, congestion control, open- and closed-loop flow control, admission, pricing, and, finally, a bit about signaling.

Feedback/admission can work at any time scale:

short > reduce rate / lose packets

medium > try again later

long > change to a better provider

So note that peak-rate pricing is just a slow version of congestion control plus shadow prices.
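
To make the shadow-price idea concrete, here is a minimal sketch in the spirit of Kelly-style rate control (not from the lecture; the link capacity, price function and willingness-to-pay numbers are made up): each flow adjusts its rate until what it is willing to pay matches what the congested link charges it.

# Minimal sketch of Kelly-style rate control with a shadow price.
# Two flows share one hypothetical 10 Mb/s link; each flow r nudges its
# rate x_r towards the point where its willingness to pay w_r equals
# x_r times the congestion price charged by the link.

def link_price(load, capacity):
    # hypothetical shadow price: zero when lightly loaded,
    # growing linearly once the link passes 90% utilisation
    return max(0.0, load - 0.9 * capacity)

def simulate(w=(1.0, 2.0), capacity=10.0, k=0.1, steps=2000):
    x = [1.0, 1.0]                      # current rates (Mb/s)
    price = 0.0
    for _ in range(steps):
        price = link_price(sum(x), capacity)
        for r in range(len(x)):
            # primal update: speed up while w_r exceeds what we pay, else back off
            x[r] = max(x[r] + k * (w[r] - x[r] * price), 0.01)
    return x, price

rates, price = simulate()
print("rates:", rates, "price:", price)
# The flow willing to pay twice as much ends up with roughly twice the
# rate; change the price function's time scale and the same machinery
# describes peak-rate pricing rather than per-RTT congestion control.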

Note that using a single network may amortize costs well, but a single technology has a single failure mode - the mix of copper/fiber/radio has a lot of fault tolerance, which is of high value for critical infrastructure.

Note that the problem with capacity planning and traffic matrix/source behaviour in the general internet is that a) it isn't a regular topology, and b) each ISP is doing it in competition but also in cooperation with other ISPs! However, this paper showed how well the ISPs coped when everyone switched to working/learning from home over the internet in the age of covid.

On Nov 23rd we'll cover:

2/ System Design Rules - these are rules of thumb that apply in many systems: processor design, operating systems, networks, etc.

More can and will be said...

Tuesday, November 14, 2023

Principles of Communications Week 7 to Nov 17th/L13

 This week we're covering firstly data centers (through the lens of q-jump and its clever "fan-in" trick), and secondly, optimisation of routes and rates.

The mix of application platforms we see in the data center lecture is classic mapreduce (as represented by Apache Hadoop) and stream processing (as represented by Microsoft's Naiad)[2], as well as memcached (an in-memory file store cache, widely used to reduce storage access latency and increase file throughput - e.g. for the aforesaid platforms), and PTP (a more precise clock synch system than the widely used Network Time Protocol[1]). For an example of a use of map reduce, consider Google's original pagerank in hadoop. For the interested reader, this paper on timely dataflow is perhaps useful background.
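
As a toy illustration of the map/reduce pattern (not the actual Hadoop job - the graph and damping factor here are made up for the example), PageRank can be written as repeated map and reduce rounds:

# Toy illustration of PageRank expressed as repeated map/reduce rounds
# (a much-simplified version of the sort of job Hadoop runs; no dangling
# nodes or distribution here, just the shape of the computation).
from collections import defaultdict

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}   # tiny example graph
rank = {page: 1.0 / len(links) for page in links}
d = 0.85                                            # damping factor

for _ in range(30):
    # map: each page emits a share of its rank to every out-link
    contributions = defaultdict(float)
    for page, outs in links.items():
        for dest in outs:
            contributions[dest] += rank[page] / len(outs)
    # reduce: each page sums the contributions addressed to it
    rank = {page: (1 - d) / len(links) + d * contributions[page]
            for page in links}

print(rank)   # converges to the stationary rank vector of the graph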


Optimisation is a very large topic in itself, and underpins many of the ideas in machine learning when it comes to training - ideas like stochastic gradient descent (SGD) are seen in how we assign traffic flows to routes, here. In contrast, the decentralised, implicit optimisation that a collection of TCP or "TCP friendly" flows carries out is more akin to federated learning, which is another whole topic in itself.
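
A minimal sketch of that flavour of optimisation (illustrative only: two parallel paths, a made-up M/M/1-style cost, plain gradient descent rather than full SGD): split a fixed demand across routes by repeatedly shifting traffic towards the path with the lower marginal cost.

# Minimal sketch of gradient descent on a route-assignment problem:
# split 10 units of traffic between two parallel paths so as to
# minimise a convex "delay" cost sum_i x_i / (c_i - x_i)
# (the capacities and demand are made up for illustration).

demand, c1, c2 = 10.0, 8.0, 15.0
x1 = 5.0                                 # traffic on path 1; path 2 gets the rest
step = 0.01

def marginal_cost(x, c):
    # derivative of x / (c - x) with respect to x
    return c / (c - x) ** 2

for _ in range(5000):
    x2 = demand - x1
    # move traffic towards the path with the lower marginal cost
    grad = marginal_cost(x1, c1) - marginal_cost(x2, c2)
    x1 -= step * grad
    x1 = min(max(x1, 0.01), demand - 0.01)

print("path 1:", x1, "path 2:", demand - x1)
# At the optimum the marginal costs on the two paths are equal,
# which is the classic condition for an optimal traffic split.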

Why a log function? Maybe see Bernoulli on risk.

Why proportional fairness? From social choice theory!

Are people prepared to pay more for more bandwidth? One famous Index Experiment says yes.
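
Putting the log-utility and proportional-fairness questions together, here is a tiny numerical check (a toy example, not from the notes) on the classic linear network: one long flow crosses two unit-capacity links, each of which also carries one short flow.

# Proportional fairness maximises the sum of log rates. With the long
# flow's rate fixed, each short flow should take the leftover capacity,
# so a one-dimensional search is enough for this toy topology.
import math

best, best_alloc = -1e9, None
grid = [i / 200 for i in range(1, 200)]
for x_long in grid:
    x_short = 1.0 - x_long
    utility = math.log(x_long) + 2 * math.log(x_short)
    if utility > best:
        best, best_alloc = utility, (x_long, x_short, x_short)

print(best_alloc)
# Roughly (1/3, 2/3, 2/3): the long flow, which consumes two links' worth
# of resource, gets less than an equal share -- unlike max-min fairness,
# which would give every flow 1/2.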


1. see IB Distributed Systems for clock synch

2. see last year's Cloud Computing (II) module for a bit more about data centers & platforms.

Monday, November 06, 2023

Principles of Communications Week 6 to Nov 10th/L11

 This week, we cover two closely related topics:-

1. Flow&Congestion Control

There are three regimes where there may need to be a matching process between sender and receiver rates (a minimal sketch combining all three appears after the list):

a) sender - the sender may not always have data to send - think about sourcing data from a camera or microphone with a given coding: you get some peak rate anyhow. Or you might just have a short data transfer that never "picks up speed".

b) network - here some type of feedback is needed to tell senders about network conditions - this could be a call setup (open loop control), not something we do in today's internet, or it could be closed loop feedback via dupacks, packet loss or ECN, and a rate adjustment in the sender (congestion window or explicit rate control).

c) receiver - the receiver can slow down the sender by advertising a small receive window, or just delaying acks, or both.
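
A toy sketch of how the three limits combine in a TCP-like sender (not real TCP: the loss pattern, window sizes and backlog here are invented purely for illustration):

# The sender is limited by its own backlog (a), by congestion feedback (b),
# and by the receiver's advertised window (c); the effective window each
# round trip is the minimum of the three.

app_backlog = 100        # segments the application actually has to send (a)
cwnd = 1.0               # congestion window, adapted by AIMD (b)
rwnd = 20                # receiver's advertised window (c)

for rtt in range(40):
    window = min(cwnd, rwnd, app_backlog)      # effective sending window
    lost = (rtt % 10 == 9)                     # pretend every 10th RTT sees a loss
    if lost:
        cwnd = max(cwnd / 2, 1.0)              # multiplicative decrease
    else:
        cwnd += 1.0                            # additive increase (per RTT)
    print(rtt, round(window, 1), round(cwnd, 1))
# Whichever of the three terms is smallest decides which regime
# (a, b or c) is in charge at that moment.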


Much of today's network uses these three approaches via TCP or QUIC. There's a nice study based on wide area measurements that shows how congestion control (case b above) is quite rare and more often we have a or c... see this paper:-

TCP equation & reality
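
For reference, the simplest steady-state form of that equation (the "Mathis" approximation, for a long-lived AIMD flow seeing packet loss rate p) is roughly: throughput ≈ (MSS / RTT) * sqrt(3 / (2p)) - so halving the loss rate only raises the achievable rate by a factor of sqrt(2).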


2. Scheduling&Queue Management

Schedulers in switches and routers try to minimise the "damage" caused by one flow to other packet flows through a network device that has more packets arriving than it can send just now. Buffering packets turns into jitter and delay, which for audio or video turn into playout buffer delay, or into packet loss, which for TCP data traffic is a key congestion signal and for audio/video is quality degradation.

Schedulers try to allocate bottleneck capacity fairly, e.g. via round robin, but how long do queues really need to be? As the network scales up, perhaps the impact of queueing decreases and fairly simple schedulers may suffice - see this paper/talk for more ideas in this space. Of course, actual overload leading to packet loss is still bad, but the actual delay and jitter seen by a large aggregation of flows through a switch/router may not be the significant worry...
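
For a concrete example of a fair(ish) scheduler, here is a minimal sketch of deficit round robin (packet sizes and quantum are made up): each non-empty queue earns a quantum of byte credit per round and sends packets while it has enough credit.

# Minimal sketch of deficit round robin with two made-up flows.
from collections import deque

queues = {                          # flow -> queued packet sizes (bytes)
    "voip":  deque([200, 200, 200, 200]),
    "bulk":  deque([1500, 1500, 1500, 1500]),
}
deficit = {flow: 0 for flow in queues}
QUANTUM = 500

sent = []
while any(queues.values()):
    for flow, q in queues.items():
        if not q:
            continue
        deficit[flow] += QUANTUM
        while q and q[0] <= deficit[flow]:
            pkt = q.popleft()
            deficit[flow] -= pkt
            sent.append((flow, pkt))
        if not q:
            deficit[flow] = 0       # an empty queue carries no credit forward

print(sent)
# Each flow drains at roughly the same byte rate per round, so the small
# voip packets are not starved by the large bulk packets.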

The law of large numbers, and what then really happens in queues.
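
A back-of-envelope illustration of that point (made-up on/off sources, nothing to do with real traffic traces): aggregate N independent bursty sources and the fluctuation of the total around its mean shrinks like 1/sqrt(N), which is why a link carrying a big aggregate can run close to its mean load with only a short queue.

# Sum N independent on/off sources and look at how "peaky" the total is
# relative to its mean.
import random, statistics

random.seed(1)

def aggregate_trace(n_sources, slots=2000, p_on=0.1):
    # each source sends 1 unit in a slot with probability p_on
    return [sum(1 for _ in range(n_sources) if random.random() < p_on)
            for _ in range(slots)]

for n in (1, 10, 100, 1000):
    trace = aggregate_trace(n)
    mean = statistics.mean(trace)
    sd = statistics.pstdev(trace)
    print(f"{n:5d} sources: mean={mean:7.1f}  stddev/mean={sd/mean:.2f}")
# As N grows the aggregate stays much closer to its mean, so a link sized
# a little above the mean rate rarely builds a long queue -- the statistical
# multiplexing argument behind "maybe queues don't need to be that long".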