today I am in Bertinoro at FuDiCo III, a
workshop on building MAD systems - FuDiCo is Future Directions in Distributed Computing,
and MAD is not totally crazy, but stands for "multiple administrative domains".
I took some happy snaps with my trusty camera fone:-)
So far, the place is totally cool, and the talks are v. interesting!
So from the network perspective, I offered these notes:
1. End users adapt their rate to maximise utility & network providers provision
routes to maximise profit and minimise costs (through getting high utilisation)
Separately, ISPs compete by offering transit to end users, and to other ISPs
(either in customer / provider relationship, or peering)
However, end users cannot choose which ISPs carry their traffic - the mechanism
that might work is loose source routing - it could be policed at coarse or fine
grain, but is in fact blocked. If a user subscribed to an ISP that advertised
"long haul provider selection", it could be used to differentiate traffic (e.g.
by delay, reliability or whatever), but ISPs argue against this option
i) incorrectly, that it would be a security flaw (claiming it would allow
triangular/indirection attacks on the Internet - in fact it still reveals the
source, and most indirection attacks on the net rely on higher-level relays to
obfuscate the source)
ii) more crucially, that it would "level the playing field" between ISPs - but
that is the point - in an ideal sense, by allowing users to game the system,
just as ISPs can do to each other, it would drive them toward marginal profit.
The question is whether in practice it might have other problems if there are
malicious or accidentally bad players out there, as well as rational and
altruistic users and ISPs.
How would we show that offering Internet "AS long haul provider selection" is,
in general, not the end of the ISP business (and, further, is stable or has
other nice properties in terms of deployability)?
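The "marginal profit" claim in (ii) is essentially Bertrand-style price competition: if users can freely switch long-haul providers, each provider's best response is to undercut the current cheapest, until prices hit marginal cost. A toy sketch of that dynamic - the provider costs, starting prices and undercutting rule are all my own illustration, not a model from the talk:

```python
# Toy Bertrand-style competition: at each round, every provider that is not
# currently cheapest undercuts the cheapest provider's price by a small
# amount, but never prices below its own marginal cost.

def simulate(prices, costs, rounds=200, undercut=0.05):
    prices = list(prices)
    for _ in range(rounds):
        cheapest = prices.index(min(prices))
        for i in range(len(prices)):
            if i != cheapest:
                # undercut the winner, but never go below own cost
                prices[i] = max(costs[i], prices[cheapest] - undercut)
    return prices

# Two providers with identical marginal cost: prices are driven down to cost,
# i.e. marginal profit for both.
final = simulate(prices=[5.0, 9.0], costs=[2.0, 2.0])
```

With asymmetric costs the same loop instead leaves the cheaper provider pricing just under the dearer one's cost - which is exactly the "gaming" users would get to exploit.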
2. There are perverse incentives for ISPs not to prevent Distributed Denial of
Service attacks - an ISP in the UK that hosts no servers quite frequently
measures traffic flows traversing its network to other ISPs at aggregate data
rates of up to 7Gbps! Since its own customers are not impacted by this, AND it
gets to claim it is forwarding lots of transit traffic to other ISPs, it gets to
peer with those other ISPs instead of just being a customer - so it has no
reason to block, rate-limit or blackhole these DDoS attacks!
Other examples of perverse incentives:
Microsoft shipped an anti-virus advisory system in Windows XP Service Pack 2 but
did not include their own anti-virus software because they were worried about
litigation (e.g. under anti-trust law) from the Windows virus-protection
software vendors - default-on anti-virus software would mean that fewer systems
would be susceptible to exploitation as (say) botnet bot/farm/DDoS sources, but
because Microsoft didn't ship a product, many people installing XP chose not to
spend extra to buy anti-virus software, and so are vulnerable to being used as
attack sources!
How would one mitigate this?
background - distributed systems versus networks - areas of interest, community focus, venues and research approaches:
non-gameable congestion control/rate
e.g. dual opt formulation
U: maximise each users utility
N: maximise revenue for min cost
properties we'd like to derive
deviation from fairness?
range and scope of efficiency?
[i.e. quantitative] <- sigcomm
converge (bad gadget)
negotiation & revelation
properties we'd like results for
range of business relationships
[i.e. qualitative] <- podc
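The "dual opt formulation" above (users maximise utility, the network prices congestion) can be sketched concretely. A minimal primal-dual iteration for one shared link and log-utility users - capacity, step size and user count are illustrative assumptions of mine, not values from the workshop:

```python
# Kelly-style network utility maximisation, single shared link.
# Each user solves max_x log(x) - price*x, giving the rate x* = 1/price;
# the link runs a (sub)gradient step on its congestion price until
# demand matches capacity.

def num_single_link(n_users=2, capacity=1.0, gamma=0.01, iters=20000):
    price = 1.0
    rates = [1.0] * n_users
    for _ in range(iters):
        rates = [1.0 / price] * n_users   # U(x) = log x  =>  x* = 1/price
        total = sum(rates)
        # dual gradient step: raise price when overloaded, lower when idle
        price = max(1e-6, price + gamma * (total - capacity))
    return rates, price

rates, price = num_single_link()
# at the fixed point each user gets capacity/n_users, and the shadow
# price settles at n_users/capacity
```

The fairness and efficiency properties listed above are exactly questions about this fixed point: with log utilities it is proportionally fair, and gaming the scheme amounts to misreporting the utility function.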
DoS-Proof (Next Generation) Internet & MANET
start with strategy proof
properties we'd like to have
risk/impact of attack
percentage free riding
[i.e. quantitative] <- sigcomm
incentive align (upload v. download)
sybil-proof (non-fakeable ID)
non-forgeable token
attack on content (dos on content)
cooperation (whether by altruism or enforcement)
performance of p2p v. server v. multicast
[i.e. quantitative & qualitative] <- sigcomm and podc!
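One standard way to get the "non-forgeable token" property in the list above is a MAC over the token's contents under a secret only the issuer holds. A minimal sketch - the field layout, names and key handling are invented for illustration, and a real system would add expiry, replay protection and key rotation:

```python
# Tokens of the form "peer_id:tag", where tag = HMAC-SHA256(secret, peer_id).
# Without the issuer's secret, a peer cannot forge a valid tag for a new id.
import hmac
import hashlib

SECRET = b"issuer-only-secret"   # assumption: known only to the issuer

def issue(peer_id: str) -> str:
    tag = hmac.new(SECRET, peer_id.encode(), hashlib.sha256).hexdigest()
    return f"{peer_id}:{tag}"

def verify(token: str) -> bool:
    peer_id, _, tag = token.partition(":")
    expected = hmac.new(SECRET, peer_id.encode(), hashlib.sha256).hexdigest()
    # constant-time comparison, so verification doesn't leak the tag
    return hmac.compare_digest(tag, expected)

tok = issue("peer-42")   # verifies; a token with a tampered tag does not
```

Note this only makes tokens non-forgeable, not IDs non-fakeable - the sybil-proofing line above still needs a separate mechanism for binding identities to something scarce.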