1. a wiki for discussing papers:
e.g.
http://picky.wikispaces.com/OSDI2008
2. tag cloud computing
and
3. internet boy scouts
tag cloud computing is basically tag clouds (which must be Turing complete) plus cloud computing (a la Amazon/Xen)
internet boy scouts are chaps and chapesses who go round to old people's houses and help them with the internet...
Monday, December 01, 2008
DCII 1.12.08
there are moves afoot to replace the internet with something more friendly to data distribution (content distribution networks/rendezvous/publish-subscribe - it has lots of names) - a lot of it is based on the observation that we have lots of consumers but also many, many producers of (the same as well as different) content
see
Data oriented network architecture...
DONA
Translating Relaying Internet Architecture integrating Active Directories
Triad
Routing on flat labels
ROFL
note - all are based on the view that some sort of directory is needed (think databases)
In dealing with network layer identifiers and the real world (as in ROFL above) we can take a more incremental approach, for which,
see
IPNL
Putting all those ideas together with HIP (see earlier post), some folks have come up with PSIRP, the
Publish subscribe internet routing paradigm
You might ask, why not just implement multicast - this would need IPv6 at least, due to the lack of sufficient identifier space in IPv4 - however, multicast in the network layer has its own deployment problems:
Deployment issues for the IP multicast service and architecture
Diot et al
but also see:
A reliable multicast framework for light-weight sessions and
application level framing, for how it does make a good match for content distribution!
in all the above, we've not mentioned performance guarantees, notice:-)
Friday, November 28, 2008
DCII 28.11.08
Timescale Decomposition of Traffic Engineering - a good source of info on this is Richard Mortier's thesis
Wednesday, November 26, 2008
DCII 26.11.08
Receiver driven protocol design is an interesting area - one protocol that brings together a number of ideas I mentioned today is RLM - see
Steve McCanne's receiver driven layered multicast which was designed for multicast video - very nice!
meanwhile, why are there grooves on Hill's Road that are getting deeper year by year?
Monday, November 24, 2008
DCII 24.11.2008
preventing unwanted traffic, two lesser known attempts:
If you can't beat 'em, join 'em
and confuse 'em
Meanwhile, if you are interested in QoS, Multimedia and Multicast,
I did once write a book about internet multimedia with some colleagues...
Friday, November 21, 2008
DCII 21.11.08
today we looked at intserv and diffserv - an interesting resource to control in the context of 21st century concerns is energy - an excellent paper introducing a well evaluated set of techniques for doing this is by folks from Intel and Berkeley on Reducing network energy consumption via sleeping and rate-adaptation - the same techniques are also being applied in the context of servers (cpu and storage) in data centers - integrating all of the above would be a good thing - the gains (cost savings) can be large (estimates are that we waste more than 50% of power in servers and networks, and that their total contribution to world energy consumption is between 2 and 4% - saving half of that with simple techniques is obviously worth doing:)
Wednesday, November 19, 2008
DCII 19.11.08
paying to use the car pool lane is an interesting mix of priority and price based admission control!
On the other hand, aircraft landing slots tend to be more max-min fair share, to guarantee everyone lands within their max journey time, subject to maximising runway capacity, and thus involve explicit admission control at takeoff time (a.k.a. filing a flight plan!)
Monday, November 17, 2008
DCII 17.11.08
- since the notes are a bit sketchy (although Keshav's book is good here), I'd recommend a couple of good wikipedia articles on
Cross Bar switches, the
Clos Network, and if you want further reading, check out
spanning switches.
In general a good source of ideas on switches and switch router design is Nick McKeown's publications web page at Stanford:
tiny tera was one of his creations; more recently he's worked on open programmable (teaching/learning) router platforms.
lecture too slow?
Monday, November 10, 2008
DCII 7-10.11.08
Flow Control and Shared Media Access are related topics - goals are similar, but solutions differ, since flow and congestion control operate across multiple hops (as well as possibly hop by hop) and Media Access control schemes typically operate on a single hop and on a single technology, to which they are heavily optimised.
One simple difference then is that on a single technology, we can control the schedule and the MAC at the same time, whereas in the Internet in general, we have to put up with heterogeneity in the schedule in forwarding devices.
Fairness of TCP congestion control is usually measured by Raj Jain's fairness index
[(sum of rate_i)^2] / [n * (sum of rate_i^2)]
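The index is easy to compute directly from a set of measured rates - a sketch (the function name is mine):

```c
/* Jain's fairness index: (sum of rate_i)^2 / (n * (sum of rate_i^2)).
   Equals 1.0 when all n rates are equal, and 1/n when one flow gets
   everything. */
double jain_index(const double *rate, int n)
{
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < n; i++) {
        sum   += rate[i];
        sumsq += rate[i] * rate[i];
    }
    return (sum * sum) / (n * sumsq);
}
```

so four flows at equal rates score 1.0, while one flow hogging all the capacity scores 0.25.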
Efficiency (max goodput) - pure Aloha gives its best performance under load where Poisson arrivals (i.e. independent random arrivals) are such that throughput is 1/(2e), about 18.4% - you can work this out: if there are lots of sources sending, then if they send at a lower rate than this you get less throughput, and if they send higher you get more collisions, so you can figure out the mean rate for this best case (exercise for the reader:) for a given mean Poisson arrival rate and the chance that no other node sends at the same time
slotting things makes it better (slotted Aloha doubles the maximum throughput)
if the propagation delay is low, listen-before-send is better
random backoff after collision is better
random backoff after listen-before-send gives fewer collisions, but a longer mean access time
RTS/CTS before send eliminates the hidden terminal problem (a terminal hidden from the sender is hopefully not hidden from the receiver's CTS message)...
distributed schemes are fun...and popular, even though a central scheme might be good in some cases (constant rate traffic...)
The latest thing (since 2001) is to consider network coding, where packets are combined (through XOR or linear coding) and recombined and then split out when they arrive over multiple paths - this is a very promising theoretical and practical trick, which can increase diversity and therefore robustness.
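The XOR form fits in a few lines - a sketch (the function name is mine):

```c
#include <stddef.h>

/* The simplest network coding trick: a relay XORs two packets together and
   broadcasts the combination; a receiver already holding either original
   recovers the other by XORing again, since (a ^ b) ^ b == a. */
void xor_code(const unsigned char *a, const unsigned char *b,
              unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = a[i] ^ b[i];
}
```

One coded transmission thus stands in for two plain ones, which is where the diversity and efficiency gains come from.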
Wednesday, November 05, 2008
DCII 5.11.08
Errors show up even after CRC and TCP checksums - see this paper from Stanford on
When the CRC and TCP checksum disagree
In round trip estimation, I commented on the obfuscation in the code - see for example this fragment:
/*
* The smoothed round-trip time and estimated variance
* are stored as fixed point numbers scaled by the values below.
* For convenience, these scales are also used in smoothing the average
* (smoothed = (1/scale)sample + ((scale-1)/scale)smoothed).
* With these scales, srtt has 3 bits to the right of the binary point,
* and thus an "ALPHA" of 0.875. rttvar has 2 bits to the right of the
* binary point, and is smoothed with an ALPHA of 0.75.
*/
#define TCP_RTT_SCALE 32 /* multiplier for srtt; 3 bits frac. */
#define TCP_RTT_SHIFT 5 /* shift for srtt; 3 bits frac. */
#define TCP_RTTVAR_SCALE 16 /* multiplier for rttvar; 2 bits */
#define TCP_RTTVAR_SHIFT 4 /* shift for rttvar; 2 bits */
#define TCP_DELTA_SHIFT 2 /* see tcp_input.c */
/*
* The initial retransmission should happen at rtt + 4 * rttvar.
* Because of the way we do the smoothing, srtt and rttvar
* will each average +1/2 tick of bias. When we compute
* the retransmit timer, we want 1/2 tick of rounding and
* 1 extra tick because of +-1/2 tick uncertainty in the
* firing of the timer. The bias will give us exactly the
* 1.5 tick we need. But, because the bias is
* statistical, we have to test that we don't drop below
* the minimum feasible timer (which is 2 ticks).
* This version of the macro adapted from a paper by Lawrence
* Brakmo and Larry Peterson which outlines a problem caused
* by insufficient precision in the original implementation,
* which results in inappropriately large RTO values for very
* fast networks.
*/
#define TCP_REXMTVAL(tp) \
max((tp)->t_rttmin, (((tp)->t_srtt >> (TCP_RTT_SHIFT - TCP_DELTA_SHIFT)) \
+ (tp)->t_rttvar) >> TCP_DELTA_SHIFT)
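What the fixed-point macros above compute is easier to see in floating point - a sketch of the same Jacobson/Karels smoothing with alpha = 1/8 and beta = 1/4, without the scaling tricks (names are mine):

```c
#include <math.h>

/* Jacobson/Karels round-trip estimation, floating-point sketch of what
   the fixed-point kernel code above computes. */
struct rtt_est { double srtt, rttvar; };

void rtt_update(struct rtt_est *e, double sample)
{
    double err = sample - e->srtt;
    e->srtt   += err / 8.0;                     /* srtt   += (sample - srtt) / 8 */
    e->rttvar += (fabs(err) - e->rttvar) / 4.0; /* rttvar += (|err| - rttvar) / 4 */
}

double rto(const struct rtt_est *e)
{
    return e->srtt + 4.0 * e->rttvar;           /* rtt + 4 * rttvar, as above */
}
```

A steady 100ms RTT leaves the estimate at 100ms; a jump to 180ms pulls srtt up by an eighth of the error and opens rttvar, so the timer backs off well before the mean catches up.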
In processing acknowledgements, the congestion window needs to be evaluated - here's another fairly compact piece of code that does some of this work (opening the window as data is acked)
/*
* When new data is acked, open the congestion window.
* If the window gives us less than ssthresh packets
* in flight, open exponentially (maxseg per packet).
* Otherwise open linearly: maxseg per window
* (maxseg^2 / cwnd per packet).
*/
{
	register u_int cw = tp->snd_cwnd;
	register u_int incr = tp->t_maxseg;

	if (cw > tp->snd_ssthresh)
		incr = incr * incr / cw;
	tp->snd_cwnd = min(cw + incr, TCP_MAXWIN << tp->snd_scale);
}
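The same per-ACK increase rule, pulled out of the kernel as a standalone sketch (names are mine):

```c
/* Per-ACK congestion window growth, mirroring the BSD fragment above:
   below ssthresh grow by maxseg per ACK (exponential per RTT, slow start);
   above it grow by maxseg*maxseg/cwnd per ACK (roughly maxseg per window,
   i.e. linear - congestion avoidance). */
unsigned int cwnd_on_ack(unsigned int cwnd, unsigned int ssthresh,
                         unsigned int maxseg)
{
    unsigned int incr = maxseg;
    if (cwnd > ssthresh)
        incr = maxseg * maxseg / cwnd;  /* congestion avoidance */
    return cwnd + incr;                 /* slow start otherwise  */
}
```

With maxseg = 1460 and cwnd = 10 segments in congestion avoidance, each ACK adds only about a tenth of a segment, so the window grows by one segment per round trip.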
Friday, October 31, 2008
DCII 1-3.11.08
Multicast IP has been around for 20 years (!) - but only recently seen large scale use, e.g. for IPTV - there are now some examples of compelling large scale deployments which show why it is really useful. This relies on the relaxation of the old so-called "any-source multicast" to the more restricted "single-source" multicast model (or point-multipoint).
The potential for google click-thru type market research and targeted advertising should be obvious:)
If you are getting tired of reading this, and it's after the cocktail hour, then
here is some advice on what to read to go with your drink.
DCII 29/31.10.08
Picking hello timers for routing can be quite hard - too fast, and you react to short-term conditions and flap; too long, and routing is not responsive to improved conditions - a very nice discussion of this is in this
talk by folks from the beautifully named startup, "packetdesign".
Floyd and Jacobson discuss the problem of naive periodic timers causing
synchronisation of work in this
paper from a while back
There's a great book about road traffic - see
"Traffic: Why We Drive the Way We Do (and What it Says About Us)" by Tom Vanderbilt;
Monday, October 27, 2008
DCII 27.10.08
Today, I'm finishing off the protocols soup - I should have said
If a million monkeys type for a million years, you might get a Shakespeare play
amongst what they type, but if you subtract the Shakespeare play,
then what is left is CS acronyms:)
meanwhile, on protocol stacks, this
talk on the waist of the hour glass by my old mate Steve Deering is great!
Wednesday, October 22, 2008
DCII 24.10.08
Protocol Implementation - see W. Richard Stevens's very excellent series of books, TCP/IP Illustrated (especially Volume 2!). As well as a structured walk-through of Berkeley Unix kernel source code, there are lots of useful lessons in there - the Linux code is "interesting", but less transparent as to purpose in my opinion (e.g. see my book on the Linux code from a while back).
Other useful protocol implementation ideas (in terms of managing concurrency, buffering, and other patterns) are distributed throughout lots of papers. Some early RFCs cover tricks used to do small fast data structures for various things. Model checking protocol implementations was an important contribution from groups here, notably the Network Semantics project, which did a full scale model of TCP and the socket layer!
In terms of the huge protocol TLA (Three Letter Acronym) soup out there, wikipedia is probably your best friend! basically, if a million monkeys typed for a million years, they'd still not have all the ISO, ITU, IETF acronyms, even if they had Shakespeare down pat.
Monday, October 20, 2008
DCII 22.10.08
The Internet's Layering model derives from the ISO's OSI (wow, there's a great mix of acronyms - actually, technically, ISO is not an acronym, since it is the International Organisation for Standardisation (with an 's') - OSI is the Open Systems Interconnection architecture and is an acronym:)
Meanwhile, alternatives abound - see, for example, Arizona CS's ideas on protocol graphs in the x-kernel. More recently, folks at USC and UCL came up with the more random idea of role-based protocols and heap-based composition. There are a lot of ways to skin a cat.
At a more basic level, the idea of interconnection in the OSI model tends to appear as if it is only a "network layer" function - the reality is that engineers and hackers build interconnects at every layer you can think of - for example:
physical layer repeaters and relays (especially in wireless and optical)
link layer switches and bridges (especially in LANs)
network layer (routers)
transport layer relays (usually to deal with transport layers that don't have enough network layer information to react end-to-end or hop-by-hop to loss, because the network layer doesn't disambiguate congestion from interference or noise).
Session, presentation and application layer proxies - e.g. web caching proxies, and in general, just about any peer-to-peer (P2P) system.
Data link and physical layer (possibly including the network layer too) have recently started to fall apart as "isolated" abstractions in multihop radio systems, where
cooperative antennas, cooperative coding, and cooperative multi-path routing mean that the three lowest layers need to be treated together at each device. This is a hot research topic area.
End-to-end arguments in system design have never been more argumentative, seeing re-factorizations into different network architectures about once a year for the last decade.
Friday, October 17, 2008
DCII 20.10.08
Systems Thinking about Communications - some related material:
Systems design for protocols - see Trading packet headers for packet processing, by
Chandranmenon, G.P.; Varghese, G., and for improving state set up costs, see A model, analysis, and protocol framework for soft state-based communication by Raman and McCanne.
Protocol performance can be improved by batching (amortizing the cost of processing multiple items in one go - e.g. aggregating packet processing in one interrupt service routine) and pipelining (e.g. Integrated Layer Processing) - but beware, this can be bad if the loop doing the processing exceeds the instruction cache size (or ditto for packet header data). This is not just diminishing returns, but can be counter-intuitive. We can amalgamate different processing and improve not just implementation but also design - two elegant examples are TCP/IP header compression and header prediction - see RFC 1144 for a beautiful piece of design and implementation in this regard.
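The header compression idea in RFC 1144 can be sketched in miniature: successive headers in a flow mostly change by small deltas, so send a change mask plus the deltas only. This toy version (struct and names are mine, not the RFC's encoding) handles just two fields:

```c
#include <stdint.h>
#include <string.h>

struct toy_hdr { uint32_t seq, ack; };

/* Encode cur relative to prev: one flags byte, then a 4-byte delta for
   each field that changed. Returns bytes written (1, 5 or 9, versus 8
   for the uncompressed header). */
size_t hdr_compress(const struct toy_hdr *prev, const struct toy_hdr *cur,
                    unsigned char *out)
{
    size_t n = 1;
    out[0] = 0;
    if (cur->seq != prev->seq) {
        uint32_t d = cur->seq - prev->seq;
        out[0] |= 1; memcpy(out + n, &d, 4); n += 4;
    }
    if (cur->ack != prev->ack) {
        uint32_t d = cur->ack - prev->ack;
        out[0] |= 2; memcpy(out + n, &d, 4); n += 4;
    }
    return n;
}

/* Rebuild cur from prev plus the encoded deltas. */
void hdr_decompress(const struct toy_hdr *prev, const unsigned char *in,
                    struct toy_hdr *cur)
{
    size_t n = 1;
    *cur = *prev;
    if (in[0] & 1) { uint32_t d; memcpy(&d, in + n, 4); cur->seq += d; n += 4; }
    if (in[0] & 2) { uint32_t d; memcpy(&d, in + n, 4); cur->ack += d; }
}
```

The real RFC 1144 scheme goes much further (variable-length deltas, per-connection state, the common unidirectional-transfer special case), but the shape of the trick is the same.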
Statistical multiplexing (see lectures on telephone, internet and atm for discussion) can be great, if you know the statistics of the traffic at various levels (of scale in time and space and aggregation) - as we pointed out, this is very true for voice (whether telephony voice or skype/voip) but less so for data (we know how a TCP flow works, but not how a lot go together; worse, with a mix of client/server, client/proxy, and P2P traffic, we really don't have a good handle on the "traffic matrix" any more, if we ever did!)
Virtualization (of h/w/OS - see Xen, or network, see VPNs) is a way computer science hides multiplexing (sharing) - the control plane allocates a share and schedules it, then each share (slice/multiplex) sees a dedicated resource "apparently all its own" - reality (accuracy of the dedicated share) depends on accuracy of the scheduler - as in all things CS, "your mileage may vary":-). To work well, the scheduler has to model the resource being virtualized, and the requirements on the accuracy of the model trade off against its efficiency, dependent on the statistics of the workload - see, easy!
Thursday, October 16, 2008
DCII 17.10.08
Asynchronous Transfer Mode (not Automatic Teller Machine, or even, Another Terrible Mistake)
What I said in 1994 is still not wrong.
The Fairisle Cambridge ATM Switch
Fore and Marconi and links to Cambridge are historically very interesting.
Much longer ago, a precursor to ATM was the Cambridge Ring-based local area network technology
ATM lives on in BT's Colossus network, which is used to provide dynamic control of unbundled DSL lines. The lines themselves often use ATM to multiplex a single voice channel with a higher speed data channel. At least one of the chips used to do this was designed by a member of the Computer Lab.
Wednesday, October 15, 2008
DCII 15.10.08
IP Addresses (now, lamentably, referred to as "IPs") have to be matched to the longest matching prefix (at least in IPv4) - there are a bunch of cool algorithms for this based on fast multi-level hashes (Linux), tries, and specialized hardware (ternary Content Addressable Memory - TCAMs) - so "most specifics" are found (the last resort is the all match - the default route). For the first elegant paper on a data structure for doing this, see Degermark's small fast route lookup work. BSD and Linux kernel code are also informative.
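A minimal binary trie gives the flavour of longest-prefix match - a sketch (names mine; not the multi-level hash or TCAM schemes, just the bare idea):

```c
#include <stdint.h>
#include <stdlib.h>

/* One trie node per prefix bit; a node may also hold a route. */
struct node {
    struct node *child[2];
    int has_route;
    int route;               /* e.g. a next-hop index */
};

static struct node *new_node(void) { return calloc(1, sizeof(struct node)); }

/* Insert prefix/len (prefix in host order, network bits at the top). */
void trie_insert(struct node *root, uint32_t prefix, int len, int route)
{
    struct node *n = root;
    for (int i = 0; i < len; i++) {
        int bit = (prefix >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = new_node();
        n = n->child[bit];
    }
    n->has_route = 1;
    n->route = route;
}

/* Walk the trie, remembering the last (i.e. longest) matching prefix;
   the default route is the zero-length match of last resort. */
int trie_lookup(const struct node *root, uint32_t addr, int dflt)
{
    int best = dflt;
    const struct node *n = root;
    for (int i = 0; i < 32 && n; i++) {
        if (n->has_route)
            best = n->route;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    if (n && n->has_route)
        best = n->route;
    return best;
}
```

With 10.0.0.0/8 and 10.1.0.0/16 installed, a lookup of 10.1.2.3 matches the /16 (the more specific wins), 10.2.0.1 falls back to the /8, and anything outside 10/8 gets the default.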
IPv6 has enough bits to let you do proper separation of identity and location, so that one can at least consider a sensible way to support 3 different address allocation policies (topological, provider and geographic), as well as support both multi-homing and mobility. However, none of this is seeing much deployment yet (except in China).
Finally, the Host Identity Protocol (HIP) uses crypto-assigned addresses as a way to generate unique identities and then uses other services to map from ID to location - this, together with appropriate shim software (similar to the shims to let it work on v6) lets TCP work across host movements (or multi-homed - similar problem really) - the HIP RFC also cites other work on IPv6 and the basic problem of IPv4:)
An amusing (and not incorrect) map of the IPv4 Address Allocation today was done in the ever marvellous xkcd.
Monday, October 13, 2008
DCII 13.10.08
Erlang was a Danish mathematician who first solved the problem of provisioning for telephone networks by computing the relationship between the call arrival statistics, the capacity, and the "call blocking probability" (i.e. chance that there isn't a single call's worth of capacity on any circuit to the destination for your call). N.B. as well as being independent random arrivals, calls are mostly short (3 mins) and local (one hop), so hierarchical capacity assignment is straightforward. Of course, there are "flash crowds" (a.k.a. "Mother's Day" events), which do not follow these statistics, and generally lead to higher than usual call blocking.
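The call blocking probability Erlang derived can be computed with a short, numerically stable recurrence: B(0) = 1 and B(n) = a*B(n-1) / (n + a*B(n-1)), for offered load a erlangs on n circuits. A sketch (the function name is mine):

```c
/* Erlang B blocking probability via the standard recurrence -
   avoids the overflow-prone a^n / n! form of the direct formula. */
double erlang_b(double a, int n)
{
    double b = 1.0;          /* B(0): with no circuits, every call blocks */
    for (int i = 1; i <= n; i++)
        b = a * b / (i + a * b);
    return b;
}
```

So for 1 erlang of offered load, one circuit blocks half of all calls and two circuits block a fifth - adding circuits buys rapidly diminishing blocking, which is exactly the provisioning trade-off Erlang was solving.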
Leland, Willinger and others were the first to report the (then surprising) non-Poisson nature of Internet traffic in their rightly celebrated SIGCOMM '93 paper - this somewhat went against received telephone network wisdom that traffic was i.i.d. and therefore characterized by the Poisson distribution. This, of course, describes the aggregate traffic characteristics - most traffic is TCP-based, and a single source's behaviour is described by Padhye's equation
The telephone networks were designed, top down, by large national monopolies, and their topology was created by network design equations. The Internet has grown in a decentralised way, and so its topology is the result of a number of natural (demographic) forces and is better described as a (nearly) scale-free graph, whose node degree distribution follows a power law - many other systems are like this (e.g. authorship graphs, the web page link topology, the appearance of actors in films (Kevin Bacon etc.)). This also has interesting properties in terms of resilience to failure and attack, and reflects an ancient truth about society which network scientists call "preferential attachment" and social scientists call "rich get richer".
Errata: I am corrected about use of mobile phones on planes landing in HK - apparently, it isn't allowed there yet - this was something I was told a few years back by someone from the far east, but perhaps it wasn't HK - perhaps Beijing?
Anyway, the key point is that the main objection comes from cell phone providers worried about network overload from roaming, handover and beaconing traffic, rather than the specifics of where - two CS solutions are 1) predictive handover and 2) aggregate signalling (e.g. via a small "microcell" on the plane).
Friday, October 10, 2008
DCII 10.10.08
Digital Communications II starts today. I'm blogging links to sites about things I mention in passing.
Talking Telephone Numbers
Party Lines
Strowger
voltage of ringing - as asked: 90V AC, at 20-25Hz.
(The 28 volts I mentioned is the pulse in the exchange that drives the rotary electro-mechanical relay, not the ringing - and was a UK analogue Post Office standard.)
Wednesday, October 08, 2008
what does "back of an envelope" mean....
...in an e-mail world? virtual envelopes are arbitrarily large (or small)...
indeed, what is a margin (where fermat might lack the space?) ?
Tuesday, October 07, 2008
very fine tv math!
the story of mathematics...
on the beeb, courtesy of du Sautoy of Oxford and the OU:
...is jolly good fun, but I can't help thinkin' he keeps overclaimin'
what is "fundamental" and "mathematics" rather than, say, natural sciences
but then I have to remind myself what he is tryin' to achieve, which won't do us
any harm....and take a look at this for good measure:
http://www.xkcd.com/435/
what I like is that du Sautoy goes just too fast (e.g. I watched the first episode with a smart 10 year old and he had to re-work through the Pythagoras proof by re-arrangement again afterwards to convince himself - this is good as it had the desired result (of making someone work through something again afterwards:)
I also like his unpretentious, but calmly enthusiastic presentation and lack of overly posh vowels:)
Thursday, October 02, 2008
A refined theory of wind in cambridge
in this continuing series on the problem of wind in Cambridge, and the fact that whatever direction you cycle in, it is against you, we apply the theory of complex dynamical systems and topological manifolds to prove that, unless Cambridge is converted into a toroidal vortex, the problem is insurmountable, just like some of the bikes.
Essentially there is a set of subjects with a power law distribution of popularity, and hence there is a distribution of people and rooms that have to be fitted together across a set of spaces over time - aside from the known computationally harsh (technical term) problem of scheduling the rooms at all, there is a packing problem - now this means that there are always different numbers of people going in each direction from A->B, and then after a lecture from B->A. Consider then the timing of events. One arranges for lectures to be in fixed-length slots (for no other reason than that otherwise the schedule would be temporally as well as spatially intractable), and yet this creates a set of waves of air across Cambridge with fractal vortices. Now consider the preferred slot size. Clearly there is a tendency for people to arrive just in time (or just too late) for a lecture. This has a knock-on (off) effect which reinforces the pessimal slot length choice.
If it was an ecosystem (and we used some natural selection - e.g. based on energy left mapping to chances of passing) then we could fail more people, and find a more linear relationship between class size, room size and slot size and then easily solve the packing problem, but the system is essentially sufficiently large that the chances of this are zero.
In the next piece, I will look at whether combined oxygen and cash would be a solution to the market turmoils created by subprime loans.
Wednesday, October 01, 2008
Monday, September 29, 2008
show and tell in CL - lesson: concentrate on the long term and the basics, and teach science and math to our CS students
today's show and tell in the computer lab has some great PhD talks - the first one by laurel riek was a great start (about affective responses in robots' interactions with humans) - I was reminded of gregory bateson's essay on dolphin communication (how most of their dialogues must be about social relationships, rather than about the concrete world, because they don't have hands and so won't reason about things that can be manipulated nearly as much as we (or other monkeys) do) - he also discussed our understanding of dogs/cats and their communication - it might do well to look at making robots that interact with us as pets initially, since this is a simpler world (although, of course, most of our interactions with pets are about social relationships, and very frustrating if one wants to interact about some concrete world action or state, whereas I suppose that is the main place one wants to work in practice with robots and androids).
I also recommend reading John Sladek's excellent novel, Roderick, or the education of a young machine.
See also these excellent comments on the role of Universities:
news:
http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=403694&c=2
press release:
http://kampela-leru.it.helsinki.fi/file.php?type=download&id=1324
paper:
http://kampela-leru.it.helsinki.fi/file.php?type=download&id=1323
Maybe it was more "snow and tell" than "show" :-)
Sunday, September 28, 2008
free gameware idea
Melt - a world made of ice lollies - your mission is to save various people before global warming destroys their palaces/countries and drowns them in orange goo. If you use too much energy flying around etc etc, then you merely make matters worse, so you need to learn to be green:)
Thursday, September 25, 2008
better bikes through brainpower
safe bike where you don't need a separate pump or battery
have a biometric lock which basically switches the hub on the back wheel into one of 4 modes
1. locked
2. freewheel but drives a pump to reinflate the wheel
3. normal
4. normal+generator
actually, could also have
5. normal + electric motor for uphill, running off the generator running backwards, fed from either a capacitor
or from tire pressure...
then you could just always have slightly flat tires too....
better biking through sustainable technology (longer intervals between new tires, no batteries, and so on...)
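The hub above amounts to a five-state machine guarded by the biometric lock; a playful sketch (all names hypothetical):

```python
from enum import Enum

class HubMode(Enum):
    LOCKED = 1     # biometric lock engaged, wheel won't turn
    PUMP = 2       # freewheel, but drives a pump to reinflate the tire
    NORMAL = 3
    GENERATE = 4   # normal riding plus dynamo charging
    ASSIST = 5     # electric motor for uphill, fed by the stored charge

def select_mode(requested, rider_authenticated):
    """Only an authenticated rider can take the hub out of LOCKED."""
    return requested if rider_authenticated else HubMode.LOCKED

print(select_mode(HubMode.PUMP, rider_authenticated=True))     # HubMode.PUMP
print(select_mode(HubMode.NORMAL, rider_authenticated=False))  # HubMode.LOCKED
```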
Monday, September 22, 2008
phased array antennae and beating the Gupta-Kumar limit for capacity of multihop nets
There was a nice paper in the ITA conference from UMass, about how to build cooperative antenna systems in ad hoc nets - so my idea is to combine this with each node carrying a phased array antenna system, and beat the Gupta-Kumar limit for capacity of multihop nets (capacity decreases with 1/sqrt(n) or something) - the idea is to have a hybrid of each node running a phased array, but with the distribution (spacing) of antennae being a different prime number of wavelengths on each node - then when the nodes all cooperate to send in phase to get beam forming, the side lobes will all be decoherent by guarantee.
I mentioned this idea to the authors of the paper - they are building a prototype, so it ought to be easy to add this version in - if it works, you'd get the capacity of an n-node net in a fixed volume as linear in the number of nodes (i.e. whatever link capacity each node gets does not decrease as you add other nodes) - this would effectively give you an arbitrary capacity net
the nice thing with beam forming is you get to do range extension too....
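A back-of-envelope check of the decoherent-side-lobe claim, assuming idealised uniform linear arrays (and using prime numbers of half-wavelengths rather than full wavelengths, which keeps the endfire directions lobe-free and makes the point cleaner): the full-strength (grating) lobes of two such arrays coincide only at broadside.

```python
import math

def grating_lobe_angles(spacing_halfwaves):
    """Directions (degrees from the array axis) where a uniform linear
    array with the given element spacing (in half-wavelengths) has a
    full-strength lobe: d * cos(theta) = m for integer m, d in wavelengths."""
    d = spacing_halfwaves / 2.0
    angles = []
    for m in range(-int(d), int(d) + 1):
        angles.append(round(math.degrees(math.acos(m / d)), 1))
    return sorted(angles)

# Two nodes with spacings of 3 and 5 half-wavelengths (different primes):
# the only lobe direction the arrays share is broadside (90 degrees), so
# when both beamform toward 90 degrees their other lobes don't reinforce.
print(grating_lobe_angles(3))   # [48.2, 90.0, 131.8]
print(grating_lobe_angles(5))   # [36.9, 66.4, 90.0, 113.6, 143.1]
print(set(grating_lobe_angles(3)) & set(grating_lobe_angles(5)))  # {90.0}
```

The number-theoretic reason is the same as in the post: a shared lobe needs m1/d1 = m2/d2, and coprime spacings force m1 = m2 = 0, i.e. broadside only.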
Wednesday, September 10, 2008
Monday, September 08, 2008
leaning towards pisa
I'm spending the week at CNR in
Pisa, with Andrea Passarella and Marco Conti, and taking time out to see Pierpaulo Ghiggino at Ericsson/Marconi, and Luigi Rizzo in the University -
[note CNR is a posh national lab for research and has a very good interdisciplinary communications group; university of pisa has a very good classic style cs gang, which is why i blog it here:)]
I have to say that pisa is basically heaven for weather, food and architecture - I am staying downtown right near the Arno but CNR is out on the NE edge - so I walk through the town to the old city wall, then follow a 2000 year old aqueduct (oh, ok, so it's Medici, not Roman - only 400 years old:)
out to CNR - through a pleasant park. On the way back, I stop for a beer and watch kids playing and read my e-mail - then I go out for a pizza in Lucca (La Bersagliera) - later today, we hope to get to La Pelleria, which is another awesome restaurant in Lucca. [alas we didn't - we had to make do with some amazing place in town instead...]
the route in cameraphonepix
i dunno, but I could get used to this:)
Friday, September 05, 2008
v. good advice on writing papers
see
latex paper polishing advice
especially useful on dealing with w h i t e s p a c e
lots of other good stuff there
Thursday, September 04, 2008
the internet is drowning....
since most of the internet is underwater (i.e. is people fetching web pages from the US to Europe), clearly it is drowning. Surely we can detect that failures in the net are mainly caused either by steep divergences in the mains voltage and AC frequency, or else by leakages in the seabed floor.
are my bits lost in the sargasso sea or the marianas trench?
Sunday, August 24, 2008
virtual keyboards in the rain
while showering after a hard day's tanning on the beach in paleochora, i thought i'd love to be able to check my e-mail - perhaps a virtual keyboard and display could be projected (and detected) by shining lasers on the falling water droplets?
maybe this could be an idea for a future very small keyboard??? somehow one thinks there might be something in this?
Tuesday, August 12, 2008
big lies to tell small journalists:)
yesterday I got hauled in to do a pop piece
about the recent Ryanair screenscraping story
(see
http://www.theregister.co.uk/2008/07/09/ryanair_screen_scraping_bravofly/)
due to just reading the very fine
Great Lies to Tell Small Children
(can lend on demand if you are interested)
and recent experience with being mis-reported
I was very tempted to give a completely incorrect (*)
explanation of screenscraping, and see how they reported it
i was wondering if anyone doing research on ethics and journalism has ever tried
planting false stories (especially technical) and tracking their evolution (a sort of
perverse version of chinese whispers)...
jon [in the end, the journalists were perfectly sensible, so i just gave straight answers]
(*) some sample explanations:
1. basically, screenscraping is done by offshore companies in china and india hiring
hundreds of people to browse a website, and then scrape off the phosphor from the
screen and put it onto sheets of paper which are then faxed to the parent site
who scan them in and put them on their web site for people to browse - this is
hard to stop as the international law on faxing web pages is not clear.
2. scientists in the cavendish quantum inference group have figured out how to
send nanobots constructed out of pure electrons and photons down the internet,
and they can read the screen from your PC, and then send it back to the
cavendish, where it can be displayed on a huge plasma array for reading from
satellite.
3. screenscraping is a typo - it should be called screenscrapping, but, like
routeing, a letter got lost in the US post where they use smaller alphabets (just
as they use smaller paper sizes). screenscrapping is where two web servers fight
out who has control of which pixels on the display, and the stronger server (the
one that buys more bandwidth) generally gets more pixels there - this is a nice
example of market forces being mapped directly on to the Internet protocols - we
will see a lot more of this with Web 3.0
I'm indebted to john daugman for pointing me at the following science spoofs
1. pi is 3 (!):
http://www.snopes.com/religion/pi.as
2. c isn't a constant:
http://www.newscientist.com/article/dn6092-speed-of-light-may-have-changed-recently.html
3. fashionable nonsense!
http://www.sfgate.com/cgi-bin/article.cgi?f=/chronicle/archive/1998/12/27/RV15684.DTL&type=printable
(!): t"he alter font of Solomon's Temple was 10 cubits across and 30
cubits in diameter, and that it was round in compass" -
btw, the temple of solomon must have been in some strange space if it was both 10 cubits across and 30 cubits in
diameter - dia+metros normally means measurement across (from my greek), not circumference (from the latin, circum,
ferre!)
maybe they saw it on tv (tele - greek, video, latin)
of course, cubit could be a context sensitive metric, scaled by pi when referring to curves, but not when referring
to straight lines - that'd be an interesting way to think about the world maybe?
Monday, August 11, 2008
i(t)rony and stereotyping the IT crowd
The IT Crowd, The Gadget Show, Tomorrow's World - what are these programmes?
not funny. so why not? why is lab rats as good as it gets?
what is wrong with people? see
for a youtube video on what happens if computers stop working. Now go watch southpark on the same topic
what is missing from the stereotypes is the rich ironical seam of daftness in the techie geek worldview - what we need is the jeremy clarkson of the compsci world - we
need madness, horror, un-PC behaviour....all that kind of thing
Saturday, August 09, 2008
anti-social networking
there are a few so-called anti-social networks (isolatr and so on) but they miss the point
a decent anti-social network would not only not invite anti-friends or pass on anti-invitations, and have no other members of dis-interest groups, but
would also act as an antidote to social networks
thus if you sign up to my antisocial net (e.g. de-facebook, or bobe or nayspace)
then I guarantee to delete all the annoying interruptions you get on all the corresponding social nets
indeed, anti-mail will delete mail (not just spam, all mail)
Friday, August 08, 2008
rain like gruel
from time to time, in cambridge, there is a gruel rain
it's like gruel, because it isn't heavy enough to soak you (feed you)
unless you stay out in it long enough
I suppose if it lasted 100 days and 100 nights, it would drown cambridge
in a Gruel Sea...
Tuesday, August 05, 2008
pointless sensor networks, or senseless pointer networks - which is worse?
so just about every sensor net i've seen is
a) static
b) battery powered but controls devices that have mains electrics
c) uses wireless links but could have done broadband down the power line
d) doesn't need more than 1 sensor (e.g. light/heat) and a few actuators
why, why, why?
also, the s/w should be written in conic or esterel or some algebraic specification language and compiled onto bare metal
senseless middleware - just say NO!
so here's a problem - you have lots of tunnels with tunnel lights, and you have to adjust them depending on the profile of light at the entrance:- tunnel vision (and the arbitrary complexity of a generic middleware like teenylime or limey teens) - solution - put 1 BIG light at the entrance, which is solar powered and emits light in inverse proportion to the sunlight, and then clock (sorry, power down) the rest of the lights to the lowest level...
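The whole "middleware" then collapses to a one-line control law; a sketch, with the constants and function name entirely hypothetical:

```python
def entrance_light_level(sunlight_lux, k=50000.0, min_level=50.0):
    """Entrance lamp output in inverse proportion to ambient sunlight,
    clamped so it never divides by zero and never goes fully dark."""
    return max(k / max(sunlight_lux, 1.0), min_level)

# bright midday sun -> lamp near its floor; darkness -> lamp at full k
print(entrance_light_level(100000.0))  # 50.0
print(entrance_light_level(0.0))       # 50000.0
```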
game over
show me a sensor net problem that actually needs a network:)
breaking down CS
an idea for a southpark video about computers everywhere
lets all go to the junkyard and get lots of consumer electronics (fridge, washing machine, microwave, digital tv, cameras, phones etc) and then get a large hammer and smash them all to pieces to show how many of them have chips in (would be nice to do with a car too:) and then smash the screen we are on to show we are just a computer animation too (pink panther stylee:)
Monday, August 04, 2008
semi-lifting, strange loops and hard problems in CS
so I was reading douglas hofstadter's excellent "I am a strange loop", and thinking about hard problems in CS - in the sense of problems that are kludged around but not really elegantly solved. In the book, DH claims that class/superclass confusion is a first-class part of human thought, and yet ever since Principia Mathematica, everyone has tried to ban the idea (set v. powerset, or member v. set of) from formal thought - this seems to apply in CS too. But in reality, from time to time, we need to apply an idea and generalise it in some much more powerful and insightful way than this restriction permits - at the lowest level, this is what lets you program bits on the wire (bare metal coding in comms and OS work) and then write the rest of the system in some typesafe way. But the "shift" in levels should not be the "dirty little secret" (casts etc) it is today - it should be a proud first-class concept and mechanism for CS to use, just as DH does.
there's probably some parallel problem lurking in the occasional need to think about multiple inheritance too...but I can't quite think what it is right now...
mobile phones and lack of creativity
here's another thing that would be good to do with a cell phone, but the cellphone business is too braindead to allow it
save/restore console game state (e.g. xbox, wii, ps2/3 etc) onto a cell phone (via bluetooth or wifi or even sms)
this would be cool coz you could save a game onto your phone, go round to your friends' (assuming they have the same game) and upload and continue from where you were...
doh.
Thursday, July 24, 2008
Review Forms for CS Systems conferences...
I was involved in a recent workshop on workshops, where there were many concerns about the problems associated with the quantity and quality of work in the conference/workshop process (from submissions, through reviewing, to feedback, and on to publication and presentation)
it occurs to me that some of this might be improved by more careful design of review forms...viz. the addition of judiciously chosen (lightning conductor) sections in the form:
"Why did you hate this paper?"
(to remove vitriol from the technical feedback part of the review elsewhere)
and
"What paper did you want this person to have written?"
(to avoid feedback that asks an author to submit something different or do additional work which they might have done but not had space for:)
and
"who do I think I am?"
Plan B: have an exam which people have to pass to be qualified as PC members/reviewers - it will calibrate them.
Tuesday, July 22, 2008
learning statistics and computer games - recharging (energy) is same as (negative) randomness (entropy)
This is a well known problem that illustrates a nice area of probability/statistics (it recently showed up in the so-so Kevin Spacey movie, 21):
In a game show, a contestant has to guess which of three doors
the prize lies behind (e.g. a fast car) - the other two doors lead to nothing.
The game show host knows the right answer.
Now, let's say the contestant guesses door A. The game show host
now opens one of the other two doors (obviously revealing
nothing), and asks the contestant
if they want to stay with their choice, or change to the door that
they didn't choose and the host didn't open.
What should the contestant do, and why?
So the answer is: always change. Your original pick wins with probability 1/3, and the host opening a door he knew was empty doesn't change that; the full 2/3 probability that the prize is behind one of the two doors you didn't pick now sits on the single unopened door. Either way you look at it, you are twice as likely to win by changing as by sticking.
The more interesting problem is:
how do you convey this (teach it) to people?
I asked 4 random kids - 2 got it (using the 1/3 v. 2/3 argument above); two refused to believe me even after the explanation.
Someone said: cast the change in information as being like recharging in a game; then maybe they'd get it - so information increase is entropy decrease, and entropy is just negative heat. Maybe there is something in this - could we devise a game which illustrates this idea generally?
derek says: why not do the 100 (or million) door version, where in the 2nd go the host opens 999,998 doors...
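Both the 1/3 v. 2/3 argument and derek's many-door variant are easy to check by simulation; a minimal sketch (the door and trial counts are arbitrary):

```python
import random

def monty_hall(n_doors=3, trials=100_000, switch=True):
    """Estimate the win rate when the host opens all but two doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if switch:
            # The host opens every door except your pick and one other,
            # never revealing the prize - so a switcher ends up on the
            # prize door whenever the first pick was wrong.
            if pick != prize:
                pick = prize
            else:
                pick = random.choice([d for d in range(n_doors) if d != prize])
        wins += (pick == prize)
    return wins / trials

print(monty_hall(switch=False))              # ≈ 1/3
print(monty_hall(switch=True))               # ≈ 2/3
print(monty_hall(n_doors=100, switch=True))  # ≈ 99/100
```

With 100 doors the switcher wins about 99 times out of 100, which is what makes the many-door version so persuasive.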
Thursday, July 17, 2008
objective unknowledge
so actually there is a flaw in Popper's classic formulation of
objective knowledge (and even if you extend it to take in the community-consensus
paradigm model of T. Kuhn, the same problem remains).
The model is that you cannot prove an hypothesis true - merely demonstrate its
use - but you can falsify it.
This means that "truth" is fungible, and based on a shifting sand of hypotheses,
formed by choosing the
i) best fit
ii) with least parameters
(lots of problems here - like why choose those parameters, and why isolate the system from
other "allegedly" non-relevant influences)
anyhow the flaw is this: a "proof" of falsity (i.e. falsification of an
hypothesis) is itself merely another piece of objective knowledge.
So while the mantra in Popper is that a theory about a truth in the Universe
cannot be proved, only disproved, the fact is that it cannot be disproved
either - all you can do is downrank it in a list of plausible models that have
some pragmatic value.
Hence science is merely the art of the pragmatic. It has no claim to absolute
truth, and even the relative truths of models are subject to confidence limits.
as Tom Stoppard said, in The Real Inspector Hound:
frankly my dear, you stretch my credulity and patience to breaking point.
as Houellebecq says:
whatever
Tuesday, July 08, 2008
sex and drugs and rock´n roll and social networks
so i've been reading the fine book by Elizabeth Pisani called
The Wisdom of Whores,
about AIDS, which basically says most of the money is wasted on targeting the wrong groups for the wrong reasons (the words "target" and "group" are giveaways there).
but anyhow, read it yourself to find out why the wrong groups are targeted, and why africa is a mess but there are countries in africa that could be models of the right thing to do
meanwhile, on a related topic of sex and drugs and rock and roll
it appears to me that there are only two legitimate kinds of sex
1. the kind the US moral majority approve of
2. the kind that people writing cool papers get to study.
meanwhile, i've been thinking (line from a song - can you name it?):
see my forthcoming talk at Coseners:
http://www.cl.cam.ac.uk/users/jac22/talks/sdr.ppt
Thursday, July 03, 2008
Haggle and Pocket Switched Networks
So I think we are going to have to work on altruism and economics for opportunistic, people-based (ferrying) pocket-switched networks - I believe I will have to coin yet another new phrase for the tit-for-tat protocol to pay people for carrying your traffic - this will in future be known as Deep Pocket Inspection :-)
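A minimal sketch of what such a tit-for-tat ledger might look like (the class, method names, and initial-credit scheme are all made up for illustration - nothing here comes from the Haggle codebase):

```python
from collections import defaultdict

class TitForTatForwarder:
    """Toy 'Deep Pocket Inspection' ledger: carry a neighbour's bundles
    only while they have carried enough of ours, plus some initial credit
    so that strangers can bootstrap cooperation."""

    def __init__(self, initial_credit=5):
        self.balance = defaultdict(lambda: initial_credit)

    def should_carry(self, neighbour: str) -> bool:
        # Refuse to ferry for neighbours who have exhausted their credit.
        return self.balance[neighbour] > 0

    def carried_for(self, neighbour: str):
        """We ferried one of their bundles: their credit goes down."""
        self.balance[neighbour] -= 1

    def carried_by(self, neighbour: str):
        """They ferried one of ours: their credit goes up."""
        self.balance[neighbour] += 1
```

The initial credit is the usual tit-for-tat "cooperate first" move; without it, two strangers would each wait for the other to carry something and nothing would ever move.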
or am i just blogging a dead horse?
Tuesday, July 01, 2008
Network Science - paradigm shift or just business as usual?
I attended some of the Network Science 2008 event recently
(see the web site of the Norwich science park
for more info) - it was an interesting mix of people networking (socially:)
It is clear that there are nearly as many definitions of "network" as there were people at the event. Indeed, in one workshop, someone asked for a definition of the word
"model" as used by several speakers in the room, and got (I think) more definitions than people:-)
on the way back, I noticed a railway that was being "re-furbished"
I wondered who had furbished it in the first place? furbish is a word that I will associate forever with captain James T. Kirk, because William Shatner's brother used to advertise things like refurbished cars on TV in Canada when I lived there.
If you can sell me a furbished web site, I'd give you 1 euro.
Thursday, June 19, 2008
nature and human studies
a recent publication in nature about human mobility caused a bit of a stir in the news
what surprised me was that apparently said paper passed the ethics committee in the authors' institution. the data was taken from a european cellular provider
1. the customers weren't asked - this makes it unethical
2. i asked UK, German and Spanish legal folks; they said this is also illegal under european law.
3. the data isn't available to other people to validate or compare against
4. not knowing which country it is makes it pretty hard to generalise the mobility model (e.g. think greek islands versus spanish plains, versus alps:)
5. the paper appeared already in a slightly different form in the IOP Journal of Phys...
6. they don't seem to have done anything to control for factors that social scientists would, like age group (think Virgin Mobile versus Vodafone - networks that cater mostly for kids or mostly for business users), people who turn off phones when traveling, or who have 2 phones for home and work...
in general, I am a bit surprised at Nature for publishing the work - the math/analysis is cool, but the rest of it is doubtful, imho
http://news.bbc.co.uk/1/hi/sci/tech/7433128.stm
Wednesday, June 18, 2008
gravitational and inertial mass explained
so, a puzzle slightly harder to explain than P=NP
to the non-scientist, and probably more important,
is why gravitational and inertial mass are the same
so my theory is that mass is basically a concrete instantiation of information
(information, as it were, realized, or referent in the form of substance)
so the more information, the more mass (the more mass, the more information)
information breeds interpretation, and interpretation needs to "run"
and needs intelligence which runs on matter and is attracted to information
hence information (masses) attracts (and doesn't repel, whereas ignorance, or lack of information, does repel..)
so inertia is now easy to understand - it is hard to change people's minds - the fact is that facts are things it is foolish to argue about. Hence facts have more inertia than opinions.
I.e. information not only has, but is, inertial and gravitational mass.
QED.
Monday, June 09, 2008
tell me, how can a poor ISP stand such times, and live?
bbc says it will cost 16 billion quid to upgrade UK to 100Mbps (i assume FTTH) -
broadband snakeholders group comments
people are suing comcast for rate limiting stuff in a
class action
you can't win :-)
[with apologies to Ry Cooder et al for misusing their fine toon title:)]
new noos: slashdot lists an article which pretty much says what I said about
differentiation here,
but fails to distinguish as clearly as I would like
that this is not the same as arbitrary discrimination. Viz what I wrote in
CCR, but it does uphold the principle of transparency, which is fine, so I'm happy:)
Friday, June 06, 2008
transformational government : where's the no leak model
so here's the thing - governments want to join-up-the-dots
for a variety of ostensibly ok reasons
1/ reduce costs (single database entry per citizen, keyed on biometric id)
2/ increase consistency (e.g. tax and rebate)
3/ tracking trends
4/ catching bad guys
5/ you name it...
6/ if they were really honest, one could make the system transparent
and probably remove most government (reduce the government to "codes")
this is all transformational government
the problem is that the more unified the databases, the higher the gain from busting them,
and the higher the loss to people (in value and in the number of people affected) if the system is bust (whether deliberately by bad guys or accidentally by HMRC^H^H idiots).
so once you unify this thing how long does it last? how about forever, stupid?
so the probability of a leak in any one year might look small, but it compounds over the lifetime of the database: if the chance of your record leaking in a given year is p, the chance it leaks at some point in 75 years is 1 - (1-p)^75, and at even a few per cent a year that is fairly close to 1.
do the math
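"Do the math" is one line of arithmetic: with per-year leak probability p, independent across years, the chance of at least one leak in n years is 1 - (1-p)^n. A quick sketch with purely illustrative rates, showing how strongly the lifetime figure depends on p:

```python
def lifetime_leak_probability(p_per_year: float, years: int = 75) -> float:
    """Chance of at least one leak over `years`, assuming years are independent."""
    return 1 - (1 - p_per_year) ** years

# The answer is extremely sensitive to the per-year rate:
print(lifetime_leak_probability(1e-6))   # ≈ 0.000075 - still tiny
print(lifetime_leak_probability(0.01))   # ≈ 0.53 - a coin flip over a lifetime
print(lifetime_leak_probability(0.05))   # ≈ 0.98 - near certainty
```

So the "close to 1" conclusion kicks in once the per-year leak rate reaches the low single-digit percentages - which, for a forever-database that every department can query, does not seem like a stretch.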
the only way to do things is to require that no one keeps data for very long at all, and no one has access across all the databases - keep the databases separate (as per the current data protection laws) and delete data permanently and properly as early as possible.
this needs to be done much more carefully than in the past
by the way, recent reports on the bbc about the tracking of cell phone users
cite a paper in nature, which reveals that the 100k users were in a european country
firstly, while they claim that they've anonymized the data (and the country), it is fairly easy to deduce from the cell tower locations and population mobility (e.g. 3km mean levy walk, with a 1000km limit) which country and which provider - and therefore the authors could too (and, one assumes, so could the Nature editors and reviewers, since they are supposed to require access to data to check an experiment is valid and reproducible or falsifiable, even when the data is proprietary - as in drug clinical trials).
so this paper is unethical and possibly illegal under european law.
oh well.
(see
http://news.bbc.co.uk/2/hi/science/nature/7433128.stm
and follow links to supplementary data)
it's a shame that Nature has lowered the bar for work like this, as it should be possible to do this sort of thing in a way that is with consent (let's say you offer the users some useful service based on location!) and is done scientifically,
in a verifiable way too...
nevertheless the results are useful (mind you, so were the nazis' medical work on hypothermia in concentration camps......careful...careful...dont lose your cool).
Sunday, June 01, 2008
Control v. Data Plane Complexity & HCI
I was reading a bunch of HCI research recently and it struck me that the
really successful systems that people build (I realize HCI isn't about systems, but
often the really good HCI work builds systems to do experiments, so you get to see cool design artifacts as a side effect of their work - hey- the iPhone as a side effect is
no bad thing:) so anyhow the good systems have low-complexity control planes and all the clever stuff is implicit in the work in the data plane (to use networking terminology)
this is a bit like IP v. ATM - IP has a ridiculously simple control plane (originally)
and so is easy to write many many applications to.
in HCI terms, this is similar - we want systems with low cognitive overhead - i.e.
you want star trek communicators not things you have to click to unlock then 10 digits to dial/press to call. you want tivo boxes that just go ahead and record your favorite types of programs by default and use the actual broadcast events to trigger recording start/end, rather than VCRs with arcane yarrow-stick-casting interfaces where you type in immutable start and end times in odd formats and program/channel numbers that relate to local geo- and techno- specific assignment of program and station to some random number....
you want a net where you can just send a packet, rather than calling 11M lines of code to set up a virtual path and channel.
Wednesday, May 28, 2008
making a multiple hash of it
so just re-reading this really nice classic paper
Probabilistic Counting Algorithms for Data Base Applications (1985)
Philippe Flajolet and G. N. Martin, and realized that one of the tricks they use
is to do multiple hashes of the key (they then count the occurrences of bits
in a neat way to get the cardinality - very cool). So how does this relate to
Bloom filters, which test approximate set membership, and to CANs, which use multiple hashes to distribute keys in a search space as uniformly as possible, or even to coordinate estimation systems? Seems like there's a small monograph in this...
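The multiple-hash trick is easy to sketch from scratch (this is the basic Flajolet-Martin scheme averaged over m independent hashes, not the paper's later "stochastic averaging" refinement; the salted-sha256 hash construction is my own illustrative choice):

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin bias-correction constant

def rho(x: int) -> int:
    """Index of the least-significant 1-bit of x (0-based)."""
    return (x & -x).bit_length() - 1 if x else 32

def fm_estimate(keys, num_hashes=32):
    """Estimate the number of distinct keys by averaging the sketch
    statistic R over several hashes - the 'multiple hashes' trick."""
    total_R = 0.0
    for i in range(num_hashes):
        bitmap = 0
        for k in keys:
            # Salting sha256 with i simulates independent hash functions.
            h = int.from_bytes(
                hashlib.sha256(f"{i}:{k}".encode()).digest()[:4], "big")
            bitmap |= 1 << rho(h)
        R = 0  # R = index of the lowest zero bit in the bitmap
        while bitmap & (1 << R):
            R += 1
        total_R += R
    return 2 ** (total_R / num_hashes) / PHI
```

Duplicate keys don't perturb the estimate, which is the point: hashing the same key always sets the same bits, so the sketch only "sees" distinct values - the family resemblance to Bloom filters is right there in the bitmap.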
Monday, May 26, 2008
internet research for kids
I've been on sabbatical in the last year visiting lots of labs and
talking about research directions and one idea that came up is a
reaction to something I think and I believe you and dave clark have
also said in the last couple of years about academic CS research
(at least Internet related CS research) having gone awry.
The pressure to look at short term fast return has pushed many
academics into effectively competing with industry R&D - this means
that many papers are increasingly being produced that look like
startup white papers (often are) and do not have the depth of work
behind them that one might expect from a core CS conference (let
alone journal) 15 or 20 years ago. This is not universally
true (self promotion - I believe we published the first papers
on Xen after about 9 PhDs and 3 years of faculty work,
and the Xorp and Metarouting work were both in similar ratios and have
strong underpinning ideas).
What my colleague Tim Griffin refers to as the Hotnetization of
communications research has happened.
Part of the problem is that really smart people get a big buzz out of
being creative, and only some buzz out of the sweat to really nail the
details. But the details matter. One solution to this would be a big
cultural shift back to long term funding and rewards to academics for
long term (not for short term) work - that isn't going to happen in a
hurry.
Another solution is to provide a competitor for short term idea
generation and demonstration to force academics out of the short term
space.
So here's one idea:
build a system for kids to write new internet scale applications.
give them a sandbox (planetlab junior) to deploy these.
Think lego-mindstorms for the internet.
A whole bunch of things like facebook, myspace, skype, bebo, flickr,
IM, are totally obvious to kids - 10 year olds would have built us
prototypes if they were given the simple tools to create
distributed applications with simple GUIs. I think they would
create 1000s of them - it would take an organisation like
Cisco (and/or microsoft) with a schools outreach programme,
and some resources (not a lot) - it would be nice to
incorporate cell phone handsets in some simple way.
There are some nice example languages (as above, lego
mindstorms, but also Alice (from CMU) which could be extended
easily to do this....
just a thought!
Thursday, May 22, 2008
biometric - dont let the tech speak bamboozle you
governments (well, the british government) like to pretend they are up-to-date and techie, when in fact we all know that thatcher, major, blair and brown are all pretty technophobe, so there's no real comprehension at the top of the food chain; and since most civil servants train at oxford doing PPE (politics, philosophy and economics) rather than at Cambridge doing NatSci (natural sciences - or medicine - where people get jobs in the Real World), the mandarins of Whitehall are pretty useless - this is why they keep proposing idiotic things like ID cards, the NHS IT programme, and biometric passports (and the reason they write such bad RFPs and get no one bidding is that anyone with an ounce of real business skill can stand looking down the river east of the houses of parliament and see a 100 places where they can get a job with 10 times the power and 25 times the income).
so. biometrics. what's that about then?
well, a photo is a biometric. bio - life; metric - measure. so it's a piece of information gathered about you - hopefully something a third party (not you or the person who made the passport) can study and compare with the True Life version they can get from you. this then means that they trust who you are because they trust that the person issuing the passport checked who you are and only gave you the passport if you were entitled to it (given your birthday, place, citizenship etc), and that the passport cannot be fiddled with (affordably), and that you can't be fiddled with (affordably).
so. basically, voodoo then. I mean, who really understands DNA or iris scans or fingerprints or retina scans? who understands encryption? who understands challenge-response protocols or redundant coding of information?
well, I suppose we do. But we don't work for the government, do we? so why do you believe they'd get it right?
but it is all voodoo, because basically the government likes things that sound technical (remember ghostbusters? "back off: I'm a scientist", and "it's technical").
So, next time someone says we must have biometric NHS entitlement ID cards, point out:
1/ bio- washing powder has been shown to cause rashes in some people, so how do we know bio-metric cards won't cause rashes? have they been tested? have the RFID readers been checked to make sure they don't cause cancer?
2/ we haven't gone fully metric in the UK yet (e.g. road distances, measures of beer), so why should we go metric in identity? We still have feet - we do not walk about on iambic pentameters. We have hands and measure horses with them - indeed, we have cars that are powered by horses too, even if the force is also measured in Newtons.
That will shut them up for a while.
Friday, May 16, 2008
laptop v. smartphone
so i've been traveling a lot recently and carry a very old (crusoe-based) small vaio laptop - it is about 7-8 years old and runs win ME or Linux 2.4.17:)
I also carry an HTC touch
the security folks in airports always ask me to take out the laptop - occasionally they don't notice it, but they never ask about the phone - the phone has a faster processor, 4 times as much memory and a faster wireless interface....and about 6 times the battery life..on the other hand it is 7 years newer (and 4 times cheaper)....
luckily, i only have to carry one charger for both:)
once again, the staff in the aircraft and airports annoyed the *** out of me by using phrases like "once again" when they haven't said anything before, and also "last remaining" when they mean "remaining" or "last"...
it took me 9 hours to get from stockholm to london last night due to the air traffic control being down in amsterdam - distributed systems (what is the definition of a ...) hah!!!
Monday, May 12, 2008
toposimilarity
Not many people know this, but if you run one of those visualisation tools
on the xen source tree, and on the AS topology of the internet,
at any point in time over the last 5 years,
the result is identical.
this forms one plank in my conclusive proof
of the non existence of Richard Dawkins
Saturday, May 10, 2008
energy saving gadgets
so i have a lot of legacy technology in the house which gets left on by certain small people - tvs, games consoles, hifi etc etc
so of course it'd be nice to have it all gracefully shutdown when people are not in the room - how cheaply could one combine motion sensors (say) with timers and mains adaptors that just did this?
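a back-of-envelope sketch of how the combination could behave - a timer polls a motion sensor and cuts the socket after a quiet spell. the class name, the timeout, and the sensor callback are all invented for illustration:

```python
class StandbyKiller:
    """Hypothetical mains adaptor: cuts power after `timeout_s` seconds
    with no motion, and restores it when someone comes back."""

    def __init__(self, timeout_s=600):
        self.timeout_s = timeout_s
        self.last_motion = 0.0
        self.powered = True

    def motion_detected(self, now):
        # a cheap PIR sensor would call this on each trigger
        self.last_motion = now
        self.powered = True

    def tick(self, now):
        # called periodically by the timer; returns whether power is on
        if now - self.last_motion > self.timeout_s:
            self.powered = False   # nobody about: shut the socket down
        return self.powered
```

the graceful part (telling a console to save state before the socket dies) is the hard bit - this only models the dumb cut-off.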
physically hierarchical/nested flash memory - matryoshka usb sticks?
so say i have a 5 year old 128M usb memory stick, and i have a 2 year old 1G and a 1 year old 8G - why can't i stack them? couldn't i "write thru" - so I plug the newest biggest one into the USB slot and it has a micro-usb connector on the back, and i plug the middle one in there, and that has a nano-usb connector, and I plug the oldest smallest one in there:
laptop <=usb 2===>8G stick<===microusb===> 1G stick <=== nanousb===> 128M
why? trade off in write time v. capacity?
redundancy and so on...
marketing - easy - just paint them with friendly russian traditional scenes like the dolls
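a toy model of the write-thru idea, just to pin it down - every write lands on the outer (biggest, newest) stick and trickles inward until a smaller stick fills up, so the older sticks end up holding the most-written blocks as cheap redundancy. block-level granularity and all names are made up:

```python
class Stick:
    """One USB stick in the stack, with capacity in 'blocks'."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                  # block number -> data

    def write(self, n, data):
        if n in self.blocks or len(self.blocks) < self.capacity:
            self.blocks[n] = data
            return True
        return False                      # this stick is full

class MatryoshkaStack:
    """Write-through stack, ordered outermost (biggest) stick first."""

    def __init__(self, sticks):
        self.sticks = sticks

    def write(self, n, data):
        for s in self.sticks:             # trickle inward until one is full
            if not s.write(n, data):
                break

    def read(self, n):
        return self.sticks[0].blocks.get(n)   # outer stick has everything
```

the write-time trade-off shows up immediately: every write costs one hop per non-full stick, which is the price of the redundancy.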
Friday, May 09, 2008
shall i compare thee to a lewd cuckoo?
there i was cycling along the backs
composing a three part harmony round
with a libretto lavishly drawn from
shakespeare, anglo saxon folksong
and jazz (like to the lark,
at break of day arising
every day we sing cuckoo,
but how strange the change
from ursa major to minah-
bird brained we sing...)
and, it being a beautiful sunny may day
in fenland
I get a mouth full of greenfly...
arghhhhh...now i cannot remember
the tune
Thursday, May 08, 2008
naming conventions (for research projects)
so i just posted an answer on the xen.org blog about where the name (abbreviation) xen came from - while we are there, metarouting (tim griffin's fine project to replace hand-crufted routing protocol code) was a local idea too...this project is not quite at the take-over-the-world stage, but it is getting there with recent heroic implementation work. A related work is mark handley (et al)'s XORP open source router project. Although the metarouting gang switched to using Quagga for some pragmatic reasons, other projects (e.g. with Lancaster) have combined xen and xorp for virtualisation of routers - indeed, many US people are trailing this work (for a change:)
also fun is david greaves' work on systems that are correct by design - see
his blog on current work on tools and methods
Friday, April 25, 2008
names and power and magic, lack thereof
why is the nanoscience center on the edge of cambridge, not in the center, and why is it so big?
why does the Gates Building only have doors and windows and no white picket fence?
why is the CAPE building 40Km from the sea?
why does the Vet School have no fish?
Tuesday, April 22, 2008
The Path to Software Adaptation: Intelligent Design or Evolution, and also, Virtualiztupid
We know that there are many species of software - whenever a branch is made in the development, we get (eventually) speciation (or is that a specious argument) - but the real question is: is this by design, or is it just random? Does the Xen Master play Dice? Is the Flying Spaghetti Monster Code better, or worse, than IBM's 10-lines-a-day commandments?
meanwhile, also, what is better or worse, intelligence (see the short story by Bruce Sterling about the short term survival value of intelligence) or stupidity?
virtualisation is a way to avoid making decisions (the ultimate random branch, if you like) since you just continue to run all old and new versions....however adapted they are (indeed, if enough of you do this, you effectively run all the eco-systems in history - i.e. world 0.1 alpha thru world 2.0 beta, in parallel universes) - if oprah is anything to go by, at least as far as the Onion claimed today...
so it's clear that the virtualization thing is virtually stupid - it allows multiple isolated types of stupidity to co-exist, without learning from one another. This is a good thing (spreading stupidity is a Bad Idea) and a bad thing (clueful people would find it easier to recognize all the stupidity if it was all isolated in one place rather than N places). Also, with remus, stupidity is now highly mobile and hard to eliminate with Denial of Stupidity attacks. Indeed, SANs (Stupidity Area Networks) with virtualisation mean that stupidity will come to dominate the cybersphere, if it hasn't done already. This, unfortunately, is in line with many grand challenges such as Sustainable Stupidity, and Curatable Stupidity.
I am thinking of starting a project on CAS (no not compare-and-swap - Clutching At Straws) - campaign against stupidity
but I suspect I would just get stuck in dom0 and ignored by all the Guest Operating Stupids
Tuesday, April 15, 2008
Virgin Neutrality is a Load of Pollocks
So the head of virgin media says net neutrality is
a load of thingummybobs
Good for him. So as anyone will say, if you run a zero sum game, you better come up trumps. So if he intends to give more to people that pay more no matter where they go to, he better be prepared to explain why he is giving less to others. An alternative model of the world is to simply give people what
their line rate indicates they should get as a proportion of any current bottleneck,
like you get (mostly) today with TCP and the current IP world. Then there is the server side - I suppose virgin (like AT&T threatened to) are going to ask Google and Yahoo and CNN and BBC to cough up lots otherwise they'll throttle their traffic, and oh dear, where did all their customers go....
Of course, if you want to grab headlines, then making claims based in fantasy is great, but why not have a modicum of basis for them?
How about: "Virgin will give discounts to customers whose traffic was treated worse today"...?
that might take some business off of other ISPs...
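the line-rate-proportional alternative is easy to state precisely: each customer gets a share of any bottleneck in proportion to their access rate. (a simplification - real TCP shares also depend on RTT and loss - but it pins the claim down:)

```python
def proportional_shares(line_rates, bottleneck):
    """Split `bottleneck` capacity among flows in proportion to each
    flow's access line rate (all in the same units, e.g. Mbit/s)."""
    total = sum(line_rates)
    return [bottleneck * r / total for r in line_rates]
```

so three customers on 2, 4 and 10 Mbit/s lines sharing an 8 Mbit/s bottleneck get 1, 2 and 5 - nobody needs a "premium" knob for that.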
Tuesday, April 08, 2008
what is humanist for "thanks"?
traveling europe (as I am right now) i notice that most people have a word for
"thanks" which derives from christian belief (thanks be to god, grazie/gracias/grace of god, or greek: EuXpharisto) - this begs the question: what did pagans and pre-christians
say? maybe nothing?
what would/should a humanist say (should they say anything?:-)
Tuesday, April 01, 2008
10^10 arms, wow, that's entanglement!
so according to a recent article in memory of the BBC micro (which sold 1.5M, about 10,000 more than they expected), the ARM processor shipped its 10,000,000,000th
sometime in 2008
that's pretty awesome for a small UK company...
Friday, March 28, 2008
things any good (CS) lab should do
1. encourage almost ALL phd students to go on internships to industry labs at least once and possibly twice
2. have a really wide net for visiting talks which is broad church (include industry and social aspects)
3. involve PhDs in teaching and paper reviewing
4. take students and post docs out for dinner from time to time, and to the pub a lot
5. teach phds about processes (papers, PCs, conference management systems, grant applications) viz
http://www.cl.cam.ac.uk/~jac22/talks/jon-cfip.ppt
Friday, March 21, 2008
piety - irrational numbers and social network organisation
pi is an interesting number - so is the fact that social groups are structured around tiers of trust, centered on each person with the nearest trust group being kith and kin, and then
close friends etc...the degree of this graph is something like 3.x (e.g.
3.1415926...)
somewhere at 5 hops (3.x^5) you reach the limit of humans' ability to infer intentionality
this tells us that a social networking tool (TM) should be built around making the levels
and groups at each power visible, and should change how it supports people outside of 5 hops and groups larger than 150 or so...
i think such a tool could be called piety since that has PI in it and should be easy peasy to use
ageing (churn in friendship groups) could determine when folks are evicted from the close circle of friends into the outer darkness of cyberspace (or facebook:) and
frequency and duration of contacts (including email/IM, co-location) can be used to include people in...
could make a nice startup:)
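for the curious, the tiering arithmetic sketches out like this - cumulative circle sizes growing by a factor of ~pi per hop, and contact frequency deciding who sits in which circle. the thresholds and names are invented for illustration, not a real tool:

```python
import math

def layer_sizes(ratio=math.pi, layers=5):
    """Cumulative circle sizes per hop: ~3, ~10, ~31, ~97, ~306."""
    return [round(ratio ** k) for k in range(1, layers + 1)]

def assign_layers(contact_freq, ratio=math.pi, layers=5):
    """contact_freq: dict name -> contacts per month. Most frequent
    contacts land in the innermost circle; anyone past the last circle
    gets None (the outer darkness of cyberspace, or facebook:)."""
    ranked = sorted(contact_freq, key=contact_freq.get, reverse=True)
    sizes = layer_sizes(ratio, layers)
    out = {}
    for i, name in enumerate(ranked):
        out[name] = next((l for l, s in enumerate(sizes, 1) if i < s), None)
    return out
```

re-running assign_layers as frequencies age is exactly the eviction mechanism above: drop below the rank cut-off and you slide a circle outward.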
Thursday, March 13, 2008
references for money, and conference reviews for fun
1. so here's my idea of reference writing
I only ever write 3 references - the person asking me for one has to pick one of my pre-canned references, and I will only sign if what they pick matches my belief about them - oh, and i charge 1 euro (stable currency of choice)
2. here's my idea for conference reviewing and PC management
we have PC meetings after all the reviews are written - how about we have the meeting just after all the papers are submitted and (modulo conflicts) the PC meet to choose who reviews what - this
a) calibrates them
b) socialises them
c) load balances (from each according to her needs, to each according to his means)
then we dont need another meeting - everything else can be done online
3. choosing PCs - easy - ask potential PC members (mostly from the pool of PC members from this or related conferences in previous years) to furnish the PC chairs (and conference organising committee if there is one) with 2 or 3 example reviews of papers accepted in previous years - these are used to judge if the potential PC members understand the job (fairness, accuracy, etc).
4. alternatively, use my reference scheme above - there are only really 9 different kinds of reviews (7 if you are in CS theory), so we publish those, and authors submitting papers pick which review they think fits, and the PC reviewer signs it if she agrees, and doesn't if he disagrees.
Tuesday, March 11, 2008
More on Mobile Social Networks
so lots of people think social networks are worth lots of money
and lots of people think mobile phones are worth lots of money
what can you do with a mobile phone in terms of social networks
1. obvious (I wrote this ages ago) is to run a browser (flock) on your smart phone and go to a social net site
2. obvious (everyone wrote this a zillion years ago) - run a location service and map people's geo-relationships to each other or to locations or both (see MIT reality mining, Intel's project doing location service using just about anything a few years back etc etc)
3. do pocket switch net and decentralised link discovery (non obvious, but we did it in pre-haggle project work about 4-5 years back) and build temporal graph in a distributed way, then use it for
i) establishing relationships (co-lo duration and frequency above some threshold)
ii) to actually forward info (person to person or pub/sub, via percolation, gossip, epidemic, but filtered by relevance (e.g. interest/tags), and by trust/tribal/dunbar/closeness etc) to get efficient delivery and focus...
This is gonna be the topic for the Million People project, and is discovery (not invention) and therefore not patentable:)
clever algorithms to make it work reliably are more interesting though...
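step 3's relationship-establishment part could be sketched like this - accumulate co-location sightings into a weighted graph, then keep only the edges whose total duration and meeting count pass thresholds. the record format and threshold values are assumptions for illustration, not the actual pre-haggle ones:

```python
from collections import defaultdict

def contact_graph(sightings, min_duration=3600, min_meetings=3):
    """sightings: iterable of (node_a, node_b, duration_seconds) records,
    e.g. from Bluetooth-style neighbour discovery. Returns the set of
    undirected edges that qualify as 'relationships'."""
    dur = defaultdict(float)
    meets = defaultdict(int)
    for a, b, d in sightings:
        key = tuple(sorted((a, b)))       # undirected edge
        dur[key] += d
        meets[key] += 1
    return {k for k in dur
            if dur[k] >= min_duration and meets[k] >= min_meetings}
```

the resulting graph is what the forwarding step (ii) would then filter by interest and closeness.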
Thursday, February 28, 2008
patention deficit disorder
Just been reading (prompted by last months CACM article) about peer to patent:
http://www.eupaco.org/
and
http://dotank.nyls.edu/communitypatent/
seems a fairly sensible movement, but really lacks teeth ...there are some
big company guns behind larger-scale US federal changes
Microsoft:
http://www.microsoft.com/presspass/features/2005/mar05/03-10PatentReform.mspx
and Cisco:
http://www.cisco.com/web/about/gov/markets/patent_reform.html
viz...costs keep mounting just to fight the mountains of nonsense...
and there are some recidivists
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9025841
Kind of old news...but I am catching up....
for me, the key thing is that
a) software patents are silly - we have software copyright, and algorithms are discoveries that should be in the public domain, since they are knowledge that underpins many things, as with mathematical theorems - the effort lies in turning them into useful computing products, and that is supported adequately by copyright for software and patent for hardware
b) all patents should have
i) use-it-or-lose-it clauses based on a reality check - does someone intend to
do something other than hold other people to ransom with their invention?
ii) lifetimes that reflect the domain of activity's typical time to product, service, and profit - for example, in Internet time, this might be about 5 years - in Big Pharma, maybe 10. In vacuum cleaners, perhaps 7.
c) Clearly patent applications need testing properly by motivated experts - one trick would be to tax patent lawsuits (say 10%) and use the tax to allow the patent office to hire patent reviewers (see peer to patent above).
d) intellectual property is not a good - it has some rather different flavour; as with spectrum, and other new aspects of 21st century life, we need the law to move on from simple notions of property and commons to some wider range of notions....
to be continued... ... ...
Sunday, February 24, 2008
squeezy throbbing phone skins - new idea for staying in touch
So phones vibrate - so do game console controllers - but what if they provided a "remote touch" mechanism? how hard could this be?
one end simply senses a few "pressure points" - the other end turns this into
the opposite force - a first cut at such a system would be a phone that you could "squeeze" and the phone at the other end of a call just expands a bit - so if someone has the phone in their hand, it "feels" like the remote person is squeezing their hand.
A more complex system would put the phone in a glove and have a full-on remote robot glove with controller, sensors and motors (actually you just need motors - in reverse, they work as sensors, so it is really really simple to build) - there's a little bit of concern about feedback and the amount of force, so the system would need to be sloppy,
but that isn't too hard to design.
I wonder if there's anything out there one could adapt to this right away?
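the "sloppy" requirement is really just low-pass filtering the sensed pressure before it drives the remote motor, so the two ends can't set up a hard oscillating feedback loop. a toy control loop, with all gains invented:

```python
def sloppy_actuator(pressures, gain=0.5, alpha=0.2, limit=1.0):
    """Map a stream of sensed pressures (0..1) to remote actuator force,
    smoothed with an exponential filter (alpha) and clamped to a safe
    limit so a hard squeeze can't hurt the far hand."""
    force, out = 0.0, []
    for p in pressures:
        target = min(gain * p, limit)
        force += alpha * (target - force)   # ease toward target: no jolts
        out.append(round(force, 3))
    return out
```

feeding each end's output into the other end's input with alpha well below 1 keeps the loop gain low - that is the sloppiness.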
Monday, February 11, 2008
Small Worlds and Flat Earths
Reading Flat Earth News, which is much better than yesterday's guardian review would have it, but not perfect - one interesting item was the use of plagiarism detection software to track where news items came from and how many of them were basically direct or close transcriptions of wire feed (AP, Reuters) stories - it would be interesting if Google were to build on this - some stories come from PR, some from real reporters, some from the public - a tool that provided a provenance tree for online information should be easy to knock up - sort of a backwards-in-time, patient-zero finder that allowed you to see if something was cooked up by a government Press Officer, a company
publicity dweeb, or had some basis in the real world (or even more than one).
This would also obsolete reportage:-)
You could call it "newsrank", since basically it is just souped up pagerank....
feel free to pay me a small royalty for the bad idea
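a patient-zero finder really is easy to knock up in toy form - near-duplicate detection by word shingles, then pointing each story at the earliest sufficiently-similar predecessor. a sketch, with made-up thresholds, and nothing to do with how Google would actually do it:

```python
def shingles(text, k=3):
    """The set of k-word shingles of a text, for near-duplicate detection."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def provenance(stories, threshold=0.5):
    """stories: list of (timestamp, id, text). Returns id -> parent id,
    where the parent is the earliest story with Jaccard shingle overlap
    above threshold, and None marks a patient zero (wire feed, PR, etc)."""
    parents, seen = {}, []
    for ts, sid, text in sorted(stories):
        s = shingles(text)
        parent = None
        for pts, pid, ps in seen:              # seen is in time order,
            overlap = len(s & ps) / max(1, len(s | ps))
            if overlap >= threshold:           # so first hit = earliest
                parent = pid
                break
        parents[sid] = parent
        seen.append((ts, sid, s))
    return parents
```

rank stories by how many descendants they have and you get the "newsrank" - souped-up pagerank on the provenance tree.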
Monday, February 04, 2008
SybillRus - a new idea for anonymity in online social networks
oh, ok, so it ain't that new an idea, but I think there is a business case for a company that provides ready-made identities (in large numbers) that look plausible, have existing interests, group memberships, friendship groups etc, and slot into facebook or myspace or any of the other social networking environments (cyberspace, we used to call it:) - recently I noticed from some friends that their facebook pages had inaccurate birthday dates - this turns out to be deliberate on their part (a mechanism to mitigate identity theft).
So sybilRus.com would provide lots of these - then the bad guys wouldn't know who was real and who was virtual
This idea, or something a lot like it, is also in Ben Elton's latest SciFi novel, Blind Faith (in my opinion, a return to form by this amusing, contentious, but thoughtful satirist)...it also has some very funny reality-tv extremism...
Thursday, January 31, 2008
Tiling Docking (Real and Virtual) of iPhones/Touch screen gadgets
So here's an idea I don't want anyone to patent, so I am publishing it right here, right now.
If you have a household of n people all with mp3 players with displays and phones with displays, why not have a dock for them that tiles them?
so the idea is you have n areas on a panel, and people drop their devices in their area - when all the devices are there (esp. if they are, say, htc touch or iphone) you have a larger continuous area of (touch) screen - actually quite high resolution too - so it could be your tv too...or your computer display...
you could coordinate this either via the micro-usb, or wifi or bluetooth....
it could work approximately so crowds of people could hold up their iGadget at a sports event and do a mexican wave without moving...
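the coordination step is mostly geometry - given the virtual screen and the grid of docked devices, each device just needs to be told its crop rectangle. a sketch with made-up device geometry (real phones would need bezel compensation too):

```python
def tile_rects(screen_w, screen_h, cols, rows):
    """Return {(col, row): (x, y, w, h)}: the crop rectangle of the big
    virtual screen that the device docked at grid slot (col, row) shows."""
    w, h = screen_w // cols, screen_h // rows
    return {(c, r): (c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)}
```

whichever transport (micro-usb, wifi, bluetooth) carries it, the dock only has to ship each device its rectangle plus the pixels for it.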
Tuesday, January 29, 2008
google crime wave
an interesting feature of large systems run by large organisations is that, somewhere, sometime, someone is going to turn out to be bad - an argument used by the security research group in cambridge against many systems that share too much data (e.g. the children's database in the UK, the HMRC fiasco, the NHS patient record spine).
So to date, we haven't seen any massive abuse of google's (or yahoo's or hotmail's) huge repository of personal data evidenced by large scale misbehaviour - why is this? the penalties for someone accepting bribes, or being blackmailed, would be no higher than those for someone working for a UK government agency that abuses their access to private data.
I don't think the procedures employed by the large scale search engine/mail/social net systems are inherently more immune from misuse of power by an insider than the UK government's "transformational" systems - by the way, what a great phrase
transformational government is! given what most of the attempts to federate government databases have achieved, there has surely been a transformation
from Blair to Brown....but it has largely been one of rapidly increasing entropy, as the pathetic IT-consultant-ignorati that they contract to for these giant projects screw up again and again...oh well. maybe that's it - maybe most people in yahoo and google are paid well and enjoy doing a good job too much to do a bad thing :)
Saturday, January 19, 2008
virtual libraries and french literary criticism
Just finished the awesomely clever-clever, witty
How to Talk About Books You Haven't Read by Pierre Bayard ...
far from being a naff "self-help" "make-good" type book, this is a witty and ingenious dissection of the different ways society reads, avoids reading, and discourses about books - it could well apply to the way the scientific community works on reviewing papers too (indeed he talks about academics reviewing each other's academic books), and to films, and music and so on.
each time you choose a book, you leave out 18M other books - in your life, you read a small (infinitesimal) fraction of the available literature of the world. When you skim, you make rapid interpolations and extrapolations. once you finish reading a book, your memory plays tricks. You may remember reading books you have not read and that don't even exist - indeed, I was disappointed that his discussion of imaginary books cited
I Am a Cat by Soseki Natsume, which actually does exist, rather than creating a fake book for the purpose - especially since he deliberately gets several details of other books (and films) such as The Name of the Rose and The Third Man subtly wrong to prove a point about imperfect recall, and "desired" or even "screen memory" books.
everyone, especially geeky nerdy computer scientists with a literary bent, should read this...it is a hoot....ever so slightly cynical (all the examples are chosen to maximise relevance to a general argument, but then the argument is claimed to apply to everything when only extreme examples are used - clearly the "norms" of reading and of virtual, screen, private, shared and other types of libraries are not the same as the extremes).
Also, annoyingly, he uses several arguments which are illustrated far more amusingly in the Library of Babel by Jorge Luis Borges (who is indeed namechecked by Eco, since the blind monk in the Name of the Rose - Jorge of Burgos - is the crucial part of the argument about talking about books you haven't read: Aristotle's lost volume of the Poetics on comedy, of which William of Baskerville deduces the existence and content, and then debates with Jorge, who is blind so hasn't read the book for decades and only, possibly falsely, recalls the contents).
Other lacunae exist where, if this were a paper submitted to a scholarly journal, I would have to ask for minor revisions...
I give this a SB++
Wednesday, January 16, 2008
peer assisted power over DSL
for today's lesson in green networking, I'd like to consider the problem of
heat dissipation in the PoP, and a radical new approach.
You've heard of power over ethernet? You've heard of broadband over powerline?
You've heard of Peer-assisted TV?
Well, now we propose: Peer-assisted power over DSL.
Here's how it works.
Take the DSLAM's power away from the exchange building, and provide it from the consumer side - 900 consumers per DSLAM should be plenty to run the whole thing - as in the old days, when telephones were powered by the exchange, we can certainly run
20 milliamps at 25 volts over a few hundred meters, per consumer - easily enough.
We then put sails and solar panels on peoples houses, and voila,
entirely electrically self-sufficient broadband.
If there is a drop in wind, or a cloudy day, some users can get on an exercise bike.
others can walk up and down stairs. Others can just reduce their downloads.
By decentralising the power supply, we make it more robust, and we make it much greener as we don't have to dissipate all that heat in one place, but it is naturally dispersed by the wind (in the sails).
We can even load balance power and data - it might be possible with AC to network
code the different components in the power too, together with multipath routing of power.
multihoming would have to make sure that they got the polarity right (perhaps a BGP
option could be added to convey this?)
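The back-of-envelope arithmetic behind "easily enough" - just a sketch, ignoring copper line losses and assuming every subscriber line actually sources the stated current:

```python
def consumer_power_watts(n_consumers, amps_per_line, volts):
    """Total power the consumer side could feed up the copper,
    if every line sources amps_per_line at volts."""
    return n_consumers * amps_per_line * volts

per_line = 0.020 * 25                          # 0.5 W from each subscriber
budget = consumer_power_watts(900, 0.020, 25)  # 450 W at the DSLAM
```

Each line contributes half a watt, so the full 900 subscribers deliver 450 W - whether that really runs the whole DSLAM depends on its per-port draw, which is left as an exercise (on the bike) for the reader.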