Looking at a bunch of OS resource management papers recently (as I try to play catch-up and do my job on the EuroSys and SOSP PCs properly), it seems to me that if you look across the broad vista of papers over a few years on any resource management aspect of OSs, a lot of what people try to do is what you might call "myopic" scheduling schemes - there's very little that explicitly carries information over from one set of processes running to a later, similar mix.
Implicitly, however, the information is there in the benchmark specifications and the results across sequences of papers - so if we codified the papers, an OS could boot (and network download/update) a configuration for long-sighted allocation of resources - this ought to be easy-peasy - we just specify an XML format for results, and then publish online -
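To make that a bit more concrete, here's a rough sketch of what one published record might look like - every element name, workload, and number below is made up for illustration, not a proposed standard:

```xml
<!-- hypothetical record: one benchmarked workload mix and the
     configuration that was reported to work well for it -->
<os-resource-profile version="0.1">
  <workload-mix>
    <process name="httpd"      class="io-bound"  share="0.40"/>
    <process name="mysqld"     class="mixed"     share="0.35"/>
    <process name="cron-batch" class="cpu-bound" share="0.25"/>
  </workload-mix>
  <benchmark suite="example-suite" source-paper="placeholder-citation"/>
  <recommended-config>
    <cpu-scheduler policy="cfs" min-granularity-ns="2000000"/>
    <io-scheduler  policy="deadline"/>
  </recommended-config>
  <observed-result metric="p99-latency-ms" value="12.3"/>
</os-resource-profile>
```

An OS booting into a similar mix could match against the workload-mix element and start from the recommended configuration, rather than relearning it from scratch online.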
It's a bit like traffic engineering for the Internet (rather than flow- or session-based resource allocation), but it has micro-benefits as well as macro-benefits, since processes can be started with the right model of their input rather than having to infer it from the usual pile of heuristics run online...
p2p-driven scheduling :-)??