The Internet's layering model derives from the ISO's OSI reference model (a great mix of acronyms there - though, technically, ISO is not an acronym, since the organisation is the International Organisation for Standardisation, with an 's'; OSI, the Open Systems Interconnection architecture, really is one:)
Meanwhile, alternatives abound - see, for example, the University of Arizona's work on protocol graphs in the x-kernel. More recently, folks at USC and UCL came up with the more radical idea of role-based protocols and heap-based composition. There are a lot of ways to skin a cat.
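To make the protocol-graph idea concrete, here is a minimal sketch in the x-kernel spirit: each protocol is an object that wraps the one below it, and the "graph" is just the wiring between them. All class names and the trivial framing/checksum schemes are illustrative assumptions, not the actual x-kernel API.

```python
class Protocol:
    """Base class: a protocol node that forwards to the one below it."""
    def __init__(self, lower=None):
        self.lower = lower

    def send(self, payload):
        # Default behaviour: pass the payload down unchanged.
        return self.lower.send(payload) if self.lower else payload

class Checksum(Protocol):
    """Appends a one-byte checksum before handing off downward."""
    def send(self, payload):
        total = sum(payload) % 256
        return super().send(payload + bytes([total]))

class Framing(Protocol):
    """Wraps the payload in start/end delimiters."""
    def send(self, payload):
        return super().send(b"FRAME|" + payload + b"|END")

class Wire(Protocol):
    """Bottom of the graph: 'transmit' by returning the final bytes."""
    def send(self, payload):
        return payload

# Compose the graph top-down: checksum over framing over the wire.
stack = Checksum(Framing(Wire()))
frame = stack.send(b"hello")
```

The point of the composition style is that re-wiring the constructor calls changes the protocol graph without touching any layer's code - exactly the flexibility that layered, heap-based, and role-based designs argue over.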
At a more basic level, the idea of interconnection in the OSI model tends to appear as if it were only a "network layer" function - in reality, engineers and hackers build interconnects at every layer you can think of - for example:
physical layer repeaters and relays (especially in wireless and optical)
link layer switches and bridges (especially in LANs)
network layer (routers)
transport layer relays (usually to deal with transport protocols that don't get enough information from the network layer to react, end to end or hop by hop, to loss - because the network layer doesn't disambiguate congestion from interference or noise).
Session, presentation and application layer proxies - e.g. web caching proxies and, in general, just about any peer-to-peer (P2P) system.
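The web caching proxy mentioned last is worth a sketch, since it shows interconnection happening purely at the application layer. This is a minimal model of the cache logic only, with the network I/O abstracted into a `fetch` callable standing in for an upstream HTTP request; the class and parameter names are assumptions for illustration.

```python
class CachingProxy:
    """Sits between clients and an origin server, answering repeat
    requests from a local cache instead of going upstream."""
    def __init__(self, fetch):
        self.fetch = fetch      # callable: url -> response bytes
        self.cache = {}         # url -> cached response
        self.hits = 0
        self.misses = 0

    def get(self, url):
        # Serve from cache when possible; otherwise go upstream and store.
        if url in self.cache:
            self.hits += 1
            return self.cache[url]
        self.misses += 1
        response = self.fetch(url)
        self.cache[url] = response
        return response

# Usage: count how often the origin is actually contacted.
origin_calls = []
def origin(url):
    origin_calls.append(url)
    return b"body of " + url.encode()

proxy = CachingProxy(origin)
proxy.get("http://example.com/a")   # miss: goes upstream
proxy.get("http://example.com/a")   # hit: served locally
```

Notice that nothing here knows about IP addresses or routing at all - the interconnect decision (serve locally or relay upstream) is made entirely on application-layer names, which is also the essential move in most P2P systems.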
The data link and physical layers (possibly including the network layer too) have recently started to fall apart as "isolated" abstractions in multihop radio systems, where cooperative antennas, cooperative coding, and cooperative multi-path routing mean that the three lowest layers need to be treated together at each device. This is a hot research area.
The end-to-end arguments in system design have never been more argumentative: the field has seen re-factorizations into different network architectures about once a year for the last decade.