Digital colonisation has been a serious threat to national sovereignty, and to individual freedom from surveillance and censorship, for decades. This applied to the Internet, the WWW, the Cloud, and now applies to AI.
Whether the digital Emperors are based in the USA or China, they are there.
To avoid this sort of risk for content, Ross Anderson proposed the Eternity Service, which finesses the problem in (typically for Ross) an ingenious way: it structures the infrastructure as a mix of sharing, striping, and sharding, and builds in the threat of mutually assured destruction. If you are a low-level engineer or computer scientist, the idea is akin to CDMA or network coding, or to what some colleagues re-purposed as spread-spectrum computing.
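To make the striping idea concrete, here is a minimal sketch in the RAID-4 style: data is split across servers with one XOR parity stripe, so any single server's loss (or sabotage) is recoverable. The function names are mine, not Anderson's; the real Eternity Service also involves encryption, replication and anonymous payment, none of which appears here.

```python
def stripe_with_parity(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal stripes plus one XOR parity stripe,
    each of which would live on a different server."""
    data += bytes((-len(data)) % k)          # pad to a multiple of k
    size = len(data) // k
    stripes = [bytearray(data[i * size:(i + 1) * size]) for i in range(k)]
    parity = bytearray(size)
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return [bytes(s) for s in stripes] + [bytes(parity)]

def recover_stripe(stripes, lost):
    """Rebuild one missing stripe by XORing all the surviving ones."""
    size = len(next(s for s in stripes if s is not None))
    rebuilt = bytearray(size)
    for j, s in enumerate(stripes):
        if j == lost:
            continue                          # the missing server
        for i, b in enumerate(s):
            rebuilt[i] ^= b
    return bytes(rebuilt)
```

Losing any one server is survivable; losing two is not. Proper erasure codes (and Eternity itself) generalise this trade-off to tolerate more failures.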
A simpler idea is more coarse-grained: organisations that provide critical infrastructure (railways, power grids, water and sewerage, the Internet, etc.) can source technology from (say) three different providers. The London Internet Exchange (LINX) extends this to cooperative ownership as well. Undermining one supplier's gear then does limited damage to the service; indeed, many services operate with headroom for coping with simple natural disasters in any case (the Internet and power grids also allow for wide variance in demand and supply), so this is a natural way to do things.
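The multi-supplier arithmetic fits on the back of an envelope: with three suppliers and some headroom, the service should survive the compromise of any one supplier's gear. The numbers below are illustrative, not drawn from any real operator.

```python
def survives_single_loss(capacities, peak_demand):
    """True if peak demand is still met after losing any one supplier."""
    total = sum(capacities)
    return all(total - c >= peak_demand for c in capacities)
```

Three suppliers at 40 units each against a peak demand of 75 survives any single loss; against a peak of 90 it does not, and the operator needs more headroom or a fourth supplier.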
Another digital version is Certificate Transparency, which also creates a Merkle-tree space for (horrible word) coopetition (cooperation among competitors), enforced by a tamper-evident (or, to some extent, socio-economically tamper-proof) service space: in a way, a single-application version of Eternity.
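The tamper-evidence comes from the Merkle tree: a toy version with inclusion proofs is short enough to show. This is a simplified sketch, not RFC 6962's exact construction (which handles odd-sized levels differently); here the last node is simply duplicated.

```python
import hashlib

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _leaf(l: bytes) -> bytes:
    return _h(b"\x00" + l)        # domain-separate leaves from interior nodes

def merkle_root(leaves):
    level = [_leaf(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])           # duplicate last node
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes (with a left/right flag) from leaf to root."""
    level = [_leaf(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = _leaf(leaf)
    for sib, sib_is_left in proof:
        node = _h(b"\x01" + sib + node) if sib_is_left else _h(b"\x01" + node + sib)
    return node == root
```

Any competitor can check that a certificate is in the log with a logarithmic-sized proof; changing or removing an entry changes the root, which everyone else holds. That is the tamper evidence doing the work of mutual assurance.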
This would apply to sourcing data, training, and the models themselves, plus inference after training.
So how about this: using state-of-the-art multi-AI protocols to connect agents, we construct a multi-national AI substrate that serves no-one in particular, but everyone in general. Any attack (removal of an agency, pollution of data) would damage the attacker as much as everyone else. It is in everyone's interest to keep the system running, and running honestly.
So how do we combine neural networks (something that would also be useful during training or inference, to share GPU or other accelerator hardware)? You'd need some way to interpret multiple interleaved graphs with multiple (XORed or turbo-coded) weights. This is research. The margin's too small to put it here =>
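One very coarse sketch of what "interleaved weights" might mean: additively secret-share a linear layer's weights across n parties. Each party's share looks like noise, only the sum is the real weight vector, and the party outputs sum to the true output. This works only for the linear parts of a network; handling nonlinearities (and the turbo-coded variant hinted at above) is exactly the open research. All names here are mine.

```python
import random

def share_weights(weights, n):
    """Split a weight vector into n additive shares (n >= 2).
    No single share reveals anything about the true weights."""
    shares = [[random.uniform(-1.0, 1.0) for _ in weights]
              for _ in range(n - 1)]
    # final share makes each coordinate sum to the true weight
    shares.append([w - sum(col) for w, col in zip(weights, zip(*shares))])
    return shares

def party_output(x, weight_share):
    """Each party computes a dot product against only its own share."""
    return sum(xi * wi for xi, wi in zip(x, weight_share))
```

Summing the party outputs reconstructs x . W exactly (up to floating-point noise), so no single party, and no single emperor, ever holds the model.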