"We're getting caught up in semantics here."
We recently upgraded the backplane on our dual redundant core switches to 10 Gb/s and put in a 12-port (non-blocking) module. There are also about 7 other modules that operate at 1 Gb/s. But ignoring those other modules, according to what you said above, my backplane on the core should be 240 Gb/s. But it isn't. It's 10 Gb/s.
We also recently bought some 10 Gb/s switches. One model has 8 10 Gb/s (non-blocking) ports and 24 1 Gb/s ports, and those switches are connected to the 10 Gb/s module on the core switches. Those links operate at 10 Gb/s, but since there are two of them, one to each core, we have a total of 20 Gb/s available between those switches and the core.
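For what it's worth, the arithmetic behind that "240" figure works out like this. This is just a back-of-the-envelope sketch using the port counts and speeds from my setup above; the ×2 assumes the quoted figure was counting full duplex (send plus receive), which is a guess on my part:

```python
PORTS = 12        # ports on the non-blocking 10 Gb/s module
PORT_SPEED = 10   # Gb/s per port

# "Non-blocking" means every port can run at line rate at once.
# Counting both directions (full duplex) for each port:
fabric_capacity = PORTS * PORT_SPEED * 2
print(fabric_capacity, "Gb/s")   # 240 Gb/s -- the figure quoted above

# A single uplink, by contrast, is just one port's line rate:
uplink = PORT_SPEED
print(uplink, "Gb/s")            # 10 Gb/s
```

Which is exactly the gap I'm pointing at: the fabric number and what any one link can actually carry are very different things.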
I point this out because "sharing" may be the wrong word to use. Also, I don't know the intricacies of how a switch physically does its job. To steal a line, "Dammit Jim, I'm a network technician, not an engineer!" Truthfully, I don't need to know. What I need to know is that those 10 Gb/s switches can, and do, offer 10 Gb/s connections to servers and other devices. If I have 6 10 Gb/s server connections in one switch plus the dual 10 Gb/s uplinks, and all 6 servers suddenly start using the full 10 Gb/s of bandwidth available to them, I won't be pushing 60 Gb/s through those two uplink ports; even though we're using LACP, that only gives us a combined bandwidth of 20 Gb/s.
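Put in numbers (counting one direction only, to keep it simple), that's a classic oversubscription ratio:

```python
servers = 6
server_speed = 10   # Gb/s per server connection
uplinks = 2
uplink_speed = 10   # Gb/s per uplink (bundled with LACP)

offered = servers * server_speed     # worst-case load from the servers
available = uplinks * uplink_speed   # what the LACP bundle can carry
ratio = offered / available

print(f"{offered} Gb/s offered vs {available} Gb/s uplink: "
      f"{ratio:.0f}:1 oversubscribed")
# 60 Gb/s offered vs 20 Gb/s uplink: 3:1 oversubscribed
```

A 3:1 ratio like that is normal and fine for most access layers; it only hurts when everyone bursts at once.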
Now realistically, it's not likely we'll ever have all 6 10 Gb/s server connections maxed out at the exact same time. But if it ever did happen, once the maximum 20 Gb/s pipe on the uplink ports was filled, there would be a lot of buffering, frame drops, and retransmissions slowing down the transfers for all sending units.
As I said above, "sharing" may not be the correct term, but the simple truth is that all the other non-uplink ports on a switch have to share the available bandwidth on the uplink port(s).
Take for example my client stack of five 48-port switches, which are cascaded together and behave as one big switch (located in my main datacenter and feeding our Computing Services department, among others), uplinked to my cores with two LACP 1 Gb/s connections. That gives me a grand total of 2 Gb/s of available bandwidth from that stack to my core. Right now, with 200+ active client connections, I'm using less than 30% of that 2 Gb/s. Most of the users are simply surfing or connected to some internal resource, so that stack's uplinks never come close to being fully utilized. If one user on that stack suddenly downloads a 1 GB file from somewhere else (either internally or externally), that file will be downloaded in a few seconds, because there's still so much bandwidth available on the uplinks and backplane that the download can use the maximum available. However, if every user connected to that stack tried to download that exact same file at the same time, it would take a noticeably longer period of time for them all to get their file.
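You can ballpark just how much longer with some simple division. This sketch assumes decimal gigabytes, a 1 Gb/s client port, and a perfectly even split of the uplink among users (real TCP behavior is messier, so treat these as order-of-magnitude numbers):

```python
file_bits = 8_000_000_000   # 1 GB file ~= 8 gigabits
client_port = 1e9           # 1 Gb/s client connection
stack_uplink = 2e9          # 2 Gb/s LACP uplink from the stack

# One user alone: the bottleneck is their own 1 Gb/s port,
# since the 2 Gb/s uplink is mostly idle.
alone = file_bits / client_port
print(f"one user: ~{alone:.0f} s")        # ~8 s

# 200 users pulling the file at once: now the 2 Gb/s uplink is the
# bottleneck, split roughly evenly across everyone.
users = 200
per_user = stack_uplink / users           # ~10 Mb/s each
shared = file_bits / per_user
print(f"{users} users: ~{shared:.0f} s each")   # ~800 s, over 13 minutes
```

Same switches, same uplinks; the only thing that changed is how many ports are contending for them.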
I haven't said anything at all about allocating bandwidth, which can be done on a port-by-port basis on the better switches. This is a technology a lot of hosting and colocation sites use.
I bring it up because if one has a switch capable of allocating bandwidth and actually uses that feature, it changes how the bandwidth is divided up. But if it's not in use, or not an available feature, then yes, the switch will "share" all available bandwidth among all connected devices.
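Conceptually, per-port bandwidth allocation usually boils down to something like a token bucket. Real switches do this in dedicated hardware, so the toy class below is only a sketch of the idea, with made-up rate and burst numbers, not anyone's actual implementation:

```python
class TokenBucket:
    """Toy per-port rate limiter: 'rate_bps' tokens (bits) refill each
    second, capped at 'burst_bits'. A frame goes through only if enough
    tokens are available to cover its size."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits   # start with a full bucket
        self.last = 0.0

    def allow(self, frame_bits, now):
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True    # forward the frame
        return False       # over its allocation: drop (or queue) it

# Cap a hypothetical port at 100 Mb/s with a 1 Mb burst allowance:
port = TokenBucket(rate_bps=100e6, burst_bits=1e6)
```

A port capped this way gets its allocated slice no matter what; everything uncapped falls back to the free-for-all sharing described above.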
It matters not how straight the gate,
How charged with punishments the scroll,
I am the master of my fate;
I am the captain of my soul.