
How to allocate bandwidth in a switch

November 19, 2012 at 23:40:00
Specs: Windows 7

Hi guys,

I have a very basic, fundamental question and would be very grateful if anybody could help me clear it up.

I would like to know how bandwidth gets distributed in switches.

For example, consider a scenario where I have core switch A and core switch B connected to each other through a 1 Gb fiber link. Each of my core switches is connected to two edge switches through fiber links, and all the edge switches have gigabit ports. Now, if I connect a PC with a gigabit link to an edge switch on core switch A and transfer a file to a PC connected to an edge switch on core switch B, how much bandwidth would I get?

How does the switch allocate bandwidth?




#1
November 20, 2012 at 06:39:10

A switch doesn't allocate bandwidth at all. It just uses whatever is available with available bandwidth being shared as needed.

Stuart



#2
November 20, 2012 at 07:41:05

What StuartS said is true and I agree.

Before I go further: if you're interested in a career in IT, may I suggest that you attempt to write in a fashion that makes you look professional as well as literate. Your "texting" style of writing above, with its lack of capitals, punctuation, and grammar, makes you look like an illiterate boob. If you wish to be taken seriously, you need to step up your game and write like an educated professional.

Now having said that, back to the point at hand.

I think what's messing you up is that you're looking at bandwidth as "speed" rather than what it actually is. By definition, bandwidth is the amount of data that can flow past any one point in a network in one second.

Compare 100 Mbps to 1000 Mbps. 1000 Mbps is not ten times faster than 100; it can move ten times as much data in the same amount of time.

This gives the impression of it being faster, because that large file you're copying transfers a lot faster across a 1000 Mbps network than across a 100 Mbps one. But it does so because the pipe is larger, not because the pipe is faster. The data flows at the same "speed" regardless of bandwidth.
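As a rough back-of-the-envelope sketch of that point (Python, ignoring protocol overhead, so real transfers will be somewhat slower than this):

    # Transfer time for the same file over two different link rates.
    # Ignores protocol overhead; illustrative only.
    FILE_SIZE_BYTES = 700 * 1024 * 1024              # example: a 700 MB file

    for rate_mbps in (100, 1000):
        bytes_per_second = rate_mbps * 1_000_000 / 8  # Mbps -> bytes per second
        seconds = FILE_SIZE_BYTES / bytes_per_second
        print(f"{rate_mbps:>4} Mbps: about {seconds:.1f} seconds")

That prints roughly 59 seconds versus 6 seconds: the bits don't travel any faster, but ten times as many of them fit through the pipe each second.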

You also need to keep in mind that you're dealing with a contention-based network, which will affect data flow.

Overall though, the data will transfer at the maximum available rate. A congested network will cause transfers to be a little slower than an uncongested one due to more packet collisions and resends. But that data is still flowing at the maximum available bandwidth.

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate;
I am the captain of my soul.

***William Ernest Henley***



#3
November 20, 2012 at 08:27:58

Or to put it in simpler terms:

Speed is a measurement of distance over time.

Bandwidth is a measurement of quantity (of data) over time.
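To put numbers on the bandwidth definition (a minimal sketch; the divide-by-eight is just a bits-to-bytes conversion):

    # What a link rate means in terms of quantity of data per second.
    link_rate_mbps = 1000                    # a gigabit link
    megabytes_per_second = link_rate_mbps / 8
    print(f"{link_rate_mbps} Mbps moves about {megabytes_per_second:.0f} MB every second")

So a gigabit link shifts roughly 125 MB of data every second; nothing about the individual bits moves any faster.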

Stuart




#4
November 20, 2012 at 08:32:39

I'm going to steal that from you, StuartS!


LOL

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate;
I am the captain of my soul.

***William Ernest Henley***



#5
November 20, 2012 at 19:40:01

Thank you all for the replies.

"A switch doesn't allocate bandwidth at all. It just uses whatever is available with available bandwidth being shared as needed."

Stuart, what I am trying to ascertain here is how the available bandwidth gets shared. That is, if two endpoints require 1 Gbps each and the total bandwidth available is only 1 Gbps, how does it get shared? Is it just random, or does a switch have an algorithm or something like that to share the bandwidth being asked for by the two endpoints?




#6
November 20, 2012 at 19:58:30

"It just uses whatever is available with available bandwidth being shared as needed."

That isn't correct. Switches don't share bandwidth at all. They use virtual circuits between the source and destination ports. A 24-port gigabit switch will have a 48 Gbps backplane, meaning each port can effectively do 1 Gbps send and receive (full duplex) simultaneously.
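A quick sketch of that backplane arithmetic (it assumes a fully non-blocking fabric, which not every switch on the market actually provides):

    # Fabric capacity needed for every port to run at line rate in both
    # directions at once (i.e. a non-blocking switch).
    ports = 24
    port_rate_gbps = 1        # gigabit ports
    duplex_factor = 2         # full duplex: send and receive simultaneously
    fabric_gbps = ports * port_rate_gbps * duplex_factor
    print(f"Non-blocking fabric needed: {fabric_gbps} Gbps")   # 48 Gbps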

apattat: Read up on duplexing.

Tony



#7
November 21, 2012 at 00:46:33

"Switches don't share bandwidth at all."

So what happens when two sources are trying to use the same destination, or one source is trying to send data to two different destinations? Everyone can't have 1 Gbps; it's just not possible.

Imagine this: computer A is sending a large file to computer B at 1 Gbps. Computer C comes along and also tries to send a large file to computer B.

Computer B can't receive it at 2 Gbps because it is only capable of 1 Gbps. What happens then? Does computer C wait until computer A has finished, or does it send anyway and let the switch work out who gets what?

The switch sends a packet from computer A to computer B, then a packet from computer C to computer B, effectively halving the bandwidth that computers A and C can use and, in effect, sharing the 1 Gbps.
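A toy model of that contention (purely illustrative; a real switch queues frames per egress port and drains them at line rate rather than literally alternating one-for-one):

    # Two senders, A and C, both feeding computer B's single 1 Gbps port.
    # The egress port can only drain 1 Gbps total, so each sender ends up
    # with roughly half of what it offered. Illustrative sketch only.
    EGRESS_CAPACITY_GBPS = 1.0
    offered = {"A": 1.0, "C": 1.0}            # each could send at 1 Gbps

    total_offered = sum(offered.values())
    for src, rate in offered.items():
        share = min(rate, EGRESS_CAPACITY_GBPS * rate / total_offered)
        print(f"{src} -> B: about {share:.2f} Gbps")   # ~0.50 Gbps each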


As Curt says, it is all about contention. When multiple computers are all trying to use the same resources, there will be contention and the available resources will need to be shared.


Stuart



#8
November 21, 2012 at 07:37:28

We're getting caught up in semantics here.

We recently upgraded the backplane on our dual redundant core switches to 10 Gb and put in a 12-port (non-blocking) module. There are also about 7 other modules that operate at 1 Gb. But ignoring those other modules, according to what you said above, my backplane on the core should be 240 Gbps. But it isn't. It's 10 Gb.

We also recently bought some 10 Gb switches. One type has eight 10 Gb (non-blocking) ports and twenty-four 1 Gb ports, and they're connected to the 10 Gb module on the core switches. Now, those links operate at 10 Gb, but since there are two of them, one to each core, we have a total of 20 Gbps available between those switches and the core.

I point this out because "sharing" may be the wrong word to use. Also, I don't know the intricacies of how a switch physically does its job. To steal a line, "Dammit Jim, I'm a network technician, not an engineer!" Truthfully, I don't need to know. What I need to know is that those 10 Gb switches can, and do, offer 10 Gb connections to servers and other devices. If I have six 10 Gb server connections in one switch plus the dual 10 Gb uplinks, and all six servers suddenly start using the full 10 Gb of bandwidth available to them, I won't be pushing 120 Gbps through those two uplink ports, because even though we're using LACP, that only gives us a combined bandwidth of 20 Gbps.

Now, realistically, it's not likely we'll ever have all six 10 Gb server connections maxed out at the exact same time. But if it ever did happen, once the maximum 20 Gb pipe on the uplink ports was filled, there would be a lot of buffering, collisions and resends slowing down the transfers of all the sending units.
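A small sketch of that oversubscription arithmetic, using the figures above (it counts only the send direction and assumes LACP spreads traffic evenly across both uplinks, which per-flow hashing does not always do):

    # Worst-case offered load from the servers versus uplink capacity.
    server_ports = 6
    server_port_gbps = 10
    uplinks = 2
    uplink_gbps = 10

    offered = server_ports * server_port_gbps      # 60 Gbps, send direction only
    available = uplinks * uplink_gbps              # 20 Gbps combined via LACP
    print(f"Offered load      : {offered} Gbps")
    print(f"Uplink capacity   : {available} Gbps")
    print(f"Oversubscription  : {offered / available:.0f}:1")    # 3:1

Counting both duplex directions doubles the offered figure (the 120 Gbps mentioned above), but the conclusion is the same: the uplinks cannot carry everything the server ports could theoretically generate.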

As I said above, "sharing" may not be the correct term, but the simple truth is that all the other non-uplink ports on a switch have to share the available bandwidth on the uplink port(s).

Take, for example, my client stack of five 48-port switches, which are cascaded together and behave as one big switch (located in my main datacenter and feeding our Computing Services department, among others), and which is uplinked to my cores with two 1 Gb LACP connections. That gives me a grand total of 2 Gb of available bandwidth from that stack to my core.

Right now, with about 200+ active client connections, I'm using less than 30% of the available 2 Gb of bandwidth. Most of the users are simply surfing or connected to some internal resource, so that stack's uplinks never come close to being fully utilized.

So if one user on that stack suddenly downloads a 1 GB file from somewhere else (either internally or externally), that file will be downloaded in a few seconds, because there's still so much bandwidth available on the uplinks and backplane that the download will use the maximum available. However, if every user connected to that switch tried to download that exact same file at the same time, it would take a noticeably longer period of time for them all to get their file.
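A rough sketch of why the lone download finishes quickly but the everyone-at-once case does not (it assumes the 2 Gb uplink splits evenly between active downloads, that a single flow can actually use both LACP members, and that nothing else is flowing; all of those are simplifications):

    # Download time for a 1 GB (gigabyte) file over a shared 2 Gbps uplink.
    UPLINK_GBPS = 2.0
    FILE_GIGABITS = 1.0 * 8            # 1 GB = 8 gigabits

    for users in (1, 10, 200):
        per_user_gbps = UPLINK_GBPS / users
        seconds = FILE_GIGABITS / per_user_gbps
        print(f"{users:>3} simultaneous downloads: about {seconds:.0f} s each")

One user gets the file in about 4 seconds; 200 users fetching it simultaneously would each wait on the order of 13 minutes.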

I haven't said anything at all about allocating bandwidth, which can be done on a port-by-port basis on the better switches. This is a technology a lot of hosting and colocation sites use.

I bring it up because if one has a switch capable of allocating bandwidth, and that feature is in use, it changes how the bandwidth is distributed. But if it's not in use, or not an available feature, then yes, the switch will "share" all available bandwidth between all connected devices.

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate;
I am the captain of my soul.

***William Ernest Henley***



