Mininet is a project at Stanford that provides an API for emulating a multi-node network on a single machine. It uses Linux containers and namespaces for configuration and resource isolation.
It is not (and was not) the only implementation of an idea that is as basic as it is useful for testing complex network setups.
All network stack variables (i.e. settings and state) are gathered into one big struct, and all network stack functions operate exclusively on the contents of that struct. Every process has one stack instance assigned to it by default, and it may switch to another instance at any time. There are no changes to any socket-level API; all calls function exactly as before, but they are executed in the context of the process's effective stack.
A process's default stack is inherited from its parent, which makes it trivial to run any existing application in the context of a specific stack: launch it from a process that selects the desired stack and then does the exec.
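(For the curious: Linux network namespaces give you roughly this inherit-and-exec model today. A minimal sketch, assuming root and the iproute2 `ip` tool; the namespace name is made up:)

```shell
# Create an isolated network stack instance
ip netns add blue
# Bring up the loopback interface inside it
ip netns exec blue ip link set lo up
# Any existing binary runs unchanged in the context of that stack
ip netns exec blue ping -c 1 127.0.0.1
```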
At the bottom, each stack connects to one or more network interfaces. Stacks may connect to physical devices (multiple stacks per device are fine as long as it supports promiscuous mode), or they may connect to virtual interfaces. Virtual interfaces may in turn have their inputs and outputs meshed together via hubs, switches or direct links, all of which can be configured to emulate packet loss, latency, jitter and whatnot.
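A minimal sketch of the virtual-interface side as it can be done on stock Linux, assuming root, a reasonably recent iproute2, and the sch_netem module; all interface names here are made up:

```shell
# A direct virtual link: two interfaces joined back to back
ip link add vethA type veth peer name vethB
# A bridge plays the role of the "switch"
ip link add name br0 type bridge
ip link set vethA master br0
ip link set vethA up
ip link set vethB up
ip link set br0 up
# Emulate 50ms latency (+/-10ms jitter) and 0.5% loss on one side
tc qdisc add dev vethB root netem delay 50ms 10ms loss 0.5%
```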
That's your basic network stack virtualization. A functioning version can be assembled from a forked Linux source in a couple of weekends. If only someone would bother submitting the patch after that :)
Unfortunately, I'm not aware of a way to do this natively, but I recommend Dummynet, the FreeBSD traffic shaper, which has a Windows port. Oddly, their website appears to be down, so here is the Google cache:
You need to install the drivers, but once that's done you use effectively the same ipfw commands as on FreeBSD (as described in another thread: http://news.ycombinator.com/item?id=2005190). I just used it for the first time the other day to test browsing a web app with a 400ms delay on a WinXP machine. It worked like a charm.
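For reference, a test like that boils down to a couple of dummynet commands; this is a sketch with an arbitrary pipe number, using standard ipfw syntax:

```shell
# Send all IP traffic through pipe 1
ipfw add pipe 1 ip from any to any
# Add 400ms of delay to everything in that pipe
ipfw pipe 1 config delay 400ms
# Bandwidth and random packet loss can be shaped the same way:
# ipfw pipe 1 config bw 1Mbit/s plr 0.01
```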
This is what I do and it works perfectly. I use VirtualBox and can stop any of the servers at any time and reduce my memory/CPU footprint for other activities. VirtualBox also lets me configure the network easily.
If you have a spare machine, you can use the WANem live CD (http://wanem.sourceforge.net/). WANem gives you a nice web interface for configuring network parameters. I have used it successfully to check the effects of latency on Lotus Notes on Windows.
Do be aware that your Linux loopback has an MTU an order of magnitude greater than Ethernet's, so your latency-sensitive TCP startup times are not going to behave the same.
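You can check, and if you want shrink, the loopback MTU; a sketch assuming iproute2 and root (eth0 is a placeholder for your real interface):

```shell
ip link show lo     # reports e.g. "mtu 65536" (16436 on older kernels)
ip link show eth0   # typically "mtu 1500"
# Make loopback behave more like Ethernet for the test:
ip link set dev lo mtu 1500
```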
Could anyone explain why that would be desirable? Is it to test out a website design that you are hosting locally, and you want it to "feel" right? Something else?
Testing protocol implementations. I had a recent bug where the speed on localhost was masking a race condition between the packet reader and parser. Only caught it when testing over the Internet.
Basically you want full control over your simulation parameters.
I'm currently developing a multiplayer game and use this (although with additional arguments to include variation/jitter and packet loss) to simulate somewhat more realistic network conditions. I find it's great for exposing bugs in the networking protocol that would otherwise only arise under poorer network conditions.
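Something along these lines, sketched with example numbers (the interface name and all values are placeholders; needs root and sch_netem):

```shell
# 100ms base delay, +/-30ms jitter with 25% correlation, 1% packet loss
tc qdisc add dev eth0 root netem delay 100ms 30ms 25% loss 1%
# Remove it when you're done testing
tc qdisc del dev eth0 root
```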
qdiscs in Linux have always seemed something of an underdocumented dark art to me. I've used HTB to slow down packets that I mark with iptables rules, but that's about as fancy as I can get. Does anybody know of some nice documentation about this stuff that isn't a decade old? ;)
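For anyone wanting to try the HTB-plus-marks combination, it looks roughly like this; the handles, rates, port and mark value are all arbitrary examples, and it needs root:

```shell
# HTB root qdisc; unmarked traffic falls into class 1:10
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 64kbit   # the slow lane
# The fw classifier steers packets with fwmark 1 into the slow class
tc filter add dev eth0 parent 1: protocol ip handle 1 fw flowid 1:20
# Mark outgoing HTTP traffic so the filter above catches it
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 1
```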
For those on FreeBSD, dummynet was perhaps one of the most useful traffic shapers I've ever used. I had a router built on FreeBSD going strong for more than 3 years on an old Pentium 90 that would route traffic like no one's business.