Back to the blog and more infrastructure

I’ve finally got some breathing space to give this blog some love. It’s been a crazy busy stretch, followed by a vacation and the longest illness I can remember (just over a week, not too bad I guess ;)).

But it looks like I should be able to get some pretty cool stuff done in the coming weeks, and have time to do some serious blogging about it.

The last few days I’ve been doing some more much-needed infrastructure work. I’ve finally bitten the bullet and decided to move my trusty little render farm out of my office space. It generates too much noise and heat to stay where it sits now, which makes it hard to add render boxes when needed (and I’m already running into render capacity limits regularly). This does require a fair amount of reworking of the network infrastructure, as I don’t want to run into bandwidth problems when hundreds of gigabytes of simulation data and render results have to be pushed back and forth between my workstation(s) in the office and the render machines. Currently they all sit in the same server rack and connect to a single switch through dual-port aggregates, which provide ample bandwidth between the workstation with its RAID and the render boxes.

In the new situation the office gets its own switch that will connect my (soon to be two) workstations, each on a two- or four-port gigabit Ethernet aggregate. These connections are part of a fully separate internal network that connects through a four-port aggregate (the backbone) to the second switch in the ‘render farm’. That switch in turn connects the render machines. The ‘office switch’ also carries a separate virtual LAN that connects the workstations to the ‘home network’ and the internet.
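For the curious: on the software side these aggregates are typically handled by the OS, e.g. the Linux kernel bonding driver. Here’s a minimal sketch of how one could sanity-check that an aggregate actually came up with all its links; it assumes a Linux host with the bonding driver, and the interface name `bond0` is just a placeholder for whatever your setup uses:

```python
#!/usr/bin/env python3
"""Quick sanity check of a Linux link aggregate (bonding driver).

Minimal sketch: assumes a Linux host using the kernel bonding
driver, with an aggregate named 'bond0' (hypothetical name).
Parses /proc/net/bonding/bond0 and reports the bonding mode plus
per-slave link state and speed.
"""

from pathlib import Path

BOND = "bond0"  # hypothetical interface name; adjust to your setup

def check_bond(name: str) -> None:
    text = Path(f"/proc/net/bonding/{name}").read_text()
    slave = None
    total_mbps = 0
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            print(line)
        elif line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif slave and line.startswith("MII Status:"):
            print(f"  {slave}: link {line.split(':', 1)[1].strip()}")
        elif slave and line.startswith("Speed:"):
            token = line.split(":", 1)[1].strip().split()[0]
            if token.isdigit():  # reads e.g. "1000 Mbps"; "Unknown" if down
                total_mbps += int(token)
                print(f"  {slave}: {token} Mbps")
            slave = None  # done with this slave's section
    print(f"Aggregate link speed: {total_mbps} Mbps")

if __name__ == "__main__":
    check_bond(BOND)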

The following diagram of the two switches shows all this a bit more clearly:

It’s actually quite amazing that this can be put together for under €1000, considering the amount of bandwidth it opens up. I’m sure I won’t come close to saturating it any time soon, but at least I can add render boxes without having to worry about creating bottlenecks, and the dual-switch setup keeps things nicely organized and separated. Even if I do end up running into bandwidth issues I can always widen the backbone by adding two more ports to its aggregate. And if I run out of ports on the ‘RenderFarmSwitch’ I can always move the IPMI ports to a separate cheap switch/hub without much investment.
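To put some rough numbers on that, here’s a back-of-envelope calculation of how long a few hundred gigabytes would take over aggregates of various widths. It assumes ideal line rate with no protocol overhead, and that traffic actually spreads across all ports of the aggregate, which with link aggregation only happens for multiple parallel flows (a single TCP stream is pinned to one port), so treat it as a best case:

```python
#!/usr/bin/env python3
"""Back-of-envelope transfer times over a gigabit link aggregate.

Minimal sketch; assumes ideal line-rate gigabit ports with no
protocol overhead, and traffic spread evenly across the aggregate.
Real-world throughput will be lower.
"""

GBIT = 1_000_000_000  # bits per second per gigabit port

def transfer_hours(data_gb: float, ports: int) -> float:
    """Hours to move `data_gb` gigabytes over `ports` aggregated links."""
    bits = data_gb * 8 * 1e9
    return bits / (ports * GBIT) / 3600

# e.g. a 300 GB simulation pushed over 2-, 4- and 6-port aggregates
for ports in (2, 4, 6):
    print(f"{ports}-port aggregate: 300 GB in {transfer_hours(300, ports):.2f} h")
```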

Anyway, I’ve got all the switch and networking stuff working (with a whopping 0.5 m long backbone for testing ;)). Now I ‘only’ have to run the cables all the way down from my office to the new render farm location and move the server rack there as well, which is the part of this move I like least. If I haven’t done anything really stupid, this new setup should keep me simulating and rendering happily for quite a while…
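If you want to verify a link end to end after recabling, a tool like iperf does it properly, but a crude throughput check is only a few lines. This is just a sketch to show the idea; the port number and the 1 GiB test size are arbitrary choices:

```python
#!/usr/bin/env python3
"""Crude end-to-end throughput check between two machines.

Minimal sketch: run 'server' on one host and 'client <host>' on the
other. Port and test size are arbitrary; use iperf for real numbers.
"""

import socket
import sys
import time

PORT = 5201          # arbitrary test port
CHUNK = 1 << 20      # 1 MiB send/receive buffer
TOTAL = 1 << 30      # push 1 GiB per run

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:  # client closed the connection
                    break
                received += len(data)
            secs = time.time() - start
            print(f"{received / secs / 1e6 * 8:.0f} Mbit/s from {addr[0]}")

def client(host: str) -> None:
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"{sent / secs / 1e6 * 8:.0f} Mbit/s to {host}")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```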

Ok, that’s it for now. Next post should be about much more fun things!

Cheers,

Erik