Oak Ridge National Laboratory: Blender on a Supercomputer!

Started by eribol, 23 November 2011 - 00:14:04



eribol

Sorry for the English, but please take a look at the pictures and the videos I've added and see the power of Blender. @alquirel in particular, this is news right up your alley :)


Oak Ridge National Laboratory in Tennessee is using Blender on their 300,000 core Jaguar supercomputer for scientific visualisation.

Mike Matheson reports:

At Oak Ridge National Laboratory in Tennessee, the largest computing complex in the world devoted to computational science, Blender is used to support scientific visualization. Currently, three large liquid cooled Cray systems are located at the site in a half-acre computer room. The Department of Energy's Jaguar XT5, the University of Tennessee's Kraken XT5 in support of the National Science Foundation, and the National Oceanic and Atmospheric Administration's Gaea XE6 provide the leadership computational resources.

Currently, Jaguar is being transformed into a new Cray XK6 which will be renamed Titan. When the last upgrades are completed, there will be 299,008 AMD cores and 600 terabytes of memory on Titan along with thousands of next-generation NVIDIA Tesla GPUs. Titan and the other two systems will total more than 500,000 cores and have roughly 1000 terabytes of memory when completed. The disk infrastructure with tens of petabytes of high-bandwidth storage is critical to support the systems.

Blender is well represented by the work done by visualization members at the Oak Ridge Leadership Computing Facility. In fact, at this year's Scientific Discovery through Advanced Computing (SciDAC) Electronic Visualization Night, five of the 12 awards went to ORNL researchers and all were done with Blender. This competition is an annual event with entries representing visualization work from United States national laboratories, universities, and other visualization groups.

Here is just a small subset of examples of Blender being used on scientific datasets at ORNL.

Computational fluid dynamics simulations to aid in the design of fuel-efficient trailers for semi-trucks, rendered with Cycles:


High speed shock wave / boundary layer interactions from computational simulations.
http://vimeo.com/27246962

Magnetic Field Outflows from Active Galactic Nuclei
http://vimeo.com/27247345

Over 500 million polygons show a simulation of processes related to the efficient production of ethanol from cellulose, part of US energy-policy goals:


Blender runs on the supercomputer, which is Linux-based. At least, most of the renderer does – we don't build the player or game engine or features we don't use. We normally don't render on it simply because it is busy and we have our own clusters with thousands of cores available. The way we normally use any of the compute resources is to generate frames in parallel: we assign 1–N frames per node and use hundreds of nodes simultaneously. We tend to keep the maximum time to render a single frame under 1 hour, and usually at around 20 minutes. Since we run this way we don't really exploit any high-speed interconnects like those available on the Cray, which is another reason we don't typically use it (it's too valuable to people who need the interconnect). We use Maya/MentalRay as well, but because of license restrictions we can never match the sheer horsepower that we can employ with Blender. So our unique situation really makes Blender a great tool for us.
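A minimal sketch of that frame-farming pattern: split the animation into contiguous frame ranges and build one Blender command line per node. This is not ORNL's actual tooling; the blend-file name, node count and output path are placeholder assumptions, while the -b, -o, -s, -e and -a flags are standard Blender command-line options. In a real run each command would be submitted to the cluster's batch scheduler, one job per node.

# frame_farm.py (hypothetical name): one contiguous frame range per node.
BLEND_FILE = "scene.blend"   # assumed project file
TOTAL_FRAMES = 1440          # 60 s at 24 fps, as in the example that follows
NODES = 128                  # assumed number of render nodes

def frame_ranges(total, nodes):
    """Yield (start, end) frame ranges, one per node, as evenly as possible."""
    per_node, extra = divmod(total, nodes)
    start = 1
    for i in range(nodes):
        count = per_node + (1 if i < extra else 0)
        if count == 0:
            break
        yield start, start + count - 1
        start += count

def blender_command(start, end):
    # -b: headless, -o: output pattern, -s/-e: frame range, -a: render the animation
    return ["blender", "-b", BLEND_FILE,
            "-o", "//frames/frame_#####",
            "-s", str(start), "-e", str(end), "-a"]

if __name__ == "__main__":
    for start, end in frame_ranges(TOTAL_FRAMES, NODES):
        # Printing keeps the sketch self-contained; a real launcher would hand
        # each command to the scheduler instead.
        print(" ".join(blender_command(start, end)))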

So a very typical render would be to generate 60 seconds of animation at 24 frames per second, for 1440 frames. I'd take 128 nodes of roughly 16 cores each (2048 cores); I'd get back 3 x 128 frames every hour, so in less than 4 hours I'd have 1 minute of HD animation. So it is possible to generate 60-90 second clips in a night without requiring a lot of resources (compared to what we have, anyway). However, from the number of nodes we have you can see that we could render many minutes of video in less than 30 minutes if we needed to. We (visualization/scientists) are the current bottleneck, since we have vast amounts of computing resources. Usually these short clips are sufficient for the needs of most scientists. The most cores that I can think of using simultaneously is probably around 7500-8000; it was a case of rendering the exact same scientific data set with 3 or 4 different cameras.
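As a quick sanity check on those numbers (a back-of-the-envelope calculation, assuming roughly 20 minutes per frame so each node returns about 3 frames per hour):

frames = 60 * 24                 # 60 seconds at 24 fps = 1440 frames
nodes = 128                      # 128 nodes of ~16 cores = 2048 cores
frames_per_node_per_hour = 3     # ~20 minutes per frame
hours = frames / (nodes * frames_per_node_per_hour)
print(hours)                     # 3.75, i.e. "less than 4 hours" for 1 minute of HD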

The most demanding use of Blender we have is for presentations on Everest. Everest is a 35-megapixel powerwall, and we do render a select few animations at this resolution, which is about 17x higher than HD (1920×1080). These frames are absolutely brutal and every rendering artifact will be visible, so it takes a lot of care to create them. They take a long time to render, and this is the sole use case where we have used many nodes to render single frames – although we still usually render just 1 frame per node. If we are doing stereo, double the effort.
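The article doesn't say how those single frames are split across nodes, but one plausible way to do it with stock Blender (a sketch under assumptions, not necessarily ORNL's approach) is border rendering: every node opens the same scene and runs a small Python script that restricts the render to its own horizontal strip, and the strips are composited into the full powerwall frame afterwards. The script and output names below are hypothetical; the bpy properties and operator are standard.

# render_strip.py (hypothetical name): render one horizontal strip of a single frame.
# Run on each node as:  blender -b scene.blend -P render_strip.py -- STRIP STRIPS
import sys
import bpy

# Arguments after "--" are passed through to the script untouched by Blender.
argv = sys.argv[sys.argv.index("--") + 1:]
strip, strips = int(argv[0]), int(argv[1])

render = bpy.context.scene.render
render.use_border = True
render.use_crop_to_border = False          # keep the full-size canvas so strips line up
render.border_min_x = 0.0
render.border_max_x = 1.0
render.border_min_y = strip / strips       # this node's slice of the image, bottom to top
render.border_max_y = (strip + 1) / strips
render.filepath = "//strips/strip_%03d" % strip

# Render the current frame only and write it to disk.
bpy.ops.render.render(write_still=True)

Combining the strips afterwards is a separate compositing step, since each output is a full-resolution image with only its own strip filled in.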

http://www.blendernation.com/2011/11/22/oak-ridge-national-laboratory-blender-on-a-supercomputer/

alquirel

Quote from: eribol
@alquirel in particular, this is news right up your alley :)

Honestly, as I read the article and saw the specs of the system these people have built, my jaw dropped further and further.
And for this system they deemed Blender worthy. That alone says it all.

Advertising on the forum is getting out of hand :)
Maybe we should even create an "advertising officer" rank :P

@eribol, within a few years one of these promotional posts of yours about Blender users will be about me :D

eribol

We're straying from the English of the topic title, with apologies. We need to get our share of the advertising :)

The system really is magnificent, but the statement I liked most is this one:
Quote: We use Maya/MentalRay as well but because of license restrictions we can never match the sheer horsepower that we can employ with Blender.