Measuring stiffness of soils in situ
Published in Computer Methods and Recent Advances in Geomechanics, 2014
Fusao Oka, Akira Murakami, Ryosuke Uzuoka, Sayuri Kimoto
As FSAIPACK is designed for parallel computers, in this subsection we analyze its degree of parallelism and scalability. To this end we use the IBM BlueGene/Q FERMI supercomputer at the CINECA Centre for High Performance Computing. It is a massively parallel machine consisting of 10,240 computing nodes connected through a high-bandwidth/low-latency 5D torus network. Each FERMI node is equipped with a 16-core IBM PowerA2 processor running at 1.6 GHz and 16 Gbytes of RAM with a 42.6 Gbytes/s bandwidth.
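A scalability analysis of this kind usually boils down to computing strong-scaling speedup and parallel efficiency from measured wall-clock times. The following is a minimal sketch of that bookkeeping; the timings are illustrative placeholders, not FSAIPACK measurements on FERMI.

```python
# Strong-scaling metrics: speedup S(p) = p_ref * T(p_ref) / T(p) and
# efficiency E(p) = S(p) / p, relative to the smallest measured core count.
# The timings below are made-up placeholders for illustration only.

def scaling_metrics(times_by_cores):
    """times_by_cores: dict mapping core count -> wall-clock time (seconds)."""
    p_ref = min(times_by_cores)
    t_ref = times_by_cores[p_ref]
    metrics = {}
    for p, t in sorted(times_by_cores.items()):
        speedup = p_ref * t_ref / t
        metrics[p] = (speedup, speedup / p)
    return metrics

if __name__ == "__main__":
    # hypothetical wall-clock times (s) for runs on 16 to 1024 cores
    timings = {16: 120.0, 64: 32.5, 256: 9.8, 1024: 3.6}
    for p, (s, e) in scaling_metrics(timings).items():
        print(f"{p:5d} cores: speedup = {s:7.1f}, efficiency = {e:5.2f}")
```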
Recent EUROfusion Achievements in Support of Computationally Demanding Multiscale Fusion Physics Simulations and Integrated Modeling
Published in Fusion Science and Technology, 2018
I. Voitsekhovitch, R. Hatzky, D. Coster, F. Imbeaux, D. C. McDonald, T. B. Fehér, K. S. Kang, H. Leggate, M. Martone, S. Mochalskyy, X. Sáez, T. Ribeiro, T.-M. Tran, A. Gutierrez-Milla, T. Aniel, D. Figat, L. Fleury, O. Hoenen, J. Hollocombe, D. Kaljun, G. Manduchi, M. Owsiak, V. Pais, B. Palak, M. Plociennik, J. Signoret, C. Vouland, D. Yadykin, F. Robin, F. Iannone, G. Bracco, J. David, A. Maslennikov, J. Noé, E. Rossi, R. Kamendje, S. Heuraux, M. Hölzl, S. D. Pinches, F. Da Silva, D. Tskhakaya
Following the increasing computational needs of first-principles simulations and IM, the growing number of HPC users, success in code optimization, and the ability to scale to a large number of cores, the EU extended its computational capabilities by acquiring a new supercomputer for fusion applications under EUROfusion. This supercomputer, called MARCONI-FUSION, is a dedicated part of a larger supercomputer hosted at the Inter-University Computing Consortium (CINECA) in Bologna under a EUROfusion Project Implementing Agreement with the National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA)/CINECA. It consists of two parts: a conventional-processor part and a many-core-processor part. The first phase of the conventional part [based on Intel Xeon Broadwell processors, for a total peak performance of 1 petaflop (Pflop)] has been operational since mid-2016, and its replacement in a second phase [5 Pflop of Intel Xeon Skylake processors] is now in progress. The accelerated part, in production since the beginning of 2017, consists of 1 Pflop of Intel Knights Landing many-core processors. The purpose of this partition is to offer EUROfusion users, in continuation of the Intel Knights Corner partition of HELIOS, access to compute nodes that are very efficient for highly parallel and well-vectorized codes. The compute nodes are interconnected by an Intel Omni-Path network with a fat-tree topology, whose bandwidth performance has been measured by means of the Intel MPI Benchmark (Fig. 1), and are connected to a high-performance general parallel file system (GPFS) storage system. Thanks to the CINECA Tier-0 development roadmap of the HPC infrastructure for the period 2015 to 2020, the EU fusion community takes advantage of HPC resources based on the latest technology generation of processors.
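The Intel MPI Benchmark cited above measures, among other things, point-to-point bandwidth with a ping-pong pattern between two ranks. A minimal sketch of such a measurement using mpi4py is shown below; it is not the benchmark itself, and the message size and repetition count are arbitrary choices.

```python
# Ping-pong bandwidth sketch in the spirit of the IMB PingPong test,
# written with mpi4py. Run with two ranks, e.g. `mpirun -n 2 python pingpong.py`.
# Message size and repetition count are arbitrary illustration values.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 4 * 1024 * 1024                  # 4 MiB message
reps = 100
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1, tag=0)
        comm.Recv([buf, MPI.BYTE], source=1, tag=1)
    elif rank == 1:
        comm.Recv([buf, MPI.BYTE], source=0, tag=0)
        comm.Send([buf, MPI.BYTE], dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    # 2 * reps messages of nbytes each cross the network during the loop
    gbps = 2 * reps * nbytes / (t1 - t0) / 1e9
    print(f"ping-pong throughput: {gbps:.2f} GB/s")
```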
Global Flux Calculation for IFMIF-DONES Test Cell Using Advanced Variance Reduction Technique
Published in Fusion Science and Technology, 2018
The acceleration effect of the ADVANTG WW mesh has already been discussed in detail in Ref. 5, so it is not presented again in this paper. The aim of this work is to calculate accurate global neutron flux and dose maps for the safety evaluation. The computational expense is not a major concern, as long as the results are sufficiently accurate. To meet the heavy computational demands of this work, the supercomputer MARCONI hosted by CINECA in Italy has been used.
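For context, a weight-window (WW) mesh works by splitting particles whose statistical weight exceeds the upper window bound and applying Russian roulette to those below the lower bound, concentrating sampling in regions that matter for the tally. The sketch below illustrates only that generic check; it is not ADVANTG/MCNP code, and the bounds and weights are invented.

```python
# Schematic weight-window check: split high-weight particles, play Russian
# roulette on low-weight ones. Illustrative only; not the ADVANTG/MCNP logic,
# and the window bounds used in the demo are arbitrary.
import random

def apply_weight_window(weight, w_low, w_up, w_survive=None, max_split=10):
    """Return the list of particle weights that continue after the check."""
    if w_survive is None:
        w_survive = 0.5 * (w_low + w_up)
    if weight > w_up:                       # split into n lighter copies
        n = min(max_split, int(weight / w_up) + 1)
        return [weight / n] * n
    if weight < w_low:                      # Russian roulette
        if random.random() < weight / w_survive:
            return [w_survive]              # survives with increased weight
        return []                           # particle is killed
    return [weight]                         # inside the window: unchanged

if __name__ == "__main__":
    random.seed(0)
    for w in (0.01, 0.2, 5.0):
        print(w, "->", apply_weight_window(w, w_low=0.1, w_up=1.0))
```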
From idealised to predictive models of liquid crystals
Published in Liquid Crystals, 2018
where $\epsilon_{ij}$ is a positive constant, $\epsilon > 0$, for neighbouring sites $i$ and $j$, and zero otherwise. $\beta_{ij}$ is the relative orientation of the two particles (see Figure 3), i.e. $\cos\beta_{ij} = \mathbf{u}_i \cdot \mathbf{u}_j$. This simple model has never been solved exactly, and chances are it will not be in the foreseeable future, as no other three-dimensional lattice model has been [19], but it has certainly been investigated by a number of authors using a great variety of theoretical techniques (see [20–23]). In 1985 we performed MC simulations of a $30\times30\times30$ LL lattice with Periodic Boundary Conditions (PBC) and showed, by an analysis of energy and order parameter histograms, that it has a weak first order transition at $T^{*}_{NI} = 1.1232 \pm 0.0006$ [20]. When we did the simulations, the computational effort was significant, and we could carry them out with Umberto Fabbri, at CINECA, the major Italian computer centre, only by using some weeks of the burn-in time on the newly installed Cray X-MP, the top supercomputer of those days, with its 235 Mflops peak power (!). The transition properties, including the transition temperature, depend on sample size, as shown in Figure 4 for smaller lattices, but the value has essentially been confirmed by other groups [24], also using larger, $60\times60\times60$, lattices [25], and by other recent simulations [22]. Interestingly, the model shows pretransitional effects diverging about one degree below the first order transition temperature, a behaviour entirely consistent with real experiments. Thus, albeit simple, the model possesses the fundamental aspects of nematic ordering. It also features, although with the limitation of having only one elastic constant (as most often assumed in continuum-type theories [16] anyway), the essentials of the director field and of its topological defects [26], allowing the prediction of defects in systems like droplets or thin films with specific boundary conditions.
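A Metropolis Monte Carlo sweep of the LL model is short enough to sketch. The toy version below uses a much smaller lattice, far fewer sweeps and cruder moves than any of the production runs described above, and the lattice size, reduced temperature and step size are arbitrary choices made only to illustrate the model.

```python
# Toy Metropolis Monte Carlo for the Lebwohl-Lasher lattice model: unit vectors
# on a cubic lattice with nearest-neighbour U_ij = -P2(cos beta_ij) coupling
# (epsilon = 1) and periodic boundary conditions. Size, temperature and sweep
# count are illustrative only, far below the 30^3 studies cited in the text.
import numpy as np

rng = np.random.default_rng(0)
L, T_red, sweeps = 8, 1.10, 200            # reduced temperature kT/epsilon

def random_unit_vectors(shape):
    v = rng.normal(size=shape + (3,))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def p2(c):                                 # second Legendre polynomial
    return 1.5 * c * c - 0.5

spins = random_unit_vectors((L, L, L))
neigh = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def site_energy(s, i, j, k, u):
    e = 0.0
    for di, dj, dk in neigh:
        n = s[(i + di) % L, (j + dj) % L, (k + dk) % L]
        e -= p2(np.dot(u, n))              # -eps * P2(cos beta_ij), eps = 1
    return e

for sweep in range(sweeps):
    for _ in range(L ** 3):
        i, j, k = rng.integers(0, L, size=3)
        old = spins[i, j, k]
        trial = old + 0.3 * rng.normal(size=3)   # small random reorientation
        trial /= np.linalg.norm(trial)
        dE = site_energy(spins, i, j, k, trial) - site_energy(spins, i, j, k, old)
        if dE <= 0 or rng.random() < np.exp(-dE / T_red):
            spins[i, j, k] = trial
    if (sweep + 1) % 50 == 0:
        # crude nematic order estimate: largest eigenvalue of the Q tensor
        u = spins.reshape(-1, 3)
        Q = 1.5 * (u.T @ u) / u.shape[0] - 0.5 * np.eye(3)
        print(f"sweep {sweep + 1}: <P2> ~ {np.linalg.eigvalsh(Q)[-1]:.3f}")
```

In a production study one would of course equilibrate much longer, average the order parameter and energy over many configurations, and repeat near the transition to build the histograms mentioned in the text.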