NA-ASC-500-12 Issue 22
The Meisner Minute
As I write this on New Year’s Eve I hope that you and your families have experienced a merry and bright holiday season. A rewarding future awaits you in the new year. As we prepare for it, it is instructive to look back at our successes and see where they are driving us into the future.
Twenty-twelve was the year we finally turned off Red Storm (the machine that saved a company and spawned a successful line of supercomputers) and, with the delivery of Sequoia in June, turned on the world’s fastest computer. Before the year was out, we successfully documented the need for our next Advanced Technology System—Trinity.
As we enter a new era in high performance computing (HPC) constrained by memory size and available bandwidth per compute core, we will find that we will no longer compete for the top spot on the Top 500 list. Memory, and access to it, will drive our acquisition strategy into the future. Consequently, you will see a new Platform Strategy appearing in the new year that calls for investment in two classes of systems: Advanced Technology (AT) and Commodity Technology (CT). Capability class systems such as Purple and Cielo will become things of the past, albeit extremely successful things of the past. Under this new strategy, Trinity will be the first AT system and will likely not be a #1 system as we balance FLOPS against memory.
Over the past year we have begun to prove that partnering with the Office of Science brings benefits beyond what each could achieve independently. What started in previous years as joint planning activities blossomed into joint procurements in 2012 with the professionally-executed PathForward project hosted at Livermore. In addition, Trinity completed a year of joint planning that puts us well on the way toward successful platform procurement with the Advanced Scientific Computing Research (ASCR) program. These successes will carry over into 2013 and will spawn partnerships for DesignForward and Sequoia replacement procurements.
Speaking of the Office of Science partnership, “where is that Exascale Plan,” you ask? I expect a coordinated plan to be out of the department before Inauguration Day.
So, the year ahead looks a bit tumultuous with a historic technology paradigm shift hitting us while we continue to build our partnership. But, your experience and record of success indicate this will be another fun and productive year. Tough problems demand the best from the best. I look forward to working with you through another rewarding year of service to our country contributing to a world without nuclear testing.
Novel Mechanism Investigated for Silent-Error Tolerance in Extreme-Scale Simulations
Sandia researchers in the Computational Systems and Software Environment program are exploring algorithm designs that can intrinsically tolerate “silent errors” such as bit flips to enable more efficient and accurate simulations on unreliable hardware. Resilience in high performance computing has typically focused on addressing directly observable failure modes, such as hung processors. A different, increasingly important type of failure involves deviations from intended behavior that occur silently. Bit flips in hardware are an example, and one of increasing concern in power-constrained architectures as transistors shrink to the limits of reliable digital behavior. With extreme-scale computing, such previously rare errors will become more commonplace. An intrinsic means for dealing with these errors is imperative in order to avoid the waste of time and energy incurred when a fault renders a large amount of computation invalid.
Targeting physics simulations on extreme-scale platforms, the algorithm designs under development seek to map perturbations in the digital computation onto perturbations to which the simulated physical system is resilient. A special case of this concept that is already familiar is ensuring the robustness of simulations to floating-point round off. The current research focuses on the numerically large but isolated perturbations that result from bit flips and demonstrates the potential for a “robust stencil” approach that can discard a single outlier point from a neighborhood in a discretized equation and compute an update of sufficient accuracy based on the remaining points. Work is under way to refine and validate this and similar fault-tolerance approaches, exploiting intrinsic stability properties of physical systems to engineer related stability properties into digital algorithms.
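The robust-stencil idea can be sketched in a few lines. The following is an illustrative caricature under assumed details (a toy 1D field, a 5-point neighborhood, and a hypothetical median-based outlier test), not the researchers' actual algorithm:

```python
import numpy as np

def robust_update(u, i, alpha=0.1):
    """One diffusion-style update at point i using a 5-point neighborhood,
    discarding the single worst outlier before averaging.

    A bit flip appears as an isolated, numerically large perturbation, so
    the neighborhood value farthest from the median is dropped and the
    update is computed from the remaining points.
    """
    nbr = u[i - 2:i + 3].copy()               # 5-point neighborhood
    dev = np.abs(nbr - np.median(nbr))        # distance from the median
    nbr = np.delete(nbr, np.argmax(dev))      # discard the single outlier
    # relax u[i] toward the mean of the surviving neighborhood points
    return (1 - alpha) * u[i] + alpha * nbr.mean()

# smooth field with one corrupted value standing in for a silent bit flip
u = np.linspace(0.0, 1.0, 11)
u[5] = 1.0e12                                 # silent error
clean = robust_update(u, 6)                   # neighborhood contains the bad point
assert 0.0 <= clean <= 1.0                    # update is unaffected by the outlier
```

The corrupted value is simply excluded from the neighborhood, so the error never propagates into the updated point; the discretized physics tolerates the slightly smaller stencil, mirroring the physical system's own resilience to isolated perturbations.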
Los Alamos Completes Level 1 Milestone on Initial Conditions for Boost I
Los Alamos and Lawrence Livermore national laboratories (LANL & LLNL) recently completed the 2012 Level-1 milestone “Initial Conditions for Boost I.” This is the first in a series of Predictive Capability Framework (PCF) peg-posts envisioned to characterize improvements in boost modeling over the next six to eight years. The results can help the community in Directed Stockpile Work (DSW) articulate where future improvements are needed and where current capability is good enough.
This milestone was the culmination of a decade-long focus on advanced material model and code development within the Science and Advanced Simulation and Computing (ASC) programs. It showcased outstanding scientific accomplishments, focusing on work over the last two years to vet, test, validate, and document the current status of our science-based predictive capabilities. The milestone demonstrates reduced reliance on integral calibration and provides a framework for developing and maintaining baselines.
The documentation to meet the milestone presented the current recommended physics models and settings that are best for running performance calculations. “These models should provide an excellent yardstick for the state of performance capability for our simulation tools,” says Jim Cooley, LANL L1 team leader. The small-scale experimental suites and science base for the recommended models and settings are described in the documentation. The L1 closeout documents provide a complete picture of the current state of initial-conditions modeling and predictive capability at LANL. Figure 1 shows burn fronts extracted from proton radiography (dots) and the detonation shock dynamics (DSD) prediction (solid). Figure 2 shows grid resolution studies with the DSD model.
Early Science Runs Prepare Sequoia for National Security Missions
Sequoia, a world-class IBM BlueGene/Q computer sited at Lawrence Livermore (LLNL) for NNSA, is running a broad range of science problems to shake out the machine and fully develop the capabilities the system will require to fulfill its national security missions, starting early next year.
Researchers from NNSA's three nuclear weapons laboratories (Lawrence Livermore, Los Alamos, and Sandia) are testing Sequoia's power and versatility by running unclassified science codes relevant to NNSA missions. Science being explored by Lawrence Livermore researchers includes high energy density plasmas and the electronic structure of heavy metals.
The early science runs are part of the "shakeout" of the 20-petaFLOP/s system, which will transition in March 2013 to classified work for the ASC program, a cornerstone of the effort to ensure the safety, security, and effectiveness of the nation's nuclear deterrent without underground testing (stockpile stewardship). Sequoia's mammoth computational power will be used to assess physical weapons systems and provide a more accurate atomic-level understanding of the behavior of materials in the extreme conditions present in a nuclear weapon.
"The early science runs are critical to the success of classified work to which the machine will be dedicated early next year," said Michel McCoy, head of LLNL's ASC program. "These codes represent the first big test of the machine and allow us to explore Sequoia's range and parameters. We accomplish important science for NNSA and in the process get a sense of what Sequoia is capable of doing."
Early unclassified work on the machine allows Livermore researchers and IBM computer scientists to work out the bugs and optimize the system before it transitions to classified work and the limited access and security that entails. Los Alamos researchers will run asteroid and turbulence simulations, and Sandia scientists will explore the properties of tantalum on Sequoia.
Initial efforts by Livermore scientists include using the Qbox first-principles molecular dynamics code to examine the electronic structure of heavy metals, research of interest to stockpile stewardship. Qbox was developed at LLNL to perform large-scale simulations of materials directly from first principles, allowing scientists to predict the properties of complex systems without first having to carry out experiments.
In addition, LLNL scientists will investigate burn in doped plasmas, exploiting the full capability of Sequoia and the code developed for this purpose. Following a benchmark exploration of the density and temperature dependence of burn in undoped hydrogen plasma, researchers will begin a series of extreme-scale simulations of burn in the presence of small fractions of a percent of high-Z dopants. These studies will be used to deepen scientists' understanding of the effect of dopants on burn, physics that is vital to capsule design for the National Ignition Facility, LLNL's laser fusion experiment.
Sequoia also has demonstrated its amazing scalability with a 3D simulation of the human heart's electrophysiology. Using a code created in a partnership between LLNL and IBM scientists, called Cardioid, researchers are modeling the electrical signals moving throughout the heart. Cardioid has the potential to be used to test drugs and medical devices, paving the way for tests on humans.
Sequoia's power enables suites of highly resolved uncertainty quantification (UQ) calculations. UQ is the quantitative characterization and reduction of uncertainty in computer applications through the running of very large suites of calculations that characterize the effects of minor differences in the systems. Sources of uncertainty are rife in the natural sciences and engineering fields. UQ uses statistical methods to determine likely outcomes.
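The UQ workflow described above can be caricatured with a toy model. The model and parameter distributions below are invented for illustration; production UQ suites run full physics codes over thousands of nodes:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(k, c):
    """A stand-in 'simulation': peak response of a toy damped oscillator.
    Real ASC codes are vastly more complex; this only illustrates the
    ensemble-of-calculations workflow."""
    return 1.0 / (2.0 * c * np.sqrt(k))

# sample minor differences in the inputs (stiffness k, damping c)
k = rng.normal(100.0, 5.0, size=10_000)
c = rng.normal(0.05, 0.005, size=10_000)

out = model(k, c)                        # one 'calculation' per sample
lo, hi = np.percentile(out, [2.5, 97.5])
print(f"mean response {out.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The product is not a single answer but a statistical characterization of likely outcomes, which is exactly what very large suites of highly resolved calculations on a machine like Sequoia make practical.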
Located in Livermore's TSF computing facility, Sequoia was ranked No. 1 on the industry-standard Top500 list of the world's fastest supercomputers in June of this year. The system also was No. 1 on the Green 500, as the world's most energy efficient computer, and No. 1 on the Graph 500, a measure of the ability to solve big data problems—finding the proverbial needle in the haystack.
Paving the Manycore Road for ASC Applications and Libraries
Future supercomputer architectures will consist of networks of computing nodes with manycore accelerators, such as Oak Ridge National Laboratory’s Titan system (currently number 1 on the Top 500) with its NVIDIA K20 GPU boards. ASC applications, libraries, and algorithms will have to be ported from their current distributed-memory-only (MPI-only) parallelism to hybrid distributed-plus-manycore parallelism (MPI+X). Manycore parallel scaling and performance face the challenge of many cores sharing the hardware through which they access shared memory. Vendors of manycore architectures are implementing a variety of proprietary strategies for this hardware, which impose different performance constraints on code. For example, non-uniform memory access (NUMA) architectures incur heavy performance penalties when cores access memory in the wrong NUMA region, and NVIDIA GPUs incur heavy performance penalties when cores access memory in non-coalesced patterns. Thus, ASC applications, libraries, and algorithms must take diverse manycore memory architectures into account in order to scale and perform well on modern and future supercomputers.
Sandia’s ASC CSSE project is developing the KokkosArray performance-portable hybrid-parallel programming model and library for distributed-plus-manycore architectures. The KokkosArray application programmer interface (API) portably abstracts and manages both manycore parallelism and the associated performance-critical memory access patterns. This programming model and library will enable applications and libraries to be ported one time to KokkosArray, and achieve the expected manycore performance on diverse manycore architectures with a single version of the code. The costly alternative is to develop and maintain multiple architecture-specific versions of these ASC codes.
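The port-once idea (algorithm code written against a layout-abstracting array, with the memory order chosen per target architecture) can be caricatured as follows. This Python sketch mimics the concept only; it is not the KokkosArray API:

```python
import numpy as np

class View2D:
    """A toy 'layout-polymorphic view': identical logical 2D indexing,
    with the underlying memory order chosen per target.

    'left' is column-major storage (the coalescing-friendly choice for a
    GPU-style thread-per-row mapping); 'right' is row-major storage (the
    cache-friendly choice for CPU-style inner loops)."""

    def __init__(self, m, n, layout="left"):
        order = "F" if layout == "left" else "C"
        self.data = np.zeros((m, n), order=order)
        self.layout = layout

    def __getitem__(self, ij):
        return self.data[ij]

    def __setitem__(self, ij, v):
        self.data[ij] = v

# the algorithm code is written once and is identical for either layout:
def fill_and_sum(view, m, n):
    for i in range(m):
        for j in range(n):
            view[i, j] = i * n + j
    return view.data.sum()

cpu_view = View2D(4, 3, layout="right")   # row-major for the CPU target
gpu_view = View2D(4, 3, layout="left")    # column-major for the GPU target
assert fill_and_sum(cpu_view, 4, 3) == fill_and_sum(gpu_view, 4, 3)
```

The single algorithm produces the same result against either memory order; only the view's internal index mapping changes, which is the essence of managing performance-critical access patterns behind a portable array abstraction.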
The KokkosArray library is distinct from numerous industry-driven manycore programming model efforts such as OpenMP, OpenACC, OpenCL, CUDA, Thrust, CUSP, TBB, and C++AMP in that (1) it specifically manages architecture-portable memory access patterns for the customary science and engineering data structures of multidimensional arrays, and (2) it imposes no programming language extensions upon users’ applications and libraries. Performance-portability and ease-of-use have been demonstrated on Intel and AMD NUMA architectures and NVIDIA GPU architectures via proxy-applications for explicit dynamics finite elements, linear and nonlinear thermal conduction finite elements, and molecular dynamics. Preliminary results from porting to the new Intel MIC (a.k.a. Xeon Phi) manycore architecture indicate that performance-portability, ease-of-use, and instruction vectorization will be achieved.
Sandia’s Curie (Cray XK6) hybrid-parallel testbed was used to generate strong scaling results for the explicit dynamics proxy-application. This proxy-application has two computational phases: (1) compute finite element stress, strain, and internal forces, and (2) sum forces to finite element nodes and compute kinetics. The CPU configuration has two MPI processes on each node (one per NUMA region) and one thread per CPU core. The GPU configuration has one MPI process on each node and utilizes the node's one NVIDIA Tesla GPU. In both cases the identical, unmodified source code was compiled and run on the manycore hardware.
In FY13, the KokkosArray library will be evolved from its proof-of-concept state to a production-ready version-1 suitable for early evaluation by ASC application and library projects. Through such an evaluation, ASC projects can assess their requirements for porting to modern manycore architectures and future supercomputers. Such evaluations can trade-off a port-once strategy using KokkosArray versus a port-many-times strategy for each vendor-specific manycore architecture and associated programming model.
Bug Repellent for Supercomputers Proves Effective
Lawrence Livermore researchers have used the Stack Trace Analysis Tool (STAT), a highly scalable, lightweight tool, to debug a program running more than one million Message Passing Interface (MPI) processes on the ASC program’s Sequoia supercomputer.
This debugging run marks a significant milestone in LLNL's multi-year collaboration with the University of Wisconsin–Madison and the University of New Mexico to ensure supercomputers run more efficiently.
Playing a significant role in scaling up the Sequoia supercomputer, STAT, a 2011 R&D 100 Award winner, has helped both early access users and system integrators quickly isolate a wide range of errors, including particularly perplexing issues that only manifested at extremely large scales up to 1,179,648 compute cores. During the Sequoia scale-up, bugs in applications as well as defects in system software and hardware have manifested themselves as failures in applications. It is important to quickly diagnose errors so they can be reported to experts who can analyze them in detail and ultimately solve the problem.
As LLNL works to move the Sequoia system into production, computer scientists will migrate applications that have been running on earlier systems to this newer architecture. This is a period of intense activity for LLNL's application teams as they gain experience with the new hardware and software environment.
"Having a highly effective debugging tool that scales to the full system is vital to the installation and acceptance process for Sequoia. It is critical that our development teams have a comprehensive parallel debugging tool set as they iron out the inevitable issues that come up with running on a new system like Sequoia," said Kim Cupps, leader of the Livermore Computing Division at LLNL.
The STAT team is actively pursuing further optimization of STAT technologies and is exploring commercialization strategies. More information about STAT, including a link to the source code, is available at http://www.paradyn.org/STAT/STAT.html.
Enhancing Circuit Analysis Through Data Mining
Weapons analysts using Xyce, WAVE, and Themis can now study more data, more efficiently. Close integration of these three programs enables faster analysis of results by eliminating custom translation steps while providing new techniques to understand the data. Further, the deeper analysis of Xyce results within Themis and WAVE helps users to better understand the variability of their systems and identify correlations and trends in design performance.
In electrical circuit design and analysis, uncertainty quantification (UQ) studies are performed in which critical circuit parameters are altered to better understand the design. Such UQ studies typically focus on circuit output characteristics that a designer would deem most relevant. This is significant since a given electrical system may have only a handful of outputs that are actively monitored or have design requirements, while there could be thousands of internal circuit nodes whose sound functionality contributes to the operation of the whole circuit. Since a given circuit simulation will generate data for an entire circuit, and those data may hold significant trends, it follows that the ability to efficiently analyze all of the simulation data can be extremely useful.
An example from a W76-1 circuit study demonstrates this. Figure 1 illustrates the use of Themis to produce a dendrogram for an output waveform calculated for all of the outcomes from the UQ study. The dendrogram shows eight families of curves and the population of each family (shown in the triangle before the curve trace). To improve the performance of this calculation, the original waveforms were reduced to an encoded format. A sample of the encoded waveforms is on the right. Of importance to the analyst in this case is the spectrum of shapes in the dendrogram and the relative populations. Figure 2 shows a WAVE visualization that colored the output waveforms by one of the UQ study parameters. While it appears that this parameter variation separates the output into two populations of behavior, the colorization shows that there is some mixing between the groups as the parameter is varied. Figure 3 demonstrates how a thermal design parameter divides the voltages into isolated populations. In this case, Canonical Correlation Analysis (CCA) in Themis identifies the most significant UQ study parameters.
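The family-finding step can be illustrated with standard hierarchical clustering, the class of technique that underlies a dendrogram. A synthetic waveform ensemble stands in for simulator output here; this is not Themis code, and the two-family structure is fabricated for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

# synthetic 'UQ study' output: two families of decaying output waveforms
fast = np.array([np.exp(-8.0 * t) + 0.01 * rng.normal(size=t.size)
                 for _ in range(20)])
slow = np.array([np.exp(-2.0 * t) + 0.01 * rng.normal(size=t.size)
                 for _ in range(20)])
waves = np.vstack([fast, slow])

# build the hierarchical clustering (the structure a dendrogram draws),
# then cut it into two families
Z = linkage(waves, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the hierarchy at different depths yields different numbers of families and their populations, which is the spectrum of curve shapes and relative populations an analyst reads off a dendrogram.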
Quantifying Safety Margins in Abnormal Thermal Environments
Sandia analysts are using advanced Verification and Validation (V&V) and UQ techniques to quantify safety margins of the W87 warhead in abnormal thermal environments. Solution verification provides estimates of uncertainties due to numerical errors and mesh discretization, while targeted validation studies are used to confirm or reject assumptions and demonstrate the regimes in which the full system model is appropriate.
Verification required several hundred evaluations of a 1.3-million-element model on ASC capacity clusters, while mesh resolution studies were performed on models of up to 83 million elements on ASC Purple. Similarly, the validation studies necessitated over 1,000 simulations of a reduced part of the model to quantify the uncertainty in the simulation and compare it probabilistically to experiments. Together, these studies, totaling over 15,000 model evaluations, enabled estimation of the weaklink/stronglink thermal race margin for an engulfing hydrocarbon fuel fire and demonstrated satisfaction of the Walske criteria with a margin five times greater than the numerical uncertainties.
Understanding Differences Between Epistemic and Aleatory Uncertainty
Computational simulation helps estimate the potential frequency of failure to meet safety requirements for systems or subsystems under abnormal environments, such as those environments associated with pool fires or lightning strikes. The methodology Sandia analysts use for this analysis is known as the Probability of Frequency (PoF), where “frequency” refers to a frequency of failure or loss of safety, and “probability” refers to the uncertainty in estimating this frequency.
Application of the PoF approach requires an understanding of the differences between epistemic and aleatory uncertainty. Epistemic uncertainty is uncertainty associated with a lack of knowledge; it can be reduced through more knowledge, such as additional samples or more refined experimentation. Aleatory uncertainty, on the other hand, is associated with natural randomness, such as the natural variability of material properties or manufacturing variability; it can be better characterized through additional experimentation but cannot be reduced. For PoF, aleatory uncertainty leads to an estimate of the frequency of failure to meet safety requirements, whereas epistemic uncertainty relates to the uncertainty in estimating that frequency.
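The distinction maps onto a familiar two-loop sampling structure, sketched here with invented numbers. This is an illustration of the PoF pattern only, not any Sandia analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_frequency(strength_mean, n_units=5_000):
    """Inner (ALEATORY) loop: unit-to-unit randomness in strength produces
    a frequency of failing to carry a fixed load. All numbers hypothetical."""
    load = 10.0                                               # fixed demand
    strengths = rng.normal(strength_mean, 1.0, size=n_units)  # natural variability
    return np.mean(strengths < load)                          # frequency of failure

# Outer (EPISTEMIC) loop: we only know the mean strength to within a range;
# more experiments would narrow this interval (reducible uncertainty).
means = rng.uniform(11.0, 13.0, size=200)
freqs = np.array([failure_frequency(m) for m in means])

# The result is not one number but a distribution over failure frequencies:
# aleatory sampling gives each frequency, epistemic sampling gives the spread.
print(f"median frequency {np.median(freqs):.4f}, "
      f"95th-percentile frequency {np.percentile(freqs, 95):.4f}")
```

Reporting a percentile of the frequency distribution, rather than a single frequency, is what expresses the "probability" half of probability of frequency.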
A short course recently developed at Sandia gives analysts supporting safety analysis associated with abnormal environments a stronger understanding of epistemic and aleatory uncertainty. The course addresses “what and why,” “how,” and “faster and easier.” The “what and why” focuses on the differences between epistemic and aleatory uncertainty, how to choose between them, and the relevance of this separation to Sandia’s NNSA mission. “How” focuses on the procedures used to propagate these types of uncertainties through a computational model using sampling-based techniques, turning the post analysis of aleatory and epistemic results into statements about the frequency of failure, and the subjective probability or confidence in the resulting estimates of this frequency. “Faster” focuses on improving the computational efficiency of the PoF analysis by using response surfaces to represent the behavior of the computational simulation over the required model parameter space. “Easier” summarizes efforts underway at Sandia to develop computational tools to reduce the workload of the analyst in performing such analysis. This course is one in a series of courses developed to provide Sandia’s computational analysts with the background to address issues associated with verification, validation, and the use of computational tools to evaluate margins and uncertainties.
Lawrence Livermore Scientist Presents Heart Simulation to Partnering for Cures
Fred Streitz, a physicist and computational scientist at Lawrence Livermore who has pioneered advanced supercomputing techniques for modeling and simulating complex systems and processes, recently presented Livermore’s groundbreaking supercomputer simulation capability to realistically and rapidly model a beating human heart to better understand fatal disease. Developed in collaboration with IBM on one of the world's fastest supercomputers, the ASC Program’s Sequoia, such powerful simulations could have considerable impact on the healthcare industry and further advance medical science. The talk was one of 30 transformative, cross-sector collaborations featured in the Innovator Presentation track of the meeting Nov. 29, 2012, in New York City.
Partnering for Cures brings together 800 leaders from all sectors of the medical research enterprise to speed up the time it takes to turn promising scientific discoveries into treatments. It is convened by FasterCures, a center of the Milken Institute.
For more information about the heart simulation, please visit https://str.llnl.gov/Sep12/streitz.html.
LANL’s LAP/B61 Team Recognized for Outstanding Performance
The Lagrangian Application Project/B61 Baseline team is being recognized for substantial advances made to Lagrangian codes resulting in enhanced fidelity, predictive capability, robustness, and speed, associated with simulations for the B61. The project would not have succeeded without outstanding cross-organizational collaborative work at Los Alamos, and also the participation of technical staff from Sandia National Laboratories and the Defense Threat Reduction Agency.
The multidisciplinary team of weapons analysts, code developers, and other physics and numerical experts greatly improved a B61 primary baseline using an ASC Lagrangian code to establish a predictive representation of an important stockpile system. A baseline is a collection of simulation models of relevant nuclear tests and aboveground hydrodynamic tests that establishes and manages — using version control — our best current predictive representation of a particular weapon system as validated across the collection of tests.
Livermore Highlights from SC12
The world’s supercomputing experts gathered to discuss their work, exchange ideas, see the latest equipment, and glimpse the future of computing, technology, and computational research. The annual Supercomputing Conference (SC12) celebrated its 24th year Nov. 10–16 at the largest convention space in SC conference history—the Salt Palace Convention Center in Salt Lake City, Utah.
Although Sequoia dropped from the No. 1 position to No. 2 on the industry-standard Top500 list of the world's most powerful supercomputers, it retained its No. 1 Graph500 ranking, showcasing its ability to conduct analytic calculations, or find the proverbial needle in the haystack, by traversing 15,363 giga-edges per second on a scale-40 graph (a graph with 2^40 vertices). The system's capability also played a role in one of the conference's most watched competitions—the Gordon Bell Prize. Two of the five finalist submissions used Sequoia: a simulation of the human heart's electrophysiology using a code called Cardioid, developed by an LLNL/IBM team, and a cosmology simulation led by Argonne National Laboratory.
Sequoia also was selected by readers of HPC Wire, the high performance computing news service, for a 2012 Reader’s Choice Award. Michel McCoy, head of LLNL's ASC program, received the award from Tom Tabor, publisher of HPC Wire.
Partnership with IBM to Build Supercomputers Celebrated
Lawrence Livermore National Laboratory (LLNL) and NNSA's nearly two-decade partnership with IBM, which has produced three top-ranked supercomputers and award-winning computational science, was celebrated in a November ceremony at LLNL.
"We're celebrating a decade and a half partnership that has become a model for a research and development relationship," said Bruce Goodwin, principal associate director for WCI and host of the ceremony held in the TSF's Armadillo theatre. "It's this strong relationship that made the Accelerated Strategic Computing Initiative (ASCI) work."
Calling computing the "intellectual electricity" of the Laboratory, Director Parney Albright noted that "we embed computation into the DNA of LLNL organizations" and as a result "it's hard to find a project at the Lab that doesn't involve computing."
High performance computing (HPC) will remain critical to the Lab's ability to fulfill its stockpile stewardship mission well into the future, Albright said. "We really need to get to exascale to do the things we need to do. The more than a decade long unbroken partnership is a cornerstone of our success here at the Laboratory," he said.
The strength of the relationship allowed the development of Deep Computing Solutions, a partnership within LLNL's High Performance Computing Innovation Center (HPCIC), which aims to "make Vulcan and the Laboratory's HPC ecosystem available to U.S. industry to advance the nation's competitiveness," Albright said.
Dimitri Kusnezov, NNSA chief scientist and director of the Office of Science and Policy, said the strength of the relationship stems in part from the three partners recognizing that "our priorities are not the same." He continued, "There are many complex faces of this partnership. It can be very ugly at times. At other times there is glory to be celebrated. We recognize the differences in our needs. It is the communication between all of us that has made this work. If we did not build flexibility rooted in trust, we would fail."
The impact of the partnership on HPC has helped to create an "insatiable" appetite for more powerful computing and a need for next-generation exascale systems, Kusnezov said. "Computing simply underpins everything we do."
John Kelly III, director of IBM Research, reminded the audience that the IBM relationship, in fact, went back to 1954 and the purchase of IBM 701 machines. "This has been a very longstanding relationship."
The HPC advancements under the ASCI Initiative, which began in the mid-1990s, have contributed to IBM's commercial success, Kelly said. "In the end we're a business. The technologies we've developed have broader application and have made it into our commercial systems." The development of the BlueGene line of supercomputers "put us on an entirely different trajectory," he said. "It took risk. Failure was not an option. What galvanized us was your mission and your success in that mission," Kelly said. "It's what inspires us moving forward."
Calling the effort to get to exascale computing "another moonshot," Kelly said, "We will get there, but we will need your help. We could try to develop this on our own, but we would probably miss the mark."
Partnership milestones were highlighted in a 10-minute video retrospective tracing the history of the ASCI/Advanced Simulation and Computing Program, which was dedicated to the memory of Dave Nowak, the first LLNL ASCI executive. The partnership produced three HPC systems ranked No. 1 on the Top500 list of the world's most powerful computers: ASCI White, BlueGene/L, and Sequoia. Teams of Laboratory and IBM computational scientists have also garnered five Gordon Bell Prizes for computational advances that have enabled scientific breakthroughs.
Goodwin closed the ceremony with a view to the future. "We're looking forward to this model relationship taking us to exascale."
Download the attached two-page brochure that was given out at the event.
LANL ASC Program Teams Garner DP Awards
Dr. Don Cook, NNSA’s Deputy Administrator for Defense Programs (DP), was at Los Alamos on September 12 to present the 2011 Defense Programs Awards of Excellence. “After seeing some of the Lab’s technical achievements today,” said Cook to the audience of award winners, “my already high level of confidence in our ability to deliver on the stockpile stewardship program is even higher.” Three projects from the ASC Program received awards for significant contributions to the Stockpile Stewardship Program.
Sequoia Supercomputer Earns Popular Mechanics 2012 Breakthrough Award
The ASC program’s Sequoia supercomputer, an IBM BlueGene/Q machine, received a 2012 Breakthrough Award from Popular Mechanics magazine. Bruce Goodwin, principal associate director for Weapons and Complex Integration (WCI), Michel McCoy, head of LLNL's ASC program, and Michael Rosenfield of IBM accepted the award at a ceremony in New York City Oct. 4, 2012.
The annual Popular Mechanics Breakthrough Awards recognize the top 10 "world-changing" innovations in fields ranging from computing and engineering to medicine, space exploration, and automotive design. Breakthrough Awards are given in two categories: innovators, whose inventions will make the world smarter, safer, and more efficient in the years to come, and products, which are setting benchmarks in design and engineering.
Goodwin, who participated in a panel discussion about technological innovation prior to the award ceremony, said Sequoia's power "enables us to think about the engineering of very complicated things and bring them to market faster by dramatically reducing the prototyping process.
"If we can do testing through simulation, we can answer many of the 'what ifs' using uncertainty quantification before we even build a prototype," Goodwin said. "When building airplanes, refrigerators, and other products, if you can cut years off of the development cycle, you're going to have a significant advantage over your competition."
Uncertainty quantification, or "UQ," is the quantitative characterization and reduction of uncertainty in computational applications, achieved by running very large suites of calculations to characterize the effects of small variations in the systems being modeled. Sources of uncertainty are rife in the natural sciences and engineering; UQ uses statistical methods to determine likely outcomes.
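The idea behind UQ can be illustrated with a minimal Monte Carlo sketch: perturb the inputs of a model many times, run the calculation for each perturbation, and summarize the spread in the results. The toy `model` function and its parameter names below are hypothetical stand-ins for a full physics simulation, not anything from the ASC codes.

```python
import random
import statistics

def model(thickness, density):
    """Toy surrogate for a simulation: a response depending on two inputs."""
    return thickness * density ** 2

def monte_carlo_uq(nominal, spread, n_samples=10_000, seed=42):
    """Run a suite of calculations with small input perturbations and
    report the mean response and its standard deviation."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Draw each input from a normal distribution around its nominal value.
        thickness = rng.gauss(nominal["thickness"], spread["thickness"])
        density = rng.gauss(nominal["density"], spread["density"])
        outputs.append(model(thickness, density))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, stdev = monte_carlo_uq(
    nominal={"thickness": 1.0, "density": 2.0},
    spread={"thickness": 0.01, "density": 0.02},
)
print(f"expected response: {mean:.3f} +/- {stdev:.3f}")
```

The spread of the outputs answers the "what if" question directly: if the inputs vary by this much in manufacturing, the quantity of interest varies by the reported standard deviation, without ever building a prototype.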
"We are excited to recognize this year's list of incredible honorees for their role in shaping the future," said James Meigs, editor-in-chief of Popular Mechanics. "From a featherweight metal to the world's fastest and most electrically efficient supercomputer, this year's winners embody the creative spirit that the Breakthrough Awards were founded upon."
The 'breakthrough' technologies were featured in the magazine's November edition.
Mikhail Shashkov Honored with LANL Fellow Appointment
Dr. Mikhail Shashkov, a world-recognized leader in and developer of modern Arbitrary Lagrangian-Eulerian (ALE) methods, has been appointed a Laboratory Fellow, making him a distinguished member of the scientific staff at LANL. ALE methods for high-speed, multi-material flows are at the heart of the ASC Program's NNSA and LANL weapons calculations. Shashkov's research and methods are used extensively at top research institutions around the world.
Dr. Shashkov began his career in the US when he joined the Theoretical Division at Los Alamos. He has been in the Computational Physics (XCP) Division since it formed in 2010, where he has played a key role in facilitating advances in the Lagrangian code base. According to XCP Division Leader Mark Chadwick, “Misha has an extraordinary career in numerical methods and hydrodynamics.”
Technical Communicators in LANL Weapons Program Recognized
Technical communicators in the Weapons Program at LANL have been recognized for their work in producing communication products such as magazines and brochures. Managing editor Clay Dillingham works with writer-editors, graphic designers, illustrators, and photographers to produce the National Security Sciences (NSS) magazine. The magazine highlights work in the weapons and security programs at Los Alamos. The DP award recognized the team for effectively communicating the value and importance of weapons program work to a largely nonscientific, nontechnical audience. Current and archived issues of NSS are available at http://www.lanl.gov/orgs/padwp/.
Writer-editor Denise Sessions and graphic designer Jim Cruz were recognized by the Society for Technical Communication (STC) for their brochure “Computational Physics at Los Alamos National Laboratory.” After winning a regional competition, the brochure went on to win an award of excellence in the 2011–2012 STC International Summit Awards. One of the judges described the recruiting brochure: “The choices of artwork and typography, combined with writing and layout, demonstrate thoughtful professionalism that strives to attract and inform an expected audience of career‐focused individuals. Moreover, the entry reflects the passion and concern of conscientious professionals who deliver the strongest message to their audience.” Another comment shows that the brochure’s purpose is clear: “We’re technical, but we’re also edgy. Computational Physics at LANL in Northern New Mexico is a good place to work.”
ASC Relevant Research
Los Alamos National Laboratory
Citations for Publications (previously not listed)
Sandia National Laboratories
Citations for Publications
Key: DOI = Digital Object Identifier
ASC eNews Quarterly Newsletter - December 2012