NA-ASC-500-12 Issue 20
The Meisner Minute
Guest editorial by Michel McCoy, ASC Program Director,
Lawrence Livermore National Laboratory
Prescience, Persistence and Mission—
the Enduring Impact of NNSA Leadership on HPC
As NNSA works with the Office of Science to build a national consensus supporting substantial investments to develop functional, low-power exascale systems, and as ASC is broadly questioned about the validity of our NNSA-related requirements and strategies, it is good to take stock of what we have achieved this past decade in HPC technology development. In short, what have we nurtured, and what has flourished?
ASC is often asked why we want to engage in and support the development of technology. Isn't that expensive? Why do we want to lead? Why don't we let other government agencies, or even other nations such as Japan or China, do the development? Why don't we simply buy existing technology, execute our missions, and be done with it? A former high-ranking Congressman asked us just this question in the context of China.
The answer is that NNSA laboratories have led because it is the best and perhaps the only way to influence technology providers to build machines that actually meet our national security requirements. Indeed, ASC’s ability to deliver to the Stockpile Stewardship Program was historically enabled by its leadership position in HPC. It is also remarkable that these innovations had broad applicability and enduring impact worldwide, providing clear advantages for American influence and interests.
Instead of embarking here on a tedious retrospective of all past ASCI and ASC investment strategies, successes, and failures, consider the key investment categories that have enjoyed the greatest impact: platform procurements, frequently leveraging PathForward investments, and ASC-supported but lab-initiated investments that developed into mainline ASC tools. The impact of these investments has been phenomenal. Evidence from just a few examples makes the case.
Now that is impact.
It is true that NNSA systems are often not at the top of the lists; but this misses the point. Consider:
- The number of systems on the Top 500 whose technologies have benefited from ASC investments.
- The increasingly deep adoption of these technologies in entry-level HPC systems, as businesses and academic labs that can now afford their own machines accelerate innovation and inspire the next generation of HPC users.
- The vendors that might not even exist today without those investments.
- The architectures, born from ASC visions up to a decade ago, that are influencing the thinking in the proposed Exascale Initiative.
At the heart of the issue are the prescience of ASC investments and the flexibility that created the breathing room to imagine better solutions. NNSA laboratories have shaped the HPC ecosystem by having the foresight and flexibility to make the right investments at the right time, initiating enduring change for national security and for the country. Today, the seasoned HPC brain trust embodied at the national labs is poised to make the leap to exascale.
In short, mission-driven focus was the prerequisite for all that followed.
Red Storm Stands Down
By Neal Singer, Sandia Lab News Contributor (reprinted with permission)
A quietly exuberant celebration took place in Sandia’s Computer Science Research Institute on May 15 to mark finis to Red Storm, the Sandia-designed and Cray Inc.-built supercomputer that became one of the most influential machines of its era, with 124 descendants at 70 sites around the world.
Cray Inc. President and Chief Executive Officer Peter Ungaro did not quibble in his praise. "Everything we have done at Cray was spawned by this project." He later told the assembled group, "Without Red Storm I wouldn't be here in front of you today. Virtually everything we do at Cray, each of our three business units, comes from Red Storm. It spawned a company around it, a historic company struggling as to where we would go next. Literally, this program saved Cray."
The supercomputer design and its descendants have logged more than a billion dollars in sales for the company, he said.
Among the machine's advances was its use of off-the-shelf parts, which made it cheaper to build, repair, and upgrade. It was air-cooled instead of water-cooled, so replacement parts and upgrades could be changed out while the machine was running. The only custom component was the interconnect chip, which made it possible to pass information more directly from processor to processor while applications were running. High memory bandwidth kept the processors from being starved for data. And its architecture was intended (and proved) to be upgradeable, going from a theoretical peak of 41.47 teraflops in 2005 to 124.42 teraflops in 2006 and 284.16 teraflops in 2008, in part because the machine could accommodate single-, dual-, and quad-core processors, which eventually numbered 12,920.
Among the machine’s technical achievements was the operation known as Burnt Frost, which in 2008 programmed a rocket to shoot down an errant satellite traveling at 17,000 miles per hour, 153 miles above the earth.
For months, Red Storm ran calculations to fully examine a large number of shoot-down scenarios, until Sandians were ready to brief then-President George W. Bush on his options.
The result: After the successful take-down with no collateral damage, a military commander exulted, “We can hit a spot on a bullet with a bullet.”
Red Storm’s role, classified for several years, was made known when DoD eventually released the information. A Sandia video, using DoD images that showed the launch of the intercept missile and the impact, opened with the sentence: “This IS rocket science!”
Other operations of Red Storm, still classified, were described as having "a dramatic effect on the history of the country." The machine's boilerplate description says it was used to solve "pressing national security problems in areas such as cyber defense, vulnerability assessments, informatics (network discovery), space systems threats, and image processing." One unclassified use for the machine and its more powerful descendant Jaguar at Oak Ridge was to produce high-fidelity climate models that revealed, for the first time in simulations, vortices (swirls of water) in the Indian Ocean. That work was led by Sandia technical staff member Mark Taylor.
“It’s over, but its influence is not,” said Bill Camp, the retired Sandia director who worked tirelessly to obtain support for the design first proposed by Sandia technical wizard Jim Tomkins (retired).
Simulations Identify Requirements for High Intensity Laser Lab
A three-day run on the ASC Cielo supercomputer (see Figure 1) identified requirements for a future Los Alamos (LANL) signature facility and enabled discovery science in laser-ion acceleration, overturning decades of conventional wisdom. VPIC code simulations helped identify the facility functional requirements for the High Intensity Laser Laboratory (HILL) signature facility proposed at LANL. Identified in the Tri-Lab Facilities Roadmap, the HILL facility is synergistic with the Matter-Radiation Interactions in Extremes (MaRIE) signature facility being proposed at LANL.
Originally envisioned as part of the full MaRIE project, HILL was submitted as a separate standalone facility. The program development for HILL would not have been possible without capability computing.
The project led to discovery science in which laser-generated ion beams were found to possess “lobes” whose angle depends on laser focus and intensity. As Figure 2 illustrates, VPIC code simulations of laser-ion acceleration at identical laser focus but different intensity show that higher intensity leads to wider lobes, as predicted by analytic theory.
These calculations enabled: (1) a better definition of HILL "first experiments" using laser-generated particle beams, and (2) a proposal for an LDRD-DR project to use isochoric heating with laser-generated ion beams at the Trident Laser Facility to understand mix morphology in dense plasma. This work was published in a high-profile journal article: Yin et al., "Three-Dimensional Dynamics of Breakout Afterburner Ion Acceleration Using High-Contrast Short-Pulse Laser and Nanoscale Targets," Phys. Rev. Lett. 107, 045003 (2011), http://prl.aps.org/abstract/PRL/v107/i4/e045003.
Luna Supercomputer Providing Compute Cycles for Directed Stockpile Work
Luna, the newest Tri-Lab Linux Capacity Cluster 2 (TLCC2) supercomputer deployed at Los Alamos National Laboratory (LANL), is providing much-needed compute cycles for the Directed Stockpile Work (DSW) Program, including the B61 Life Extension Program. Local integration and security testing, as well as approvals from the Department of Energy's Los Alamos Site Office, were completed in April 2012, two weeks earlier than planned.
Luna augments the existing ASC TLCC1 capacity platforms at LANL, Typhoon (106 teraFLOPS) and Hurricane (51 teraFLOPS), and allowed Redtail (71 teraFLOPS) to be retired. Luna is based on Appro's Xtreme-X™ supercomputer architecture, which uses the Intel® Xeon E5 processor. It has a total of 24,640 processor cores for a combined peak capability of 539.1 teraFLOPS.
Users and developers are seeing speedups of factors of 2 to 4 on typical calculations compared with previous production machines such as Hurricane and Typhoon. Because of excellent scaling on Luna, users can apply more processors to a calculation than on previous production machines and achieve even higher speedups.
“Luna is such a well-balanced machine that it is changing the way we work,” says LANL user Jas. Mercer-Smith. He continues: “Problems that used to take 3 weeks can now be completed in a few days.”
The code developer perspective is also positive. Lagrangian Applications Project developer Rob Ward notes, “The performance improvements in Luna have allowed us to turn around our testing much faster.”
The speedup is thought to be due to Luna's faster interconnect and on-board hardware improvements. Such speedups mean that users can get faster turnaround on their calculations, run with higher fidelity, or both, improving efficiency as well as results. This will be particularly advantageous for weapons safety calculations at LANL.
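A back-of-the-envelope way to see why interconnect speed matters for scaling (a generic illustration, not a Luna-specific analysis) is the strong-scaling form of Amdahl's law:

\[
S(p) = \frac{1}{(1-f) + f/p + c(p)}
\]

where f is the parallelizable fraction of the work, p is the processor count, and c(p) is the per-run communication overhead. A faster interconnect shrinks c(p), which raises the speedup at a fixed processor count and pushes out the point at which adding processors stops paying off. With f = 0.95 and negligible overhead, for instance, 256 processors give a speedup of about 19; any residual communication cost eats directly into that figure.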
Livermore Delivers a Stellar Zin
Like a fine wine, it took time to mature, but Zin, one of ASC's "wine" systems procured under the Tri-Lab Linux Capacity Cluster 2 (TLCC2) contract, is now generally available in the secure computing environment (SCF). At 774 teraFLOPS, Zin is ranked No. 27 on the June 2012 TOP500 list of the world's fastest supercomputers.
Zin is the first ASC system to use Intel Sandy Bridge CPUs. This new hardware architecture also required the development and installation of a new version of the operating system software stack—TOSS 2.0. Mark Grondona, summarizing the development work, said, “We had to work extensively with Red Hat, Intel, and Appro to ensure full Sandy Bridge support was included in RHEL 6.2, including testing and development of EDAC (ECC memory error detection), PAPI (performance counters), and other low-level hardware support. We also had to update QLogic support for the new Sandy Bridge architecture, from the device driver level up to and including MPI support for performance-scaled messaging.”
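As a small illustration of the PAPI interface exercised in that effort, the sketch below reads a hardware floating-point counter around a loop using PAPI's high-level API. This is a generic usage sketch, not LLNL code; event availability (such as PAPI_FP_OPS) varies by processor and kernel.

```c
#include <stdio.h>
#include <papi.h>   /* build with: gcc -std=c99 demo.c -lpapi */

int main(void)
{
    int events[1] = { PAPI_FP_OPS };   /* preset: floating-point operations */
    long long counts[1];
    double a = 0.0;

    /* Start the hardware counter; fail gracefully if unsupported. */
    if (PAPI_start_counters(events, 1) != PAPI_OK) {
        fprintf(stderr, "PAPI_FP_OPS not available on this platform\n");
        return 1;
    }

    for (int i = 1; i <= 1000000; i++)   /* the work being measured */
        a += 1.0 / i;

    PAPI_stop_counters(counts, 1);       /* read and stop the counter */
    printf("sum = %f, counted FP ops = %lld\n", a, counts[0]);
    return 0;
}
```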
Early science runs on Zin were invaluable for simulating ATP reactions in the kinesin Eg5 enzyme. Enabling hyper-threading and supporting Advanced Vector Extensions (AVX) should improve parallelization and increase CPU performance and efficiency.
The Zin system consists of 18 TLCC2 scalable units (SUs). Each SU has 154 compute nodes, for a total of 2,772 compute nodes; 18 login nodes are available for interactive use. The system has a total of 44,352 compute cores and 88.7 TB of memory, providing 922 teraFLOPS of peak performance. The 72 Lustre routers provide 360 GB/s of peak bandwidth to the Lustre file systems, and 36 NFS gateway nodes provide a peak 36 Gb/s of bandwidth to site-wide NFS servers.
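The quoted peak is consistent with simple arithmetic if one assumes the nodes run at 2.6 GHz (the article names the Sandy Bridge architecture but not the exact SKU, so the clock is our assumption) and uses Sandy Bridge's 8 double-precision floating-point operations per cycle with AVX:

\[
44{,}352\ \text{cores} \times 2.6\ \text{GHz} \times 8\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 922\ \text{teraFLOPS}.
\]

The same per-core rate applied to Cab's 20,736 cores (1,296 nodes × 16, described below) reproduces its quoted 431.3 teraFLOPS.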
Additionally, the TLCC2 system Cab entered general availability on May 30. Cab is a large capacity resource shared by Livermore and ASC for small to moderate parallel jobs. It consists of 8 SUs, and its 1,296 nodes are identical to those in Zin. Each node has 16 processor cores and 32 GB of memory, connected via QLogic quad-data-rate InfiniBand. Cab's theoretical system peak performance is 431.3 teraFLOPS.
Test Beds Available for Explorations of Next Generation Architectures
Sandia has created architecture test beds to support path-finding explorations of alternative programming models, architecture-aware algorithms, low-energy runtime and system software, and advanced memory subsystem development. The project is responsible for identifying and acquiring key technology predicted to be applicable to Exascale hardware.
The new advanced architecture systems are being used to develop Mantevo proxy applications, enable application performance analysis with those proxy applications, support heterogeneous computing and programming model R&D projects, and validate Structural Simulation Toolkit (SST) HPC architectural simulations. These systems make it possible to assess the impact and value of upcoming computer technologies for ASC applications and supporting software. Emphasis is currently on providing a spectrum of node-level architectures; as such, only small clusters are provided. Plans include introducing new memory, interconnect, and I/O technologies, as well as processors.
The project provides system installation and administration, account setup using SARAPE, and user support (via email). The systems are not for production calculations, but for test pilots or pioneers who are comfortable working with experimental hardware.
Several platforms have been set up. Currently, three of the test beds are Internet facing, with the goal of increasing this number over time. The project facilitates application enhancements to exploit technologies predicted to be applicable to Exascale computers. In brief, the platforms are:
- Intel MIC: leverages many (tens of) low-frequency, general-purpose x86 cores and presents a more traditional cache-coherent memory space.
- Cray XK6: loosely integrates a few high-frequency, general-purpose x86 processors with a discrete NVIDIA GPU, which contains a very large number (hundreds) of simple cores optimized for Single Instruction Multiple Data (SIMD) parallelism.
- AMD Fusion: similar to the XK6 pairing, but the general-purpose x86 cores and the simple SIMD cores are more tightly integrated on the same chip.
- Tilera and Convey: allow Sandia to investigate advanced network and memory interfaces.

Each of these architectures exposes a large amount of node-level parallel processing capability, but they will be exploited in different ways (see the sketch below). Several of the platforms have the potential to provide interfaces for software-level power management studies.
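One reason the cache-coherent many-core designs (MIC, and the x86 side of Fusion) are attractive is that conventional threaded code maps onto them without restructuring. The sketch below is a generic illustration, not Sandia test-bed code: a single OpenMP reduction loop in C that uses however many coherent cores a node exposes, whether a handful on a conventional CPU or tens on a MIC-class part.

```c
#include <stdio.h>
#include <omp.h>   /* build with: gcc -std=c99 -fopenmp sum.c */

int main(void)
{
    const int n = 1 << 24;
    double sum = 0.0;

    /* The same loop exploits 4 cores or 60+ cores with no source changes,
     * because the node presents one cache-coherent memory space. */
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (1.0 + i);

    printf("max threads: %d, sum = %.6f\n", omp_get_max_threads(), sum);
    return 0;
}
```

Targeting the XK6's GPU, by contrast, means restructuring that loop for hundreds of SIMD cores and a separate memory space, which is precisely the kind of porting question these test beds exist to answer.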
LANL ASC Science Underpins Stockpile Modernization
Before the end of underground nuclear weapons testing, our nation relied on theory, experiment (testing), and simulation to engineer our nuclear weapons and to understand their performance. In the absence of nuclear weapons testing, we are taking even greater advantage of high-performance computing (HPC) and simulation science to ensure the safety and reliability of the stockpile. An illustrative case study is the B61 Life Extension Program (LEP). The National Nuclear Security Administration (NNSA) recently received approval from the Nuclear Weapons Council to proceed with Phase 6.3 of the B61 LEP.
Los Alamos National Laboratory (LANL), an NNSA national security laboratory, supports three other weapon systems in the nation's nuclear deterrent: the W88, W76, and W78. All work on the weapon systems, including the B61, requires a mix of simulations based on tools provided by the Advanced Simulation & Computing (ASC) Program, experiments carried out by the Science Campaigns, and underground test analysis by Directed Stockpile Work. LANL has begun execution of the B61 LEP.
As the starting point for any work on the B61 LEP, modern B61 system baselines, constructed with modern baselining tools, will be used for the physics and performance assessments and to certify the B61-12. A system baseline is a collection of models of relevant nuclear tests and aboveground hydrodynamic tests. The system baseline represents our best framework for ensuring confidence in our scientific judgments concerning weapons performance, and it ensures consistency and change control as improvements are introduced over time. Using system baselines has yielded new insights into the underlying physics and has shed light on longstanding mysteries that had persisted since the era of underground testing.
The B61 baselines are being transitioned to ASC simulations. For example, numerous simulations were run to develop and understand multipoint safety options. All theoretical work on the B61, as well as on the other weapon systems, relies on the capabilities provided by ASC, including the simulation codes and the physics and materials models that underpin them.
The ASC Program provides the infrastructure for applying the capabilities, such as hardware, software, and visualization tools. It provides verification and validation for all of the capabilities. ASC provides capacity supercomputers to run the smaller simulations and capability supercomputers to run the largest simulations. Software environments and computing facilities are provided for the capacity and capability supercomputers.
Each year the LANL Director performs an assessment that reaffirms to the President the integrity of the weapons' certification. In this way, the nuclear stockpile remains safe, secure, and reliable without the need for underground testing.
Sandia used the W80 system model to quantify the margins and uncertainties of nuclear safety in potential abnormal mechanical environments, such as a handling drop accident. Underpinning this effort were uncertainty quantification methods and computational tools developed by the ASC program, including SIERRA solid mechanics codes to perform the simulations.
This study quantified the probability of loss of assured safety in a worst-case (worst orientation at impact) handling drop as a function of drop height. The margin was evaluated as the maximum drop height maintaining assured safety relative to the maximum lift height during handling. Evaluating the performance of design and components in terms of drop height set this study apart from past assessments, enabling direct comparison of multiple mechanical responses (breach of exclusion region barrier integrity, detachment of nuclear safety critical components, and excessive shock to stronglinks) even though each had different failure mechanisms and criteria.
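A schematic way to express this framing, in the spirit of quantified margins and uncertainties (the notation below is illustrative, not the study's own), is:

\[
M = h_{\text{fail}} - h_{\text{lift}}, \qquad \text{confidence ratio} = \frac{M}{U},
\]

where h_fail is the lowest worst-case drop height at which any of the three failure mechanisms loses assured safety, h_lift is the maximum lift height during handling, and U aggregates the quantified uncertainty in h_fail. Casting barrier breach, component detachment, and stronglink shock in the common unit of drop height is what allows them to be compared directly.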
As an additional benefit, the results were immediately useful to the W80 system group in providing a technical basis for determining which potential drop heights call for mobilizing an emergency response.
Building on an extensive verification and validation effort that quantified the dominant sources of uncertainty, the team used the system model to simulate handling drops spanning those uncertainties (one model run result is shown in the figure above), applying a novel two-pronged sampling strategy that reduced the computational burden while still ensuring sufficiently complete sampling.
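The article does not describe the two-pronged strategy itself. Purely as a generic illustration of how stratified sampling reduces the number of expensive model runs relative to plain Monte Carlo, here is a minimal Latin hypercube sampler in C (a standard technique, not necessarily the one used in this study):

```c
#include <stdio.h>
#include <stdlib.h>

/* Latin hypercube sampling: n samples in d dimensions, each dimension
 * stratified into n equal bins with exactly one sample per bin. */
static void latin_hypercube(int n, int d, double *samples /* n*d array */)
{
    for (int j = 0; j < d; j++) {
        int *perm = malloc(n * sizeof(int));     /* bin order, dimension j */
        for (int i = 0; i < n; i++) perm[i] = i;
        for (int i = n - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
            int k = rand() % (i + 1);
            int t = perm[i]; perm[i] = perm[k]; perm[k] = t;
        }
        for (int i = 0; i < n; i++) {
            double u = (double)rand() / RAND_MAX;   /* jitter within bin */
            samples[i * d + j] = (perm[i] + u) / n; /* value in [0,1) */
        }
        free(perm);
    }
}

int main(void)
{
    enum { N = 8, D = 2 };  /* e.g., 8 drop simulations over 2 uncertain inputs */
    double s[N * D];
    latin_hypercube(N, D, s);
    for (int i = 0; i < N; i++)
        printf("run %d: inputs (%.3f, %.3f)\n", i, s[i * D], s[i * D + 1]);
    return 0;
}
```

Each of the eight hypothetical runs lands in a distinct bin of each input's range, so the input space is covered far more evenly than eight independent random draws would manage.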
The project has undergone peer review by a panel of experts in a series of evaluations, most recently in April 2012. Results will be included in the 2012 W80 Annual Assessment Report.
Qb@ll 2.0 and Babel 2.0.0 Released
Lawrence Livermore National Laboratory (LLNL) researchers released updates to two major software packages in March. Erik Draeger released version 2.0 of qb@ll, the LLNL version of Qbox, and the Components Team released Babel 2.0.0.
Qbox allows a user to calculate the properties of materials directly from the physics equations describing atoms, rather than from empirical models or experiments, with some approximations to make the calculations computationally feasible. One of the major approximations is the pseudo-potential, which replaces the atom with something that acts almost exactly like an atom in a given system of interest but is much faster to compute.
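For readers who want the equation behind that description: Qbox-class codes solve the Kohn-Sham equations of density functional theory, and the pseudo-potential V_ps is the term that stands in for the nucleus plus its core electrons (this is the textbook form, not Qbox's internal representation):

\[
\Big({-\tfrac{\hbar^2}{2m}}\nabla^2 + V_{\mathrm{ps}} + V_{H}[\rho] + V_{xc}[\rho]\Big)\,\psi_i(\mathbf{r}) = \varepsilon_i\,\psi_i(\mathbf{r}),
\qquad
\rho(\mathbf{r}) = \sum_i |\psi_i(\mathbf{r})|^2,
\]

where V_H and V_xc are the Hartree and exchange-correlation potentials. Designing a V_ps that is cheap to apply yet reproduces the true atom's behavior is exactly the craft described in the next quote.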
"Since Qbox was written, users have been coming up with more and more clever and complicated ways to build these pseudo-potentials to further decrease the computational cost of a given calculation," Erik said.
Qb@ll 2.0 is a major rewrite of the code that incorporates some of the newer approaches, including the well-known Vanderbilt ultrasoft pseudo-potentials, without losing the massive parallelism that is Qbox's primary distinction from other first-principles materials codes.
"Support for these newer potential types is one of the major reasons the commercial VASP code is so widely used, despite its limited scalability," Erik explained. "The new version will let us calculate bigger systems with predictive accuracy on large supercomputers and stay current with the newest methods for accurately describing atoms."
The LLNL Components Team recently announced the release of Babel 2.0.0, the next major step in Babel research. Babel is a tool that addresses problems of language interoperability, particularly in scientific and engineering applications. At the simplest level, Babel generates glue code so that libraries written in one programming language can be called from other programming languages.
The new version of Babel supports a data type for structured data such as a C "struct." It also includes mutex-free reference counting for GNU Compiler Collection (GCC) 4.1.2 and higher. The mutex-free reference counting uses low-level atomic operations to avoid the performance cost of a thread mutex. In addition, the team incorporated experimental fastcall support into the new release.
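Babel's internals are not reproduced here, but mutex-free reference counting rests on the atomic builtins GCC has provided since the 4.1 series. The following minimal, self-contained sketch shows the technique; the obj_t type and function names are ours, not Babel's API.

```c
#include <stdio.h>
#include <stdlib.h>

/* An object with a lock-free reference count (illustrative, not Babel). */
typedef struct {
    int refcount;
    /* ... payload ... */
} obj_t;

static obj_t *obj_new(void)
{
    obj_t *o = calloc(1, sizeof(*o));
    o->refcount = 1;                       /* creator holds one reference */
    return o;
}

static void obj_addref(obj_t *o)
{
    __sync_fetch_and_add(&o->refcount, 1); /* atomic increment, no mutex */
}

static void obj_release(obj_t *o)
{
    /* Atomic decrement; whoever drops the count to zero frees the object. */
    if (__sync_sub_and_fetch(&o->refcount, 1) == 0)
        free(o);
}

int main(void)
{
    obj_t *o = obj_new();
    obj_addref(o);     /* e.g., a reference handed to another thread */
    obj_release(o);
    obj_release(o);    /* count reaches zero; object freed */
    puts("lock-free refcount demo complete");
    return 0;
}
```

The atomic builtins compile down to single hardware instructions (such as lock xadd on x86), which is why they sidestep the cost of acquiring and releasing a thread mutex.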
"The addition of fastcall support removes essentially all Babel's performance overhead when a C++ client is calling something in Babel that is already implemented in C++," Tom Epperly, Babel project leader, explained. "It's an example of an approach that could be implemented for other pairs of languages when the small performance penalty is still too great."
Babel 2.0.0 also now includes an experimental version of BRAID, an ongoing effort to support partitioned global address space (PGAS) languages. The initial work focuses on Cray's Chapel language.
FastForward RFP Targets Extreme-Scale Computing R&D
Lawrence Livermore National Laboratory (LLNL) announced a Request for Proposals (RFP) in April for extreme-scale computing R&D under an initiative called FastForward. FastForward seeks partnerships with multiple high performance computing (HPC) companies to accelerate the R&D of technologies critical to the advancement of extreme-scale computing. Approximately $60 million will be available over two years for accelerated R&D in three technology areas: processors, memory, and storage.
FastForward is funded by the DOE’s Office of Science and by NNSA. LLNL is representing seven DOE laboratories and the DOE as the Source Selection Official for this RFP.
DOE’s strategic plan calls for ensuring U.S. security and prosperity by using transformative science and technology to address the nation’s energy, environmental, and nuclear challenges. This includes advancing simulation-based scientific discovery by investing in applied mathematics, computer science, and networking tools. These investments will enable the research required to develop Exascale computing platforms and the software environment needed to support DOE energy, science, and security missions. Critical to this R&D effort is the aggressive pursuit of energy-efficient HPC systems.
FastForward was born of the recognition that the broader computing market will drive innovation in a direction that may not meet the needs of the DOE mission. FastForward seeks to fund innovative new or accelerated R&D of technologies targeted for use in the next 5–10 years. Proposals were due May 11.
“ASC at SC11” Video Now Available Online
Filmed onsite at the SC11 (Supercomputing 2011) Conference, a video featuring the ASC booth is now available online. The 9-minute video, produced by Sandia, emphasizes the benefits that ASC’s presence at the conference brings to the program. The ASC booth theme, “Taking on the World’s Complex Challenges,” reflected the “what’s new, what’s next” atmosphere of SC11.
Held in Seattle from November 12-18, 2011, SC11 provided attendees and exhibitors the opportunity to connect with the best and brightest in the high performance computing (HPC) world and to learn about HPC technical advances and resulting modeling and visualization capabilities.
UOP's Team Venus to Compete at SC12
Pictured above is a team of University of the Pacific (UOP) students who will get hands-on training with some of the fastest supercomputers in the world at Supercomputing 2012 (SC12) this November.
Through a partnership with Lawrence Livermore National Laboratory (LLNL), UOP's Team Venus, made up of 12 female engineering, computer science, and physics students, is preparing to compete in the Student Cluster Competition at SC12. The competition is a real-time, 48-hour challenge to design and assemble a state-of-the-art cluster computer on the exhibit floor and use it to run scientific applications, competing to achieve the greatest performance on a limited power budget. In addition to the technical competition, students will also perform an educational outreach mission by maintaining a booth on the exhibit floor.
UOP was invited by LLNL to assemble an all-female team of students for competition. Team Venus will be mentored by LLNL engineers and UOP faculty. The team has secured $50,000 of hardware for the competition.
Allan Snavely Named Chief Technology Officer for Livermore Computing
Allan Snavely became the Chief Technology Officer (CTO) for Livermore Computing on June 4. As CTO, Allan will be responsible for developing ASC's overall supercomputing architecture and technology strategy for LLNL and will have primary responsibility for procuring advanced computing systems.
Allan has worked at the San Diego Supercomputer Center in various roles since 2000, most recently as Associate Director. He possesses deep and wide knowledge of high performance computing (HPC) applications and supercomputer design, is a two-time finalist for the Gordon Bell Prize, and won the SC09 Data Challenge for the design of the first flash-based supercomputer. Allan is a premier researcher in performance modeling and a regular collaborator on DOE projects. As founder of the Performance Modeling and Characterization Lab, he has developed numerous technologies for improving the performance and energy efficiency of supercomputers. He designed and built the Gordon supercomputer, which set the bar for energy efficiency and capability on data-intensive HPC workloads.
Allan has a Ph.D. in computer science from the University of California, San Diego (UCSD), as well as a B.S. in computer engineering and an M.S. in computer science, also from UCSD.
LLNL Hosts DoD HPC Portals Workshop
Lawrence Livermore National Laboratory (LLNL) hosted a two-day workshop in early May on behalf of the Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP). The "HPCMP 2nd Portals Workshop" convened experts from Hawaii to Washington, D.C., who are developing desktop and Web-based portals for accessing HPC systems, thereby making HPC easier to use and more accessible to a wider range of users. LLNL employees gave several presentations, including one demonstrating the Lorenz portal (see the last issue of e-News, http://nnsa.energy.gov/asc/ascnewsletters/ascenewsmar2012). Rob Neely hosted the workshop, which was held in the HPC Innovation Center on the Livermore Valley Open Campus.
"It is exciting to see the HPC community beginning to pay greater attention to 'broadening the base' of HPC usage by thinking about how to make our powerful, yet complex, platforms and applications more easily accessible to a largely untapped pool of potential HPC users," said Rob. "HPC portals could have a huge impact in lowering barriers to HPC adoption within DOE, DoD, and the private sector, and this is an area ripe for deep and lasting partnerships between those groups."
ASC Salutes Dean L. Preston
With over 30 years of experience as a theoretical physicist at Los Alamos National Laboratory (LANL), Dean L. Preston is no stranger to the ASC Program. In a recent classified presentation to the ASC program staff at NNSA Headquarters in Washington, DC, Dean talked about his current work — exciting research using quantum molecular dynamics (QMD) to calculate the phase diagram of plutonium. The phase diagram will be used to construct an advanced equation of state (EOS) for plutonium that will improve the predictive capability of weapon simulations. The calculations are running on ASC’s Cielo Petaflop supercomputer. Plans are underway to work with computational and computer scientists to speed up certain QMD numerical algorithms.
Dean is currently a member of the Materials and Physical Data Group in the Computational Physics Division at LANL.
At one time in his career, Dean served two years as the Accelerated Strategic Computing Initiative (ASCI) Senior Project Leader for Weapon Physics and Physical Data. His work was instrumental in establishing a post-testing-era experimental program on plutonium EOS and dynamic response. This work revitalized shock wave physics at the national laboratories.
Dean received his Bachelor of Science degree in Physics from Rensselaer Polytechnic Institute in 1975 and his Ph.D. in Theoretical Particle Physics from Princeton University in 1980. From 1980 to 1983 he held a postdoctoral appointment in the Elementary Particles and Field Theory Group at LANL. He left LANL for two years to teach mathematical physics. In 1985, he joined the Applied Physics (X) Division, where he changed his field to materials physics. In 1994 he constructed the Preston-Tonks-Wallace (PTW) material strength model, now recognized internationally as a reliable model for simulations of explosively or laser-driven systems.
Dean received a DOE Award of Excellence for Significant Contribution to the Nuclear Weapons Program for his work on the first subcritical experiment, Rebound. From 2000 to 2004, Dean was group leader of the Materials Science Group in X-Division.
Currently, he is the principal investigator (PI) for an Institutional Computing Project at LANL. He is also the U.S. Principal Investigator for five material physics projects between LANL and the Russian Federal Nuclear Centers. In 2011, Dean was elected a Fellow of the American Physical Society (APS). The APS nomination credited Dean “for rigorous scientific contributions in the field of shock compression theory, and in particular for contributions leading to a better understanding of material strength at very high strain rates.”
He has over 70 peer-reviewed publications in diverse areas of theoretical physics, including elementary particle physics, theoretical/computational materials physics, and plasma theory. He is a coauthor of the well-known Brown-Preston-Singleton (BPS) theory of charged particle transport in plasmas. His current research is focused on QMD calculations of the phase diagram of plutonium, the development of a dislocation dynamics model of the plastic flow of polycrystals from quasistatic to very high strain rates, and phase transformation kinetics in shock waves.
ASC Relevant Research
CORRECTION TO FY12 Q2 SUBMITTAL
Franke, B. C., Kensek, R. P. (2009). “An hp-Adaptivity Approach for Monte Carlo Tallies,” International Conference on Mathematics, Computational Methods and Reactor Physics (M&C 2009), Saratoga Springs, NY, on CD-ROM, American Nuclear Society, LaGrange Park, IL.