ASC eNews Quarterly Newsletter September 2012


ASC    
  NA-ASC-500-12 Issue 21
  September 2012


The Meisner Minute

Bob Meisner

Guest editorial by Wendy Cieslak, ASC Program Director, Sandia National Laboratories

Managing the Work-Work Balance

In popular culture these days, you can't help but stumble across endless articles and op-ed pieces on work-life balance.  Who has it, who doesn't, how to get it, and why it is important.  But within ASC these days, it feels more like the challenge is a work-work balance.  In our case, the work is capability development for advanced simulation, and developing predictive power for the challenges of the future.  But our work is also stewarding the present stockpile, and preparing the stockpile of the future, including an increasing workload associated with stockpile modernization for the B61 and W88.

At Sandia, our challenge over the last few years has been to quickly mature capabilities across a broad spectrum of applications in order to be ready to support design and qualification activities on these new programs, and be in a position to integrate early and often with the system engineers leading the programs.  On the B61 LEP, we are broadening the scope of our assessment of accident and safety scenarios, and applying new capabilities to mitigate risk by predicting vibration on new platforms during flight before we have opportunities to do flight tests.  We're also working towards the first application of new radiation effects capabilities for qualification on the W88 ALT 370 program.  It's an exciting, and sometimes frustrating, time for our ASC program, as we experience new successes and opportunities for further impact, but see people stretched thinner across the spectrum of research, development and application. Not to mention the occasional budget hiccups.

All across the program, we see these continuing challenges to balance our efforts.  For example, will it be implementing a constitutive model with failure for rigid foams to simulate a handling accident for the B61, or refactoring the in-core data models of the code to enable a mixture of MPI and Threads communications for many-core architectures?  The answer is just "yes", because if we don't continually work this balance now, the future will be even more difficult.  Everyone knows this, of course, because we face this challenge in practically every facet of the program.  But without a crystal ball (or a magic iPhone app), we'll always be uncertain that we're getting that balance right.

The situation is not likely to get easier any time soon, so it is important to try to appreciate the positive aspects of the current environment.  Applying capabilities to enable new design and qualification activities is a necessary loop to close for a program like ASC.  This is really the "Admiral's test" for simulation capabilities, and the lessons learned from battle-testing the capabilities feed new ideas for the next cycle of development.  As the pendulum swings slowly back from a program focused strongly on long-term capability and the science of simulation to one balancing that long-term perspective with nearer-term impacts to design and qualification, we can recall that we have seen some of this cycle before (remember the W76-1 and other design studies?). So, like on a playground swing, we can go along for the ride, or we can lean into it and, with a kick, take it to the next higher level.

 ______________________________________________________

Cielo Ready for Production Capability Operations

Improving reliability and performance of the Cielo file system has been a high priority for the New Mexico Alliance for Computing at Extreme Scale (ACES) partnership. A partnership between Los Alamos and Sandia National Laboratories, ACES operates the NNSA Cielo supercomputer. Cielo is a 1.37 petaFLOPS system built by Cray, Inc. and installed at Los Alamos National Laboratory (LANL) in 2010. In 2011, the ACES team decided to change the file system for Cielo to increase the stability and performance necessary to support capability computing campaigns (CCCs) over the next several years. These campaigns support simulations for Los Alamos, Lawrence Livermore, and Sandia National Laboratories. For more information, see the Cielo website at http://www.lanl.gov/orgs/hpc/cielo/index.shtml.

This month, the file system was transitioned to Lustre™, a file system infrastructure supported by Cray and similar to that used at the National Energy Research Scientific Computing Center and many other high performance computing installations. Migrating the hardware infrastructure of the Panasas® file system to Lustre preserved the significant investment in the original Cielo file system. The transition provided an improvement in I/O functionality, reliability, and performance. Delivery of computational cycles for the capability computing campaigns was maintained during the transition period.  Campaign 2 completed during the initial transition, and Campaign 3 is currently in progress.

On September 6, 2012, the ACES team passed a Level-2 milestone review. Lustre performance results are consistent and reliable, and they show speedups. For example, Eulerian Application Project codes are now seeing 14 GB/s reads compared to 2 GB/s previously, a 7x improvement in performance.

Cielo User Feedback: “I want to compliment the team on Cielo's new disk system. I used ParaView yesterday to visualize some of Ray Lemke's data, and WWWWOWWW.  Moving through different directories was an order of magnitude faster on lscratch4 vs. the old scratch4/scratch5 systems; header information loaded within seconds (rather than minutes) and load times for data was significantly faster.  Finally, it just worked.”--August 29, 2012

 ______________________________________________________

Performance-Based Code Assessment for Low Mach Large Eddy Simulations (LES)

Sandia has completed a performance-based assessment of fluid dynamics simulation capabilities within the Sierra code base. The improved performance of an acoustically incompressible LES capability did not sacrifice the generality needed to address key needs of the B61 Life Extension Program (LEP) and W88 ALT programs. Flexibility in software design is necessary for development of new capabilities that will support these programs, while performance is necessary to ensure that new and existing capabilities have a timely impact on qualification and design activities.

Conducted on Cielo, the code performance and scaling simulations used up to 65,536 cores. Near-optimal algorithmic scaling for linear system solves was demonstrated, and CPU performance improved by factors of 3 to 4. Future work will address remaining scaling bottlenecks and the performance of matrix assembly.

The simulations used unstructured hexahedral mesh element counts ranging from 17.5 million to 1.12 billion elements. These mesh sizes and core counts are among the largest simulations within the unstructured low Mach community. In addition to software-related performance and scalability improvements, algorithmic advances were realized. Collectively, these activities and advances represent a path forward to exascale simulations in Sierra.

LES treatment of fluid turbulence is required for qualification efforts for aerodynamics, fire environments, and captive-carry loading. The unsteady nature of flows related to Abnormal Thermal and Normal Delivery environments requires LES for accurate environment prediction. Other less expensive techniques, such as Reynolds-Averaged Navier-Stokes (RANS), have proven to be inadequate. The characterization of fire environments requires sub-centimeter resolution to capture Rayleigh/Taylor instabilities leading to large-scale plume core collapse in pool fires of 5-10 meters. Many lessons learned for acoustically incompressible LES are also applicable for compressible LES, which is necessary for aerodynamic simulations. Resolution of vortex/fin interactions will require over 200 million element meshes for design calculations, and even more for qualification. Recent gains in performance and scalability will make these large LES simulations practical.
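
As a rough back-of-the-envelope illustration of why sub-centimeter resolution drives element counts into the hundreds of millions (illustrative numbers only, not taken from the Sierra assessment), consider a cubic 5-meter fire region meshed at a uniform 1-centimeter element size:

    #include <stdio.h>

    int main(void) {
        /* Illustrative values only: a cubic 5 m fire region meshed at a
           uniform 1 cm element size. The assessment's actual meshes ranged
           from 17.5 million to 1.12 billion elements. */
        double domain_m = 5.0;   /* edge length of the region, meters      */
        double dx_m     = 0.01;  /* hexahedral element edge length, meters */

        double per_edge = domain_m / dx_m;                 /* 500 elements per edge */
        double elements = per_edge * per_edge * per_edge;  /* ~1.25e8 elements      */

        printf("%.0f elements per edge, %.3g hexahedral elements total\n",
               per_edge, elements);
        return 0;
    }

Even this idealized uniform mesh exceeds 100 million elements, consistent with the 200-million-element design calculations cited above.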

 ______________________________________________________

NNSA's Sequoia Supercomputer Ranked as World's Fastest

The National Nuclear Security Administration (NNSA) recently announced that a supercomputer called Sequoia at Lawrence Livermore National Laboratory (LLNL) was ranked the world's most powerful computing system.

Clocking in at 16.32 sustained petaFLOPS (quadrillion floating point operations per second), Sequoia earned the number one ranking on the industry standard Top 500 list of the world's fastest supercomputers released Monday, June 18, at the International Supercomputing Conference (ISC12) in Hamburg, Germany. Sequoia was built by IBM for NNSA.

A 96-rack IBM Blue Gene/Q system, Sequoia will enable simulations that explore phenomena at a level of detail never before possible. Sequoia is dedicated to NNSA's Advanced Simulation and Computing (ASC) program for stewardship of the nation's nuclear weapons stockpile, a joint effort from LLNL, Los Alamos National Laboratory, and Sandia National Laboratories.

“Computing platforms like Sequoia help the United States keep its nuclear stockpile safe, secure, and effective without the need for underground testing,” NNSA Administrator Thomas D'Agostino said. “While Sequoia may be the fastest, the underlying computing capabilities it provides give us increased confidence in the nation's nuclear deterrent as the weapons stockpile changes under treaty agreements, a critical part of President Obama's nuclear security agenda. Sequoia also represents continued American leadership in high performance computing, key to the technology innovation that drives high-quality jobs and economic prosperity.”

For more information, see the press release.

 ______________________________________________________

LANL Workshops Prepare for Next-Generation Architectures

Standing up the first petaflops supercomputer, Roadrunner, in 2008, gave Los Alamos National Laboratory (LANL) early exposure to next-generation computer systems. This experience made it clear that emerging architectures required computer scientists, computational scientists, and theorists to work closely together. The Roadrunner experience fostered development of the Applied Computer Science group (CCS-7)—a group of skilled scientists bridging computational and computer science.

The key lesson from Roadrunner was that computer architectures would undergo a sea change over the next few years with an explosion of on-node parallelism. This was visible on Roadrunner, is evident on Sequoia, and will certainly be true on the future system called Trinity. The increase in on-node parallelism is different from the parallelism seen over the past 15 years, which was mainly fueled by increasing the number of nodes within a machine.

To deal with this explosion of parallelism, application developers will need to acquire a new tool in their repertoire of skills: the ability to expose all possible parallelism within their applications/algorithms. This requires changing from a flow-control mode of thinking to a more data/task parallel mode of thinking.  
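
As a minimal, generic sketch of that shift (not drawn from any weapons code), the fragment below recasts a sequential, flow-control-style loop as a data-parallel loop whose independent iterations are exposed to on-node threads through OpenMP:

    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N], c[N];

    int main(void) {
        /* Flow-control style: a single thread walks the loop in order. */
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            c[i] = a[i] * b[i];
            sum += c[i];
        }

        /* Data-parallel style: the iterations are declared independent and
           the reduction is made explicit, so the runtime can spread the
           work across all on-node cores. */
        double psum = 0.0;
        #pragma omp parallel for reduction(+:psum)
        for (int i = 0; i < N; i++) {
            c[i] = a[i] * b[i];
            psum += c[i];
        }

        printf("serial sum = %g, threaded sum = %g\n", sum, psum);
        return 0;
    }

Compiled with OpenMP support (for example, cc -fopenmp), the second loop scales with the number of cores on the node; without it, the pragma is ignored and the loop runs serially.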

To create this pool of advanced developers within the weapons program, LANL is running a workshop series nicknamed the Exa-xx series, one of multiple co-design projects LANL is conducting. Each series runs for a year and pairs six Integrated Codes (IC) application developers with experts from the IC and Computational Systems and Software Engineering (CSSE) programs in an intensive one-week-a-month exercise where the goal is to pick a single-physics application and explore its manifestations on different hardware including many-core, GPUs, and Intel MICs. The developers who graduate from this series form the primary pipeline of staff for the Software Infrastructure for Future Technologies (SWIFT) project. Two iterations of this workshop have been run with great success. Exa-11 was taught by Timothy Kelley and Exa-12 was taught by Bryan Lally, both from the CCS-7 group. This coming year, based on the feedback, the goal is to restructure the workshop series to increase the scale and expose more than six developers at a time.

 ______________________________________________________

FastForward Program Kick-Starts Exascale R&D

Under an initiative called FastForward, the Department of Energy (DOE) Office of Science and the NNSA have awarded $62 million in research and development (R&D) contracts to five leading companies in high performance computing (HPC) to accelerate the development of next-generation supercomputers vital to national defense, scientific research, energy security, and the nation's economic competitiveness.

AMD, IBM, Intel, Nvidia, and Whamcloud received awards to advance "extreme scale" computing technology with the goal of funding innovative R&D of critical technologies needed to deliver next-generation capabilities within a reasonable energy footprint. DOE missions require exascale systems that operate at quintillions of floating point operations per second. Such systems would be 1,000 times faster than a 1-petaFLOP/s (quadrillion floating point operations per second) supercomputer. Currently, the world's fastest supercomputer—the IBM BlueGene/Q Sequoia system at LLNL—clocks in at 16.3 petaFLOP/s.

“The challenge is to deliver 1,000 times the performance of today's computers with only a fraction more of the system’s energy consumption and space requirements,” said William Harrod, division director of research in DOE Office of Science's Advanced Scientific Computing Research program.

Contract awards were in three HPC technology areas: processors, memory, and storage and input/output (I/O). The FastForward program is managed by LLNL on behalf of seven national laboratories, including Lawrence Berkeley, Los Alamos, Sandia, Oak Ridge, Argonne, and Pacific Northwest. Technical experts from the participating national laboratories evaluated and helped select the proposals and will work with selected vendors on co-design.

For more information, see the press release.

______________________________________________________

The Survey Says…

As the high-performance computing community looks toward developing exascale systems, power consumption is considered the most challenging obstacle. Researchers and practitioners from every area of system architecture are coming together to examine component and subsystem power use, as well as future trends. In this spirit of examination, Sandia, LANL, and Clemson University researchers performed a survey of three supercomputers during normal operation: Cielo, hosted at LANL; Red Sky, hosted at Sandia; and Palmetto, a commodity cluster hosted at Clemson University. Each institution gathered rack-level power statistics, enabling the power budget to be partitioned between compute and storage resources.

The survey results offer a reassuring perspective on storage system efficiency. Of the three machines surveyed, none used more than six percent of their power on disk systems. Further, an aggregate survey of the entire LANL secure computing environment, which includes Cielo, Roadrunner, capacity clusters, and twenty petabytes of data storage, found that it used less than 2.5% of its power on all storage infrastructure, including disks, storage networking, and servers. Because 94% or more of the power per machine was dedicated to computation, efficiencies gained in compute-related subsystems will have the largest impact on future exascale systems.

The data collected also allowed the researchers to project how future systems will consume power, and how system design must change to remain sustainable. According to estimates, simply scaling the size of the storage system to meet bandwidth demands will not be possible. An exascale-class storage system in 2020 would include more than 100,000 disks and consume 66% of the 20 MW exascale power budget. However, incorporating burst buffers into an exascale-class system is estimated to reduce power use by 90% (to 6.6% of the power budget) while meeting performance requirements.
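
A short calculation reproduces the projection's headline numbers from the figures quoted above (the 20 MW budget, the 66% disk-only share, and the 90% burst-buffer reduction come from the survey; the rest is arithmetic):

    #include <stdio.h>

    int main(void) {
        double budget_mw           = 20.0;  /* assumed exascale power budget (MW)     */
        double disk_only_fraction  = 0.66;  /* storage scaled with disks alone        */
        double burst_buffer_saving = 0.90;  /* projected reduction with burst buffers */

        double disk_only_mw      = budget_mw * disk_only_fraction;
        double buffered_fraction = disk_only_fraction * (1.0 - burst_buffer_saving);
        double buffered_mw       = budget_mw * buffered_fraction;

        printf("Disk-only storage:  %.1f MW (%.0f%% of budget)\n",
               disk_only_mw, 100.0 * disk_only_fraction);
        printf("With burst buffers: %.2f MW (%.1f%% of budget)\n",
               buffered_mw, 100.0 * buffered_fraction);
        return 0;
    }

The output (13.2 MW versus 1.32 MW, or 66% versus 6.6% of the budget) matches the percentages cited in the survey summary.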

______________________________________________________

Reference Implementation Released for Updated Network Protocol Specification

Sandia recently released a reference implementation of the Portals 4.0 interconnect programming interface specification, which is designed to enable scalable, high-performance network communication for massively parallel computing systems. Portals has evolved from a component of early lightweight compute-node operating systems, to a provider of scalable interconnect performance on production systems, to an important vehicle for enabling interconnect research and software/hardware co-design. Previous versions of Portals ran on several successful vendor-supported systems, including the Intel ASCI Red machine and the Cray XT series.

Unlike other user-level network programming interfaces, Portals employs a building block approach that encapsulates the semantic requirements of a broad range of upper-level protocols needed to support high-performance computing applications and services. For example, Portals provides benefits like scalable buffering for MPI, but also enables functionality needed for system services like remote procedure calls and parallel file system network communication. This building block approach has also enabled hardware designers to focus on developing components that accelerate key functions in Portals, facilitating the application/architecture co-design process.

The most recent version of the Portals specification is the result of a close collaboration between Sandia and researchers at Intel working on advanced network interface hardware. This collaboration has led to two CRADAs between Sandia and Intel over the last two years. In addition, the ASC collaboration with CEA/DAM, the military applications division of the French Atomic Energy and Alternative Energies Commission, has led to a partnership between Sandia and CEA. CEA researchers added support for Portals 4.0 to their MultiProcessor Computing (MPC) software stack and plan to explore more advanced capabilities in future implementations.

The reference implementation of Portals 4.0 was developed in collaboration with System Fabric Works. It is layered on top of the OpenFabrics Verbs interface, allowing applications to be developed and tested using InfiniBand network hardware. Sandia gave invited talks about Portals 4.0 and this reference implementation at the OpenFabrics Alliance Annual Workshop at the end of March and at the IEEE Symposium on High Performance Interconnects at the end of August. Several research papers about Portals 4.0 have been published in the last year, and a paper entitled “A Low Impact Flow Control Implementation for Offload Communication Interfaces” that describes how Portals 4.0 supports scalable receiver-based resource exhaustion recovery for MPI will be presented at the upcoming European MPI Users’ Group Conference. The following graph shows simulation data from Sandia’s Structural Simulation Toolkit that illustrates the benefit of Portals 4.0 triggered operations in supporting a non-blocking Allreduce operation on several thousand nodes. Such non-blocking collective operations will soon be available in the MPI 3.0 Standard.
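
Because non-blocking collectives are central to that result, the sketch below shows the basic MPI 3.0 usage pattern (a generic illustration, not code from the Portals reference implementation): MPI_Iallreduce starts the reduction and returns immediately, so independent work can overlap the communication before MPI_Wait completes it.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank;  /* each rank contributes its rank number */
        double global = 0.0;
        MPI_Request req;

        /* Start the non-blocking Allreduce; the call returns immediately. */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation can overlap the collective here ... */

        /* Complete the collective before using the result. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("sum of ranks = %g\n", global);

        MPI_Finalize();
        return 0;
    }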

______________________________________________________

IBM, Lawrence Livermore Researchers Form 'Deep Computing Solutions' Collaboration to Help Boost U.S. Industrial Competitiveness

Researchers at IBM and LLNL recently announced that they are broadening their nearly 20-year collaboration in high performance computing (HPC) by joining forces to work with industrial partners to help boost their competitiveness in the global economy.

Under a recently concluded agreement, IBM and LLNL have formed an HPC collaboration called Deep Computing Solutions to take place within LLNL's High Performance Computing Innovation Center (HPCIC). Announced last June, the HPCIC was created to help American industry harness the power of supercomputing to better compete in the global marketplace. Deep Computing Solutions will bring a new dimension to the HPCIC, adding IBM's computational science expertise to LLNL's own, for the benefit of Deep Computing Solutions' clients.

“The capabilities of California's Lawrence Livermore National Laboratory are uniquely suited to boost American industry's competitiveness in the global marketplace. The new collaboration between the Lab and IBM is an excellent example of using the technical expertise of both the government and the private-sector to spur innovation and investment in the U.S. economy,” said Sen. Dianne Feinstein, (D-Calif.). “The strength of supercomputing facilities like Livermore's High Performance Computing Innovation Center offers a broad range of solutions to energy, environmental, and national security problems. I look forward to following the progress of this new collaboration in accelerating the development of products and services to maintain the nation's competitive advantage.”

For more information, see the press release.

______________________________________________________

Phase II of PSAAP Program Issues Calls for Cooperative Agreements

From its earliest days, the ASCI/ASC Program recognized that some program objectives can best be achieved by establishing a strong research portfolio of strategic alliances with leading U.S. academic institutions.  ASCI’s Academic Strategic Alliance Program (ASAP) was formed in 1997 to engage the U.S. academic community in advancing science-based modeling and simulation technologies. This program funded centers (at Caltech, Stanford, the University of Utah, the University of Chicago, and the University of Illinois at Urbana-Champaign) with a focus on creating large, 3D, scalable multi-science/engineering codes.  The second five years added initial work in verification and validation (V&V). In 2008, the Predictive Science Academic Alliance Program (PSAAP) continued this academic engagement, with strong emphasis on V&V, and the introduction of uncertainty quantification (UQ) focused on a chosen concrete predictive application.  The PSAAP program currently supports centers at Caltech, Stanford, Purdue, the University of Michigan, and the University of Texas at Austin. A Funding Opportunity Announcement (FOA) was released for PSAAP II, the newest phase of the ASC Alliances program, on April 17, 2012.  The NNSA ASC Office Federal Manager for PSAAP II is Lucille Gentry.

The PSAAP II FOA, which incorporated some changes based on input provided by ASC tri-lab personnel during a January 2011 New ASC Alliance Program Input Meeting, calls for Cooperative Agreements to create either a Multidisciplinary Simulation Center (MSC) focused on a large multidiscipline application or a Single-Discipline Center (SDC) focused on solving a problem that advances basic science/engineering.  Both MSCs and SDCs must include V&V/UQ and demonstrate technology towards achieving effective exascale computing. Both must demonstrate predictive science in an HPC environment.  Recognizing that true exascale computing will likely not be achieved during this program, the computer-science emphasis of PSAAP II is on resolving critical issues that arise in reaching towards exascale (i.e. the next HPC paradigm shift to extreme, heterogeneous, multi-core on-node parallelism – and not necessarily to any hardware or system at such scale.)

How do the two types of Centers differ?  The overarching application for an MSC should advance predictive science (e.g. predict a range of phenomena, over a wide range of space- and time-scales, with improved predictive accuracy and reduced uncertainty) in a multi-disciplinary, integrated application that is 3-D and multiscale (in space and time), and enabled by exascale computing.  Overall, the advance may require a combination of progressions in a potentially exascale-enabled piece of science, integration science or UQ science, together with wider use of state-of-the art V&V techniques.

By contrast, an SDC should focus on scientific advances for a problem or challenge in a single discipline that is multiscale (in space and time) and expects to be enabled by exascale computing. The technical advance proposed must be compelling and significant, and make use of state-of-the-art V&V/UQ techniques. 

As with PSAAP I, both types of centers must demonstrate a verified, validated, predictive simulation capability for a specific, well-defined application, system, or problem, with UQ, using specific values of key parameters.  Fully integrated V&V/UQ is to be used in furthering predictive science.   As with PSAAP I, NNSA-funded graduate students at each Center will be required to complete a 10-week visit to one of the three NNSA Defense Programs laboratories during their graduate career.  Additionally, the nature of collaborations among Center participants and the labs that can be proposed by universities has been expanded in the PSAAP II FOA to encourage more lab personnel to supervise students by serving on doctoral committees and serving as adjunct professors.

Applying institutions must be United States academic institutions that can grant Ph.D. degrees.  Cooperative agreements will be awarded for 5 years, with an option to issue a renewal for up to three additional years (if DOE/NNSA judges the research of the Center to be making significant progress).  PSAAP II proposals were reviewed on July 18-19, 2012.  The Alliance Strategy Team is presently preparing a briefing for the ASC Execs to be presented in late September. The next step will be to plan and complete site visits to a selected group of proposers.

It is important to note that a communication blackout is now in effect.  ASC lab personnel are reminded they should have no communications with any university teams to discuss PSAAP II at this time. 

 ______________________________________________________

Purdue–LLNL Collaboration Looks to Increase Detail of Nuclear Weapon Simulations

U.S. researchers are perfecting simulations that show a nuclear weapon's performance in precise molecular detail. The simulations must be run on supercomputers containing thousands of processors, but doing so has posed reliability and accuracy problems, said Saurabh Bagchi, an associate professor in Purdue University's School of Electrical and Computer Engineering.

Now researchers at Purdue and high performance computing experts at LLNL have solved several problems hindering the use of the ultra-precise simulations. The simulations, which are needed to more efficiently certify nuclear weapons, may require 100,000 machines, a level of complexity that is essential to accurately show molecular-scale reactions taking place over milliseconds, or thousandths of a second. The same types of simulations also could be used in areas such as climate modeling and studying the dynamic changes in a protein's shape.

For the complete story, see the Purdue press release.

______________________________________________________

Students Show Off Research Projects at LANL

Students from computing organizations at Los Alamos National Laboratory (LANL) gave presentations and posters on their research at the annual mini-showcase event on August 4, 2012. LANL’s Information Science and Technology Institutes (http://institutes.lanl.gov/) and the High-Performance Computing Division sponsored the event, giving students an opportunity to discuss their unclassified research with LANL employees, network, and celebrate their accomplishments.

“The LANL Institutes and summer student programs represent an important training pipeline capable of delivering expertise in key areas of HPC,” according to Acting HPC Deputy Division Leader Randal Rheinheimer. The LANL Postdoc and Student Program provides early career and learning opportunities to ensure LANL and NNSA have a robust pipeline for sustaining long-term mission and support capabilities. In FY11, 40% of new hires, 70% of R&D hires, and 80% of non-management Ph.D. staff hires were former postdocs and students.

______________________________________________________

High Performance Computing (HPC) Research and Science on Display at ISC ’12

Lawrence Livermore National Laboratory’s (LLNL’s) international leadership in scientific computing and technology R&D was on display at the 27th International Supercomputing Conference (ISC) in Hamburg, Germany. ISC is Europe's premier HPC event. Approximately 2,400 attendees and 175 exhibitors from 57 nations attended ISC’12.

A highlight of ISC’12 was the release of the latest Top500 list of the world's most powerful supercomputers, where ASC’s Sequoia machine took the number one position. The latest Graph500 list was also announced, with Sequoia and Argonne’s Mira supercomputer tying for the number one spot. Sequoia and Mira achieved performance a factor of seven better than that of the next best machine (a Defense Advanced Research Projects Agency prototype).

Graph500 rates machines on their ability to solve complex problems that have seemingly infinite numbers of components, rather than ranking machines on how fast they solve those problems. The rankings are oriented toward enormous graph-based data problems, a core part of most analytics workloads.

The LLNL booth showcased examples of the Laboratory's HPC research and science through simulations, posters, articles, and publications. This is the fourth year the Laboratory has had a booth at the conference. Associate Director Dona Crawford was a member of a Think Tank panel reflecting on the Top500 list 20 years after its inception. She also chaired a session on energy and HPC, to which Julio Friedman contributed. Crawford helped close the conference as a participant in an “analyst crossfire.” Livermore Computing Division Leader Kim Cupps gave a presentation on the Sequoia integration as part of an invited session on New Petascale Systems in the World and Their Applications. Martin Schulz presented a half-day tutorial on Supporting Performance Analysis and Optimization on Extreme-Scale Computer Systems and a full-day tutorial on Next Generation Message Passing Interface (MPI) Programming: Advanced MPI-2 and New Features in MPI-3.

______________________________________________________

National Academy of Sciences Visits LANL

On August 30, 2012, the National Academy of Sciences Modeling & Simulation Committee met at Los Alamos National Laboratory. The purpose of the visit was to review the quality of management and modeling & simulation at DOE’s national security laboratories.

Technical staff and managers gave presentations and held discussions on these topics:

  • Current Codes: Current Physics and Current Models/Algorithms, presented by Mark Chadwick, Computational Physics division leader
  • New Physics under Development, presented by Robert Little, Computational Physics Deputy Division Leader and ASC Physics & Engineering Models program manager
  • Verification & Validation, presented by Fred Wysocki, ASC V&V program manager
  • New Algorithms under Development, presented by Robert Lowrie, project leader in Computational Physics & Methods
  • Computing and Platform Strategies, presented by Stephen Lee; Computer, Computational, and Statistical Sciences division leader

 ______________________________________________________

Michel McCoy Honored with First NNSA Science and Technology Award

Dr. Michel McCoy, whose pioneering work in high performance computing (HPC) established LLNL as a world-renowned supercomputing center, was honored recently with the NNSA's Science and Technology Award.

McCoy received the award for “16 years of dedicated and relentless pursuit of excellence” from NNSA Administrator Thomas D'Agostino to a standing ovation from colleagues during an early afternoon ceremony at LLNL.

Calling HPC “the lifeblood of NNSA science and technology,” D'Agostino said McCoy's leadership in HPC “has had a global impact.”

“Mike McCoy is an example of the difference one individual can make on a team,” D'Agostino said. “You have to have a leader who knows how to pull things together and to make tough decisions. That leader is Mike McCoy.  If it wasn't for Mike, this would be a very different place.”

In introducing D'Agostino, Lab Director Parney Albright noted his agreement with Weapons and Complex Integration Principal Associate Director Bruce Goodwin's assertion that “HPC is the intellectual electricity of this Laboratory.” Albright called McCoy “the heart and soul of HPC, not just for this Lab but for the NNSA program.”

The newly created Science and Technology Excellence Award is the highest recognition for science and technology achievements in NNSA. The award recognizes accomplishments that can include vision, leadership, innovation, and intellectual contributions. McCoy is the first recipient of the award.

As director of LLNL's ASC program, a deputy director for Computation, and head of the Integrated Computing and Communications Department, McCoy leads the Laboratory's effort to develop and deploy the HPC systems, such as Sequoia, that the three national weapons labs require to fulfill their mission of ensuring the safety, security, and reliability of the nation's nuclear deterrent without testing.

For more information, see the press release.

 ______________________________________________________

Robin Goldstone Named NNSA Defense Programs Employee of the Quarter

Robin Goldstone was recognized by NNSA as an employee of the quarter for serving as the primary author of the FastForward R&D statement of work (SOW). The 38-page SOW was developed in just three weeks. Robin worked closely with the leads of the processor, memory, and storage teams to define a set of performance metrics and target technical requirements for their respective technology areas and to ensure that all technical requirements were accurate and clearly articulated. These efforts will ensure that ASC/ASCR FastForward R&D investments lead to technology that can be deployed in future systems and meet mission needs.

“The Defense Programs employee of the quarter awards highlight the talent and expertise of the men and women from throughout the national nuclear security enterprise who promote our nuclear security agenda,” said Don Cook, NNSA’s Deputy Administrator for Defense Programs. “The contributions, hard work, and strong leadership from each recipient are large reasons for the many successes that NNSA Defense Programs continue to enjoy.”


______________________________________________________

Veterans Helping Veterans Sharpen Job Search Skills

LLNL’s Bill Oliver is reminded daily of the time he spent in the U.S. Navy. All he has to do is look at the screen saver he installed on his Lab computer monitor—a photo of the USS Swordfish, the submarine he served on some forty years ago.

Today, Oliver thinks about those who have served or are currently serving in the U.S. military. But his thoughts have since turned into actions. Oliver volunteers his personal time helping veterans sharpen their skills in searching for jobs either at the Lab or elsewhere. Oliver believes he is one of the fortunate veterans. He graduated from the University of Utah in 1974 with a bachelor's degree in math. Through the ROTC program, he was commissioned in the U.S. Navy, where he served for five years during the Vietnam War.

The skills he learned in the service, coupled with his college degree, led him to secure several jobs after his discharge. “And a lot had to do with the economy, which was better back then,” he added. In 1996, he submitted an application after seeing a newspaper ad for an experienced control systems worker at LLNL. Today, Oliver works for the ASC Program at LLNL as a software quality engineer.

Last year, he saw a brochure about joining the American Legion. “I realized that I was not doing any kind of community service,” he said. “That was the catalyst for me to start thinking about what I could do to help our veterans.” Oliver learned that despite the fact that veterans have many skills and the experience that makes them excellent hires, their unemployment rate is 12 percent.

To find out more about LLNL’s involvement in hiring veterans, he wrote a letter to 'Ask the Director' in the lab’s internal paper. “I share your commitment,” then-Lab Director George Miller answered, encouraging Oliver to work with Strategic Human Resources Management (SHRM) and explore ways to recruit veterans to the Lab.

“The response I received was very positive,” Oliver said about Miller's answer. “It made me proud to work here.” Oliver created a PowerPoint presentation that highlights job-seeking tips. He is in touch with several organizations, as well as Las Positas College where many local veterans are currently enrolled. He also is partnering with Bethany McCormick, Michele Michael, and U.S. Air Force veteran Lee Bennett, all members of the SHRM staff who together volunteer their time to present a series of workshops on resume writing, interviewing and social networking. These sessions have been conducted off-site, on Saturdays, in collaboration with the Pleasanton VFW Post 6298 and American Legion Post 237, and held at the Pleasanton Veterans Memorial Building with the help of Patrick Leary. Additional sessions have been conducted at Las Positas College on weekday evenings. So far, five veterans have attended the Pleasanton workshops and subsequently have applied for positions at the Lab.

“We don't give veterans enough credit for the assets they bring when they come home,” Oliver said. Oliver remembers that when he returned from his service in the U.S. Navy in the 1970s, many Vietnam veterans were not accepted or well respected. “I want to show today's veterans they are valued,” he said.

 ______________________________________________________

LLNL Supercomputing's New Chief Technology Officer Dies

Dr. Allan Snavely, a widely recognized expert in high performance computing and LLNL supercomputing's chief technology officer, died of an apparent heart attack Saturday, July 14. He was 49.

Snavely took up his post at LLNL on April 30 of this year after 18 years at the San Diego Supercomputer Center (SDSC) at UC San Diego, which he helped develop into a world-class computing institution. In the short time he'd been at the Laboratory, Snavely made an impact on LLNL's high performance computing program.

“Allan was one of the most deeply and naturally honorable people I have ever known. He had a gift for intelligent, candid and thoughtful communication,” said Michel McCoy, head of LLNL's Advanced Simulation and Computing program and deputy director for Computation. “He was optimistic and full of hope for the future. He infected us all with optimism. In the few short months I knew him, I came to look to him for wisdom and decent, heartfelt advice.”

Snavely, who earned his Ph.D. from UC San Diego, joined SDSC in 1994 and held a variety of leadership positions, serving as associate director of the center. While at SDSC, Snavely also was an adjunct professor in computer science and engineering at UC San Diego. He was particularly well known as a co-developer of the famed Gordon supercomputer, funded by the NSF. The machine was the first to use flash memory at scale, featuring very high I/O rates that made it ideal for data-intensive computing. This represented the kind of inventiveness that made him an ideal choice for the position at LLNL, as the Laboratory begins to think about its next major supercomputer procurement.

Snavely is survived by his wife, Nancy and his nine-year-old daughter, Sophia. For more about Snavely's career in San Diego, see the UC San Diego Website.

______________________________________________________

ASC Salutes Pam Hamilton

As group leader for Livermore Computing’s (LC) Software Development Group, Pam Hamilton sees herself as a master of facilitating communications, both within her group and within the larger ASC tri-lab community.

“My biggest challenge on ASC is the diversity of my assignments,” said Pam. “Besides being a group leader, I’m the LC Information System Security Officer, the Community Development working group lead for OpenSFS[1], and the TOSS[2] tri-lab lead. Keeping everyone happy is tricky, and I worry… a lot.”

But the worry has paid off with successful teams, successful implementation of software, and personal success—all in the name of the ASC Program. Pam has received two significant honors in the last couple of years: one given by the Nuclear Weapons Complex for significant contributions to the success of implementing the ASC Tripod[3] initiative, and the other from NNSA for her work as the ASC Purple project integration leader.

“Pam is a solution-focused leader,” said Kim Cupps, head of the Livermore Computing Division. “Pam’s most recent leadership role, as the JIRA working group leader, is the latest demonstration of her ability to bring people together to solve important problems and work more effectively. The JIRA working group is implementing a collaborative issue tracking system in the center that allows all of the groups to work production software issues more efficiently. Pam’s ability to make forward progress across many areas simultaneously (TOSS, security, OpenSFS and now JIRA) is key to the LC’s ability to continuously improve our customers’ productivity.”

Pam’s entire 31-year career has been at Lawrence Livermore, with the first 22 years as a system software developer. She worked on the LLNL-developed Cray operating systems, the Cray time-sharing system (CTSS), and the Network Livermore time-sharing system (NLTSS), along with archival storage software (the National Storage Lab version of Unitree), and the high-performance storage system (HPSS). One of Pam’s first forays into leadership was to serve as project leader over all operational activities for Livermore Computing’s classified and unclassified archival storage systems. Today she oversees work on the tri-lab system software stack (TOSS) for the Linux clusters along with work on the Lustre parallel file system software.

When not at work, Pam focuses on her other roles as wife and mother of two boys (one a senior in college and one a junior in high school). “It’s either baseball or travel in our family,” Pam said. “We’re always talking, living, scheming one or the other. In fact, my ideal family vacation would be a trip along the east coast visiting all of the major league baseball parks.” When asked about her favorite team, Pam noted that she likes the underdog.

Kim wasn’t surprised about her rooting for the underdog. “It’s her compassion and positive outlook,” said Kim, “that makes Pam such an effective people and project lead for the ASC Program.”

____________________________

[1] OpenSFS is the open scalable file system foundation, a technical organization focused on high-end, open-source file system technologies.

[2] TOSS is the Tripod operating system software, which is the common software environment for commodity Linux clusters at LLNL, LANL, and SNL.

[3] Tripod is the project chartered by the ASC Program to develop a seamless software environment for use by the NNSA tri-lab community (LANL, LLNL, and SNL), initially targeted at Linux commodity computing clusters.

ASC Relevant Research


  Los Alamos National Laboratory
Citations for Publications (previously not listed)

2012

  1. Abbasi, R., et al. (2012). "Searching for soft relativistic jets in core-collapse supernovae with the IceCube optical follow-up program," Astronomy & Astrophysics, Vol. 539.

  2. Abbasi, R., et al. (2012). "An absence of neutrinos associated with cosmic-ray acceleration in gamma-ray bursts," Nature, Vol. 484, No. 7394, pp. 351-354.

  3. Abdallah, J., Colgan, J. (2012). "Time-dependent calculations of electron energy distribution functions for cold argon gas in the presence of intense black-body radiation," Journal of Physics B-Atomic Molecular and Optical Physics, Vol. 45, No. 3.

  4. An, Q., Han, W.Z., Luo, S.N., Germann, T.C., Tonks, D.L., Goddard, W.A. (2012). "Left-right loading dependence of shock response of (111)//(112) Cu bicrystals: Deformation and spallation," Journal of Applied Physics, Vol. 111, No. 5.

  5. An, Q., Luo, S.N., Goddard, W.A., Han, W.Z., Arman, B., Johnson, W.L. (2012). "Synthesis of single-component metallic glasses by thermal spray of nanodroplets on amorphous substrates," Applied Physics Letters, Vol. 100, No. 4.

  6. Andersson, D.A., Espinosa-Faller, F.J., Uberuaga, B.P., Conradson, S.D. (2012). "Stability and migration of large oxygen clusters in UO2+x: Density functional theory calculations," Journal of Chemical Physics, Vol. 136, No. 23.

  7. Bai, X.M., Uberuaga, B.P. (2012). "Multi-timescale investigation of radiation damage near TiO2 rutile grain boundaries," Philosophical Magazine, Vol. 92, No. 12, pp. 1469-1498.

  8. Balakumar, B.J., Orlicz, G.C., Ristorcelli, J.R., Balasubramanian, S., Prestridge, K.P., Tomkins, C.D. (2012). "Turbulent mixing in a Richtmyer-Meshkov fluid layer after reshock: velocity and density statistics," Journal of Fluid Mechanics, Vol. 696, pp. 67-93.

  9. Balasubramanian, S., Orlicz, G.C., Prestridge, K.P., Balakumar, B.J. (2012). "Experimental study of initial condition dependence on Richtmyer-Meshkov instability in the presence of reshock," Physics of Fluids, Vol. 24, No. 3.

  10. Bennett, M.E., Hirschi, R., Pignatari, M., Diehl, S., Fryer, C., Herwig, F., Hungerford, A., Nomoto, K., Rockefeller, G., Timmes, F.X., Wiescher, M. (2012). "The effect of C-12+C-12 rate uncertainties on the evolution and nucleosynthesis of massive stars," Monthly Notices of the Royal Astronomical Society, Vol. 420, No. 4, pp. 3047-3070.

  11. Beyerlein, I.J., Wang, J., Barnett, M.R., Tome, C.N. (2012). "Double twinning mechanisms in magnesium alloys via dissociation of lattice dislocations," Proceedings of the Royal Society a-Mathematical Physical and Engineering Sciences, Vol. 468, No. 2141, pp. 1496-1520.

  12. Bhattacharyya, D., Dickerson, P., Odette, G.R., Maloy, S.A., Misra, A., Nastasi, M.A. (2012). "On the structure and chemistry of complex oxide nanofeatures in nanostructured ferritic alloy U14YWT," Philosophical Magazine, Vol. 92, No. 16, pp. 2089-2107.

  13. Boettger, J.C., Honnell, K.G., Peterson, J.H., Greeff, C.W., Crockett, S.D. (2012). "TABULAR EQUATION OF STATE FOR GOLD," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  14. Brandl, C., Germann, T.C. (2012). "SHOCK LOADING AND RELEASE OF A SMALL ANGLE TILT GRAIN BOUNDARY IN CU," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  15. Bringa, E.M., Monk, J.D., Caro, A., Misra, A., Zepeda-Ruiz, L., Duchaineau, M., Abraham, F., Nastasi, M., Picraux, S.T., Wang, Y.Q., Farkas, D. (2012). "Are Nanoporous Materials Radiation Resistant?," Nano Letters, Vol. 12, No. 7, pp. 3351-3355.

  16. Brown, L.S., Preston, D.L. (2012). "Leading relativistic corrections to the Kompaneets equation," Astroparticle Physics, Vol. 35, No. 11, pp. 742-748.

  17. Brown, L.S., Preston, D.L., Singleton, R.L. (2012). "Electron-ion energy partition when a charged particle slows in a plasma: Results," Physical Review E, Vol. 86, No. 1.

  18. Brown, L.S., Preston, D.L., Singleton, R.L. (2012). "Electron-ion energy partition when a charged particle slows in a plasma: Theory," Physical Review E, Vol. 86, No. 1.

  19. Buttler, W.T., Oro, D.M., Preston, D.L., Mikaelian, K.O., Cherne, F.J., Hixson, R.S., Mariam, F.G., Morris, C., Stone, J.B., Terrones, G., Tupa, D. (2012). "Unstable Richtmyer-Meshkov growth of solid and liquid metals in vacuum," Journal of Fluid Mechanics, Vol. 703, pp. 60-84.

  20. Buttler, W.T., Oro, D.M., Preston, D.L., Mikaelian, K.O., Cherne, F.J., Hixson, R.S., Mariam, F.G., Morris, C., Stone, J.B., Terrones, G., Tupa, D. (2012). "THE STUDY OF HIGH-SPEED SURFACE DYNAMICS USING A PULSED PROTON BEAM," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  21. Carpenter, J.S., Liu, X., Darbal, A., Nuhfer, N.T., McCabe, R.J., Vogel, S.C., LeDonne, J.E., Rollett, A.D., Barmak, K., Beyerlein, I.J., Mara, N.A. (2012). "A comparison of texture results obtained using precession electron diffraction and neutron diffraction methods at diminishing length scales in ordered bimetallic nanolamellar composites," Scripta Materialia, Vol. 67, No. 4, pp. 336-339.

  22. Carpenter, J.S., Misra, A., Anderson, P.M. (2012). "Achieving maximum hardness in semi-coherent multilayer thin films with unequal layer thickness," Acta Materialia, Vol. 60, No. 6-7, pp. 2625-2636.

  23. Carpenter, J.S., Vogel, S.C., LeDonne, J.E., Hammon, D.L., Beyerlein, I.J., Mara, N.A. (2012). "Bulk texture evolution of Cu-Nb nanolamellar composites during accumulative roll bonding," Acta Materialia, Vol. 60, No. 4, pp. 1576-1586.

  24. Cawkwell, M.J., Sanville, E.J., Mniszewski, S.M., Niklasson, A.M.N. (2012). "SELF-CONSISTENT TIGHT-BINDING MOLECULAR DYNAMICS SIMULATIONS OF SHOCK-INDUCED REACTIONS IN HYDROCARBONS," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. 1426.

  25. Cerreta, E.K., Escobedo, J.P., Perez-Bergquist, A., Koller, D.D., Trujillo, C.P., Gray, G.T., Brandl, C., Germann, T.C. (2012). "Early stage dynamic damage and the role of grain boundary type," Scripta Materialia, Vol. 66, No. 9, pp. 638-641.

  26. Chadwick, M.B. (2012). "ENDF nuclear data in the physical, biological, and medical sciences," International Journal of Radiation Biology, Vol. 88, No. 1-2, pp. 10-14.

  27. Cheng, B.L., Glimm, J., Sharp, D.H., Lim, H. (2012). "MODELING TURBULENT MIXING," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  28. Cherne, F.J., Dimonte, G., Germann, T.C. (2012). "RICHTMYER-MESHKOV INSTABILITY EXAMINED WITH LARGE-SCALE MOLECULAR DYNAMICS SIMULATIONS," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  29. Chyzh, A., Wu, C.Y., Kwan, E., Henderson, R.A., Gostic, J.M., Bredeweg, T.A., Haight, R.C., Hayes-Sterbenz, A.C., Jandel, M., O'Donnell, J.M., Ullmann, J.L. (2012). "Evidence for the stochastic aspect of prompt gamma emission in spontaneous fission," Physical Review C, Vol. 85, No. 2.

  30. Clausen, B., Brown, D.W., Bourke, M.A.M., Saleh, T.A., Maloy, S.A. (2012). "In situ neutron diffraction and Elastic-Plastic Self-Consistent polycrystal modeling of HT-9," Journal of Nuclear Materials, Vol. 425, No. 1-3, pp. 228-232.

  31. Clements, B.E., Thompson, D.G., Luscher, D.J., DeLuca, R., Brown, G.W. (2012). "TAYLOR IMPACT TESTS AND SIMULATIONS OF PLASTIC BONDED EXPLOSIVES," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  32. Clerouin, J., Starrett, C., Noiret, P., Renaudin, P., Blancard, C., Faussurier, G. (2012). "Pressure and Electrical Resistivity Measurements on Hot Expanded Metals: Comparisons with Quantum Molecular Dynamics Simulations and Average-Atom Approaches," Contributions to Plasma Physics, Vol. 52, No. 1, pp. 17-22.

  33. Collins, L.A., Kress, J.D., Hanson, D.E. (2012). "Reflectivity of warm dense deuterium along the principal Hugoniot," Physical Review B, Vol. 85, No. 23.

  34. Cooper, F., Khare, A., Quintero, N.R., Mertens, F.G., Saxena, A. (2012). "Forced nonlinear Schrodinger equation with arbitrary nonlinearity," Physical Review E, Vol. 85, No. 4.

  35. Correa, A.A., Kohanoff, J., Artacho, E., Sanchez-Portal, D., Caro, A. (2012). "Nonadiabatic Forces in Ion-Solid Interactions: The Initial Stages of Radiation Damage," Physical Review Letters, Vol. 108, No. 21.

  36. Csanak, G., Fontes, C.J., Inal, M.K., Kilcrease, D.P. (2012). "The creation, destruction and transfer of multipole moments in electron scattering by ions," Journal of Physics B-Atomic Molecular and Optical Physics, Vol. 45, No. 10.

  37. Demkowicz, M.J., Misra, A., Caro, A. (2012). "The role of interface structure in controlling high helium concentrations," Current Opinion in Solid State & Materials Science, Vol. 16, No. 3, pp. 101-108.

  38. Dennis-Koller, D., Escobedo-Diaz, J.P., Cerreta, E.K., Bronkhorst, C.A., Hansen, B., Lebensohn, R., Mourad, H., Patterson, B., Tonks, D. (2012). "CONTROLLED SHOCK LOADING CONDITIONS FOR MICRSTRUCTURAL CORRELATION OF DYNAMIC DAMAGE BEHAVIOR," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  39. Dorado, B., Andersson, D.A., Stanek, C.R., Bertolus, M., Uberuaga, B.P., Martin, G., Freyss, M., Garcia, P. (2012). "First-principles calculations of uranium diffusion in uranium dioxide," Physical Review B, Vol. 86, No. 3.

  40. Escobedo, J.P., Cerreta, E.K., Trujillo, C.P., Martinez, D.T., Lebensohn, R.A., Webster, V.A., Gray, G.T. (2012). "Influence of texture and test velocity on the dynamic, high-strain, tensile behavior of zirconium," Acta Materialia, Vol. 60, No. 11, pp. 4379-4392.

  41. Fensin, S.J., Cerreta, E.K., Escobedo, J.P., Gray, G.T., Farrow, A., Trujillo, C.P., Lopez, M.F. (2012). "THE ROLE OF INTERFACES ON DYNAMIC DAMAGE IN TWO PHASE METALS," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  42. Fontes, C.J., Zhang, H.L., Thorn, D.B., Gumberidze, A. (2012). "The Effect of the Breit Interaction on Electron-Impact Excitation," in 17th International Conference on Atomic Processes in Plasmas. K. Aggarwal and F. Shearer. Melville, Amer Inst Physics. 1438: 216-221.

  43. Graziani, F.R., et al. (2012). "Large-scale molecular dynamics simulations of dense plasmas: The Cimarron Project," High Energy Density Physics, Vol. 8, No. 1, pp. 105-131.

  44. Haight, R.C., Lee, H.Y., Taddeucci, T.N., O'Donnell, J.M., Perdue, B.A., Fotiades, N., Devlin, M., Ullmann, J.L., Laptev, A., Bredeweg, T., Jandel, M., Nelson, R.O., Wender, S.A., White, M.C., Wu, C.Y., Kwan, E., Chyzh, A., Henderson, R., Gostic, J. (2012). "Two detector arrays for fast neutrons at LANSCE," Journal of Instrumentation, Vol. 7.

  45. Hammerberg, J.E., Ravelo, R., Germann, T.C., Holian, B.L. (2012). "FINITE SIZE EFFECTS AT HIGH SPEED FRICTIONAL INTERFACES," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  46. Han, W.Z., An, Q., Luo, S.N., Germann, T.C., Tonks, D.L., Goddard, W.A. (2012). "Deformation and spallation of shocked Cu bicrystals with Sigma 3 coherent and symmetric incoherent twin boundaries," Physical Review B, Vol. 85, No. 2.

  47. Han, W.Z., Huang, L., An, Q., Chen, H.T., Luo, S.N. (2012). "Crystallization of liquid Cu nanodroplets on single crystal Cu substrates prefers closest-packed planes regardless of the substrate orientations," Journal of Crystal Growth, Vol. 345, No. 1, pp. 34-38.

  48. Hetherly, J., Martinez, E., Di, Z.F., Nastasi, M., Caro, A. (2012). "Helium bubble precipitation at dislocation networks," Scripta Materialia, Vol. 66, No. 1, pp. 17-20.

  49. Hogden, J., Vander Wiel, S., Bower, G.C., Michalak, S., Siemion, A., Werthimer, D. (2012). "COMPARISON OF RADIO-FREQUENCY INTERFERENCE MITIGATION STRATEGIES FOR DISPERSED PULSE DETECTION," Astrophysical Journal, Vol. 747, No. 2.

  50. Huang, J., et al. (2012). "Beam-Target Double-Spin Asymmetry A(LT) in Charged Pion Production from Deep Inelastic Scattering on a Transversely Polarized He-3 Target at 1.4 < Q(2) < 2.7 GeV2," Physical Review Letters, Vol. 108, No. 5.

  51. Huang, L., Chowdhury, D.R., Ramani, S., Reiten, M.T., Luo, S.N., Taylor, A.J., Chen, H.T. (2012). "Experimental demonstration of terahertz metamaterial absorbers with a broad and flat high absorption band," Optics Letters, Vol. 37, No. 2, pp. 154-156.

  52. Huang, L., Han, W.Z., An, Q., Goddard, W.A., Luo, S.N. (2012). "Shock-induced consolidation and spallation of Cu nanopowders," Journal of Applied Physics, Vol. 111, No. 1.

  53. Jackson, S.I., Short, M. (2012). "Determination of the velocity-curvature relationship for unknown front shapes," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  54. Jiang, C., Sickafus, K.E., Stanek, C.R., Rudin, S.P., Uberuaga, B.P. (2012). "Cation disorder in MgX2O4 (X = Al, Ga, In) spinels from first principles," Physical Review B, Vol. 86, No. 2.

  55. Kang, K., Wang, J., Beyerlein, I.J. (2012). "Atomic structure variations of mechanically stable fcc-bcc interfaces," Journal of Applied Physics, Vol. 111, No. 5.

  56. Kawano, T., Talou, P., Chadwick, M.B. (2012). "Monte Carlo Simulation for Statistical Decay of Compound Nucleus," in CNR*11 - Third International Workshop on Compound Nuclear Reactions and Related Topics. M. Krticka, F. Becvar and J. Kroll. 21.

  57. Khare, A., Saxena, A. (2012). "Solutions of several coupled discrete models in terms of Lamé polynomials of order one and two," Pramana-Journal of Physics, Vol. 78, No. 2, pp. 187-213.

  58. Kim, Y., et al. (2012). "Determination of the deuterium-tritium branching ratio based on inertial confinement fusion implosions," Physical Review C, Vol. 85, No. 6.

  59. Kim, Y., et al. (2012). "D-T gamma-to-neutron branching ratio determined from inertial confinement fusion plasmas," Physics of Plasmas, Vol. 19, No. 5.

  60. Kim, Y., Budiman, A.S., Baldwin, J.K., Mara, N.A., Misra, A., Han, S.M. (2012). "Microcompression study of Al-Nb nanoscale multilayers," Journal of Materials Research, Vol. 27, No. 3, pp. 592-598.

  61. Kucharik, M., Shashkov, M. (2012). "One-step hybrid remapping algorithm for multi-material arbitrary Lagrangian-Eulerian methods," Journal of Computational Physics, Vol. 231, No. 7, pp. 2851-2864.

  62. Kullback, B.A., Terrones, G., Carrara, M.D., Hajj, M.R. (2012). "Quantification of ejecta from shock loaded metal surfaces," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  63. Kunieda, S., Haight, R.C., Kawano, T., Chadwick, M.B., Sterbenz, S.M., Bateman, F.B., Wasson, O.A., Grimes, S.M., Maier-Komor, P., Vonach, H., Fukahori, T., Watanabe, Y. (2012). "Measurement and model analysis of (n, xα) cross sections for Cr, Fe, Co-59, and Ni-58, Ni-60 from threshold energy to 150 MeV," Physical Review C, Vol. 85, No. 5.

  64. Kunieda, S., Kawano, T., Chadwick, M.B., Fukahori, T., Watanabe, Y. (2012). "Clustering Pre-equilibrium Model Analysis for Nucleon-induced Alpha-particle Spectra up to 200 MeV," in CNR*11 - Third International Workshop on Compound Nuclear Reactions and Related Topics. M. Krticka, F. Becvar and J. Kroll. 21.

  65. Lebensohn, R.A., Holt, R.A., Caro, A., Alankar, A., Tome, C.N. (2012). "Improved constitutive description of single crystal viscoplastic deformation by dislocation climb," Comptes Rendus Mecanique, Vol. 340, No. 4-5, pp. 289-295.

  66. Lebensohn, R.A., Kanjarla, A.K., Eisenlohr, P. (2012). "An elasto-viscoplastic formulation based on fast Fourier transforms for the prediction of micromechanical fields in polycrystalline materials," International Journal of Plasticity, Vol. 32-33, pp. 59-69.

  67. Lee, T., Baskes, M.I., Lawson, A.C., Chen, S.P., Valone, S.M. (2012). "Atomistic Modeling of the Negative Thermal Expansion in delta-Plutonium Based on the Two-State Description," Materials, Vol. 5, No. 6, pp. 1040-1054.

  68. Lee, T., Baskes, M.I., Valone, S.M., Doll, J.D. (2012). "Atomistic modeling of thermodynamic equilibrium and polymorphism of iron," Journal of Physics-Condensed Matter, Vol. 24, No. 22.

  69. Lim, H., Kaman, T., Yu, Y., Mahadeo, V., Xu, Y., Zhang, H., Glimm, J., Dutta, S., Sharp, D.H., Plohr, B. (2012). "A mathematical theory for LES convergence," Acta Mathematica Scientia, Vol. 32, No. 1, pp. 237-258.

  70. Liu, X.Y., Hoagland, R.G., Demkowicz, M.J., Nastasi, M., Misra, A. (2012). "The Influence of Lattice Misfit on the Atomic Structures and Defect Energetics of Face Centered Cubic-Body Centered Cubic Interfaces," Journal of Engineering Materials and Technology-Transactions of the ASME, Vol. 134, No. 2.

  71. Liu, X.Y., Uberuaga, B.P., Demkowicz, M.J., Germann, T.C., Misra, A., Nastasi, M. (2012). "Mechanism for recombination of radiation-induced point defects at interphase boundaries," Physical Review B, Vol. 85, No. 1.

  72. Maniadis, P., Lookman, T., Saxena, A., Smith, D.L. (2012). "Proposal for Manipulating Functional Interface Properties of Composite Organic Semiconductors with Addition of Designed Macromolecules," Physical Review Letters, Vol. 108, No. 25.

  73. Martinez, E., Hirth, J.P., Nastasi, M., Caro, A. (2012). "Structure of a 2° (010) Cu twist boundary interface and the segregation of vacancies and He atoms," Physical Review B, Vol. 85, No. 6.

  74. Merkel, S., Gruson, M., Wang, Y.B., Nishiyama, N., Tome, C.N. (2012). "Texture and elastic strains in hcp-iron plastically deformed up to 17.5 GPa and 600 K: experiment and model," Modelling and Simulation in Materials Science and Engineering, Vol. 20, No. 2.

  75. Meyer, C.D., Balsara, D.S., Aslam, T.D. (2012). "A second-order accurate Super TimeStepping formulation for anisotropic thermal conduction," Monthly Notices of the Royal Astronomical Society, Vol. 422, No. 3, pp. 2102-2115.

  76. Michalak, S.E., DuBois, A.J., Storlie, C.B., Quinn, H.M., Rust, W.N., DuBois, D.H., Modl, D.G., Manuzzato, A., Blanchard, S.P. (2012). "Assessment of the Impact of Cosmic-Ray-Induced Neutrons on Hardware in the Roadrunner Supercomputer," IEEE Transactions on Device and Materials Reliability, Vol. 12, No. 2, pp. 445-454.

  77. Mniszewski, S.M., Cawkwell, M.J., Germann, T.C. (2012). "Molecular dynamics simulations of detonation on the Roadrunner supercomputer," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  78. Myatt, J.F., Zhang, J., Delettrez, J.A., Maximov, A.V., Short, R.W., Seka, W., Edgell, D.H., DuBois, D.F., Russell, D.A., Vu, H.X. (2012). "The dynamics of hot-electron heating in direct-drive-implosion experiments caused by two-plasmon-decay instability," Physics of Plasmas, Vol. 19, No. 2.

  79. Ni, S., Wang, Y.B., Liao, X.Z., Figueiredo, R.B., Li, H.Q., Ringer, S.P., Langdon, T.G., Zhu, Y.T. (2012). "The effect of dislocation density on the interactions between dislocations and twin boundaries in nanocrystalline materials," Acta Materialia, Vol. 60, No. 6-7, pp. 3181-3189.

  80. Olson, G.L. (2012). "Grey and multigroup radiation transport models for two-dimensional stochastic media with material temperature coupling," Journal of Quantitative Spectroscopy & Radiative Transfer, Vol. 113, No. 5, pp. 325-334.

  81. Olson, G.L. (2012). "Alternate closures for radiation transport using Legendre polynomials in 1D and spherical harmonics in 2D," Journal of Computational Physics, Vol. 231, No. 7, pp. 2786-2793.

  82. Oro, D.M., Hammerberg, J.E., Buttler, W.T., Mariam, F.G., Morris, C., Rousculp, C., Stone, J.B. (2012). "A class of ejecta transport test problems," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  83. Perez-Bergquist, A.G., Cao, F., Perez-Bergquist, S.J., Lopez, M.F., Trujillo, C.P., Cerreta, E.K., Gray, G.T. (2012). "The constitutive response of three solder materials," Journal of Alloys and Compounds, Vol. 524, pp. 32-37.

  84. Perez-Bergquist, A.G., Cerreta, E.K., Trujillo, C.P., Gray, G.T., Brandl, C., Germann, T.C. (2012). "Transmission electron microscopy study of the role of interface structure at 100/111 boundaries in a shocked copper multicrystal," Scripta Materialia, Vol. 67, No. 4, pp. 412-415.

  85. Perez-Bergquist, A.G., Escobedo, J.P., Trujillo, C.P., Cerreta, E.K., Gray, G.T., Brandl, C., Germann, T.C. (2012). "The role of the structure of grain boundary interfaces during shock loading," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  86. Peterson, J.H., Honnell, K.G., Greeff, C.W., Johnson, J.D., Boettger, J.C., Crockett, S.D. (2012). "Global equation of state for copper," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  87. Preston, D.N., Brown, G.W., Skidmore, C.B., Reardon, B.L., Parkinson, D.A. (2012). "Small-scale explosives sensitivity safety testing: A departure from Bruceton," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  88. Prime, M.B., Chen, S.R., Adams, C.D. (2012). "Advanced plasticity models applied to recent shock data on beryllium," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  89. Ramsey, S.D., Hutchens, G.J. (2012). "Approximate Solution and Application of the Survival Probability Diffusion Equation," Nuclear Science and Engineering, Vol. 170, No. 1, pp. 1-15.

  90. Ramsey, S.D., Kamm, J.R., Bolstad, J.H. (2012). "The Guderley problem revisited," International Journal of Computational Fluid Dynamics, Vol. 26, No. 2, pp. 79-99.

  91. Reich, B.J., Kalendra, E., Storlie, C.B., Bondell, H.D., Fuentes, M. (2012). "Variable selection for high dimensional Bayesian density estimation: application to human exposure simulation," Journal of the Royal Statistical Society Series C-Applied Statistics, Vol. 61, pp. 47-66.

  92. Romick, C.M., Aslam, T.D., Powers, J.M. (2012). "The effect of diffusion on the dynamics of unsteady detonations," Journal of Fluid Mechanics, Vol. 699, pp. 453-464.

  93. Sadigh, B., Erhart, P., Stukowski, A., Caro, A., Martinez, E., Zepeda-Ruiz, L. (2012). "Scalable parallel Monte Carlo algorithm for atomistic simulations of precipitation in alloys," Physical Review B, Vol. 85, No. 18.

  94. Salje, E.K.H., Ding, X., Zhao, Z., Lookman, T. (2012). "How to generate high twin densities in nano-ferroics: Thermal quench and low temperature shear," Applied Physics Letters, Vol. 100, No. 22.

  95. Saumon, D., Starrett, C.E., Kress, J.D., Clerouin, J. (2012). "The quantum hypernetted chain model of warm dense matter," High Energy Density Physics, Vol. 8, No. 2, pp. 150-153.

  96. Srinivasan, B., Dimonte, G., Tang, X.Z. (2012). "Magnetic field generation in Rayleigh-Taylor unstable inertial confinement fusion plasmas," Physical Review Letters, Vol. 108, No. 16.

  97. Starrett, C.E., Saumon, D. (2012). "A variational average atom approach to closing the quantum Ornstein-Zernike relations," High Energy Density Physics, Vol. 8, No. 1, pp. 101-104.

  98. Starrett, C.E., Saumon, D. (2012). "Fully variational average atom model with ion-ion correlations," Physical Review E, Vol. 85, No. 2.

  99. Tang, M., Wynn, T.A., Patel, M.K., Won, J., Monnet, I., Pivin, J.C., Mara, N.A., Sickafus, K.E. (2012). "Structure and mechanical properties of swift heavy ion irradiated tungsten-bearing delta-phase oxides Y6W1O12 and Yb6W1O12," Journal of Nuclear Materials, Vol. 425, No. 1-3, pp. 193-196.

  100. Ticknor, C. (2012). "Finite-temperature analysis of a quasi-two-dimensional dipolar gas," Physical Review A, Vol. 85, No. 3.

  101. Tonks, D.L., Bronkhorst, C.A., Bingert, J.F. (2012). "A comparison of calculated damage from square waves and triangular waves," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  102. Trujillo, C.P., Martinez, D.T., Burkett, M.W., Escobedo, J.P., Cerreta, E.K., Gray, G.T. (2012). "A novel use of PDV for an integrated small scale test platform," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

  103. Uberuaga, B.P., Choudhury, S., Bai, X.M., Benedek, N.A. (2012). "Grain boundary stoichiometry and interactions with defects in SrTiO3," Scripta Materialia, Vol. 66, No. 2, pp. 105-108.

  104. Uberuaga, B.P., Stuart, S.J., Windl, W., Masquelier, M.P., Voter, A.F. (2012). "Fullerene and graphene formation from carbon nanotube fragments," Computational and Theoretical Chemistry, Vol. 987, pp. 115-121.

  105. Valone, S.M., Baskes, M.I., Rudin, S.P. (2012). "Stacking fault energy in FCC plutonium with multiple reference states in the modified embedded atom method," Journal of Nuclear Materials, Vol. 422, No. 1-3, pp. 20-26.

  106. Wang, J., Beyerlein, I.J. (2012). "Atomic structures of symmetric tilt grain boundaries in hexagonal close packed (hcp) crystals," Modelling and Simulation in Materials Science and Engineering, Vol. 20, No. 2.

  107. Wang, J., Beyerlein, I.J., Hirth, J.P. (2012). "Nucleation of elementary {1̄011} and {1̄013} twinning dislocations at a twin boundary in hexagonal close-packed crystals," Modelling and Simulation in Materials Science and Engineering, Vol. 20, No. 2.

  108. Wang, J., Misra, A., Hoagland, R.G., Hirth, J.P. (2012). "Slip transmission across fcc/bcc interfaces with varying interface shear strengths," Acta Materialia, Vol. 60, No. 4, pp. 1503-1513.

  109. Williams, S., Petersen, M., Hecht, M., Maltrud, M., Patchett, J., Ahrens, J., Hamann, B. (2012). "Interface Exchange as an Indicator for Eddy Heat Transport," Computer Graphics Forum, Vol. 31, No. 3, pp. 1125-1134.

  110. Yarotski, D., Fu, E.G., Yan, L., Jia, Q.X., Wang, Y.Q., Taylor, A.J., Uberuaga, B.P. (2012). "Characterization of irradiation damage distribution near TiO2/SrTiO3 interfaces using coherent acoustic phonon interferometry," Applied Physics Letters, Vol. 100, No. 25.

  111. Yeager, J.D., Luo, S.N., Jensen, B.J., Fezzaa, K., Montgomery, D.S., Hooks, D.E. (2012). "High-speed synchrotron X-ray phase contrast imaging for analysis of low-Z composite microstructure," Composites Part A: Applied Science and Manufacturing, Vol. 43, No. 6, pp. 885-892.

  112. Yesilyurt, G., Martin, W.R., Brown, F.B. (2012). "On-the-Fly Doppler Broadening for Monte Carlo Codes," Nuclear Science and Engineering, Vol. 171, No. 3, pp. 239-257.

  113. Yin, L., Albright, B.J., Rose, H.A., Bowers, K.J., Bergen, B., Kirkwood, R.K., Hinkel, D.E., Langdon, A.B., Michel, P., Montgomery, D.S., Kline, J.L. (2012). "Trapping induced nonlinear behavior of backward stimulated Raman scattering in multi-speckled laser beams," Physics of Plasmas, Vol. 19, No. 5.

  114. Yu, T., Chen, J.H., Ehm, L., Huang, S., Guo, Q.Z., Luo, S.N., Parise, J. (2012). "Study of liquid gallium at high pressure using synchrotron x-ray," Journal of Applied Physics, Vol. 111, No. 11.

  115. Zhang, R.F., Wang, J., Beyerlein, I.J., Misra, A., Germann, T.C. (2012). "Atomic-scale study of nucleation of dislocations from fcc-bcc interfaces," Acta Materialia, Vol. 60, No. 6-7, pp. 2855-2865.

  116. Zhang, R.F., Wang, J., Liu, X.Y., Beyerlein, I.J., Germann, T.C. (2012). "Nonequilibrium molecular dynamics simulations of shock wave propagation in nanolayered Cu/Nb nanocomposites," in Shock Compression of Condensed Matter - 2011, Pts 1 and 2. M. L. Elert, W. T. Buttler, J. P. Borg, J. L. Jordan and T. J. Vogler. Melville, Amer Inst Physics. 1426.

 

Sandia National Laboratories
Citations for Publications

Key: DOI = Digital Object Identifier

  1. Anderson, N. L., Vedula, R. P., Schultz, P. A., Van Ginhoven, R. M., Strachan, A. (2012).  “Defect Level Distributions and Atomic Relaxations Induced by Charge Trapping in Amorphous Silica,” Applied Physics Letters, Vol. 100, Issue 17, 172908 (4 pages).  Published online 26 April 2012.  DOI: 10.1063/1.4707340. SAND2012-0788 J.

  2. Brown, A. A., Bammann, D. J. (2012).  “Validation of a Model for Static and Dynamic Recrystallization in Metals,” International Journal of Plasticity, Vol. 32-33, pp 17-35.  DOI: 10.1016/j.ijplas.2011.12.006. SAND2008-4879 J.

  3. Brown, A. L., Wagner, G. J., Metzinger, K. E. (2012).  “Impact, Fire, and Fluid Spread Code Coupling for Complex Transportation Accident Environment Simulation,” Journal of Thermal Science and Engineering Applications, Vol. 4, Issue 2, 21004 (10 pages).  DOI: 10.1115/1.4005735. SAND2011-5538 J.

  4. Carroll, J. D., Brewer, L. N., Battaile, C. C., Boyce, B. L., Emery, J. M. (2012).  “The Effect of Grain Size on Void Deformation,” International Journal of Plasticity.  Available online 22 June 2012.  DOI: 10.1016/j.ijplas.2012.06.002. SAND2012-4023 J.

  5. Foiles, S. M. (2011).  “Comparison of Binary Collision Approximation and Molecular Dynamics for Displacement Cascades in GaAs,” DOI: 10.2172/1029787. SAND2011-8082.

  6. Hjalmarson, H. P., Pineda, A. C., Jorgenson, R. E., Pasik, M. F. (2012).  “Dielectric Surface Effects on Transient Arcs in Lightning Arrester Devices,” 18th International Pulsed Power Conference , Chicago, IL, pp. 223-225.  DOI: 10.1109/PPC.2011.6191419. SAND2011-5300 C.

  7. Kanouff, M. P., Gharagozloo, P. E., Salloum, M., Shugard, A. D. (2012).  “A Multiphysics Numerical Model of Oxidation and Decomposition in a Uranium Hydride Bed,” Chemical Engineering Science.  Available online 18 May 2012.  DOI: 10.1016/j.ces.2012.05.005. SAND2012-0448 J.

  8. Kerr, B., Axness, C. L., Verley, J. C., Hembree, C. E., Keiter, E. R. (2012).  “A New Time-Dependent Analytic Model for Radiation-Induced Photocurrent in Finite 1D Epitaxial Diodes,” Sandia Technical Report SAND2012-2161.  DOI: 10.2172/1039400.

  9. Logan, J., Klasky, S., Lofstead, J., Abbasi, H., Ethier, S., Grout, R., Ku, S. H., Liu, Q., Ma, X., Parashar, M., Podhorszki, N., Schwan, K., Wolf, M. (2011).  "Skel: Generative Software for Producing Skeletal I/O Applications," Proceedings, IEEE Seventh International Conference on e-Science Workshops (eScienceW), Stockholm, Sweden, pp. 191-198.  DOI: 10.1109/eScienceW.2011.26. SAND2011-7850 C.

  10. Moreland, K. (2012).  "A Survey of Visualization Pipelines," IEEE Transactions on Visualization and Computer Graphics, Vol. PP, Issue 99, pp. 1.  PrePrint.  DOI: 10.1109/TVCG.2012.133. SAND2012-0350 J.

  11. Oldfield, R. A., Kordenbrock, T., Lofstead, J. (2012).  “Developing Integrated Data Services for Cray Systems with a Gemini Interconnect,” Proceedings, Cray User Group Meeting 2012, Stuttgart, Germany. SAND2012-3487 C.

  12. Perks, O. F. J., Beckingsale, D. A., Hammond, S. D., Miller, I., Herdman, J. A., Vadgama, A., Bhalerao, A. H., He, L., Jarvis, S. A. (2012).  “Towards Automated Memory Model Generation Via Event Tracing,” The Computer Journal, Published online 04 June 2012.  DOI: 10.1093/comjnl/bxs051. SAND2012-0920 J. 

  13. Plimpton, S. J., Thompson, A. P. (2012).  "Computational Aspects of Many-Body Potentials," Materials Research Society (MRS) Bulletin, Vol. 37, Issue 5, pp. 513-521.  DOI: 10.1557/mrs.2012.96. SAND2012-4783 J.

  14. Romero, V., Dempsey, J. F., Wellman, G., Antoun, B., Scherzinger, W. (2012).  “A Method for Projecting Uncertainty from Sparse Samples of Discrete Random Functions - Example of Multiple Stress-Strain Curves,” AIAA 2012-1365, 14th AIAA Non-Deterministic Approaches Conference, Honolulu, HI. SAND2012-2645 C.

  15. Schultz, P. A. (2012).  “First Principles Predictions of Intrinsic Defects in Aluminum Arsenide, AlAs: Numerical Supplement,” DOI: 10.2172/1039396. SAND2012-2938.

  16. Schultz, P. A. (2012).  “Simple Intrinsic Defects in GaAs: Numerical Supplement,” DOI: 10.2172/1039410. SAND2012-2675.

  17. Timko, H., Crozier, P. S., Hopkins, M. M., Matyash, K., Schneider, R. (2012).  "Why Perform Code-to-Code Comparisons: A Vacuum Arc Discharge Simulation Case Study," Contributions to Plasma Physics, Vol. 52, Issue 4, pp. 295-308.  DOI: 10.1002/ctpp.201100051. SAND2011-3397 J. 

  18. Weinberger, C. R., Battaile, C. C., Buchheit, T. E., Holm, E. A. (2012).  “Incorporating Atomistic Models of Lattice Friction into BCC Crystal Plasticity Models,” International Journal of Plasticity, Vol. 37, pp. 16-30.  DOI: 10.1016/j.ijplas.2012.03.012. SAND2011-9251 J.

LALP-12-025

 
