GROMACS is a molecular dynamics package mainly designed for simulations of proteins, lipids, and nucleic acids. It was originally developed in the Biophysical Chemistry department of the University of Groningen, and is now maintained by contributors in universities and research centers worldwide.[4][5][6] GROMACS is one of the fastest and most popular software packages available,[7][8] and can run on central processing units (CPUs) and graphics processing units (GPUs).[9] It is free, open-source software released under the GNU General Public License (GPL)[3] and, starting with version 4.6, the GNU Lesser General Public License (LGPL).

GROMACS
Developer(s): University of Groningen; Royal Institute of Technology; Uppsala University[1]
Initial release: 1991
Stable release: 2023.3 / 19 October 2023[2]
Written in: C++, C, CUDA, OpenCL, SYCL
Operating system: Linux, macOS, Windows, and other Unix-like systems
Platform: Many
Available in: English
Type: Molecular dynamics simulation
License: LGPL (version 4.6 and later); GPL (versions before 4.6)[3]
Website: www.gromacs.org

History

The GROMACS project began in 1991 at the Department of Biophysical Chemistry, University of Groningen, Netherlands (1991–2000). Its name derives from that period (GROningen MAchine for Chemical Simulations), although GROMACS is no longer treated as an abbreviation, as little active development has taken place in Groningen in recent decades. The original goal was to construct a dedicated parallel computer system for molecular simulations, based on a ring architecture (since superseded by modern hardware designs). The molecular-dynamics-specific routines were rewritten in the programming language C from the Fortran 77-based program GROMOS, which had been developed in the same group.[citation needed]

Since 2001, GROMACS has been developed by the GROMACS development teams at the Royal Institute of Technology and Uppsala University, Sweden.

Features

GROMACS is operated via the command-line interface and uses files for input and output. It provides calculation progress and estimated time of arrival (ETA) feedback, a trajectory viewer, and an extensive library for trajectory analysis.[3] In addition, support for different force fields makes GROMACS very flexible. It can be executed in parallel using the Message Passing Interface (MPI) or threads. It contains a script to convert molecular coordinates from Protein Data Bank (PDB) files into the formats it uses internally. Once a configuration file for the simulation of several molecules (possibly including solvent) has been created, the simulation run (which can be time-consuming) produces a trajectory file describing the movements of the atoms over time. That file can then be analyzed or visualized with several supplied tools.[10]
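As an illustration of the configuration step described above, the run parameters are usually supplied in a plain-text .mdp file. The following is a minimal, hedged sketch of such a file for a short molecular dynamics run; the parameter names follow common GROMACS usage, but the exact options and their defaults vary between versions, so this should not be read as a canonical input:

```
; Minimal illustrative run-parameter (.mdp) file (a sketch, not a
; version-exact input; consult the manual for your GROMACS release).
integrator    = md        ; leap-frog molecular dynamics integrator
dt            = 0.002     ; time step in ps (2 fs)
nsteps        = 50000     ; number of steps (100 ps total)
cutoff-scheme = Verlet    ; neighbor-list scheme
coulombtype   = PME       ; particle-mesh Ewald electrostatics
rcoulomb      = 1.0       ; Coulomb cut-off distance (nm)
rvdw          = 1.0       ; van der Waals cut-off distance (nm)
tcoupl        = V-rescale ; velocity-rescaling thermostat
tc-grps       = System    ; couple the whole system as one group
tau_t         = 0.1       ; thermostat time constant (ps)
ref_t         = 300       ; target temperature (K)
```

Such a file is combined with the coordinate and topology files to build the binary run input, which the simulation engine then turns into the trajectory file mentioned above.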

Since version 5, GROMACS offers GPU acceleration through CUDA (Nvidia GPUs) and OpenCL (AMD, Intel, and Nvidia GPUs), typically providing substantial speedups over CPU-only runs. In the 2021 release, OpenCL was deprecated in favor of early support for SYCL.[11]

Easter eggs

As of January 2010, GROMACS' source code contains approximately 400 alternative backronyms for GROMACS, written as jokes among the developers and biochemistry researchers. These include "Gromacs Runs On Most of All Computer Systems", "Gromacs Runs One Microsecond At Cannonball Speeds", "Good ROcking Metal Altar for Chronical Sinner", "Working on GRowing Old MAkes el Chrono Sweat", and "Great Red Owns Many ACres of Sand". One is selected at random and may appear in GROMACS's output stream. In one instance, such a backronym, "Giving Russians Opium May Alter Current Situation", caused offense.[12]

Applications

Under a non-GPL license, GROMACS is widely used in the Folding@home distributed computing project for simulations of protein folding, where it is the base code for the project's largest and most regularly used series of calculation cores.[13][14] EvoGrid, a distributed computing project to evolve artificial life, also employs GROMACS.[15]

See also

References

  1. ^ The GROMACS development team
  2. ^ "Gromacs Downloads". gromacs.org. Retrieved 2023-11-01.
  3. ^ a b c "About Gromacs". gromacs.org. 16 August 2010. Retrieved 2012-06-26.
  4. ^ "People — Gromacs". gromacs.org. 14 March 2012. Retrieved 26 June 2012.
  5. ^ Van Der Spoel D, Lindahl E, Hess B, Groenhof G, Mark AE, Berendsen HJ (2005). "GROMACS: fast, flexible, and free". J Comput Chem. 26 (16): 1701–18. doi:10.1002/jcc.20291. PMID 16211538. S2CID 1231998.
  6. ^ Hess B, Kutzner C, Van Der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". J Chem Theory Comput. 4 (2): 435–447. doi:10.1021/ct700301q. hdl:11858/00-001M-0000-0012-DDBF-0. PMID 26620784. S2CID 1142192.
  7. ^ Carsten Kutzner; David Van Der Spoel; Martin Fechner; Erik Lindahl; Udo W. Schmitt; Bert L. De Groot; Helmut Grubmüller (2007). "Speeding up parallel GROMACS on high-latency networks". Journal of Computational Chemistry. 28 (12): 2075–2084. doi:10.1002/jcc.20703. hdl:11858/00-001M-0000-0012-E29A-0. PMID 17405124. S2CID 519769.
  8. ^ Berk Hess; Carsten Kutzner; David van der Spoel; Erik Lindahl (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". Journal of Chemical Theory and Computation. 4 (3): 435–447. doi:10.1021/ct700301q. hdl:11858/00-001M-0000-0012-DDBF-0. PMID 26620784. S2CID 1142192.
  9. ^ "GPUs — Gromacs". gromacs.org. 20 January 2012. Retrieved 26 June 2012.
  10. ^ "GROMACS flow chart". gromacs.org. 18 January 2009. Archived from the original on 24 June 2010. Retrieved 26 June 2012.
  11. ^ https://www.iwocl.org/wp-content/uploads/22-iwocl-syclcon-2021-alekseenko-slides.pdf
  12. ^ "Re: Working on Giving Russians Opium May Alter Current Situation". Folding@home. 17 January 2010. Retrieved 2012-06-26.
  13. ^ Pande lab (11 June 2012). "Folding@home Open Source FAQ". Folding@home. Stanford University. Archived from the original (FAQ) on 17 July 2012. Retrieved 26 June 2012.
  14. ^ Adam Beberg; Daniel Ensign; Guha Jayachandran; Siraj Khaliq; Vijay Pande (2009). "Folding@home: Lessons from eight years of volunteer distributed computing". 2009 IEEE International Symposium on Parallel & Distributed Processing (PDF). pp. 1–8. doi:10.1109/IPDPS.2009.5160922. ISBN 978-1-4244-3751-1. ISSN 1530-2075. S2CID 15677970.
  15. ^ Markoff, John (29 September 2009). "Wanted: Home Computers to Join in Research on Artificial Life". The New York Times. Retrieved 26 June 2012.

External links