Computational chemistry

Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. It is essential because, apart from relatively recent results concerning the hydrogen molecular ion (dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena. Computational chemistry is widely used in the design of new drugs and materials.[1]

Examples of such properties are structure (i.e., the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge density distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity, or other spectroscopic quantities, and cross sections for collision with other particles.

The methods used cover both static and dynamic situations. In all cases, the computer time and other resources (such as memory and disk space) increase quickly with the size of the system being studied. That system can be a molecule, a group of molecules, or a solid. Computational chemistry methods range from very approximate to highly accurate; the latter is usually feasible for small systems only. Ab initio methods are based entirely on quantum mechanics and basic physical constants. Other methods are called empirical or semi-empirical because they use additional empirical parameters.

Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable.

In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are used, typically with molecular mechanics force fields, as they are computationally less intensive than electronic calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods like machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target. Other problems include predicting binding specificity, off-target effects, toxicity, and pharmacokinetic properties.

History

Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.

With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with Clemens C. J. Roothaan's 1951 paper in Reviews of Modern Physics, largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe.[2] The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960.[3] The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers.[4] By 1971, when a bibliography of ab initio calculations was published,[5] the largest molecules included were naphthalene and azulene.[6][7] Abstracts of many earlier developments in ab initio theory have been published by Schaefer.[8]

In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford.[9] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.[10]

In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, remains in use, alongside many newer programs. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger.[11]

One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality."[12] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry.[13] The Journal of Computational Chemistry was first published in 1980.

Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry.[14] Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".[15]

Fields of application

The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.

Computational chemistry has two different aspects:

  • Computational studies used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
  • Computational studies used to predict the possibility of so-far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments.

Thus, computational chemistry can assist the experimental chemist or it can challenge the experimental chemist to find entirely new chemical objects.

Several major areas may be distinguished within computational chemistry:

  • The prediction of molecular structure by the use of simulated forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the positions of the nuclei are varied.[16]
  • Storing and searching for data on chemical entities (see chemical databases).
  • Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)).
  • Computational approaches to help in the efficient synthesis of compounds.
  • Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

Catalysis

Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory have allowed researchers to discover and understand catalysts.[17] Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures.[18] Using these methods, researchers can predict values like activation energy, site reactivity,[19] and other thermodynamic properties.[18]

Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles.[19] With proper consideration of methods and basis sets, skilled computational chemists can provide predictions that are close to experimental data.[18] With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.

Drug Development

Computational chemistry is used in drug development to model potentially useful drug molecules and to help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules.[20] Computational chemistry helps with this process by predicting which experiments are likely to be most informative before they are run. Computational methods can also find values that are difficult to obtain experimentally, such as the pKa values of compounds.[21] Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals.[22] Computational chemists also help companies with developing informatics, infrastructure, and drug design.

Aside from drug synthesis, computational chemists also research drug carriers made from nanomaterials. Simulation allows researchers to model environments in which to test the effectiveness and stability of drug carriers.[23] Understanding how water interacts with these nanomaterials ensures the stability of the material in the human body. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.

Computational Chemistry Databases

Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data.[24] Empirical data helps researchers choose methods and basis sets and gives them greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.

Databases can also contain purely calculated data.[24] Such databases use calculated values in place of experimental values, which avoids adjusting for differing experimental conditions, such as zero-point energy. These calculations can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.

Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules.[24] Some publicly available chemistry databases include:

  • BindingDB: Contains experimental information about protein-small molecule interactions.[25]
  • RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors).[26]
  • ChEMBL: Contains data from research on drug development such as assay results.[24]
  • DrugBank: Data about mechanisms of drugs can be found here.[24]

Computational Costs in Chemistry Algorithms

Also see: Computational Complexity

For types of computational complexity classes: List of complexity classes

Computational cost and algorithmic complexity determine which chemical systems can be modeled in practice and are used to help understand and predict chemical phenomena. This section focuses on how computational complexity scales with molecule size and details the algorithms commonly used in molecular dynamics and quantum chemistry.

In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system.[27] This exponential growth is a significant barrier to simulating large or complex systems accurately.

Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency.[28] In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.

Algorithmic Complexity Examples

[Figure: molecular dynamics simulation of argon gas]

1. Molecular Dynamics (MD)

see: Molecular dynamics

Algorithm: Solves Newton's equations of motion for atoms and molecules.[29]

Complexity: The standard pairwise interaction calculation in MD leads to an O(N²) complexity for N particles. This is because each particle interacts with every other particle, resulting in N(N−1)/2 interactions.[30] Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to O(N log N) or even O(N) by grouping distant particles and treating them as a single entity or using clever mathematical approximations.[31][32]
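The naive pairwise loop can be sketched as follows (an illustrative Python toy using a Lennard-Jones potential in arbitrary reduced units, not a production MD code or fitted argon parameters):

```python
import itertools

def lj_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy from the naive O(N^2) pairwise loop.
    positions: list of (x, y, z) tuples; epsilon and sigma are arbitrary
    reduced units, not fitted argon parameters."""
    energy, pairs = 0.0, 0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(positions, 2):
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        sr6 = (sigma * sigma / r2) ** 3
        energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
        pairs += 1
    return energy, pairs

# Ten particles on a line: exactly 10*9/2 = 45 pair evaluations.
total_energy, n_pairs = lj_energy([(float(i), 0.0, 0.0) for i in range(10)])
```

Counting the pair evaluations directly exhibits the N(N−1)/2 growth that the fast summation methods above are designed to avoid.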

[Figure: molecular mechanics potential energy function with continuum solvent]

2. Quantum Mechanics/Molecular Mechanics (QM/MM)

see: QM/MM

Algorithm: Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.[33]

Complexity: The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations.[34] For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as O(M⁴), where M is the number of basis functions in the quantum region.[34] This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.

[Figure: molecular orbital diagram of the conjugated π system of diazomethane, CH2N2, using the Hartree-Fock method]

3. Hartree-Fock Method

Algorithm: Finds a single Fock state that minimizes the energy.

Complexity: NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations.[35] In practice, the Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as O(N³) to O(N⁴) depending on implementation, with N being the number of basis functions.[35] The computational cost mainly comes from evaluating and transforming the two-electron integrals.
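The quartic term in this cost can be seen by counting the two-electron integrals (ij|kl). The sketch below (illustrative Python; it only counts index combinations using the standard 8-fold permutational symmetry of real integrals, it does not evaluate any integrals) shows the ~N⁴/8 growth:

```python
def unique_two_electron_integrals(n_basis):
    """Number of unique two-electron integrals (ij|kl) over n_basis real
    basis functions, using the standard 8-fold permutational symmetry
    (ij|kl) = (ji|kl) = (ij|lk) = (kl|ij) = ..."""
    pairs = n_basis * (n_basis + 1) // 2      # unique (i >= j) index pairs
    return pairs * (pairs + 1) // 2           # unique pairs of pairs

# The count grows as ~ n^4 / 8: doubling the basis multiplies it by ~16.
small = unique_two_electron_integrals(10)
large = unique_two_electron_integrals(20)
```
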

[Figure: C60 with isosurface of ground-state electron density as calculated with DFT]

4. Density Functional Theory (DFT)

Algorithm: Investigates the electronic structure (or nuclear structure) of many-body systems, principally the ground state, in particular atoms, molecules, and the condensed phases.

Complexity: Traditional implementations of DFT typically scale as O(N³), mainly due to the need to diagonalize the Kohn-Sham matrix.[36] The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling.[37] Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.

5. Standard CCSD and CCSD(T) Method

Algorithm: CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.


CCSD: Scales as O(N⁶), where N is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.[38]

CCSD(T): With the addition of perturbative triples, the complexity increases to O(N⁷). This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.[38]

6. Linear-Scaling CCSD(T) Method

Algorithm: An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.

Complexity: Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of standard CCSD(T).[38] This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.[38]

Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.[39]

For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations.[39]

Quantum Computational Chemistry

For the foundation of quantum chemistry, see: Quantum Chemistry

For the electronic structure problem, see: Electronic Structure

For a recap of quantum computing, see: Quantum Computing

Quantum computational chemistry is an emerging field that integrates quantum mechanics with computational methods to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient.[40]

Efficient quantum algorithms for chemistry problems are expected to have run-times and resource requirements that scale polynomially with system size and desired accuracy. Experimental efforts have validated proof-of-principle chemistry calculations, though currently limited to small systems.

Historical Context for Classical Computational Challenges in Quantum Mechanics

  • 1929: Dirac noted the inherent complexity of quantum mechanical equations, underscoring the difficulties in solving these equations using classical computation.[41]
  • 1982: Feynman proposed using quantum hardware for simulations, addressing the inefficiency of classical computers in simulating quantum systems.[42]

Methods in Quantum Complexity

Qubitization

One of the problems with Hamiltonian simulation is the computational complexity inherent in setting it up. Qubitization is a mathematical and algorithmic concept in quantum computing applied to the simulation of quantum systems via Hamiltonian dynamics. The core idea of qubitization is to encode the problem of Hamiltonian simulation in a way that quantum algorithms can process more efficiently.[43]

Qubitization involves a transformation of the Hamiltonian operator, a central object in quantum mechanics representing the total energy of a system. In classical computational terms, a Hamiltonian can be thought of as a matrix describing the energy interactions within a quantum system. The goal of qubitization is to embed this Hamiltonian into a larger, unitary operator, which is a type of operator in quantum mechanics that preserves the norm of vectors upon which it acts.[43] This embedding is crucial for enabling the Hamiltonian's dynamics to be simulated on a quantum computer.

Mathematically, the process of qubitization constructs a unitary operator U such that a specific projection of U is proportional to the Hamiltonian H of interest. This relationship can often be represented as (⟨G| ⊗ I) U (|G⟩ ⊗ I) = H/λ, where |G⟩ is a specific quantum state, ⟨G| is its conjugate transpose, and λ is a normalization constant. The efficiency of this method comes from the fact that the unitary operator U can be implemented on a quantum computer with fewer resources (like qubits and quantum gates) than would be required for directly simulating H.[43]

A key feature of qubitization is that it simulates Hamiltonian dynamics with high precision while reducing the quantum resource overhead. This efficiency is especially beneficial in quantum algorithms where the simulation of complex quantum systems is necessary, such as in quantum chemistry and materials science simulations. Qubitization also underpins quantum algorithms that solve certain types of problems more efficiently than classical algorithms. For instance, it has implications for the Quantum Phase Estimation algorithm, which is fundamental in various quantum computing applications, including factoring and solving linear systems of equations.

Applications of qubitization in chemistry

Gaussian Orbital Basis Sets

In Gaussian orbital basis sets, phase estimation algorithms have been optimized empirically, substantially reducing how the cost scales with the number of basis functions. Advanced Hamiltonian simulation algorithms have further reduced the scaling through the introduction of techniques like Taylor series methods and qubitization, providing more efficient algorithms with reduced computational requirements.[44]

Plane Wave Basis Sets

Plane wave basis sets, suitable for periodic systems, have also seen advancements in algorithm efficiency, with improvements in product formula-based approaches and Taylor series methods.[43]

Quantum Phase Estimation in Chemistry

For a foundational recap of the quantum Fourier transform, see: Quantum Fourier Transform

Overview

Phase estimation, as proposed by Kitaev in 1996,[45] identifies the lowest energy eigenstate |E_0⟩ and excited states |E_k⟩ of a physical Hamiltonian, as detailed by Abrams and Lloyd in 1999.[46] In quantum computational chemistry, this technique is employed to encode fermionic Hamiltonians into a qubit framework.

Brief Methodology

1. Initialization: The qubit register is initialized in a state |ψ⟩ that has a nonzero overlap with the Full Configuration Interaction (FCI) target eigenstate of the system.[47] This state is expressed as a sum over the energy eigenstates |E_k⟩ of the Hamiltonian H, |ψ⟩ = Σ_k c_k |E_k⟩, where c_k are complex coefficients.[47]

2. Application of Hadamard Gates: Each ancilla qubit undergoes a Hadamard gate application, placing the ancilla register in a superposed state.[47] Subsequently, controlled unitary gates modify this state.

[Figure: the standard quantum phase estimation circuit with three ancilla qubits. When the ancilla qubits are in the state |k⟩, a controlled rotation U^k is applied to the target state |ψ⟩; 'QFT' denotes the quantum Fourier transform. In the final step, the ancilla qubits are measured in the computational basis, collapsing them to an estimate of an eigenvalue E_k of the Hamiltonian and simultaneously collapsing the register qubits into an approximation of the corresponding energy eigenstate.[48]]

3. Inverse Quantum Fourier Transform: This transform is applied to the ancilla qubits, revealing the phase information that encodes the energy eigenvalues.[47]

4. Measurement: The ancilla qubits are measured in the Z basis, collapsing the main register into the corresponding energy eigenstate |E_k⟩ with probability |c_k|².[47]
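The measurement statistics of these steps can be reproduced classically for a single input eigenstate. The sketch below (illustrative Python; it simulates only the final outcome distribution, not the full circuit) uses the textbook result that, for an eigenphase φ and t ancilla qubits, outcome m occurs with probability |(1/2^t) Σ_k e^(2πik(φ − m/2^t))|²:

```python
import cmath

def qpe_outcome_probs(phase, t):
    """Outcome distribution of textbook phase estimation with t ancilla
    qubits, for an eigenstate with eigenvalue exp(2*pi*i*phase)."""
    dim = 2 ** t
    probs = []
    for m in range(dim):
        amp = sum(cmath.exp(2j * cmath.pi * k * (phase - m / dim))
                  for k in range(dim)) / dim
        probs.append(abs(amp) ** 2)
    return probs

# phase = 3/8 is exactly representable with t = 3 bits, so the outcome
# m = 3 is observed with certainty.
probs = qpe_outcome_probs(3 / 8, 3)
```

When the phase is exactly representable in t bits, a single outcome is measured with probability 1; otherwise the distribution peaks at the nearest t-bit approximation.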

Requirements

The algorithm requires additional ancilla qubits, with their number determined by the desired precision and success probability of the energy estimate. Obtaining a binary energy estimate precise to n bits with a success probability of at least 1 − ε necessitates n + ⌈log₂(2 + 1/(2ε))⌉ ancilla qubits. This phase estimation has been validated experimentally across various quantum architectures.[47]
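This qubit count is straightforward to evaluate; the helper below is an illustrative Python translation of the textbook bound stated above:

```python
import math

def ancilla_qubits(n_bits, failure_prob):
    """Ancilla count sufficient for an n-bit phase estimate that succeeds
    with probability at least 1 - failure_prob, via the textbook QPE
    bound n + ceil(log2(2 + 1/(2*eps)))."""
    return n_bits + math.ceil(math.log2(2 + 1.0 / (2.0 * failure_prob)))

# Eight bits of precision with at least 90% success probability:
t = ancilla_qubits(8, 0.1)
```
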

Applications of QPEs in chemistry

Time Evolution and Error Analysis

The total coherent time evolution required for the algorithm scales inversely with the desired precision of the energy estimate.[49] The total evolution time is related to the binary precision n, with the procedure expected to be repeated several times for an accurate ground state estimation. Errors in the algorithm include errors in energy eigenvalue estimation, in the unitary evolutions, and in circuit synthesis, which can be quantified using techniques like the Solovay-Kitaev theorem.[50]

The phase estimation algorithm can be enhanced or altered in several ways, such as using a single ancilla qubit  for sequential measurements, increasing efficiency, parallelization, or enhancing noise resilience in analytical chemistry.[51] The algorithm can also be scaled using classically obtained knowledge about energy gaps between states.

Limitations

Effective state preparation is needed, as a randomly chosen state would exponentially decrease the probability of collapsing to the desired ground state. Various methods for state preparation have been proposed, including classical approaches and quantum techniques like adiabatic state preparation.[52]

Variational Quantum Eigensolver


The Variational Quantum Eigensolver (VQE) is an innovative algorithm in quantum computing, crucial for near-term quantum hardware.[53] Initially proposed by Peruzzo et al. in 2014 and further developed by McClean et al. in 2016, VQE is integral in finding the lowest eigenvalue of Hamiltonians, particularly those in chemical systems.[54] It employs the variational method (quantum mechanics), which guarantees that the expectation value of the Hamiltonian for any parameterized trial wave function is at least the lowest energy eigenvalue of that Hamiltonian.[55] This principle is fundamental to VQE's strategy of optimizing parameters to find the ground state energy. VQE is a hybrid algorithm that utilizes both quantum and classical computers. The quantum computer prepares and measures the quantum state, while the classical computer processes these measurements and updates the variational parameters. This synergy allows VQE to overcome some limitations of purely quantum methods.
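The variational principle can be illustrated on a minimal example (a hypothetical one-qubit Hamiltonian H = Z + 0.5 X and a single-parameter Ry ansatz, with a brute-force grid search standing in for the classical optimizer; real VQE measures the energy on quantum hardware instead of evaluating it analytically):

```python
import math

def vqe_energy(theta, hz=1.0, hx=0.5):
    """Energy <psi(theta)| H |psi(theta)> for the toy Hamiltonian
    H = hz*Z + hx*X with the ansatz |psi(theta)> = Ry(theta)|0>.
    For this state, <Z> = cos(theta) and <X> = sin(theta)."""
    return hz * math.cos(theta) + hx * math.sin(theta)

def vqe_minimize(steps=20000):
    """Stand-in for the classical outer loop: a brute-force grid search
    over the single variational parameter."""
    return min(vqe_energy(2 * math.pi * k / steps) for k in range(steps))

# Every vqe_energy(theta) is bounded below by the exact ground-state
# energy, which for H = Z + 0.5 X is -sqrt(1.25).
estimate = vqe_minimize()
```

Minimizing over the parameter drives the trial energy down toward the exact ground-state energy, which is the essence of the VQE strategy.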

Applications of VQEs in chemistry

1-RDM and 2-RDM Calculation:

For terminology, see: Density Matrix

The reduced density matrices (1-RDM and 2-RDM) can be used to extrapolate the electronic structure of a system.[56]

Ground State Energy Extrapolation:

In the Hamiltonian variational ansatz, the initial state |φ⟩ is prepared to represent the ground state of the molecular Hamiltonian without electron correlations. The Hamiltonian is split into commuting segments H = Σ_j H_j, and the evolution of this state under the Hamiltonian is given by:

|ψ(θ)⟩ = Π_j e^(−iθ_j H_j) |φ⟩

where θ_j are variational parameters optimized to minimize the energy, providing insights into the electronic structure of the molecule.

Measurement Scaling:

McClean et al. (2016) and Romero et al. (2019) proposed a formula to estimate the number of measurements N_m required for a given energy precision ε. The formula is N_m ≈ (Σ_i |h_i|)² / ε², where h_i are the coefficients of each Pauli string in the Hamiltonian. This leads to a scaling of O(N⁶/ε²) in a Gaussian orbital basis and O(N⁴/ε²) in a plane wave dual basis.[57][58] Note that N is the number of basis functions in the chosen basis set.
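As a rough illustration, this measurement-count estimate can be evaluated directly (illustrative Python; the coefficient values below are hypothetical, not taken from a real molecular Hamiltonian):

```python
def measurement_estimate(coefficients, epsilon):
    """Estimate N_m ~ (sum_i |h_i|)^2 / epsilon^2 of the number of samples
    needed to estimate <H> to precision epsilon, where h_i are the Pauli
    string coefficients of the Hamiltonian."""
    total = sum(abs(h) for h in coefficients)
    return (total / epsilon) ** 2

# A hypothetical 4-term qubit Hamiltonian and a tight target precision.
n_measurements = measurement_estimate([0.5, -0.3, 0.2, 0.1], epsilon=1e-3)
```
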

Fermionic Level Grouping:

A method by Bonet-Monroig, Babbush, and O'Brien (2019) focuses on grouping terms at a fermionic level rather than a qubit level, leading to a measurement requirement of only O(N²) circuits with an additional gate depth of O(N).[59]

Limitations of VQE

While VQE's application in solving the electronic Schrödinger equation for small molecules has shown success, its scalability is hindered by two main challenges: the complexity of the quantum circuits required and the intricacies involved in the classical optimization process.[60] These challenges are significantly influenced by the choice of the variational ansatz, which is used to construct the trial wave function. Consequently, the development of an efficient ansatz is a key focus in current research. Modern quantum computers face limitations in running deep quantum circuits, especially when using the existing ansatzes for problems that exceed several qubits.

Jordan-Wigner Encoding

Also see: Jordan-Wigner Transformations

Jordan-Wigner encoding is a fundamental method in quantum computing, extensively used for simulating fermionic systems like molecular orbitals and electron interactions in quantum chemistry.[61]


In quantum chemistry, electrons are modeled as fermions with antisymmetric wave functions. The Jordan-Wigner encoding maps these fermionic orbitals to qubits, preserving their antisymmetric nature. Mathematically, this is achieved by associating each fermionic creation operator a†_j and annihilation operator a_j with corresponding qubit operators through the Jordan-Wigner transformation:

a†_j = (Z_0 ⋯ Z_(j−1)) (X_j − iY_j)/2,   a_j = (Z_0 ⋯ Z_(j−1)) (X_j + iY_j)/2

where X_j, Y_j, and Z_j are Pauli matrices acting on the j-th qubit.
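The mapping can be checked numerically. The sketch below (illustrative, pure-Python matrices; qubit 0 is taken as the leftmost tensor factor, and the convention a_j = (Z_0 ⋯ Z_(j−1))(X_j + iY_j)/2 is assumed) builds the Jordan-Wigner image of a_j and verifies the fermionic anticommutation relation {a_j, a†_j} = I:

```python
def pauli_matrix(label):
    """2x2 Pauli matrices as nested lists of complex numbers."""
    return {"I": [[1, 0], [0, 1]],
            "X": [[0, 1], [1, 0]],
            "Y": [[0, -1j], [1j, 0]],
            "Z": [[1, 0], [0, -1]]}[label]

def kron(a, b):
    """Kronecker product of two square matrices."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def pauli_string(labels):
    """Matrix of a tensor product of single-qubit Paulis, e.g. 'ZXI'."""
    mat = pauli_matrix(labels[0])
    for lab in labels[1:]:
        mat = kron(mat, pauli_matrix(lab))
    return mat

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def dagger(a):
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def jw_annihilation(j, n):
    """Jordan-Wigner image of a_j on n qubits:
    a_j = (Z_0 ... Z_{j-1}) (X_j + i Y_j) / 2."""
    x_term = pauli_string("Z" * j + "X" + "I" * (n - j - 1))
    y_term = pauli_string("Z" * j + "Y" + "I" * (n - j - 1))
    dim = 2 ** n
    return [[(x_term[r][c] + 1j * y_term[r][c]) / 2
             for c in range(dim)] for r in range(dim)]

# Check {a_1, a_1^dagger} = I on 3 qubits.
a = jw_annihilation(1, 3)
ad = dagger(a)
aad, ada = matmul(a, ad), matmul(ad, a)
anticommutator = [[aad[i][j] + ada[i][j] for j in range(8)] for i in range(8)]
```

The Z string on the preceding qubits is exactly what supplies the sign bookkeeping that makes the qubit operators obey fermionic statistics.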

Applications of Jordan-Wigner Encoding in Chemistry

Electron Hopping

Electron hopping between orbitals, central to chemical bonding and reactions, is represented by terms like a†_p a_q + a†_q a_p. Under Jordan-Wigner encoding (for p < q), these transform as follows:[61]

a†_p a_q + a†_q a_p → (1/2) (X_p Z_(p+1) ⋯ Z_(q−1) X_q + Y_p Z_(p+1) ⋯ Z_(q−1) Y_q)

This transformation captures the quantum mechanical behavior of electron movement and interaction within molecules.[62]

Computational Complexity in Molecular Systems

The complexity of simulating a molecular system using Jordan-Wigner encoding is influenced by the structure of the molecule and the nature of electron interactions. For a molecular system with N orbitals, the number of required qubits scales linearly with N, but the complexity of gate operations depends on the specific interactions being modeled.

Limitations of Jordan–Wigner Encoding

The Jordan-Wigner transformation encodes fermionic operators into qubit operators, but it introduces non-local string operators that can make simulations inefficient.[63] The FSWAP gate is used to mitigate this inefficiency by rearranging the ordering of fermions (or their qubit representations), thus simplifying the implementation of fermionic operations.

Fermionic SWAP (FSWAP) Network

FSWAP networks rearrange qubits to efficiently simulate electron dynamics in molecules.[64] These networks are essential for reducing the gate complexity in simulations, especially for non-neighboring electron interactions.

When two fermionic modes (represented as qubits after the Jordan-Wigner transformation) are swapped, the FSWAP gate not only exchanges their states but also correctly updates the phase of the wavefunction to maintain fermionic antisymmetry.[65] This is in contrast to the standard SWAP gate, which does not account for the phase change required in the antisymmetric wavefunctions of fermions.

The use of FSWAP gates can significantly reduce the complexity of quantum circuits for simulating fermionic systems.[66] By intelligently rearranging the fermions, the number of gates required to simulate certain fermionic operations can be reduced, leading to more efficient simulations. This is particularly useful in simulations where fermions need to be moved across large distances within the system, as it can avoid the need for long chains of operations that would otherwise be required.
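The contrast with the standard SWAP gate can be made concrete with 4×4 matrices. This sketch checks that conjugating a Jordan-Wigner annihilation operator by FSWAP exchanges the two modes with the correct fermionic sign, while the plain SWAP does not:

```python
import numpy as np

# Fermionic SWAP: swaps two qubits and adds a -1 phase when both are occupied
FSWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Jordan-Wigner annihilation operators for two modes
lower = np.array([[0, 1], [0, 0]], dtype=complex)   # (X + iY)/2
Z = np.diag([1, -1]).astype(complex)
a0 = np.kron(lower, np.eye(2))
a1 = np.kron(Z, lower)

assert np.allclose(FSWAP @ FSWAP.conj().T, np.eye(4))   # unitary
# Conjugating by FSWAP exchanges the two fermionic modes, phase included
assert np.allclose(FSWAP @ a0 @ FSWAP.conj().T, a1)
# The plain SWAP gets the sign on the doubly occupied state wrong
assert not np.allclose(SWAP @ a0 @ SWAP.conj().T, a1)
print("FSWAP exchanges Jordan-Wigner modes with the correct fermionic sign")
```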

Methods

One molecular formula can represent more than one molecular isomer: a set of isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy, plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.
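A one-dimensional sketch of this procedure: for an illustrative double-well potential, the stationary points are the roots of the gradient, and the sign of the second derivative distinguishes the minima from the transition structure between them (hypothetical potential, arbitrary units):

```python
import numpy as np
from scipy.optimize import brentq

# Toy 1D potential energy surface (hypothetical, arbitrary units):
# a double well with two equivalent minima and one transition structure
def E(x):
    return (x**2 - 1.0)**2

def dE(x):                 # gradient: zero at stationary points
    return 4.0 * x * (x**2 - 1.0)

def d2E(x):                # curvature: its sign classifies the stationary point
    return 12.0 * x**2 - 4.0

# Bracket and locate the three stationary points as roots of the gradient
stationary = [brentq(dE, a, b) for a, b in [(-1.5, -0.5), (-0.4, 0.4), (0.5, 1.5)]]
for x in stationary:
    kind = "minimum" if d2E(x) > 0 else "transition structure"
    print(f"x = {x:+.3f}: {kind}, E = {E(x):.3f}")
```

The two minima at x = ±1 are the stable "isomers" of this toy surface, and x = 0 is the transition structure connecting them.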

The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is estimated. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.
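The classification step can be sketched numerically: build a finite-difference Hessian at each stationary point of an illustrative two-dimensional surface and count negative eigenvalues (zero for a minimum, one for a transition structure):

```python
import numpy as np

# Illustrative 2D energy surface: minima at (+-1, 0), saddle at (0, 0)
def E(p):
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def hessian(f, p, h=1e-4):
    # Central finite-difference Hessian
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4*h*h)
    return H

for point in [(1.0, 0.0), (0.0, 0.0)]:
    eig = np.linalg.eigvalsh(hessian(E, point))
    n_neg = int(np.sum(eig < 0))
    kind = {0: "local minimum", 1: "transition structure"}.get(n_neg, "higher-order saddle")
    print(point, "->", kind, "eigenvalues:", np.round(eig, 3))
```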

The total energy is determined by approximate solutions of the time-independent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei. A notable exception is certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants of the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are:

Ab initio methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).

Diagram illustrating various ab initio electronic structure methods in terms of energy. Spacings are not to scale.

The simplest type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, in which the correlated electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (termed post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include relativistic and spin orbit terms, both of which are far more important for heavy atoms. In all of these approaches, along with a choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. Ab initio methods need to define a level of theory (the method) and a basis set.
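With a basis set chosen, the LCAO ansatz turns the problem into a generalized matrix eigenvalue equation of the form FC = SCε. A sketch with hypothetical 2×2 Fock-like and overlap matrices (the numbers are illustrative, not computed integrals):

```python
import numpy as np
from scipy.linalg import eigh

# LCAO sketch: orbital energies from a generalized eigenvalue problem
# F C = S C eps. Both matrices below are hypothetical, for two
# overlapping basis functions.
F = np.array([[-1.0, -0.5],
              [-0.5, -1.0]])      # "Fock-like" matrix (illustrative)
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])        # overlap matrix: the basis is non-orthogonal

eps, C = eigh(F, S)               # orbital energies and MO coefficients
print("orbital energies:", np.round(eps, 4))
# Columns of C are MOs, orthonormal under the overlap metric: C^T S C = I
assert np.allclose(C.T @ S @ C, np.eye(2))
```

The non-orthogonality of atom-centered basis functions is exactly why the overlap matrix S appears; for an orthonormal basis S would be the identity and this would reduce to an ordinary eigenvalue problem.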

The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. Here, the coefficients of the configurations, and of the basis functions, are optimized together.

The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.

A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol (about 4 kJ/mol). To reach that accuracy in an economic way it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.

Semi-empirical methods

Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.

Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian.[67] Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics.[68]
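The Hückel method amounts to diagonalizing the connectivity matrix of the π system in the parameters α and β. A sketch for the butadiene π system, with α = 0 and β = −1 so that energies come out in units of |β|:

```python
import numpy as np

# Simple Hückel treatment of the butadiene pi system (4 carbons in a chain).
alpha, beta = 0.0, -1.0
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
H = alpha * np.eye(4) + beta * adjacency

orbital_energies = np.sort(np.linalg.eigvalsh(H))
print("Huckel orbital energies (units of |beta|):", np.round(orbital_energies, 3))
# The 4 pi electrons doubly occupy the two lowest orbitals
total_pi_energy = 2 * orbital_energies[0] + 2 * orbital_energies[1]
print("total pi energy:", round(total_pi_energy, 3))
```

The occupied orbital energies come out at α + 1.618β and α + 0.618β, the textbook Hückel result for butadiene.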

Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
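A minimal sketch of one such classical term, the harmonic bond stretch E = ½k(r − r0)², with illustrative (not fitted) force-field parameters:

```python
import numpy as np

# Harmonic bond-stretch term of a molecular-mechanics energy expression.
# The parameters are illustrative, not taken from any real force field.
k = 450.0    # force constant, kcal/(mol*A^2)
r0 = 0.96    # equilibrium bond length, Angstrom

def bond_energy(xyz_a, xyz_b):
    # Energy rises quadratically as the bond deviates from r0
    r = np.linalg.norm(np.asarray(xyz_a) - np.asarray(xyz_b))
    return 0.5 * k * (r - r0)**2

print(bond_energy([0, 0, 0], [1.00, 0, 0]))  # stretched by 0.04 A
print(bond_energy([0, 0, 0], [0.96, 0, 0]))  # at equilibrium: zero energy
```

A full force field sums many such terms: bond stretches, angle bends, torsions, and non-bonded interactions.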

The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance, proteins, would be expected to be relevant only when describing other molecules of the same class.

These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.[69][70]

Methods for solids

Computational chemical methods can be applied to solid-state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a single molecule, it is even more time-consuming to calculate it for the entire list of points in the Brillouin zone.

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.

The most popular methods for propagating the wave packet associated with the molecular geometry are:

To better understand the split operator technique, an explanation is provided below.

Split Operator Technique

How a computational method solves quantum equations impacts its accuracy and efficiency. The split operator technique is one such method for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems.[71] Computational cost refers to the time it takes computers to calculate these chemical systems, which can be days for more complex systems. Quantum systems are difficult and time-consuming to solve analytically. The split operator method helps computers calculate these systems quickly by solving the subproblems of a quantum differential equation: the equation is separated into one equation per operator (two equations for two operators, and analogously when there are more). Once solved, the split equations are combined into one equation again to give an easily calculable solution. For example, the exponential of a sum of operators is approximated by a product of exponentials:

e^{t(A + B)} ≈ e^{tA} e^{tB}

This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider the following solution of a differential equation:

y(t) = e^{t(A + B)} y(0)

The equation can be split, but the solutions will not be exact, only similar: when A and B do not commute, the product of exponentials differs from the exponential of the sum. This is an example of first-order splitting:

y(t) ≈ e^{tA} e^{tB} y(0)

There are ways to reduce this error, which include taking an average of two split equations. Using the above example, it can be done like this:

y(t) ≈ (1/2) (e^{tA} e^{tB} + e^{tB} e^{tA}) y(0)

Another way to increase accuracy is to use higher-order splitting, such as the symmetric second-order (Strang) form e^{tA/2} e^{tB} e^{tA/2}. Usually, second-order splitting is the most that is done, because higher-order schemes require much more computation time and become difficult to implement, so their higher accuracy is rarely worth the cost.
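The orders of the splitting errors can be checked numerically with matrix exponentials. For two randomly generated non-commuting symmetric matrices (illustrative, not a physical Hamiltonian), halving the step should shrink the first-order splitting error by about 4× and the symmetric second-order error by about 8×:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # symmetric
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2   # symmetric, [A, B] != 0

def split_errors(t):
    exact = expm(t * (A + B))
    first = np.linalg.norm(exact - expm(t*A) @ expm(t*B))                  # e^{tA} e^{tB}
    strang = np.linalg.norm(exact - expm(t*A/2) @ expm(t*B) @ expm(t*A/2))  # symmetric
    return first, strang

e1a, e2a = split_errors(0.10)
e1b, e2b = split_errors(0.05)
print(f"first-order error ratio (expect ~4): {e1a/e1b:.2f}")
print(f"second-order error ratio (expect ~8): {e2a/e2b:.2f}")
```

The local error of first-order splitting scales as t², and that of the symmetric form as t³, which is exactly what the two ratios reflect.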

Computational chemists spend much time trying to make systems calculated with the split operator technique more accurate while minimizing the computational cost. Finding a middle ground between accuracy and computational feasibility is a major challenge for chemists trying to simulate molecules or chemical environments.

Molecular dynamics

Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics, or a mixture of both to calculate the forces, which are then used to solve Newton's laws of motion and examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of the particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a previous time point, determines the next phase point in time by integrating Newton's laws of motion.
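Newton's laws are commonly integrated with a scheme such as velocity Verlet. A minimal sketch for a single particle in a harmonic potential (unit mass and force constant, illustrative), where good long-time energy conservation is the point of the method:

```python
# Velocity Verlet integration of Newton's equations for a 1D harmonic "bond"
def force(x):
    return -x            # F = -k x with k = 1 (unit force constant)

dt, steps = 0.01, 10000
x, v = 1.0, 0.0          # initial position and velocity
for _ in range(steps):
    a = force(x)
    x = x + v*dt + 0.5*a*dt*dt       # position update
    a_new = force(x)
    v = v + 0.5*(a + a_new)*dt       # velocity update with averaged force

energy = 0.5*v*v + 0.5*x*x           # kinetic + potential
print(f"after {steps*dt:.0f} time units: x = {x:.4f}, energy = {energy:.6f}")
```

The total energy stays very close to its initial value of 0.5 over many oscillation periods, which is why symplectic integrators of this kind dominate MD practice.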

Monte Carlo

Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method that makes use of so-called importance sampling. Importance sampling preferentially generates low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system, together with the values of other properties, can be calculated from the positions of the atoms.[72][73]
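Importance sampling is typically realized with the Metropolis acceptance rule. A minimal sketch sampling one coordinate in a harmonic potential at kT = 1 (illustrative units), where the sampled average ⟨x²⟩ should approach the exact Boltzmann value of 1:

```python
import numpy as np

# Metropolis Monte Carlo for a single particle in E(x) = x^2 / 2 at kT = 1
rng = np.random.default_rng(42)

def E(x):
    return 0.5 * x * x

x, kT, step = 0.0, 1.0, 1.0
samples = []
for _ in range(200_000):
    trial = x + rng.uniform(-step, step)       # random change of position
    # Accept with probability min(1, exp(-dE/kT)): the importance-sampling rule
    if rng.random() < np.exp(-(E(trial) - E(x)) / kT):
        x = trial
    samples.append(x)

mean_x2 = np.mean(np.square(samples[50_000:]))  # discard equilibration
print(f"<x^2> = {mean_x2:.3f} (exact Boltzmann value: 1.0)")
```

Moves that lower the energy are always accepted, while uphill moves are accepted with Boltzmann probability, so low-energy configurations dominate the sample.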

Quantum mechanics/Molecular mechanics (QM/MM)

QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.

Accuracy

Computational chemistry is not an exact description of real-life chemistry, as our mathematical models of the physical laws of nature can only provide us with an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.

Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.

Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules interacting with heavy atoms, such as transition metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).

There is some dispute within the field about whether the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM/MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

Interpreting molecular wave functions

The atoms in molecules (QTAIM) model of Richard Bader was developed to effectively link the quantum mechanical model of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs, and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package.

Software packages

Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:

See also

References

  1. ^ Willems, Henriëtte; De Cesco, Stephane; Svensson, Fredrik (2020-09-24). "Computational Chemistry on a Budget: Supporting Drug Discovery with Limited Resources: Miniperspective". Journal of Medicinal Chemistry. 63 (18): 10158–10169. doi:10.1021/acs.jmedchem.9b02126. ISSN 0022-2623. PMID 32298123. S2CID 215802432.
  2. ^ Smith, S. J.; Sutcliffe, B. T. (1997). "The development of Computational Chemistry in the United Kingdom". Reviews in Computational Chemistry. 10: 271–316.
  3. ^ Schaefer, Henry F. III (1972). The electronic structure of atoms and molecules. Reading, Massachusetts: Addison-Wesley Publishing Co. p. 146.
  4. ^ Boys, S. F.; Cook, G. B.; Reeves, C. M.; Shavitt, I. (1956). "Automatic fundamental calculations of molecular structure". Nature. 178 (2): 1207. Bibcode:1956Natur.178.1207B. doi:10.1038/1781207a0. S2CID 4218995.
  5. ^ Richards, W. G.; Walker, T. E. H.; Hinkley R. K. (1971). A bibliography of ab initio molecular wave functions. Oxford: Clarendon Press.
  6. ^ Preuss, H. (1968). "DasSCF-MO-P(LCGO)-Verfahren und seine Varianten". International Journal of Quantum Chemistry. 2 (5): 651. Bibcode:1968IJQC....2..651P. doi:10.1002/qua.560020506.
  7. ^ Buenker, R. J.; Peyerimhoff, S. D. (1969). "Ab initio SCF calculations for azulene and naphthalene". Chemical Physics Letters. 3 (1): 37. Bibcode:1969CPL.....3...37B. doi:10.1016/0009-2614(69)80014-X.
  8. ^ Schaefer, Henry F. III (1984). Quantum Chemistry. Oxford: Clarendon Press.
  9. ^ Streitwieser, A.; Brauman, J. I.; Coulson, C. A. (1965). Supplementary Tables of Molecular Orbital Calculations. Oxford: Pergamon Press.
  10. ^ Pople, John A.; Beveridge, David L. (1970). Approximate Molecular Orbital Theory. New York: McGraw Hill.
  11. ^ Allinger, Norman (1977). "Conformational analysis. 130. MM2. A hydrocarbon force field utilizing V1 and V2 torsional terms". Journal of the American Chemical Society. 99 (25): 8127–8134. doi:10.1021/ja00467a001.
  12. ^ Fernbach, Sidney; Taub, Abraham Haskell (1970). Computers and Their Role in the Physical Sciences. Routledge. ISBN 978-0-677-14030-8.
  13. ^ "vol 1, preface". Reviews in Computational Chemistry. Vol. 1. 1990. doi:10.1002/9780470125786. ISBN 978-0-470-12578-6.[permanent dead link]
  14. ^ "The Nobel Prize in Chemistry 1998".
  15. ^ "The Nobel Prize in Chemistry 2013" (Press release). Royal Swedish Academy of Sciences. October 9, 2013. Retrieved October 9, 2013.
  16. ^ Musil, Felix; Grisafi, Andrea; Bartók, Albert P.; Ortner, Christoph; Csányi, Gábor; Ceriotti, Michele (2021-08-25). "Physics-Inspired Structural Representations for Molecules and Materials". Chemical Reviews. 121 (16): 9759–9815. doi:10.1021/acs.chemrev.1c00021. ISSN 0009-2665. PMID 34310133.
  17. ^ Elnabawy, Ahmed O.; Rangarajan, Srinivas; Mavrikakis, Manos (2015-08-01). "Computational chemistry for NH3 synthesis, hydrotreating, and NOx reduction: Three topics of special interest to Haldor Topsøe". Journal of Catalysis. Special Issue: The Impact of Haldor Topsøe on Catalysis. 328: 26–35. doi:10.1016/j.jcat.2014.12.018. ISSN 0021-9517.
  18. ^ a b c Patel, Prajay; Wilson, Angela K. (2020-12-01). "Computational chemistry considerations in catalysis: Regioselectivity and metal-ligand dissociation". Catalysis Today. Proceedings of 3rd International Conference on Catalysis and Chemical Engineering. 358: 422–429. doi:10.1016/j.cattod.2020.07.057. ISSN 0920-5861. S2CID 225472601.
  19. ^ a b van Santen, R. A. (1996-05-06). "Computational-chemical advances in heterogeneous catalysis". Journal of Molecular Catalysis A: Chemical. Proceedings of the 8th International Symposium on the Relations between Homogeneous and Heterogeneous Catalysis. 107 (1): 5–12. doi:10.1016/1381-1169(95)00161-1. ISSN 1381-1169. S2CID 59580128.
  20. ^ Tsui, Vickie; Ortwine, Daniel F.; Blaney, Jeffrey M. (2017-03-01). "Enabling drug discovery project decisions with integrated computational chemistry and informatics". Journal of Computer-Aided Molecular Design. 31 (3): 287–291. Bibcode:2017JCAMD..31..287T. doi:10.1007/s10822-016-9988-y. ISSN 1573-4951. PMID 27796615. S2CID 23373414.
  21. ^ van Vlijmen, Herman; Desjarlais, Renee L.; Mirzadegan, Tara (March 2017). "Computational chemistry at Janssen". Journal of Computer-Aided Molecular Design. 31 (3): 267–273. Bibcode:2017JCAMD..31..267V. doi:10.1007/s10822-016-9998-9. ISSN 1573-4951. PMID 27995515. S2CID 207166545.
  22. ^ Ahmad, Imad; Kuznetsov, Aleksey E.; Pirzada, Abdul Saboor; Alsharif, Khalaf F.; Daglia, Maria; Khan, Haroon (2023). "Computational pharmacology and computational chemistry of 4-hydroxyisoleucine: Physicochemical, pharmacokinetic, and DFT-based approaches". Frontiers in Chemistry. 11. Bibcode:2023FrCh...1145974A. doi:10.3389/fchem.2023.1145974. ISSN 2296-2646. PMC 10133580. PMID 37123881.
  23. ^ El-Mageed, H. R. Abd; Mustafa, F. M.; Abdel-Latif, Mahmoud K. (2022-01-02). "Boron nitride nanoclusters, nanoparticles and nanotubes as a drug carrier for isoniazid anti-tuberculosis drug, computational chemistry approaches". Journal of Biomolecular Structure and Dynamics. 40 (1): 226–235. doi:10.1080/07391102.2020.1814871. ISSN 0739-1102. PMID 32870128. S2CID 221403943.
  24. ^ a b c d e Muresan, Sorel; Sitzmann, Markus; Southan, Christopher (2012), Larson, Richard S. (ed.), "Mapping Between Databases of Compounds and Protein Targets", Bioinformatics and Drug Discovery, Methods in Molecular Biology, Totowa, NJ: Humana Press, vol. 910, pp. 145–164, doi:10.1007/978-1-61779-965-5_8, ISBN 978-1-61779-964-8, PMC 7449375, PMID 22821596
  25. ^ Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny (2016-01-04). "BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology". Nucleic Acids Research. 44 (D1): D1045–1053. doi:10.1093/nar/gkv1072. ISSN 1362-4962. PMC 4702793. PMID 26481362.
  26. ^ Zardecki, Christine; Dutta, Shuchismita; Goodsell, David S.; Voigt, Maria; Burley, Stephen K. (2016-03-08). "RCSB Protein Data Bank: A Resource for Chemical, Biochemical, and Structural Explorations of Large and Small Biomolecules". Journal of Chemical Education. 93 (3): 569–575. Bibcode:2016JChEd..93..569Z. doi:10.1021/acs.jchemed.5b00404. ISSN 0021-9584.
  27. ^ Modern electronic structure theory. 1. Advanced series in physical chemistry. Singapore: World Scientific. 1995. ISBN 978-981-02-2987-0.
  28. ^ Adcock, Stewart A.; McCammon, J. Andrew (2006-05-01). "Molecular Dynamics: Survey of Methods for Simulating the Activity of Proteins". Chemical Reviews. 106 (5): 1589–1615. doi:10.1021/cr040426m. ISSN 0009-2665. PMC 2547409. PMID 16683746.
  29. ^ Durrant, Jacob D.; McCammon, J. Andrew (2011-10-28). "Molecular dynamics simulations and drug discovery". BMC Biology. 9 (1): 71. doi:10.1186/1741-7007-9-71. ISSN 1741-7007. PMC 3203851. PMID 22035460.
  30. ^ Stephan, Simon; Horsch, Martin T.; Vrabec, Jadran; Hasse, Hans (2019-07-03). "MolMod – an open access database of force fields for molecular simulations of fluids". Molecular Simulation. 45 (10): 806–814. doi:10.1080/08927022.2019.1601191. ISSN 0892-7022. S2CID 119199372.
  31. ^ Kurzak, J.; Pettitt, B. M. (September 2006). "Fast multipole methods for particle dynamics". Molecular Simulation. 32 (10–11): 775–790. doi:10.1080/08927020600991161. ISSN 0892-7022. PMC 2634295. PMID 19194526.
  32. ^ Giese, Timothy J.; Panteva, Maria T.; Chen, Haoyuan; York, Darrin M. (2015-02-10). "Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance". Journal of Chemical Theory and Computation. 11 (2): 436–450. doi:10.1021/ct5007983. ISSN 1549-9618. PMC 4325605. PMID 25691829.
  33. ^ Groenhof, Gerrit (2013), Monticelli, Luca; Salonen, Emppu (eds.), "Introduction to QM/MM Simulations", Biomolecular Simulations: Methods and Protocols, Methods in Molecular Biology, Totowa, NJ: Humana Press, vol. 924, pp. 43–66, doi:10.1007/978-1-62703-017-5_3, hdl:11858/00-001M-0000-0010-15DF-C, ISBN 978-1-62703-017-5, PMID 23034745
  34. ^ a b Tzeliou, Christina Eleftheria; Mermigki, Markella Aliki; Tzeli, Demeter (January 2022). "Review on the QM/MM Methodologies and Their Application to Metalloproteins". Molecules. 27 (9): 2660. doi:10.3390/molecules27092660. ISSN 1420-3049. PMC 9105939. PMID 35566011.
  35. ^ a b Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2: 5. arXiv:1302.5843. Bibcode:2014FrP.....2....5L. doi:10.3389/fphy.2014.00005. ISSN 2296-424X.
  36. ^ Michaud-Rioux, Vincent; Zhang, Lei; Guo, Hong (2016-02-15). "RESCU: A real space electronic structure method". Journal of Computational Physics. 307: 593–613. arXiv:1509.05746. Bibcode:2016JCoPh.307..593M. doi:10.1016/ ISSN 0021-9991. S2CID 28836129.
  37. ^ Motamarri, Phani; Das, Sambit; Rudraraju, Shiva; Ghosh, Krishnendu; Davydov, Denis; Gavini, Vikram (2020-01-01). "DFT-FE – A massively parallel adaptive finite-element code for large-scale density functional theory calculations". Computer Physics Communications. 246: 106853. arXiv:1903.10959. Bibcode:2020CoPhC.24606853M. doi:10.1016/j.cpc.2019.07.016. ISSN 0010-4655. S2CID 85517990.
  38. ^ a b c d Sengupta, Arkajyoti; Ramabhadran, Raghunath O.; Raghavachari, Krishnan (2016-01-15). "Breaking a bottleneck: Accurate extrapolation to "gold standard" CCSD(T) energies for large open shell organic radicals at reduced computational cost". Journal of Computational Chemistry. 37 (2): 286–295. doi:10.1002/jcc.24050. ISSN 0192-8651. PMID 26280676. S2CID 23011794.
  39. ^ a b Whitfield, James Daniel; Love, Peter John; Aspuru-Guzik, Alán (2013). "Computational complexity in electronic structure". Phys. Chem. Chem. Phys. 15 (2): 397–411. arXiv:1208.3334. Bibcode:2013PCCP...15..397W. doi:10.1039/C2CP42695A. ISSN 1463-9076. PMID 23172634. S2CID 12351374.
  40. ^ Gieres, François (2000). "Mathematical surprises and Dirac's formalism in quantum mechanics". Reports on Progress in Physics. 63 (12): 1893–1931. arXiv:quant-ph/9907069. Bibcode:2000RPPh...63.1893G. doi:10.1088/0034-4885/63/12/201. S2CID 250880658.
  41. ^ Dirac, P. A. M. (1929-04-06). "Quantum mechanics of many-electron systems". Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character. 123 (792): 714–733. Bibcode:1929RSPSA.123..714D. doi:10.1098/rspa.1929.0094. ISSN 0950-1207. S2CID 121992478.
  42. ^ Feynman, Richard P. (2019-06-17). Hey, Tony; Allen, Robin W. (eds.). Feynman Lectures On Computation. Boca Raton: CRC Press. doi:10.1201/9780429500442. ISBN 978-0-429-50044-2. S2CID 53898623.
  43. ^ a b c d Low, Guang Hao; Chuang, Isaac L. (2019-07-12). "Hamiltonian Simulation by Qubitization". Quantum. 3: 163. Bibcode:2019Quant...3..163L. doi:10.22331/q-2019-07-12-163. S2CID 119109921.
  44. ^ Kwon, Hyuk-Yong; Curtin, Gregory M.; Morrow, Zachary; Kelley, C. T.; Jakubikova, Elena (2022). "Adaptive Basis Sets for Practical Quantum Computing". arXiv:2211.06471. {{cite journal}}: Cite journal requires |journal= (help)
  45. ^ Kitaev, Alexei (1996-01-17). Quantum measurements and the Abelian Stabilizer Problem (Report). Electronic Colloquium on Computational Complexity (ECCC).
  46. ^ Abrams, Daniel S.; Lloyd, Seth (1999-12-13). "Quantum Algorithm Providing Exponential Speed Increase for Finding Eigenvalues and Eigenvectors". Physical Review Letters. 83 (24): 5162–5165. arXiv:quant-ph/9807070. Bibcode:1999PhRvL..83.5162A. doi:10.1103/PhysRevLett.83.5162. S2CID 118937256.
  47. ^ a b c d e f Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum computation and quantum information (10th anniversary ed.). Cambridge: Cambridge university press. ISBN 978-1-107-00217-3.
  48. ^ McArdle, Sam; Endo, Suguru; Aspuru-Guzik, Alán; Benjamin, Simon C.; Yuan, Xiao (2020-03-30). "Quantum computational chemistry". Reviews of Modern Physics. 92 (1): 015003. Bibcode:2020RvMP...92a5003M. doi:10.1103/RevModPhys.92.015003. S2CID 119476644.
  49. ^ Du, Jiangfeng; Xu, Nanyang; Peng, Xinhua; Wang, Pengfei; Wu, Sanfeng; Lu, Dawei (2010-01-22). "NMR Implementation of a Molecular Hydrogen Quantum Simulation with Adiabatic State Preparation". Physical Review Letters. 104 (3): 030502. Bibcode:2010PhRvL.104c0502D. doi:10.1103/PhysRevLett.104.030502. ISSN 0031-9007. PMID 20366636.
  50. ^ Lanyon, B. P.; Whitfield, J. D.; Gillett, G. G.; Goggin, M. E.; Almeida, M. P.; Kassal, I.; Biamonte, J. D.; Mohseni, M.; Powell, B. J.; Barbieri, M.; Aspuru-Guzik, A.; White, A. G. (2010). "Towards quantum chemistry on a quantum computer". Nature Chemistry. 2 (2): 106–111. arXiv:0905.0887. Bibcode:2010NatCh...2..106L. doi:10.1038/nchem.483. ISSN 1755-4349. PMID 21124400. S2CID 640752.
  51. ^ Wang, Youle; Zhang, Lei; Yu, Zhan; Wang, Xin (2022). "Quantum Phase Processing and its Applications in Estimating Phase and Entropies". arXiv:2209.14278 [quant-ph].
  52. ^ Sugisaki, Kenji; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji (2022-07-25). "Adiabatic state preparation of correlated wave functions with nonlinear scheduling functions and broken-symmetry wave functions". Communications Chemistry. 5 (1): 84. doi:10.1038/s42004-022-00701-8. ISSN 2399-3669. PMC 9814591. PMID 36698020.
  53. ^ Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O'Brien, Jeremy L. (2014-07-23). "A variational eigenvalue solver on a photonic quantum processor". Nature Communications. 5 (1): 4213. arXiv:1304.3061. Bibcode:2014NatCo...5.4213P. doi:10.1038/ncomms5213. ISSN 2041-1723. PMC 4124861. PMID 25055053.
  55. ^ Chan, Albie; Shi, Zheng; Dellantonio, Luca; Dür, Wolfgang; Muschik, Christine A. (2023). "Hybrid variational quantum eigensolvers: merging computational models". arXiv:2305.19200 [quant-ph].
  56. ^ Liu, Jie; Li, Zhenyu; Yang, Jinlong (2021-06-28). "An efficient adaptive variational quantum solver of the Schrödinger equation based on reduced density matrices". The Journal of Chemical Physics. 154 (24). arXiv:2012.07047. Bibcode:2021JChPh.154x4112L. doi:10.1063/5.0054822. ISSN 0021-9606. PMID 34241330. S2CID 229156865.
  57. ^ Romero, Jonathan; Babbush, Ryan; McClean, Jarrod R; Hempel, Cornelius; Love, Peter J; Aspuru-Guzik, Alán (2018-10-19). "Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz". Quantum Science and Technology. 4 (1): 014008. arXiv:1701.02691. doi:10.1088/2058-9565/aad3e4. ISSN 2058-9565. S2CID 4175437.
  58. ^ McClean, Jarrod R; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán (2016-02-04). "The theory of variational hybrid quantum-classical algorithms". New Journal of Physics. 18 (2): 023023. arXiv:1509.04279. Bibcode:2016NJPh...18b3023M. doi:10.1088/1367-2630/18/2/023023. ISSN 1367-2630. S2CID 92988541.
  59. ^ Bonet-Monroig, Xavier; Babbush, Ryan; O'Brien, Thomas E. (2020-09-22). "Nearly Optimal Measurement Scheduling for Partial Tomography of Quantum States". Physical Review X. 10 (3): 031064. arXiv:1908.05628. Bibcode:2020PhRvX..10c1064B. doi:10.1103/PhysRevX.10.031064. S2CID 199668962.
  60. ^ Grimsley, Harper R.; Barron, George S.; Barnes, Edwin; Economou, Sophia E.; Mayhall, Nicholas J. (2023-03-01). "Adaptive, problem-tailored variational quantum eigensolver mitigates rough parameter landscapes and barren plateaus". npj Quantum Information. 9 (1): 19. arXiv:2204.07179. Bibcode:2023npjQI...9...19G. doi:10.1038/s41534-023-00681-0. ISSN 2056-6387. S2CID 257236255.
  61. ^ a b Jiang, Zhang; Sung, Kevin J.; Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.; Boixo, Sergio (2018-04-26). "Quantum Algorithms to Simulate Many-Body Physics of Correlated Fermions". Physical Review Applied. 9 (4): 044036. Bibcode:2018PhRvP...9d4036J. doi:10.1103/PhysRevApplied.9.044036. ISSN 2331-7019. S2CID 54064506.
  62. ^ Li, Qing-Song; Liu, Huan-Yu; Wang, Qingchun; Wu, Yu-Chun; Guo, Guo-Ping (2022). "A unified framework of transformations based on the Jordan–Wigner transformation". The Journal of Chemical Physics. 157 (13). arXiv:2108.01725. Bibcode:2022JChPh.157m4104L. doi:10.1063/5.0107546. PMID 36209000. S2CID 236912625. Retrieved 2023-11-13.
  63. ^ "Custom Fermionic Codes for Quantum Simulation | Perimeter Institute". Retrieved 2023-11-13.
  64. ^ Kivlichan, Ian D.; McClean, Jarrod; Wiebe, Nathan; Gidney, Craig; Aspuru-Guzik, Alán; Chan, Garnet Kin-Lic; Babbush, Ryan (2018-03-13). "Quantum Simulation of Electronic Structure with Linear Depth and Connectivity". Physical Review Letters. 120 (11): 110501. arXiv:1711.04789. Bibcode:2018PhRvL.120k0501K. doi:10.1103/PhysRevLett.120.110501. PMID 29601758. S2CID 4219888.
  65. ^ Hashim, Akel; Rines, Rich; Omole, Victory; Naik, Ravi K.; Kreikebaum, John Mark; Santiago, David I.; Chong, Frederic T.; Siddiqi, Irfan; Gokhale, Pranav (2021). "Optimized fermionic SWAP networks with equivalent circuit averaging for QAOA". arXiv:2111.04572 [quant-ph].
  66. ^ Rubin, Nicholas C.; Gunst, Klaas; White, Alec; Freitag, Leon; Throssell, Kyle; Chan, Garnet Kin-Lic; Babbush, Ryan; Shiozaki, Toru (2021-10-27). "The Fermionic Quantum Emulator". Quantum. 5: 568. arXiv:2104.13944. Bibcode:2021Quant...5..568R. doi:10.22331/q-2021-10-27-568. S2CID 233443911.
  67. ^ Counts, Richard W. (1987-07-01). "Strategies I". Journal of Computer-Aided Molecular Design. 1 (2): 177–178. Bibcode:1987JCAMD...1..177C. doi:10.1007/bf01676961. ISSN 0920-654X. PMID 3504968. S2CID 40429116.
  68. ^ Dinur, Uri; Hagler, Arnold T. (1991). Lipkowitz, Kenny B.; Boyd, Donald B. (eds.). Reviews in Computational Chemistry. John Wiley & Sons, Inc. pp. 99–164. doi:10.1002/9780470125793.ch4. ISBN 978-0-470-12579-3.
  69. ^ Rubenstein, Lester A.; Zauhar, Randy J.; Lanzara, Richard G. (2006). "Molecular dynamics of a biophysical model for β2-adrenergic and G protein-coupled receptor activation" (PDF). Journal of Molecular Graphics and Modelling. 25 (4): 396–409. doi:10.1016/j.jmgm.2006.02.008. PMID 16574446. Archived (PDF) from the original on 2008-02-27.
  70. ^ Rubenstein, Lester A.; Lanzara, Richard G. (1998). "Activation of G protein-coupled receptors entails cysteine modulation of agonist binding" (PDF). Journal of Molecular Structure: THEOCHEM. 430: 57–71. doi:10.1016/S0166-1280(98)90217-2. Archived (PDF) from the original on 2004-05-30.
  71. ^ Lukassen, Axel Ariaan; Kiehl, Martin (2018-12-15). "Operator splitting for chemical reaction systems with fast chemistry". Journal of Computational and Applied Mathematics. 344: 495–511. doi:10.1016/ ISSN 0377-0427. S2CID 49612142.
  72. ^ Allen, M. P.; Tildesley, D. J. (1987). Computer Simulation of Liquids. Oxford: Clarendon Press. ISBN 0-19-855375-7. OCLC 15132676.
  73. ^ McArdle, Sam; Endo, Suguru; Aspuru-Guzik, Alán; Benjamin, Simon C.; Yuan, Xiao (2020-03-30). "Quantum computational chemistry". Reviews of Modern Physics. 92 (1): 015003. Bibcode:2020RvMP...92a5003M. doi:10.1103/RevModPhys.92.015003. ISSN 0034-6861.

General bibliography

Specialized journals on computational chemistry

External links