Algorithmic composition

Algorithmic composition is the technique of using algorithms to create music.

Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term can be used to describe music-generating techniques that run without ongoing human intervention, for example through the introduction of chance procedures. However, through live coding and other interactive interfaces, a fully human-centric approach to algorithmic composition is possible.[1]

Some algorithms or data that have no immediate musical relevance are used by composers[2] as creative inspiration for their music. Fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have all been used as source materials.

Models for algorithmic composition

Compositional algorithms are usually classified by the specific programming techniques they use. The results of the process can then be divided into 1) music composed by a computer and 2) music composed with the aid of a computer. Music may be considered composed by a computer when the algorithm is able to make choices of its own during the creation process.

Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either 1) provide notational information (sheet music or MIDI) for other instruments to play or 2) synthesize the sound themselves (playing the composition directly). There are also algorithms that create both notational data and sound synthesis.

One way to categorize compositional algorithms is by their structure and the way of processing data, as seen in this model of six partly overlapping types:[3]

  • mathematical models
  • knowledge-based systems
  • grammars
  • evolutionary methods
  • systems which learn
  • hybrid systems

Translational models

This is an approach to music synthesis that involves "translating" information from an existing non-musical medium into a new sound. The translation can be either rule-based or stochastic. For example, when translating a picture into sound, a JPEG image of a horizontal line may be interpreted as a constant pitch, while an upwards-slanted line may become an ascending scale. Oftentimes the software seeks to extract concepts or metaphors from the medium (such as height or sentiment) and apply the extracted information to generate music using the ways music theory typically represents those concepts. Another example is the translation of text into music,[4][5] which can, for instance, extract sentiment (positive or negative) from the text with machine-learning methods such as sentiment analysis and represent it in the generated output through chord quality, for example minor (sad) or major (happy) chords.
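
As a minimal illustration of a rule-based translation, the sketch below (written for this article; the letter-to-pitch mapping and the word lists are invented assumptions, not taken from the cited systems) converts a line of text into MIDI pitches and chooses a major or minor closing chord from a crude sentiment score:

```python
# Minimal rule-based "translation" sketch: text -> pitches + chord quality.
# The mapping rules and word lists are illustrative assumptions, not those
# of any particular published system.

POSITIVE = {"love", "joy", "bright", "happy"}
NEGATIVE = {"loss", "dark", "sad", "alone"}

def text_to_pitches(text, low=60, span=24):
    """Map each letter to a MIDI pitch inside a two-octave window."""
    return [low + (ord(c) - ord('a')) % span for c in text.lower() if c.isalpha()]

def sentiment_chord(text, root=60):
    """Pick a major or minor triad from a crude word-count sentiment score."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    third = 4 if score >= 0 else 3          # major third vs. minor third
    return [root, root + third, root + 7]   # root, third, fifth

if __name__ == "__main__":
    line = "The bright morning brings joy"
    print("melody:", text_to_pitches(line))
    print("closing chord:", sentiment_chord(line))
```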

Mathematical models

Mathematical models are based on mathematical equations and random events. The most common way to create compositions through mathematics is with stochastic processes. In stochastic models a piece of music is composed as a result of non-deterministic methods; the composer controls the process only partially, by weighting the probabilities of random events. Prominent examples of stochastic algorithms are Markov chains and various uses of Gaussian distributions. Stochastic algorithms are often used together with other algorithms in various decision-making processes.
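
A first-order Markov chain over scale degrees is one of the simplest stochastic models; in the sketch below the transition weights are arbitrary illustrative choices, which is precisely where the composer exerts partial control:

```python
import random

# A first-order Markov chain over the degrees of a C-major scale.
# The transition weights are arbitrary illustrative choices: weighting these
# random events is how the composer "partially controls" the process.
TRANSITIONS = {
    "C": {"C": 1, "D": 3, "E": 3, "G": 3},
    "D": {"C": 2, "E": 4, "F": 2},
    "E": {"D": 3, "F": 3, "G": 2},
    "F": {"E": 4, "G": 3, "A": 1},
    "G": {"C": 4, "A": 2, "F": 2},
    "A": {"G": 3, "B": 2, "F": 1},
    "B": {"C": 5, "A": 1},
}

def generate(length=16, start="C", seed=None):
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        choices = TRANSITIONS[note]
        note = rng.choices(list(choices), weights=list(choices.values()))[0]
        melody.append(note)
    return melody

print(" ".join(generate(seed=42)))
```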

Music has also been composed through natural phenomena. These chaotic models create compositions from the harmonic and inharmonic phenomena of nature. For example, since the 1970s, fractals have also been studied as models for algorithmic composition.
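
A common textbook-style illustration of such a chaotic model (the pitch mapping here is an assumption made for the example, not a specific published piece) iterates the logistic map and quantizes each value to a pitch:

```python
# Chaotic model sketch: iterate the logistic map x -> r*x*(1-x) and
# quantize each value to a pitch in a two-octave range (illustrative mapping).
def logistic_melody(r=3.9, x=0.5, steps=32, low=48, span=24):
    pitches = []
    for _ in range(steps):
        x = r * x * (1 - x)
        pitches.append(low + int(x * span))
    return pitches

print(logistic_melody())
```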

As an example of deterministic composition through mathematical models, the On-Line Encyclopedia of Integer Sequences provides an option to play an integer sequence as 12-tone equal temperament music. (By default, each integer is converted to a note on an 88-key musical keyboard by computing the integer modulo 88, at a steady rhythm. Thus the natural numbers 1, 2, 3, … are rendered as an ascending chromatic scale.) As another example, the all-interval series has been used for computer-aided composition.[6]
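
That keyboard mapping is easy to reproduce; the sketch below is a simplified reading of the described default, assuming MIDI note 21 (A0) as the lowest of the 88 keys:

```python
# Map an integer sequence to an 88-key keyboard, as described above:
# each term modulo 88 selects a key; MIDI note 21 (A0) is assumed to be key 0.
def sequence_to_keys(seq, lowest_midi=21, keys=88):
    return [lowest_midi + (n % keys) for n in seq]

naturals = range(1, 13)                 # 1, 2, 3, ... ascend one key at a time,
print(sequence_to_keys(naturals))       # i.e. a rising chromatic line
print(sequence_to_keys([1, 1, 2, 3, 5, 8, 13, 21, 34, 55]))  # Fibonacci numbers
```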

Knowledge-based systems

One way to create compositions is to isolate the aesthetic code of a certain musical genre and use this code to create new, similar compositions. Knowledge-based systems are based on a pre-defined set of rules and constraints that can be used to compose new works of the same style or genre. Usually this is accomplished by a set of tests or rules that must be fulfilled for the composition to be complete.[7]
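
A toy version of such a rule test might look as follows; the rules are loosely counterpoint-flavoured assumptions chosen for illustration, not an actual expert system:

```python
# Knowledge-based sketch: a candidate melody is accepted only if it passes
# every rule in a pre-defined rule set (the rules here are toy assumptions).
def no_big_leaps(melody, limit=7):
    return all(abs(a - b) <= limit for a, b in zip(melody, melody[1:]))

def stays_in_c_major(melody):
    return all(p % 12 in {0, 2, 4, 5, 7, 9, 11} for p in melody)

def ends_on_tonic(melody):
    return melody[-1] % 12 == 0

RULES = [no_big_leaps, stays_in_c_major, ends_on_tonic]

def is_acceptable(melody):
    return all(rule(melody) for rule in RULES)

print(is_acceptable([60, 62, 64, 65, 67, 65, 64, 60]))  # True
print(is_acceptable([60, 61, 75, 66]))                  # False: leap, chromatic notes, no tonic ending
```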

Grammars

Music can also be examined as a language with a distinctive grammar. Compositions are created by first constructing a musical grammar, which is then used to create comprehensible musical pieces. Grammars often include rules for macro-level composition, for instance harmony and rhythm, rather than single notes.
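
As a minimal sketch of the idea, the toy context-free grammar below (an invented example, not a published musical grammar) expands a piece into phrases, bars, and chord functions before any concrete notes are chosen:

```python
import random

# Toy musical grammar: nonterminals expand into macro-level structure
# (phrases, bars, chord functions) before any concrete notes are chosen.
# The production rules are illustrative assumptions.
GRAMMAR = {
    "PIECE":   [["PHRASE", "PHRASE", "CADENCE"]],
    "PHRASE":  [["BAR", "BAR"], ["BAR", "BAR", "BAR", "BAR"]],
    "BAR":     [["I"], ["IV"], ["V"], ["vi"]],
    "CADENCE": [["V", "I"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:            # terminal: a chord function
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [t for s in production for t in expand(s, rng)]

print(" ".join(expand("PIECE", random.Random(1))))
```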

Optimization approaches

When generating well-defined styles, music can be seen as a combinatorial optimization problem, whereby the aim is to find the combination of notes that minimizes an objective function. This objective function typically encodes the rules of a particular style, but it can also be learned using machine learning methods such as Markov models.[8] Researchers have generated music using a variety of optimization methods, including integer programming,[9] variable neighbourhood search,[10] and evolutionary methods, as discussed in the next subsection.
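
A bare-bones example of this framing, using simple local search (hill climbing) and an objective function whose "style rules" are invented stand-ins:

```python
import random

# Optimization sketch: local search over melodies, minimizing an objective
# that counts violations of simple "style rules" (illustrative assumptions).
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]          # C-major pitches

def objective(melody):
    cost = sum(abs(a - b) > 4 for a, b in zip(melody, melody[1:]))  # penalize large leaps
    cost += melody[0] != 60                                         # start on the tonic
    cost += melody[-1] != 60                                        # end on the tonic
    return cost

def local_search(length=12, iterations=2000, seed=0):
    rng = random.Random(seed)
    melody = [rng.choice(SCALE) for _ in range(length)]
    for _ in range(iterations):
        candidate = melody[:]
        candidate[rng.randrange(length)] = rng.choice(SCALE)   # neighbourhood move
        if objective(candidate) <= objective(melody):
            melody = candidate
    return melody

best = local_search()
print(best, "cost:", objective(best))
```

Variable neighbourhood search differs from this plain hill climbing mainly in systematically changing the type of move (the neighbourhood) whenever the search stalls.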

Evolutionary methods

Evolutionary methods of composing music are based on genetic algorithms.[11] The composition is built by means of an evolutionary process: through mutation and natural selection, different solutions evolve towards a suitable musical piece. Iterating the algorithm discards bad solutions and creates new ones from those that survive the process. The results of the process are supervised by a critic, a vital part of the algorithm that controls the quality of the created compositions.
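
The following genetic-algorithm sketch shows these ingredients together; the fitness "critic" here is a toy measure invented for the example:

```python
import random

# Genetic-algorithm sketch: a population of melodies evolves under mutation,
# crossover, selection, and a "critic" (the fitness function, a toy assumption).
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]
rng = random.Random(7)

def critic(melody):
    smooth = sum(abs(a - b) <= 2 for a, b in zip(melody, melody[1:]))
    return smooth + 2 * (melody[-1] == 60)        # reward stepwise motion and a tonic ending

def mutate(melody, rate=0.2):
    return [rng.choice(SCALE) if rng.random() < rate else p for p in melody]

def crossover(a, b):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=40, length=16, generations=200):
    population = [[rng.choice(SCALE) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=critic, reverse=True)
        parents = population[: pop_size // 2]                 # selection: keep the best half
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=critic)

print(evolve())
```

In published systems the critic is often far more elaborate, and in interactive variants it may even be a human listener rating candidate pieces.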

Evo-Devo approach

Evolutionary methods, combined with developmental processes, constitute the evo-devo approach to the generation and optimization of complex structures. These methods have also been applied to music composition, where the musical structure is obtained by an iterative process that transforms a very simple composition (made of a few notes) into a complex, fully fledged piece (be it a score or a MIDI file).[12][13]
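
A minimal developmental sketch in that spirit (the rewriting rule is invented purely for illustration): a few seed notes are repeatedly rewritten, each pass elaborating the previous material.

```python
# Developmental sketch: start from a few notes and grow them iteratively.
# Each pass inserts a passing tone between neighbouring notes (an invented,
# purely illustrative "development" rule).
def develop(seed, passes=3):
    phrase = list(seed)
    for _ in range(passes):
        grown = []
        for a, b in zip(phrase, phrase[1:]):
            grown.append(a)
            grown.append((a + b) // 2)        # passing tone between the two notes
        grown.append(phrase[-1])
        phrase = grown
    return phrase

print(develop([60, 67, 64, 60]))   # 4 seed notes grow into a 25-note line
```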

Systems that learn

Learning systems are programs that have no given knowledge of the genre of music they are working with. Instead, they collect the learning material themselves from examples supplied by the user or programmer. The material is then processed into a piece of music similar to the example material. This method of algorithmic composition is strongly linked to algorithmic modeling of style,[14] machine improvisation, and fields such as cognitive science and the study of neural networks. Assayag and Dubnov[15] proposed a variable-length Markov model to learn motif and phrase continuations of different lengths. Marchini and Purwins[16] presented a system that learns the structure of an audio recording of a rhythmical percussion fragment using unsupervised clustering and variable-length Markov chains, and that synthesizes musical variations from it.
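
A stripped-down, fixed-order version of this idea (a sketch only; the systems cited above use variable-length contexts) learns transition counts from an example melody and samples a continuation from them:

```python
from collections import defaultdict
import random

# Learning sketch: estimate first-order transition counts from example
# material, then sample new material from the learned model. The cited
# systems use variable-length contexts; this fixed-order version is a
# deliberately simplified illustration.
def learn(example):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(example, example[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, length=16, seed=3):
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        nxt = model.get(note) or model[rng.choice(list(model))]   # restart on dead ends
        note = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(note)
    return out

example = [60, 62, 64, 62, 60, 64, 67, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = learn(example)
print(generate(model, start=60))
```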

Hybrid systems

Programs based on a single algorithmic model rarely succeed in creating aesthetically satisfying results. For that reason, algorithms of different types are often used together to combine their strengths and diminish their weaknesses. Creating hybrid systems for music composition has opened up the field of algorithmic composition and also created many new ways to construct compositions algorithmically. The main drawback of hybrid systems is their growing complexity and the resources needed to combine and test the constituent algorithms.[17]

Another approach, which can be called computer-assisted composition, is to algorithmically create certain structures that are then worked out "by hand" into finished compositions. As early as the 1960s, Gottfried Michael Koenig developed the computer programs Project 1 and Project 2 for aleatoric music, whose output was sensibly structured "manually" by means of performance instructions. In the 2000s, Andranik Tangian developed a computer algorithm to determine the time-event structures for rhythmic canons and rhythmic fugues,[18][19] which were then worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II; scores and recordings are available.[20]

References

  1. ^ The Oxford Handbook of Algorithmic Music. Oxford Handbooks. Oxford, New York: Oxford University Press. 2018-02-15. ISBN 9780190226992.
  2. ^ Jacob, Bruce L. (December 1996). "Algorithmic Composition as a Model of Creativity". Organised Sound. 1 (3): 157–165. doi:10.1017/S1355771896000222. hdl:1903/7435. S2CID 15546277.
  3. ^ Papadopoulos, George; Wiggins, Geraint (1999). "AI Methods for Algorithmic Composition: A Survey, a Critical View and Future Prospects" (PDF). Proceedings from the AISB'99 Symposium on Musical Creativity, Edinburgh, Scotland: 110–117.
  4. ^ Davis, Hannah (2014). "Generating Music from Literature". Proceedings of the EACL Workshop on Computational Linguistics for Literature: 1–10. arXiv:1403.2124. Bibcode:2014arXiv1403.2124D. doi:10.3115/v1/W14-0901. S2CID 9028922.
  5. ^ "Generating Music from Text".
  6. ^ Toro, Mauricio; Agon, Carlos; Rueda, Camilo; Assayag, Gerard (2016). "GELISP: A Framework to Represent Musical Constraint Satisfaction Problems and Search Strategies". Journal of Theoretical and Applied Information Technology. 86 (2): 327–331.
  7. ^ Brown, Silas (1997). "Algorithmic Composition and Reductionist Analysis: Can a Machine Compose?". CamNotes. Cambridge University New Music Society. Retrieved 28 October 2016.
  8. ^ Herremans, D.; Weisser, S.; Sörensen, K.; Conklin, D. (2015). "Generating structured music for bagana using quality metrics based on Markov models" (PDF). Expert Systems with Applications. 42 (21): 7424–7435. doi:10.1016/j.eswa.2015.05.043. hdl:10067/1274260151162165141.
  9. ^ Cunha, Nailson dos Santos; Anand Subramanian; Dorien Herremans (2018). "Generating guitar solos by integer programming" (PDF). Journal of the Operational Research Society. 69 (6): 971–985. doi:10.1080/01605682.2017.1390528. S2CID 51888815.
  10. ^ Herremans, D.; Sörensen, K. (2013). "Composing fifth species counterpoint music with a variable neighborhood search algorithm" (PDF). Expert Systems with Applications. 40 (16): 6427–6437. doi:10.1016/j.eswa.2013.05.071.
  11. ^ Fox, Charles (2006). "Genetic Hierarchical Music Structures". American Association for Artificial Intelligence.
  12. ^ Ball, Philip (2012). "Algorithmic Rapture". Nature. 488 (7412): 458. doi:10.1038/488458a.
  13. ^ Fernandez, JD; Vico, F (2013). "AI Methods in Algorithmic Composition: A Comprehensive Survey". Journal of Artificial Intelligence Research. 48: 513–582. arXiv:1402.0585. doi:10.1613/jair.3908.
  14. ^ Dubnov, S.; Assayag, G.; Lartillot, O.; Bejerano, G. (October 2003). "Using Machine-Learning Methods for Musical Style Modeling". IEEE Computer. 36 (10): 73–80. Archived 2017-08-10 at the Wayback Machine.
  15. ^ Assayag, G.; Dubnov, S.; Delerue, O. (1999). "Guessing the Composer's Mind: Applying Universal Prediction to Musical Style". Proceedings of the International Computer Music Conference, Beijing.
  16. ^ Marchini, Marco; Purwins, Hendrik (2011). "Unsupervised Analysis and Generation of Audio Percussion Sequences". Exploring Music Contents. Lecture Notes in Computer Science. Vol. 6684. pp. 205–218. doi:10.1007/978-3-642-23126-1_14. ISBN 978-3-642-23125-4.
  17. ^ Harenberg, Michael (1989). Neue Musik durch neue Technik? : Musikcomputer als qualitative Herausforderung für ein neues Denken in der Musik. Kassel: Bärenreiter. ISBN 3-7618-0941-7. OCLC 21132772.
  18. ^ Tangian, Andranik (2003). "Constructing rhythmic canons" (PDF). Perspectives of New Music. 41 (2): 64–92. Retrieved January 16, 2021.
  19. ^ Tangian, Andranik (2010). "Constructing rhythmic fugues (unpublished addendum to Constructing rhythmic canons)". IRCAM, Seminaire MaMuX, 9 February 2002, Mosaïques et pavages dans la musique (PDF). Retrieved January 16, 2021.
  20. ^ Tangian, Andranik (2002–2003). "Eine kleine Mathmusik I and II". IRCAM, Seminaire MaMuX, 9 February 2002, Mosaïques et pavages dans la musique. Retrieved January 16, 2021.
