Last reviewed 2026-04-15

From CALMUS to Suno: Algorithmic Composition and Rights

A timeline of algorithmic music tools from the 1950s to 2024, and how rights and ownership have shifted as machines moved from assistant to author.

This is informational content, not legal advice. Consult a qualified attorney for guidance specific to your situation.

Introduction

Algorithmic composition is not new. Composers have used rule systems, probability, and mathematical structures to generate music for decades. What changed is the degree of autonomy. Early tools required deep musical knowledge to operate. Current tools require a text prompt.

This shift matters for rights and ownership. When a composer uses a tool to realize their creative vision, authorship is clear. When a user types "upbeat jazz song about rain" and receives a finished track, the authorship question is harder to answer.

This guide traces the history of algorithmic composition tools, from early rule-based systems through neural networks to consumer AI music generators. At each stage, we examine how rights and ownership were understood.

Early Systems: 1950s to 1960s

The roots of algorithmic composition extend to the 18th century (Mozart's Musikalisches Würfelspiel, a dice-based composition game), but the computational era begins in the mid-20th century.

Illiac Suite (1957). Lejaren Hiller and Leonard Isaacson at the University of Illinois programmed the ILLIAC I computer to compose a string quartet using Markov chains and rule-based algorithms. It is generally considered the first computer-composed piece. The composers published the work under their own names; the computer was a tool, and authorship was not questioned.
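
To make the technique concrete, here is a minimal first-order Markov-chain melody sketch in Python. The pitch set and transition weights are invented for illustration; this shows the general method Hiller and Isaacson drew on, not their actual program or probability tables.

```python
# Minimal first-order Markov-chain melody sketch. The pitches and transition
# weights are invented for illustration; they are not the Illiac Suite's tables.
import random

TRANSITIONS = {
    "C4": {"D4": 0.4, "E4": 0.3, "G4": 0.3},
    "D4": {"C4": 0.3, "E4": 0.5, "F4": 0.2},
    "E4": {"D4": 0.4, "F4": 0.3, "G4": 0.3},
    "F4": {"E4": 0.5, "G4": 0.5},
    "G4": {"C4": 0.4, "E4": 0.3, "F4": 0.3},
}

def generate_melody(start="C4", length=16, seed=None):
    """Walk the transition table, choosing each next pitch by weighted chance."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        options = TRANSITIONS[melody[-1]]
        pitches, weights = zip(*options.items())
        melody.append(rng.choices(pitches, weights=weights, k=1)[0])
    return melody

print(" ".join(generate_melody(seed=1957)))
```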

Iannis Xenakis used stochastic mathematics throughout the 1950s and, from the early 1960s, computer programs to generate compositional structures. His works (Metastaseis, Pithoprakta) are attributed to Xenakis. The algorithms implemented his compositional theories. The computer executed; the composer authored.

David Cope and EMI (1981)

David Cope began work on Experiments in Musical Intelligence (EMI, also known as Emmy) in 1981 at the University of California, Santa Cruz. EMI analyzed existing compositions by specific composers, extracted stylistic patterns, and recombined them to generate new works in the style of the original composer.

EMI produced compositions in the style of Bach, Mozart, Chopin, and others. In blind listening tests, audiences sometimes could not distinguish EMI's output from human compositions. Cope published the works under his own name, crediting EMI as a tool.

The rights question with EMI was relatively straightforward. Cope designed the system, selected the training corpus, curated the outputs, and made editorial decisions about which pieces to present. He was the author. The system was his instrument.

Cope eventually destroyed EMI's database in 2004, reportedly due to the controversy surrounding the project. He continued with a successor system called Emily Howell, which generated original compositions (not in the style of existing composers) through interactive feedback loops.

CALMUS: Rule-Based Composition in Iceland (1988)

CALMUS, developed by Kjartan Ólafsson in Iceland beginning in 1988, is a rule-based algorithmic composition system. Unlike EMI's pattern-recombination approach, CALMUS uses music theory rules (counterpoint, voice leading, harmonic progression) to generate compositions from input parameters set by the user.

The user defines constraints: key, time signature, melodic contour, harmonic rhythm, voice count. CALMUS generates compositions within those constraints. The system is a realization engine for compositional decisions that the user specifies.
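
As a toy illustration of that division of labor, the sketch below lets a user fix the key, the melodic contour, and a leap limit, while the program only fills in pitches. It is a hypothetical example, not CALMUS's actual rule set or interface.

```python
# Toy illustration of rule-based realization: the user authors the structural
# decisions (key, contour, leap limit); the program only fills in pitches.
# Hypothetical sketch, not CALMUS's actual rules.
import random

C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def realize(contour, scale=C_MAJOR, max_leap=2, seed=None):
    """Follow a contour of +1 (up), -1 (down), 0 (hold), moving at most
    `max_leap` scale steps at a time -- a stand-in for voice-leading rules."""
    rng = random.Random(seed)
    index = len(scale) // 2                  # start in the middle of the register
    line = [scale[index]]
    for direction in contour:
        step = direction * rng.randint(1, max_leap) if direction else 0
        index = min(max(index + step, 0), len(scale) - 1)
        line.append(scale[index])
    return line

# The user specifies the shape of the phrase; the system realizes it.
print(" ".join(realize([1, 1, -1, 0, 1, -1, -1, 1], seed=1988)))
```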

CALMUS is significant for three reasons. First, it demonstrated that rigorous rule-based systems could produce musically coherent output. Second, it emerged from Iceland's experimental music community, a context where technology and composition have long intersected. Third, it represents the clearest model of "tool-assisted composition" where authorship unambiguously belongs to the human. The user makes every structural decision; the system fills in the details according to established music theory.

From a rights perspective, CALMUS compositions belong to their creators. The system does not introduce external training data or learned patterns from other composers' works. It applies rules. The rights model is identical to using a notation program or a sequencer.

Iamus: Autonomous Composition (2012)

Iamus, developed at the University of Málaga in Spain, is a computer cluster that composes contemporary classical music autonomously. In 2012, it produced "Iamus," an album of compositions performed by the London Symphony Orchestra. It was the first full-length album of computer-composed music performed by a professional orchestra.

Iamus uses a different approach from both EMI and CALMUS. It generates music through evolutionary algorithms that evolve musical fragments according to fitness criteria, without reference to existing human compositions. The system does not learn from or imitate specific composers.
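
The sketch below shows the general shape of such an evolutionary approach: random fragments are mutated and selected against a fitness function, with no training corpus involved. The fitness criteria are invented for illustration and are not those used by the Iamus project.

```python
# Schematic evolutionary-composition sketch: random fragments are mutated and
# selected against a fitness function, with no training corpus. The fitness
# criteria are invented for illustration, not taken from Iamus.
import random

rng = random.Random(2012)
LOW, HIGH = 60, 72                           # MIDI C4..C5

def random_fragment(length=8):
    return [rng.randint(LOW, HIGH) for _ in range(length)]

def fitness(fragment):
    """Reward stepwise motion; penalize large leaps and repeated notes."""
    score = 0
    for a, b in zip(fragment, fragment[1:]):
        interval = abs(a - b)
        if interval == 0:
            score -= 1                       # discourage immediate repetition
        elif interval <= 2:
            score += 2                       # prefer steps
        else:
            score -= 1                       # penalize leaps
    return score

def mutate(fragment, rate=0.25):
    return [min(max(p + rng.choice([-2, -1, 1, 2]), LOW), HIGH)
            if rng.random() < rate else p
            for p in fragment]

def evolve(generations=200, population_size=30):
    population = [random_fragment() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(rng.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(evolve())
```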

The rights situation with Iamus introduced new complexity. The compositions were not created "in the style of" anyone. They were not constrained by user-specified parameters in the way CALMUS operates. The system generated music with minimal human direction. The researchers published the works and made them available, but the authorship question was more ambiguous than with any prior system.

Under current copyright law, Iamus's compositions likely lack copyright protection in most jurisdictions. No human made the specific creative decisions that produced the musical output. The researchers designed the system, but system design is not the same as composing.

Neural Networks Enter Music (2016 to 2019)

Google's Magenta project, launched in 2016, applied deep learning to music generation. Magenta used recurrent neural networks (RNNs) and later transformer architectures to generate melodies, drum patterns, and multi-instrument compositions. The project was open-source and research-oriented.

Magenta's approach differed fundamentally from rule-based systems. Instead of encoding music theory explicitly, neural networks learn statistical patterns from training data. The system ingests thousands of compositions and generates new music that reflects the patterns it absorbed.
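
For contrast with the rule-based sketches above, here is a deliberately tiny next-note predictor in PyTorch. It illustrates the learning objective this family of systems optimizes (predict the next event from what came before), not Magenta's actual architecture, scale, or data; the training corpus here is a random stand-in.

```python
# Tiny next-note predictor: learn P(next note | current note) from a corpus.
# Illustrates the statistical-learning objective only; the "corpus" is random
# stand-in data, and this is nothing like Magenta's real models.
import torch
import torch.nn as nn

torch.manual_seed(2016)
VOCAB = 13                                   # 12 pitch classes plus a rest token
corpus = torch.randint(0, VOCAB, (512,))     # stand-in for real training melodies
inputs, targets = corpus[:-1], corpus[1:]    # predict each note from the one before

model = nn.Sequential(
    nn.Embedding(VOCAB, 16),                 # learned representation of each note
    nn.Linear(16, VOCAB),                    # scores for the next note
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generation: start from a note and repeatedly sample from the predicted distribution.
note = torch.tensor([0])
generated = [0]
for _ in range(16):
    probs = torch.softmax(model(note), dim=-1)
    note = torch.multinomial(probs, 1).squeeze(0)
    generated.append(int(note))
print(generated)
```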

This data-driven approach introduced the training data question that now dominates AI music law. When a neural network generates music that reflects patterns learned from copyrighted compositions, is the output a derivative work? Magenta used public-domain and licensed datasets, partially sidestepping this issue. But the architecture it pioneered led directly to the commercial tools that would face legal challenges.

OpenAI released MuseNet in 2019, a deep neural network capable of generating 4-minute compositions across 10 instrument types and multiple genres. MuseNet demonstrated that transformer models (the same architecture behind GPT) could produce coherent long-form music. OpenAI did not release MuseNet as a commercial product and did not fully disclose its training data.

These research projects established the technical foundation. The rights implications were discussed in academic contexts but had not yet reached courts or legislatures.

Consumer AI Music Tools (2023 to 2024)

The consumer AI music era began in 2023 and 2024 with the launches of Suno, Udio, and Stable Audio.

Suno (2024) generates complete songs with vocals, instrumentation, and lyrics from text prompts. The user types a description; the system produces a finished track. Generation takes under 90 seconds. Suno has not disclosed its training data. Major record labels, coordinated by the RIAA, filed suit in June 2024, alleging that Suno trained on copyrighted recordings.

Udio (2024) offers similar capabilities with particular strength in audio fidelity and genre accuracy. It also features audio inpainting for editing specific sections of generated tracks. Udio faces a parallel label lawsuit and the same training data questions.

Stable Audio (Stability AI, 2024) generates instrumental music and sound design elements. Unlike Suno and Udio, Stable Audio disclosed that its training data comes from a licensing agreement with AudioSparx. This transparency is a differentiator, though Stability AI's financial instability introduces a different category of risk.

The rights shift in this generation is fundamental. In prior eras, algorithmic composition tools required musical knowledge to operate. CALMUS required understanding of counterpoint. EMI required curatorial judgment. Even Magenta required technical skill to deploy and configure.

Consumer AI tools require only a text prompt. The human contribution is reduced to describing a desired outcome. This challenges authorship claims in ways that prior tools did not.

Comparison: Tool, Approach, and Rights Model

Tool: Illiac Suite. Year: 1957. Approach: Markov chains, rule-based. Output quality: Experimental (string quartet). Rights model: Human composer credited. Commercial use: Academic/published.

Tool: EMI (David Cope). Year: 1981. Approach: Pattern analysis and recombination. Output quality: Stylistically convincing (classical). Rights model: Cope as author, EMI as tool. Commercial use: Published recordings and scores.

Tool: CALMUS (Kjartan Ólafsson). Year: 1988. Approach: Rule-based (music theory). Output quality: Musically coherent, constrained. Rights model: User owns output unambiguously. Commercial use: Yes, by user.

Tool: Iamus. Year: 2012. Approach: Evolutionary algorithms. Output quality: Professional (LSO-performed). Rights model: Ambiguous, likely uncopyrightable. Commercial use: Released as album.

Tool: Google Magenta. Year: 2016. Approach: Deep learning (RNN, transformer). Output quality: Variable (research-grade). Rights model: Open-source outputs. Commercial use: Research, open tools.

Tool: OpenAI MuseNet. Year: 2019. Approach: Transformer neural network. Output quality: Coherent 4-minute compositions. Rights model: Not commercially released. Commercial use: Demo only.

Tool: Suno. Year: 2024. Approach: Neural network (proprietary). Output quality: Consumer-ready songs with vocals. Rights model: Paid users own outputs (per ToS), training data disputed. Commercial use: Yes on paid plans.

Tool: Udio. Year: 2024. Approach: Neural network (proprietary). Output quality: High-fidelity songs with vocals. Rights model: Paid users own outputs (per ToS), training data disputed. Commercial use: Yes on paid plans.

Tool: Stable Audio. Year: 2024. Approach: Latent diffusion. Output quality: Strong instrumentals, no vocals. Rights model: Licensed training data, paid users own outputs. Commercial use: Yes on Professional plan.

The Authorship Spectrum

The history reveals a spectrum of authorship, from clear to contested.

At one end: CALMUS and similar rule-based tools. The human specifies the musical parameters; the tool executes. Authorship is unambiguous. The tool is an instrument.

In the middle: EMI and Iamus. The human designs the system and curates the output, but the specific musical decisions are made by the algorithm. Authorship depends on how much weight you give to system design versus composition of specific notes.

At the other end: Suno, Udio, and similar prompt-to-song generators. The human provides a text description. The system makes all compositional decisions: melody, harmony, rhythm, arrangement, lyrics, vocals. The human selects from outputs. Whether selection constitutes authorship is the core legal question.

The key insight: there is no bright line. The transition from "tool" to "author" happened gradually across decades. Courts and legislatures are now forced to draw a line that technology blurred incrementally.

Current copyright frameworks generally require "human authorship" involving creative choices. The US Copyright Office has used this standard to deny registration of purely AI-generated works while allowing registration of works with sufficient human creative input. The question is where "sufficient" begins.

Sources

Hiller, L. and Isaacson, L. (1959). Experimental Music: Composition with an Electronic Computer. McGraw-Hill.

Cope, D. (1991). Computers and Musical Style. A-R Editions.

Cope, D. (2001). Virtual Music: Computer Synthesis of Musical Style. MIT Press.

Ólafsson, K. CALMUS system documentation: https://www.calmus.net/

Iamus project: https://geb.uma.es/iamus/

Google Magenta: https://magenta.tensorflow.org/

OpenAI MuseNet: https://openai.com/index/musenet/

Suno: https://suno.com/

Udio: https://udio.com/

Stable Audio: https://stableaudio.com/

US Copyright Office, Copyright Registration Guidance for AI-Generated Works: https://www.copyright.gov/ai/

Record label lawsuits against Suno and Udio, coordinated by the RIAA (June 2024): https://www.riaa.com/

The Bottom Line

Algorithmic composition has existed for nearly 70 years. For most of that history, the tools required musical expertise to operate, and authorship was not contested. The human composed; the machine assisted.

The shift from rule-based systems (CALMUS, EMI) to trained neural networks (Magenta, MuseNet) to consumer generators (Suno, Udio) compressed the human role from composer to prompter. Each generation of tools reduced the musical knowledge required to produce output and increased the creative autonomy of the machine.

Rights frameworks have not kept pace. Copyright law still assumes a human author. PROs still require registered works to have human composers. Streaming platforms are developing AI policies in real time. Courts have cases pending but no definitive rulings.

For creators, the practical takeaway is this: the more your process resembles composition (writing melodies, structuring arrangements, making iterative creative decisions), the stronger your authorship claim. The more it resembles ordering from a menu (typing a prompt, selecting from options), the weaker your claim becomes. The tools in between, those that require musical input and return enhanced output, occupy the safest legal ground.
