This project explores the intersection of large language models (LLMs) and unconventional interfaces, focusing on music generation. The idea was to test whether LLMs can produce musical compositions by generating structured output that can be rendered with Tone.js. The resulting snippets are intentionally short and repetitive, designed to work as loops while still carrying some musical interest.
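As a minimal sketch of how this rendering step might work: suppose the model returns a small JSON object describing a loop, which the browser then plays with Tone.js. The schema here (`bpm`, `subdivision`, `notes`) and the `playLoop` helper are illustrative assumptions, not the project's actual format; the Tone.js calls (`Tone.start`, `Tone.Synth`, `Tone.Sequence`, `Tone.Transport`) are standard library API.

```javascript
import * as Tone from "tone";

// Hypothetical structured output from the model; the actual
// schema used by the project may differ.
const llmResponse = {
  bpm: 100,
  subdivision: "8n",
  notes: ["C4", "E4", "G4", "B4", "A4", "G4", "E4", "D4"],
};

async function playLoop(response) {
  // Browsers require a user gesture before audio can start.
  await Tone.start();

  const synth = new Tone.Synth().toDestination();

  // A Sequence steps through the notes at the given subdivision
  // and loops by default, producing a short repetitive snippet.
  const seq = new Tone.Sequence(
    (time, note) => {
      synth.triggerAttackRelease(note, response.subdivision, time);
    },
    response.notes,
    response.subdivision
  ).start(0);

  Tone.Transport.bpm.value = response.bpm;
  Tone.Transport.start();
  return seq;
}

// Typically wired to a user action, e.g. a button's click handler:
// playButton.addEventListener("click", () => playLoop(llmResponse));
```

Because `Tone.Sequence` loops by default, the model only needs to emit one bar's worth of notes; the library handles repetition and timing.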
While the compositions may not reach professional levels of musical sophistication, they represent a fascinating experiment in AI-driven creativity. Tone.js and similar music libraries provide a robust framework for translating text-based model output into sound, allowing models to 'speak' through music. This project highlights how such libraries can open new modalities for AI interaction, even when the models themselves are not specialized in music.