AudioCipher is the first DAW plugin to offer a text-to-MIDI converter for creative inspiration. Users type in words and transform them into melodies and chord progressions. Take control of parameters like key signature, scale, chord extensions, and rhythm automation. Leverage the randomization features to audition unlimited variations. Find a starting point and build from there!
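To make the text-to-note idea concrete, here is a minimal sketch of how a letter-to-melody cipher can work. The mapping below is a hypothetical illustration in the spirit of the historical ciphers, not AudioCipher's actual algorithm: each letter of the input is wrapped around the notes of a chosen scale.

```python
# Illustrative letter-to-note cipher (hypothetical mapping, not
# AudioCipher's real algorithm): wrap the alphabet around a scale.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def text_to_notes(text, scale=C_MAJOR):
    """Map each letter to a scale degree by cycling the alphabet
    through the scale; non-letters are skipped."""
    notes = []
    for ch in text.lower():
        if ch.isalpha():
            degree = (ord(ch) - ord("a")) % len(scale)
            notes.append(scale[degree])
    return notes

print(text_to_notes("bach"))  # → ['D', 'C', 'E', 'C']
```

Swapping in a different scale list (A minor, a mode, or the chromatic scale) changes the character of the output melody without changing the cipher logic.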
> Innovation #1: Built a JUCE plugin from a lost musical code, popular among European composers 100 years ago

AudioCipher's algorithm was inspired by a century-old musical code from France, known as the French musical cryptogram. Composers used this cipher to convert their names into short musical phrases, which they then used as motifs in their songs. We created an app that performs these same transformations instantly, as a source of creative inspiration for producers and composers.

> Innovation #2: Expanded the original cipher to cover 12 notes, nine scales, and six chord extensions

The original cipher featured only one scale: A minor. We extended this basic model across the twelve root notes, the seven diatonic modes, harmonic minor, and the 12-note chromatic scale. We also introduced chord extensions and rhythm automation, with randomization features to create unlimited variations.

> Innovation #3: Positioning ourselves for an emergent artificial intelligence revolution (text-to-music APIs)

In the past three months, three major AI text-to-music services have emerged: Google MusicLM, Riffusion, and Mubert. I believe this indicates that a wave of music generators will soon arrive, supporting text prompts the way ChatGPT and Midjourney do. There is currently a plugin called Neutone that acts as a host for multiple AI music APIs, but those models perform tone transfer only, not MIDI generation; Neutone offers no text-to-music functionality. AudioCipher is positioning itself to play that role in the near future.

We're creating a user interface for something that doesn't exist yet: a set of text-to-music APIs linked to neural networks that generate MIDI and audio on command. I believe these APIs will become available within one to five years. Our app will act as the interface, collecting the user's text prompt along with key signature, chord, and rhythm information.
When the user hits a "generate" button, AudioCipher will send that packet to a neural network in the cloud and retrieve a musical segment to drag and drop onto a track in their DAW. While we wait for these APIs to become available, our team is working on other important functionality that's not dependent on AI. V4 will have a save button and a separate MIDI library plugin to help producers keep their ideas organized.
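The generate flow described above could be sketched as a single request packet. Since these cloud APIs do not exist yet, the endpoint, field names, and structure below are assumptions for illustration only:

```python
# Hypothetical sketch of the packet AudioCipher might send when the
# user hits "generate". All field names here are illustrative
# assumptions; no real text-to-music API is being called.
import json

def build_generate_request(prompt, key, scale, chord_extension, rhythm):
    """Bundle the user's text prompt with musical parameters into one
    packet, ready to send to a cloud music-generation service."""
    return {
        "prompt": prompt,
        "key": key,
        "scale": scale,
        "chord_extension": chord_extension,
        "rhythm": rhythm,
        "output": "midi",  # the DAW expects a draggable MIDI segment
    }

packet = build_generate_request("dreamy sunset", "A", "minor", "7th", "auto")
print(json.dumps(packet))
```

The returned MIDI segment would then be dragged and dropped onto a track in the DAW, as described above.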