google-research.github.io/seanet/musiclm/examples
We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
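The abstract frames generation as a hierarchical sequence-to-sequence task: a coarse token stage conditioned on the text, followed by a fine token stage that a neural codec decodes to audio. Below is a minimal PyTorch sketch of that two-stage idea only; the `TokenStage` class, the vocabulary sizes, and the random stand-in text embedding are all hypothetical placeholders, not MusicLM's actual modules.

```python
# A minimal sketch of hierarchical text-conditioned generation, assuming
# hypothetical components throughout -- TokenStage, the vocab sizes, and the
# random "text embedding" are placeholders, not MusicLM's actual modules.
import torch
import torch.nn as nn


class TokenStage(nn.Module):
    """Autoregressive decoder over a discrete token vocabulary that
    cross-attends to a conditioning sequence."""

    def __init__(self, vocab: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)

    @torch.no_grad()
    def generate(self, cond: torch.Tensor, steps: int) -> torch.Tensor:
        # Greedy sampling, seeded with token 0 as a stand-in BOS symbol.
        tokens = torch.zeros(cond.size(0), 1, dtype=torch.long)
        for _ in range(steps):
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.decoder(self.embed(tokens), cond, tgt_mask=mask)
            next_tok = self.head(h[:, -1]).argmax(-1, keepdim=True)
            tokens = torch.cat([tokens, next_tok], dim=1)
        return tokens[:, 1:]


# Stage 1: text conditioning -> coarse "semantic" tokens (long-range structure).
# Stage 2: semantic tokens -> fine "acoustic" codec tokens (local detail).
semantic_stage = TokenStage(vocab=1024)
acoustic_stage = TokenStage(vocab=4096)

text_embedding = torch.randn(1, 16, 256)  # stand-in for a real text encoder
semantic = semantic_stage.generate(text_embedding, steps=8)
acoustic = acoustic_stage.generate(semantic_stage.embed(semantic), steps=32)
# A neural audio codec (not shown) would decode `acoustic` to a 24 kHz waveform.
print(semantic.shape, acoustic.shape)
```

The split mirrors the paper's high-level claim rather than its implementation: the coarse stage only has to stay coherent over long horizons, while the fine stage only has to fill in local acoustic detail.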
Not perfect by any means, but it's impressive how smooth the generation is, especially if you're only a casual listener who doesn't know what to look for.
Who will be the first rapper to hop on an AI-generated beat?
We need AI mixing and mastering now.
Wasn't Jai Paul working on something like this a long time ago, where you could upload stems and it would auto-master/mix them? He used "Jasmine" as a proof of concept.
What if he's just tryna steal ideas?
Isn't Landr that too (mastering)?
Wasn't familiar with this, but I googled it and yeah, it seems to be. Not surprised.