Possible to train on your own music? #11
Comments
Hi, for tag annotation you can try an NN-based tagging model, e.g. https://github.com/minzwon/sota-music-tagging-models. Since most tagging models share similar training data, there is no need to worry about out-of-domain issues. For model fine-tuning, you can use a smaller learning rate, e.g. 1e-5, for 2-3 epochs with the train.py file :)
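For anyone new to fine-tuning, the "smaller lr, 2-3 epochs" advice looks roughly like this in a generic PyTorch loop. This is not the repo's train.py; the model and dataloader helpers are placeholders:

```python
import torch
from torch.optim import AdamW

# Placeholders: load the pretrained checkpoint and your dataloader however the repo does it.
model = load_pretrained_model()      # hypothetical helper
train_loader = build_dataloader()    # hypothetical helper

# Fine-tuning: small learning rate (1e-5) for only a few epochs.
optimizer = AdamW(model.parameters(), lr=1e-5)

model.train()
for epoch in range(3):
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model(**batch)        # assumes the model returns its training loss
        loss.backward()
        optimizer.step()
```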
I'm just getting into testing the training, but so far my workflow looks like this. I am still running the captioning conversions; I had a couple of issues because the captioning is based on 10-second clips from the audio, so it might entirely miss a vocalist or something. The workaround for now was to take 3-5 10-second samples from around the middle of the audio, combine those 3-5 captions together, and then run the result through a summarizer built on T5-base, which seems to work for now. If it all works well, I can probably put it up, as it should work fine on Windows or Linux, and I can throw together a docker compose file for it quickly.

This new CSV will be converted to a .parquet file and should then be good to train with the prebuilt scripts in this repo. I haven't gotten to actually testing the training yet; the captioning and summarizing takes roughly 5-7 seconds per song. I'll update once I do. I will also be testing the repo feizc linked for captioning, as it may well be much better than the one I am using.

EDIT: I also found I got better results when I added the tag_list (genres) field directly to the end of the caption. The captions all get summarized, which strips a lot of data out of the 3-5 captions, but appending the tags helped greatly with preserving the genre/style of the music in the caption, which obviously is important for the training. An easy example would be a metal song: the caption would originally call it aggressive rock or metal, but adding the tag_list to the caption before summarization might result in it more correctly calling it progressive metal or death metal. This should give more specificity when recreating the style during inference.
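A rough sketch of that clip-caption-then-summarize step is below. The caption_clip() call stands in for whatever captioning model you use, and t5-base via the transformers summarization pipeline is just an assumption of how the summarizing could be wired up:

```python
from pydub import AudioSegment
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-base")

def caption_song(path, tag_list, n_clips=4, clip_s=10):
    song = AudioSegment.from_file(path)
    mid = len(song) // 2                               # pydub lengths are in milliseconds
    captions = []
    for i in range(n_clips):
        # Take 10-second clips clustered around the middle of the song.
        start = mid + (i - n_clips // 2) * clip_s * 1000
        clip = song[start:start + clip_s * 1000]
        clip.export("clip.wav", format="wav")
        captions.append(caption_clip("clip.wav"))      # placeholder for the captioning model
    # Append the genre tags before summarizing so they survive the compression.
    text = " ".join(captions) + " Genres: " + ", ".join(tag_list)
    return summarizer(text, max_length=96, min_length=32)[0]["summary_text"]
```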
OK. So it appears you don't really NEED the parquet file, as it is only used to generate a JSON to train from. It might not be a bad idea with regard to storing or sharing datasets, however. I have been testing on Windows currently; once I get some more concrete numbers I will move it to a GPU Linux box sized for the required VRAM. I have been testing against Giant, which is the worst case for VRAM usage as it is by far the largest model. Using a smaller model will lower the VRAM required, but I am not sure by how much, as I have only just got it working.

It may also be possible to reduce VRAM usage by changing how the models are loaded, such as loading t5_xxl in int8/int4 instead of bf16, though this will always come with some level of reduced quality. If the quality drop is acceptable you can go with that, but it will require a good bit of testing. If you are familiar with image generation with Flux, this is how you can have much, much smaller models and still get very good quality out of them: the much smaller model may give 80-85% of the default model's quality at a quarter of the size and VRAM requirements. It may also be possible to load the different models onto different GPUs to help avoid the VRAM <-> system RAM swap, which I may fool around with, as I currently have a system with some GPUs that do not have enough VRAM to run the whole system but could each run a component. I have zero idea how to do this yet, but I will look into how it was done with the base Flux model and T5.

If you are testing training, the training output is a bit of a mess. The easiest way to check progress is the log, which by default logs every 100 steps. You must also set --ckpt-every unless you want to use the default of 100,000; this controls how often it writes a usable model and resume point. It was necessary for me to lower it since I am testing on a machine with insufficient VRAM, so hitting 100,000 steps would take a long time. Current results:
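For the int8 idea, the transformers/bitsandbytes route looks roughly like this. The checkpoint name is a placeholder and the repo may load its text encoder differently, so treat this as a sketch only:

```python
from transformers import T5EncoderModel, BitsAndBytesConfig

# Load the T5 encoder in 8-bit to cut VRAM (some quality loss is possible).
quant_cfg = BitsAndBytesConfig(load_in_8bit=True)
t5 = T5EncoderModel.from_pretrained(
    "google/flan-t5-xxl",          # placeholder checkpoint name, not necessarily what the repo uses
    quantization_config=quant_cfg,
    device_map="auto",             # also lets accelerate spread layers across multiple GPUs
)
```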
@eftSharptooth Hi, thank you for your comments! How many songs did you train with, and what could you control? Did the resulting finetune have the timbre of the singer, and what kind of control did you have over the output when using your finetuned version?
I have not had any success with completing a training run yet, as it crashes when trying to save the model. Most likely this is due to running it on Windows (I had to change the nccl backend to gloo), and it spikes VRAM while compiling for the save. I am hoping to be able to use a 48GB card to test the training on Linux once that card frees up. If anyone has completed any training so far, could you please note down what OS, model size and card (VRAM) you used? It would likely help people going forward with experiments.
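For reference, the nccl-to-gloo change is essentially a one-liner wherever the repo initializes distributed training, assuming it calls torch.distributed directly:

```python
import sys
import torch.distributed as dist

# NCCL is not supported on Windows, so fall back to gloo there.
backend = "gloo" if sys.platform == "win32" else "nccl"
dist.init_process_group(backend=backend)
```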
@tensimixt Also, to better answer your question: the custom dataset was about 12000 full-length songs after trimming out anything where the captioning didn't come out with a minimal level of descriptiveness. Those could be done manually, but I just wanted something I could run a test with. The next move is likely to chop those songs up into segments, then recaption those as beginning of song, middle of song, end of song (or intro, song, outro), add that into the captions, and see if I can complete a training that way. Having much shorter segments to train on will significantly reduce the VRAM requirements and allow me to train a (very slow to train) model on a 24GB card. I just want to do some tests to see how it all works before having to rent an 8xA100 or 8xA6000 system, as I expect that would be required to train a Large or Giant model in a reasonable timeframe.
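The chopping itself is straightforward with pydub; a rough sketch, where the segment length and the position labels are just what I plan to try rather than anything the repo requires:

```python
from pydub import AudioSegment

def split_song(path, segment_s=30):
    song = AudioSegment.from_file(path)
    n = len(song) // (segment_s * 1000)          # pydub works in milliseconds
    for i in range(n):
        seg = song[i * segment_s * 1000:(i + 1) * segment_s * 1000]
        # Label the position so it can be folded into the segment's caption later.
        position = "intro" if i == 0 else ("outro" if i == n - 1 else "middle")
        seg.export(f"{path}.{i:03d}.{position}.wav", format="wav")
```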
@eftSharptooth I would like to try training this on a large dataset like the one you mentioned. Where did you find this?
Which specific dataset are you guys using to train? @eftSharptooth
Sorry for the slow response! The dataset I have been testing with is just my local music; I used full songs and chopped-up segments, and captioned them all with the same music captioning model mentioned in the repo code. I then used a (free) program called mp3tag to push all the mp3 metadata to a CSV, merged the corresponding captions in, and used the repo to generate a custom_dataset.json file which you can train with. I had no luck with the full songs, as only a 24GB card was available for testing and it wasn't enough VRAM. I will be trying again soon with the chopped-up dataset.

NOTE: I think the CSV from mp3tag is only required if you are trying to add more info to the captions. I added artist, genre etc. to the caption so that the model could learn the styles as well, to give better control over the output. Like I said though, no real success with the full-length songs yet on a 24GB card.

I also downloaded the following datasets for testing, and since they were referenced in the repo they let me make sure I was conforming to the required formats when creating my own dataset, though that isn't really necessary anymore, as I posted the relevant details somewhere here in this github:
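The merge step is just a join on filename; a rough pandas sketch, where the column names are assumptions that depend on how you exported from mp3tag and how your captioner writes its CSV:

```python
import pandas as pd

# mp3tag export (artist, genre, title, ...) keyed by filename,
# plus the captions CSV produced by the captioning step.
meta = pd.read_csv("mp3tag_export.csv")   # assumed columns: filename, artist, genre, ...
caps = pd.read_csv("captions.csv")        # assumed columns: filename, caption

df = meta.merge(caps, on="filename")
# Fold the metadata into the caption so the model can learn artist/genre/style.
df["caption"] = df["caption"] + " Artist: " + df["artist"] + ". Genre: " + df["genre"] + "."
df.to_csv("merged_dataset.csv", index=False)
```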
The JSON file for the custom dataset is pretty simple; the most complicated part (for custom datasets) is for sure putting better info into the captions. An example of the JSON for a custom_dataset is as follows:
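Roughly, each entry looks like the following, shown here as the Python that writes it. The field names are illustrative; check the repo's dataset-loading code for the exact keys it expects:

```python
import json

# Illustrative entry shape; key names are a guess, not the repo's required schema.
entries = [
    {
        "audio_path": "music/some_song.mp3",
        "caption": (
            "An energetic song with driving electric guitars, pounding drums and "
            "a gritty male vocal. "
            # Everything from here on came from the mp3tag export.
            "Artist: Some Band. Genre: Progressive Metal. Year: 2004."
        ),
    },
]

with open("custom_dataset.json", "w") as f:
    json.dump(entries, f, indent=2)
```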
Pretty much all the stuff at the end is added from the mp3tag data. The captions themselves were tougher, as the captioning model is really only good for (I think) 20 or 30 second clips, so I had it sample the beginning, middle and end, then stuck the captions together and summarized them with a fairly basic LLM. Then I tacked the mp3tag info on the end and called it a day. Captioning 12000 songs (and putting it all together into the custom dataset) took a couple of days, but I have a python script somewhere that helped with automating it; I'll find it and link it here when I do.
Any instructions on what would be required to tag the data and train this to generate music in your style?