API: how to send audio (input) to VST? #9
I downloaded a free compressor plugin and started modifying your example of the Dexed synth to use the compressor, and I could query all the parameters and their names, but then...

...I've been all through the code and docs, and I still can't figure it out: How does one send audio into the VST plugin?

I see several "get" routines in the source for RenderEngine... but for a plugin like an echo or compressor, how do I "put"?

Thanks!

(little screenshot of how far I got, LOL)

Comments
I see the problem: with the current VST you are using, you want to send buffers of audio in and get altered FX audio out, rather than triggering a MIDI synth to subsequently generate audio frames.

This VST host was designed in my undergraduate dissertation to host a synth and create one-shot sounds from various synthesiser patches. The aim was to 'learn' an automatic VST synthesiser programmer by training a neural network to map between MFCC features (derived from the sound) and the parameters used to make that sound.

Although it's been a year since I looked at the source, I suspect the code would need to be modified around lines 121-122 of RenderEngine.cpp, where the audioBuffer is passed to the plugin by reference:

// Turn Midi to audio via the vst.
plugin->processBlock (audioBuffer, midiNoteBuffer);

I have about 20 days left on my thesis, and as I said, I will be reviving RenderMan for a creative ML course I'll be doing. Until then my hands are tied! Let me know if I can point you in the right direction, and if not, I'll add this to the list of features to be implemented. Thanks for your perseverance!
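To make that concrete, here is a minimal sketch of what an audio-input variant of that call might look like, written in plain JUCE rather than RenderMan's actual API. The helper name processAudioThroughPlugin is made up for illustration, and the bus-layout and locking details that RenderEngine handles are omitted; the point is just that the buffer arrives pre-filled with input samples and the MIDI buffer stays empty.

#include <juce_audio_processors/juce_audio_processors.h>

// Hypothetical helper, not part of RenderMan: pushes existing audio
// through an FX plugin block by block. `plugin` must already be loaded,
// and `input` must hold source audio with the channel count the plugin
// expects; the buffer is processed in place.
void processAudioThroughPlugin (juce::AudioPluginInstance& plugin,
                                juce::AudioBuffer<float>& input,
                                double sampleRate, int blockSize)
{
    plugin.prepareToPlay (sampleRate, blockSize);

    juce::MidiBuffer emptyMidi;  // FX plugins ignore MIDI, so this stays empty
    juce::AudioBuffer<float> block (input.getNumChannels(), blockSize);

    // Any trailing partial block is skipped to keep the sketch short.
    for (int start = 0; start + blockSize <= input.getNumSamples(); start += blockSize)
    {
        // Copy the next slice of input into the working buffer...
        for (int ch = 0; ch < input.getNumChannels(); ++ch)
            block.copyFrom (ch, 0, input, ch, start, blockSize);

        // ...let the plugin replace it with processed audio...
        plugin.processBlock (block, emptyMidi);

        // ...and write the processed samples back into `input`.
        for (int ch = 0; ch < input.getNumChannels(); ++ch)
            input.copyFrom (ch, start, block, ch, 0, blockSize);
    }

    plugin.releaseResources();
}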
In that case, I have a language suggestion: change references to "VST host" to "VSTi host." Thanks for pointing out the place to start in the code. If I can convince a couple of local JUCE experts to help, maybe we can add audio and send a PR. How 'bout we leave this issue open, and maybe someone else in the world will contribute!

Aside: good luck with your thesis! Sounds interesting. I'm working on deep learning as well, only with audio. And I'll be in London in late June for a couple of AI conferences. I'd love to visit Goldsmiths while I'm around. I took Rebecca Fiebrink's online course recently and loved it.
Apologies if I wasted your time - not my intention - and the references to VST have been changed, so thanks for pointing that out. Rebecca is well worth meeting if you get the chance - one of the standout lecturers for me by far!

Contributions are very welcome, but the potential to train neural networks for automatic mixing / FX is an enticing one, so I'll see what can be done in the coming months. I should add that the VSTi programming project has already been accepted to IEEE as a paper. My dissertation this year is focused on neural audio synthesis at high sample rates and in real-time usage! :)
Hi Scott,

I had the same excitement until I figured out that the renderPatch function applies only to virtual instruments. Also, librenderman has a feature-extraction process that is a little bit more complex than my needs. Those reasons encouraged me to build a very simple audio-to-audio VST host interface, which you can check out at https://github.com/igorgad/dpm under the vstRender folder.

I also decided to replace the Boost interface with SWIG. The bad news is that it is still not working due to problems with the SWIG interface ;/
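For illustration only, the SWIG side of such a binding might be a small interface file along these lines; the module and header names here are hypothetical, not taken from the dpm repo.

/* vstrender.i: hypothetical SWIG interface wrapping a C++ audio-to-audio
   VST host class. Generate the Python module with:
       swig -c++ -python vstrender.i */
%module vstrender

%{
#include "vstRender.h"   // the C++ host header being wrapped (assumed name)
%}

%include "std_vector.i"
namespace std {
    %template(FloatVector) vector<float>;   // lets Python pass float lists as audio
}

%include "vstRender.h"   // re-read the header so SWIG exposes its public API

Whether something like this resolves the problems described would depend on how vstRender's buffer types are declared, but it shows the shape of the Boost-to-SWIG swap.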
@fedden No worries; I'd just been wanting an audio-to-audio Python VST host for a while. That's great about your paper being accepted!

@igorgad Great to hear about your project. I pulled your repo, built it, and will see if I can help. Currently I'm getting an error that seems unrelated to SWIG. I'll send you an Issue...
@drscotthawley did you find a way to handle audio->VST from Python?
@faroit It's been a while, but yes, we had something working once: check out @igorgad's "dpm" repo, e.g. https://github.com/igorgad/dpm/blob/master/contrib/run_plugin.py. I keep meaning to come back to this, but there are so many other things to work on! Let me know if this helps and/or if you make progress with it.