feat: save waveform data to file in json format #114
Conversation
Commits:
- fix: use waveform from file when not extracted
- fix: file was not read because of wrong if
- fix: do data validation in `useAudioRecorder` + write waveform to file
- chore: add readme for new prop + use `rn-fs` instead of `rn-fetch-blob`
Please advise on how to move forward!
@dprevost-LMI My initial plan is that if we can improve the speed of waveform extraction on Android, we should explore that route first. As a last resort, we can consider something like this, but it increases the complexity of managing resources on the user's device. If we go down this path, we need to clean this waveform data periodically, and we need to add methods to delete a single entry, all the data, or the provided paths. This increases the chances of breaking code and introduces many more cases to cover before we can make it production-ready enough that fellow developers can rely on it in their production projects.
@kuldip-simform I agree. I did that; I requested help in that issue: #84. In short, I was able to speed up reading. I even found what may be the original source of the Android code and asked the owner if he had any idea how to make it faster, but so far I am still waiting. See stefanJi/Android-Audio-Waveform-Generator#2
@dprevost-LMI As mentioned in that issue, MediaCodec itself is slow at processing, so I am thinking of exploring ffmpeg for waveform extraction. I have not used ffmpeg before, so I need to look into it. If you have experience with ffmpeg, feel free to try it to speed up extraction.
We did, when we tried this lib with our app, and found some other issues. I might explore that! Let's see how much time I have 😄
Sorry to chime in out of nowhere. I was facing issues with Opus encoding in a .m4a container with MediaCodec and MediaExtractor (got some crashes), so I rewrote the extractor. The app doesn't crash anymore, but processing the waves is still slow once the audio is longer than about a minute. I still think that saving the amplitudes to generate the waves statically is probably faster on re-render. Or at least accept amplitudes as input so I don't have to run MediaExtractor whatsoever (using FFmpeg on the backend to generate them instead of on the app, which could cause OOM issues).
Thanks for chiming in. After some thinking (@kuldip-simform):
I agree that it does; however, I do not see such a thing for the recording itself, so I interpret that as a nice-to-have rather than a hard requirement. Moreover, my implementation gives flexibility in where files go and what they are named, so library users can manage them if they want; and it is off by default, so there is no pollution unless it is explicitly used.
I agree that this is a path to explore, but it has more chances of breaking the current code and causing instability until we have a fully proven and functional implementation. The current one is slow but solid, so it is worth providing this alternative. Even with the FFmpeg library, we would still need a way to avoid extracting the waveform live every time we render an audio file: if you think of a Facebook-style conversation that you scroll through, you could trigger hundreds, if not thousands, of waveform extractions!
Could you provide either a PR or more details about that in a new issue? That way, we could try to merge it into that repo and share it with everyone!
That's a good idea; I could revisit my implementation to accept an array as input and provide a "FileProvider" so the feature works with local files or server data!
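To make that idea concrete, a minimal sketch of what a precomputed-amplitudes entry point could look like; every name below is hypothetical, and nothing here exists in the library today:

```ts
// Hypothetical sketch only: lets callers bypass MediaExtractor entirely by
// supplying amplitudes directly or via a "FileProvider"-style loader.
type WaveformSource =
  | { kind: 'audio'; path: string } // current behavior: extract live
  | { kind: 'amplitudes'; data: number[] } // precomputed, e.g. by a backend
  | { kind: 'provider'; load: () => Promise<number[]> }; // local file or server data
```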
As pointed out in this issue, loading audio files on Android can take quite some time.
Therefore, while waiting for a fix to the Android performance issue, and given that there are more and more audio files to read as they pile up, it would be better to save the waveform to a file and read it back instead of re-extracting it from the audio every time.
One obvious scenario is opening and scrolling through multiple text conversations that contain plenty of audio files.
This PR provides this capability: we save the extracted waveform of the audio file to the provided `waveFormPath`. Then, on the next load, we read the JSON data from that file instead of extracting the data live from the audio.

In `example/App.tsx`, I provided a usage example where we use the item file name and save/read the waveform from the `Cache` folder, with `_wf.json` appended to the file name.

Unfortunately, I also wanted to save the recorded waveform, but its format differs from the one read from file, so I left that out.

I had to integrate `react-native-fs` for this; if needed, I can switch to `rn-fetch-blob`, or I could provide handlers and move this read-from/write-to-file code into `App.tsx`.
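For reference, a minimal sketch of the `example/App.tsx` usage described above. The `Waveform` component and its `mode`/`path` props are the library's existing API; `waveFormPath` is the prop this PR adds, and the cache-path construction is my own illustration:

```tsx
import React from 'react';
import RNFS from 'react-native-fs';
import { Waveform } from '@simform_solutions/react-native-audio-waveform';

const AudioListItem = ({ path }: { path: string }) => {
  // Derive the waveform cache file from the audio file name, with
  // `_wf.json` appended, e.g. ".../Caches/voice-note.mp3_wf.json".
  const fileName = path.split('/').pop() ?? 'audio';
  const waveFormPath = `${RNFS.CachesDirectoryPath}/${fileName}_wf.json`;

  return (
    <Waveform
      mode="static"
      path={path}
      // New in this PR: persist the extracted waveform here so the next
      // load reads the JSON instead of re-extracting from the audio.
      waveFormPath={waveFormPath}
    />
  );
};

export default AudioListItem;
```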
Highlights:
- A new prop for the `static` mode named `waveFormPath` that contains the path of the file where to save the extracted waveform data.
- `useAudioRecorder#stopRecording` now returns a formatted object based on `IStoppedAudioRecordingData` instead of two unknown values in an array (a rough shape is sketched below).

Bonuses:
- Refactored `onChangeWaveformLoadState` to use a default implementation and have leaner code.
- Refactored `stopRecordingAction`, `pauseRecordingAction`, and `resumeRecordingAction`, and removed all the unnecessary `Promise.resolve` & `Promise.reject` calls bloating the code.
- Extracted the recording logic from `Waveform.tsx` and moved it into `useAudioRecorder.tsx` to have a better separation of concerns and leaner code in the component.
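Since `IStoppedAudioRecordingData` isn't shown in this thread, here is a rough sketch of the shapes and flow described above, assuming `react-native-fs`; the interface fields are inferred from the description, and `extractWaveformFromAudio` is a hypothetical stand-in for the library's native extraction call:

```ts
import RNFS from 'react-native-fs';

// Assumed shape only: the PR says `stopRecording` now returns a formatted
// object instead of two unknown values in an array.
interface IStoppedAudioRecordingData {
  audioFilePath: string; // where the recording was written
  waveformData: number[]; // amplitudes captured while recording
}

// Hypothetical stand-in for the library's native waveform extraction call.
declare function extractWaveformFromAudio(audioPath: string): Promise<number[][]>;

// Read the cached waveform JSON if it exists; otherwise extract it from the
// audio file and persist it for the next load, as this PR describes.
async function loadWaveform(
  audioPath: string,
  waveFormPath: string
): Promise<number[][]> {
  if (await RNFS.exists(waveFormPath)) {
    return JSON.parse(await RNFS.readFile(waveFormPath, 'utf8'));
  }
  const waveform = await extractWaveformFromAudio(audioPath);
  await RNFS.writeFile(waveFormPath, JSON.stringify(waveform), 'utf8');
  return waveform;
}
```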