Hello ElevenLabs Team,

I am trying to develop a text-to-speech Discord bot. To achieve shorter latencies I have tried streaming the voice generation, but I'm seeing nearly identical latencies with both settings.
I am using v0.5.0.
With streaming:
```js
const audioStream = await elevenlabs.generate({
  stream: true,
  voice: "Josh",
  text: text,
  model_id: "eleven_multilingual_v2",
  optimize_streaming_latency: 2,
  voice_settings: {
    stability: 0.5,
    similarity_boost: 0.8,
    style: 0.0,
    use_speaker_boost: true,
  },
});

/* Output times (in milliseconds):
   2143
   2148
   2142
   3678 */
```
Without streaming:
```js
const audioStream = await elevenlabs.generate({
  stream: false,
  voice: "Josh",
  text: text,
  model_id: "eleven_multilingual_v2",
  optimize_streaming_latency: 2,
  voice_settings: {
    stability: 0.5,
    similarity_boost: 0.8,
    style: 0.0,
    use_speaker_boost: true,
  },
});

/* Output times (in milliseconds):
   2145
   2222
   2241
   2268 */
```
Is this normal, or is there an error in my code? Is it expected for streaming mode to have latencies similar or identical to non-streaming mode under these settings? I have also tried the direct API streaming endpoint (a POST request, per https://elevenlabs.io/docs/api-reference/streaming) and got similar results.
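For the raw streaming endpoint, the latency that streaming improves is time-to-first-byte, not the time to download the whole response. A minimal sketch of measuring it, assuming a `fetch`-style `ReadableStream` body (the endpoint URL and `xi-api-key` header follow the API docs linked above; `VOICE_ID` and the key are placeholders):

```javascript
// Read only the first chunk of a ReadableStream and report how long it took.
async function timeToFirstByte(readable) {
  const start = Date.now();
  const reader = readable.getReader();
  const { value, done } = await reader.read(); // resolves on the first audio chunk
  await reader.cancel();                       // stop downloading the rest
  return { ms: Date.now() - start, gotData: !done && value.length > 0 };
}

// Sketch of using it against the streaming endpoint (requires a real key):
// const res = await fetch(
//   `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}/stream?optimize_streaming_latency=2`,
//   {
//     method: "POST",
//     headers: {
//       "xi-api-key": process.env.ELEVENLABS_API_KEY,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
//   }
// );
// console.log(await timeToFirstByte(res.body));
```

If this first-chunk number is much lower than the ~2100 ms totals above, streaming is working and the measurement was the issue.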
I appreciate any guidance or insights you can provide!
Kaanayden changed the title from "Streaming results similar response times with standard request" to "Similar Latencies in Text-to-Speech for Streaming and Standard Requests" on May 26, 2024.
Something like this:

```js
const before = Date.now();
let firstChunk = false;
for await (const chunk of audioStream) {
  if (!firstChunk) {
    // Time to first chunk — the latency streaming is meant to reduce.
    console.log(Date.now() - before);
    firstChunk = true;
  }
}
```
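A self-contained version of that measurement, with a fake chunk stream standing in for `audioStream` so the timing logic runs without an API call (the delays are made up for illustration); it shows why first-chunk time and total time diverge under streaming:

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Fake audio stream: the first chunk arrives quickly, the rest trickle in.
async function* fakeAudioStream() {
  await sleep(50); // time to first chunk
  yield Buffer.from("chunk0");
  for (let i = 1; i < 4; i++) {
    await sleep(100); // remaining chunks
    yield Buffer.from(`chunk${i}`);
  }
}

// Record both time-to-first-chunk and total consumption time.
async function measure(stream) {
  const before = Date.now();
  let firstChunkMs = null;
  for await (const chunk of stream) {
    if (firstChunkMs === null) firstChunkMs = Date.now() - before;
  }
  return { firstChunkMs, totalMs: Date.now() - before };
}

// measure(fakeAudioStream()).then(console.log);
// firstChunkMs (~50 ms here) is what streaming improves;
// totalMs (~350 ms here) stays close to a non-streaming request.
```

Timing `await elevenlabs.generate(...)` alone can look identical in both modes, since that only measures when the stream object (or full buffer) is handed back, not when audio starts arriving.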