
NAudio: Capturing Microphone audio and saving to a C# MemoryStream as WAVE format - Stack Overflow


I am trying to write code in C# using the NAudio library. The idea is to record audio from the microphone and send it to a cloud API. The API method receives a MemoryStream, and I can successfully make it work when loading a recorded file into a new MemoryStream:

var recResult = speechToText.Recognize(
    audio: new MemoryStream(File.ReadAllBytes("audio-file.wav")),
    model: "pt-BR_Multimedia",
    contentType: "audio/wav");

What I am trying to do now is avoid saving the WAV file and instead send the MemoryStream directly from the recording process.

As a (very) beginner C# dev, I am working from the great samples in the NAudio library, and as an attempt I tried to save the recorded bytes directly to a MemoryStream, as below:

writer is a WaveFileWriter object; memStream is a MemoryStream object

void OnDataAvailable(object sender, WaveInEventArgs e)
{
    if (InvokeRequired)
    {
        //Debug.WriteLine("Data Available");
        BeginInvoke(new EventHandler<WaveInEventArgs>(OnDataAvailable), sender, e);
    }
    else
    {
        //this is my new MemoryStream object
        memStream.Write(e.Buffer, 0, e.BytesRecorded);

        //this is the regular working writer object
        writer.Write(e.Buffer, 0, e.BytesRecorded);

        int secondsRecorded = (int)(writer.Length / writer.WaveFormat.AverageBytesPerSecond);
        if (secondsRecorded >= 5)
        {
            StopRecording();
        }
        else
        {
            progressBar1.Value = secondsRecorded;
        }
    }
}

While this compiles and runs fine, the MemoryStream isn't accepted by the API, which results in a runtime error (bad request).

I can only guess that my MemoryStream doesn't really contain data in the WAVE format.

Any suggestion on how I can make sure my MemoryStream represents valid WAV audio content?

Thank you, Márcio


asked Mar 15 at 17:34 by Marcio Correa; edited Mar 17 at 13:41
  • You never told us what API you're using, but perhaps it expects an actual .WAV file with appropriate headers, and not just raw WAVE data? Try wrapping the stream in a WaveFileWriter and write to that only, not to the stream directly. writer = new WaveFileWriter(memStream, new WaveFormat(44100, 16, 1)); - Change the sampling rate and number of channels as needed. – Visual Vincent Commented Mar 16 at 8:19
  • @VisualVincent, your suggestion worked perfectly. Thank you very much! Yes, the API is expecting WAVE content, not just a raw byte[], and that is what I was trying to do: wrap the raw bytes as WAVE. The writer option, saving to a MemoryStream, is what I needed. writerMem = new WaveFileWriter(memStream, new WaveFormat(44100, 16, 1)); then... writerMem.Write(e.Buffer, 0, e.BytesRecorded); – Marcio Correa Commented Mar 16 at 14:02
  • Btw. the API is IBM SpeechToText: cloud.ibm/apidocs/speech-to-text?code=dotnet-standard – Marcio Correa Commented Mar 16 at 14:04

1 Answer


The answer/solution to this question is to use a WaveFileWriter and save the bytes directly to a MemoryStream rather than to a file. The WaveFileWriter wraps the raw byte[] data as a formatted WAVE stream. Change the sample rate and channel count as needed:

writerMem = new WaveFileWriter(memStream, new WaveFormat(44100, 16, 1));

Write the recorded bytes to the WaveFileWriter:

writerMem.Write(e.Buffer, 0, e.BytesRecorded);
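One extra step is worth noting here; it is not spelled out in the original answer, so treat the specifics as assumptions about NAudio's behavior. The RIFF/WAVE header that WaveFileWriter writes only gets its final length fields when the writer is flushed or disposed, and disposing the writer also closes the underlying stream by default. So, once recording has stopped and before calling the API, flush the writer and rewind the stream:

writerMem.Flush();        // recent NAudio versions update the RIFF header lengths on Flush
memStream.Position = 0;   // rewind so the API reads the stream from the beginning

Alternatively, construct the writer over new NAudio.Utils.IgnoreDisposeStream(memStream) and dispose it when recording stops; IgnoreDisposeStream stops the writer from closing the MemoryStream.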

Call the API, passing the MemoryStream directly:

var recResult = speechToText.Recognize(
    audio: memStream,
    model: "pt-BR_Multimedia",
    contentType: "audio/wav");

This way, the API accepts the MemoryStream and identifies the WAVE content within it.
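For completeness, here is a minimal end-to-end sketch of the whole flow assembled from the snippets above. It is not the asker's exact code: it uses WaveInEvent instead of the WinForms WaveIn/Invoke pattern from the question, the class and method names are made up for illustration, and the IBM SpeechToTextService client is assumed to be created and authenticated elsewhere.

using System.IO;
using IBM.Watson.SpeechToText.v1;
using NAudio.Utils;
using NAudio.Wave;

class MicToMemoryStream
{
    // Assumed to be an already-authenticated IBM Watson client, as in the question.
    SpeechToTextService speechToText;

    WaveInEvent waveIn;
    MemoryStream memStream;
    WaveFileWriter writerMem;

    public void StartFiveSecondRecording()
    {
        memStream = new MemoryStream();
        waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };

        // IgnoreDisposeStream keeps memStream open when the writer is disposed,
        // so the finished WAV data can still be read afterwards.
        writerMem = new WaveFileWriter(new IgnoreDisposeStream(memStream), waveIn.WaveFormat);

        waveIn.DataAvailable += (s, e) =>
        {
            writerMem.Write(e.Buffer, 0, e.BytesRecorded);

            // Stop after roughly five seconds of audio, as in the question.
            if (writerMem.Length >= 5 * writerMem.WaveFormat.AverageBytesPerSecond)
                waveIn.StopRecording();
        };

        waveIn.RecordingStopped += (s, e) =>
        {
            writerMem.Dispose();      // finalizes the RIFF/WAVE header
            waveIn.Dispose();
            memStream.Position = 0;   // rewind so the API reads from the start

            var recResult = speechToText.Recognize(
                audio: memStream,
                model: "pt-BR_Multimedia",
                contentType: "audio/wav");
        };

        waveIn.StartRecording();
    }
}

Disposing the WaveFileWriter is what writes the final chunk sizes into the header; wrapping the MemoryStream in IgnoreDisposeStream keeps it readable after that point.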
