javascript - Is there a way to pipe the stdio stream with the output segment ts files during m3u8 from mp4 conversion in ffmpeg

I am writing an automated pipeline in Node.js that takes an MP4 file, compresses it, returns the compressed file as a buffer, and feeds that buffer into a second FFmpeg command, which converts the MP4 to .m3u8 for HLS.

The problem I’m having is that I cannot pipe the segment data stream so that each .ts file is uploaded to the cloud as it is produced, instead of being written to disk (which is not an option, because this runs in a cloud function).

My options seem to be either letting the segments be created as files in the working directory, which the manifest file is then based on, or producing one large buffer in which the binary segment data and the manifest data are intertwined. I can make all of this work when I am allowed to write the files to disk and store them locally, but, as I said, this is for a cloud function.

The command I’m using for the conversion is:

  const { spawn } = require("child_process");

  const ffmpeg = spawn(
    "ffmpeg",
    [
      "-i", "pipe:0",                  // read the input MP4 from stdin
      "-codec", "copy",                // copy streams without re-encoding
      "-start_number", "0",            // index segments from 0
      "-hls_time", "10",               // target segment duration (seconds)
      "-hls_list_size", "0",           // keep every segment in the playlist
      "-f", "hls",                     // HLS muxer
      "-hls_flags", "delete_segments", // delete segments that leave the playlist
      "pipe:1",                        // write the playlist to stdout
    ],
    { stdio: ["pipe", "pipe", "pipe", "pipe"] }
  );

I write the input buffer to stdin.
The only way I have found to avoid generating physical .ts files without hitting an EPIPE error (from trying to pipe the segment outputs to stdout) is to pass the delete_segments flag. I then intercept the stream on stdout, which gives me one large buffer containing all of the binary segment data and the manifest data (the latter is UTF-8 encoded, so I can parse it out).
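
For reference, a minimal sketch of that stdin/stdout wiring, assuming `inputBuffer` holds the compressed MP4 from the first pass:

  ffmpeg.stdin.write(inputBuffer); // the compressed MP4 from the first pass
  ffmpeg.stdin.end();

  const chunks = [];
  ffmpeg.stdout.on("data", (chunk) => chunks.push(chunk));
  ffmpeg.stderr.on("data", (line) => console.error(line.toString())); // FFmpeg logs to stderr
  ffmpeg.on("close", () => {
    // One large buffer: binary segment data with UTF-8 manifest text intertwined.
    const output = Buffer.concat(chunks);
  });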

Even when I parse that buffer apart, upload each segment as a blob, and then insert the resulting download URL from the cloud into the manifest wherever the corresponding .ts file is referenced, I cannot make it work.

Please ask any clarifying questions.

Update: I think I found a good solution, one that also aligns with a comment under this post. Here it is:

Instead of spawning multiple child processes in Node, I spawn just one FFmpeg process that carries all of the compression settings and HLS conversion flags. This removes the need to hold the intermediate buffer in memory and write it to the stdin of a second FFmpeg process. I also manually set the video bitrate and the maximum buffer size, and encode the audio. I experimented with adaptive bitrate streaming but found it unnecessary for my use case; I can always opt in to it later. While FFmpeg is running, I use a file watcher (chokidar) to fetch, upload, and delete each .ts file as it appears, which minimizes memory usage in a cloud environment. I store all of the download URLs in a map keyed by segment name, then parse the manifest file and replace each segment name with its respective download URL. If anyone wants to see my solution or the specific FFmpeg command, please let me know.
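
Since the post doesn't include the final command or code, here is a rough sketch of the watcher and manifest-rewrite steps described above; `uploadSegment`, the paths, and the timing values are assumptions, not the author's actual code:

  const chokidar = require("chokidar");
  const fs = require("fs/promises");
  const path = require("path");

  const outDir = "/tmp/hls";      // cloud functions can typically write to /tmp
  const segmentUrls = new Map();  // segment filename -> download URL

  const watcher = chokidar.watch(`${outDir}/*.ts`, {
    // wait until FFmpeg has finished writing a segment before touching it
    awaitWriteFinish: { stabilityThreshold: 500, pollInterval: 100 },
  });

  watcher.on("add", async (filePath) => {
    const name = path.basename(filePath);
    const url = await uploadSegment(name, await fs.readFile(filePath)); // hypothetical uploader
    segmentUrls.set(name, url);
    await fs.unlink(filePath); // delete immediately to keep /tmp usage low
  });

  // After FFmpeg exits: swap each segment reference in the manifest for its URL.
  async function rewriteManifest(manifestPath) {
    const manifest = await fs.readFile(manifestPath, "utf8");
    const rewritten = manifest
      .split("\n")
      .map((line) => segmentUrls.get(line.trim()) ?? line)
      .join("\n");
    await fs.writeFile(manifestPath, rewritten);
  }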

  • "...and then upload the buffer as a blob" Can you share a testable link to a copy of your uploaded TS file? – VC.One Commented Feb 14 at 11:00

1 Answer

No, this isn't possible, at least not over stdio. The segment files are written to the filesystem.

You can use named pipes if you want.
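
As a minimal illustration of the named-pipe idea (POSIX only; the path and arguments are assumptions): FFmpeg opens a FIFO like a regular output file, so Node can read the stream from it without anything landing on disk. This works for single-output formats such as raw MPEG-TS; segmented HLS produces many files, which is why a watched directory is usually simpler.

  const { execSync, spawn } = require("child_process");
  const fs = require("fs");

  const fifo = "/tmp/out.ts";
  execSync(`mkfifo ${fifo}`); // create the named pipe

  // Open the reader first so FFmpeg's open() on the FIFO doesn't block forever.
  fs.createReadStream(fifo).on("data", (chunk) => {
    // stream chunks straight to an upload instead of to disk
  });

  spawn("ffmpeg", ["-i", "input.mp4", "-codec", "copy", "-f", "mpegts", fifo]);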

In my applications, I simply have FFmpeg write everything to a specific temp directory, and watch that directory for new files. When a new file comes in, I upload it and then delete the temp file.
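
That pattern might look roughly like this (a sketch; `uploadToCloud` and the directory are placeholders, and a production version should confirm a segment is fully written before reading it, e.g. with chokidar's awaitWriteFinish):

  const fs = require("fs");
  const path = require("path");

  const tmpDir = "/tmp/hls"; // the temp directory FFmpeg writes into

  fs.watch(tmpDir, async (event, filename) => {
    if (!filename || !filename.endsWith(".ts")) return;
    const full = path.join(tmpDir, filename);
    if (!fs.existsSync(full)) return; // "rename" events also fire on deletion
    await uploadToCloud(full);        // hypothetical upload helper
    await fs.promises.unlink(full);   // remove the temp file after upload
  });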
