Show HN: FFmpeg-over-IP – Connect to remote FFmpeg servers

github.com

144 points by steelbrain 2 days ago

Dear HN,

I’m excited to showcase a personal project. It has helped me quite a bit with my home lab, and I hope it can help you with yours too! ffmpeg-over-ip has two components, a server and a client. You run the server in an environment with access to a GPU and a locally installed ffmpeg; the client only needs network access to the server, with no GPU or ffmpeg locally.

Both the client and the server need a shared filesystem for this to work (so the server can write output to it, and the client can read from it). In my usecase, smb works well if your (GPU) server is a windows machine, and nfs works really well for linux setups.
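
For the linux case, a minimal NFS setup looks something like this (paths and subnet are examples, not something ffmpeg-over-ip requires):

  # on the server: add to /etc/exports, then run `exportfs -ra`
  /srv/media 192.168.1.0/24(rw,sync,no_subtree_check)

  # on the client
  sudo mount -t nfs server:/srv/media /mnt/media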

This utility can be useful in a number of scenarios:

- You find passing through a (v)GPU to your virtual machines complicated

- You want to use the same GPU for ffmpeg in multiple virtual machines

- Your server has a weak GPU so you want to use the GPU from your gaming machine

- Your GPU drivers in one OS are not as good as another (AMD RX6400 never worked for me in linux, but did so in windows)

I’ve posted some instructions in the GitHub README; please let me know if they are unclear in any way and I’ll try to help!

Here's the link: https://github.com/steelbrain/ffmpeg-over-ip

NavinF 2 days ago

> need a shared filesystem for this to work

Oh oof. I thought removing that requirement would be the whole point of something named "FFmpeg-over-IP". A shared filesystem usually involves full trust between machines, bad network error handling, and setting things up by hand (different config on every distro).

  • steelbrain 2 days ago

    I hear you. If your usecase doesn't require live streaming of the converted file, a sibling comment may fit the usecase: https://news.ycombinator.com/item?id=41745593

    • qwertox a day ago

      One could write a small Python server in a day that receives a chunked POST request and transcodes the video on the fly.

      The same server could also offer a download link for the transcoded video, and also receive URL parameters for the transcoding options. Or the transcoded video itself is returned after the subprocess finishes.

      Something along the lines of:

      Server

        import asyncio
        import subprocess
        from aiohttp import web

        async def transcode_video(request):
            cmd = [
                "ffmpeg",
                "-y",              # overwrite the output file if it exists
                "-i", "pipe:0",    # read the input from stdin
                "-f", "mp4",
                "-preset", "fast",
                "output_file.mp4"
            ]
            process = await asyncio.create_subprocess_exec(
                *cmd,
                stdin=subprocess.PIPE,
                # discard output so a full, unread pipe buffer can't stall ffmpeg
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL
            )
            # stream the request body into ffmpeg as chunks arrive
            async for chunk in request.content.iter_any():
                process.stdin.write(chunk)
                await process.stdin.drain()
            process.stdin.close()  # close() is not a coroutine
            await process.stdin.wait_closed()
            await process.wait()
            if process.returncode != 0:
                return web.Response(status=500, text="Transcode failed")
            return web.Response(text="Video transcoded successfully!")

        app = web.Application()
        app.router.add_post('/upload', transcode_video)

        if __name__ == '__main__':
            web.run_app(app, port=8080)
      
      Client

        curl -X POST http://<server_ip>:8080/upload \
             --header "Content-Type: application/octet-stream" \
             --data-binary @video.mp4
      
      
      This way no shared filesystem is required.
      • steelbrain a day ago

        This and the other solutions in this thread won't work with Plex or the other media servers. The way it works with media servers is that there's a variable number of outputs, and you just can't emulate that on the client side without a virtual filesystem, or with creative hacks that use multiple file descriptors (and even then, how do you handle a dynamic number of them?).

        As for why the variable number of outputs exists in the first place: when you're a media server, you don't want to wait until the full file is converted before serving it to the user. You want playback to start immediately, so the output is chunked into multiple files.
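
        To make that concrete, media servers typically run ffmpeg in a segmented mode along these lines (illustrative HLS flags, not the exact command Plex/Jellyfin use):

          ffmpeg -i input.mkv -c:v libx264 -f hls \
            -hls_time 4 -hls_segment_filename 'seg_%05d.ts' \
            stream.m3u8

        A new seg_*.ts file appears next to the playlist every few seconds, which is why the client needs to see the server's output directory as it's being written.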

    • NavinF 2 days ago

      Ah unfortunately my use case is similar to yours: use a Windows desktop to transcode files stored on a Linux NAS. My files are ~100GB, so encoding multiple files in parallel would waste a lot of space and unnecessarily burn write cycles.

      • steelbrain 2 days ago

        FWIW, you can run an smb server from within a docker container (on the linux side). I forget which one I used, but it makes the setup painless and you can configure different auth strategies as well. Network errors (a little packet loss) are generally handled by the underlying OS, and in the case of windows, it can use multiple network paths simultaneously to give you the aggregate bandwidth.
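
        As a sketch, with the dperson/samba image (one popular option, not necessarily the one I used; its share syntax is "name;path;browse;readonly;guest;users"):

          docker run -d --name smb -p 445:445 \
            -v /srv/media:/media \
            dperson/samba -u "media;secret" -s "media;/media;yes;no;no;media"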

        • linuxdude314 2 days ago

          You typically don’t even have to go to that length. This is usually supported out of the box.

          No idea what problem this is trying to solve. Just seems like the user wasn’t familiar enough with how to use a NAS.

          • NavinF a day ago

            > out of the box

            If you're talking about how today's open source NAS software has a button for enabling NFS/SMB on a directory:

            1. I built my NAS long before software like that was common. Some of my custom stuff (e.g. tiered storage, storing the first few seconds of every video on flash, etc.) would be a pain to migrate.

            2. Some of my Windows machines are untrusted. Unlike the NAS, they have internet access. I can't give them read access to the entire NAS, but I still want to use their GPUs and CPUs to run ffmpeg on arbitrary files on the NAS.

            3. I could spend a day writing more code to move files in and out of a shared directory every time I need to run ffmpeg on them. But I was hoping "FFmpeg-over-IP" would let me run a remote ffmpeg on a local file, like calling an RPC.

            • linuxdude314 a day ago

              Did you build your NAS in the 90’s or early 2000’s?

              This has been available in Synology and QNAP devices for that long…

              I used to own the tape library that is the Disney Vault. A common pattern for transcoding is to have a watched folder: drop files you need transcoded in, and get the files you want in a different directory when it finishes.
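
              A minimal version of that watched-folder pattern, assuming inotify-tools and a mounted share (paths and codec are examples):

                inotifywait -m -e close_write --format '%w%f' /mnt/ingest |
                  while read -r f; do
                    ffmpeg -i "$f" -c:v libx264 "/mnt/output/$(basename "$f").mp4"
                  done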

          • oefrha 2 days ago

            More like you’re not familiar enough with the video encoding performance of a typical NAS that is not in the thousand dollar range. Or what’s a NAS — it can be anything really.

            • linuxdude314 a day ago

              Why would you transcode on the NAS? Not what I’m suggesting at all; perf on most NAS is awful for that…

              There’s a pretty well understood concept of what a NAS is; this isn’t a complicated philosophical problem.

              A very common workflow in motion picture production is to use NAS for storage on a fast network, mount the SMB share, have a script/tool/app that monitors the ingest directory and writes to an output dir.

              FWIW the key differentiator between a NAS and other types of network storage is the protocols they use.

              If files are the main primitive, it's a NAS; if blocks are, it's considered a SAN.

              Sometimes SANs have NAS “heads” for clients that want file access rather than a block-level device.

              • otterley 18 hours ago

                My five-year-old Synology DS418play has support for hardware realtime transcoding via its integrated low-power GPU. I think I paid about $600 for it. You can get a new DS423+ for about the same price.

  • yarg 2 days ago

    Couldn't you create and share a virtual file system with FUSE?

    • NavinF 2 days ago

      Mounting anything requires root even if you use FUSE.

      There are ways to intercept writes without root and send them to another machine, e.g. LD_PRELOAD. But that's exactly the kind of pain in the ass that I was hoping a project named FFmpeg-over-IP would automate.

      • duped 2 days ago

        mount doesn't require root, but even without getting into mount namespaces, this is why fusermount3 exists
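
        For example, SSHFS runs entirely as a regular user because the setuid fusermount3 helper performs the mount (hostnames and paths are examples):

          sshfs user@nas:/srv/media ~/media   # no sudo needed
          ffmpeg -i ~/media/input.mkv out.mkv
          fusermount3 -u ~/media              # unmounting is unprivileged too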

        • NavinF a day ago

          Interesting, the man page says "To allow mounting and unmounting by unprivileged users, fusermount3 needs to be installed set-uid root". Is this the default case on any distros?

          • duped a day ago

            It's the case on all distros that distribute it, because otherwise it's useless.

    • yjftsjthsd-h 2 days ago

      Like SSHFS, or something custom? And how much does that help with any of those concerns?

      • yarg 2 days ago

        The primary concern is trust/security.

        User space limits potential security impacts, and a restricted VFS could be used to prevent clients from accessing anything that they shouldn't.

        (Although I'm not even pretending to know whether or not this is a remotely good idea - my guess is that it isn't, but I'd like to know just how bad an idea it actually is.)

steelbrain 2 days ago

There is an existing solution in the community called rffmpeg[1], but it did not work for me. It seems too heavyweight for what I was trying to do. It requires sudo access, global configuration files (in /etc/), and most importantly this, which is a deal-breaker for me:

> Note that if hardware acceleration is configured in the calling application, the exact same hardware acceleration modes must be available on all configured hosts, and, for fallback to work, the local host as well, or the ffmpeg commands will fail.

I wanted to mix and match windows and linux, and it was clear rffmpeg wasn't going to work for me.

One plus rffmpeg does have is that it supports multiple target hosts, so it's useful if you want some load-balancing action. You could do the same with ffmpeg-over-ip by selecting the server dynamically, but rffmpeg does make it easier out of the box.

[1]:https://github.com/joshuaboniface/rffmpeg

qwertox 2 days ago

IDK, this lacks examples and an explanation of what exactly it is for. Is it for remote transcoding only?

Because if so, the word transcoding appears neither in this Show HN nor in the GitHub README.

And I can't think of any other use for this than to perform hardware-assisted transcoding on a remote machine.

Apparently it has nothing to do with OpenGL or CUDA, which are the primary uses for a GPU. And ffmpeg itself has more use cases than just transcoding files.

  • dylan604 2 days ago

    I question why not just SSH into the more powerful computer and run the ffmpeg command like a normal person. Why would you need to install a server and a client? There are plenty of binaries available for ffmpeg to avoid compiling-from-source difficulties.

    Solutions like these are the things that just make me tilt my head and make the clueless "huh?" sound.

    • oefrha 2 days ago

      Because OpenSSH on Windows is a shitshow. Mounted SMB shares can’t be accessed because they’re tied to login sessions, and you can’t mount them from within an SSH session (IIRC in theory you can, but in practice it never worked for me).[1] Which means ffmpeg is practically useless if you need input that you can’t (e.g. livestream) or don’t want to copy in advance, or need output in realtime.

      At least that’s why I built something similar in Go for myself.

      Before anyone mentions WSL: it either didn’t support GPU passthrough or was very difficult to configure when I set this up a few years ago, don’t know about current status. And you can’t call Windows executables from WSL when you SSH into it.

      [1] https://github.com/PowerShell/Win32-OpenSSH/issues/139

      • dylan604 2 days ago

        there are other ways to mount drives. you can even mount drives on a PC like normal computers with a "/" mount point instead of ridiculous C:\ stuff that does not require WSL.

        you can use a URI as an input to ffmpeg which allows for not needing access to a C:\. pretty much any decent NAS will allow access via URI instead of drive letter mounts.

        if something was learned during the building of the server/client app, then great if that's why it was done. but it's only one of the 99 ways of skinning the cat, and probably not even close to the best one.
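
        for instance, something like this skips drive letters entirely (assuming an ffmpeg build with the relevant network protocol enabled; http is built in, smb:// needs libsmbclient):

          ffmpeg -i "http://nas.local:8080/media/input.mkv" -c:v libx264 output.mp4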

        • oefrha a day ago

          Don’t know why you’re fixated on NAS. I use my program mostly from my Mac laptop and desktop, and it allows me to simply replace ffmpeg with ffmpeg-remote, nothing more, which is not true for ssh (thanks, double quoting). The thing took me less than an hour to write and has been running for years; I don’t need to learn anything new to justify the trivial amount of investment. I also wrote a simple utility to monitor all my ffmpeg-remote processes, reporting all kinds of statistics, which would be more involved with a raw ssh command. And who cares about “the best” way, and who are you to decide my simple solution isn’t the best for me?

slt2021 2 days ago

this is really not much different from ssh-ing to your GPU server and running ffmpeg. Very roundabout way to execute a remote bash command on a server

I don't mean to discourage you, but it is possible to replace your entire repo with a simple bash alias (arguments are appended to the expansion at use time):

  alias ffmpeg-over-ip='ssh myserver ffmpeg'
  • steelbrain 2 days ago

    This comment gave me flashbacks to another comment I read a while ago: https://news.ycombinator.com/item?id=9224

    If your usecase is solved by an alias, that's really good! I am glad you can use an alias. My usecase required a bit more so I wrote this utility and am sharing it with my peers

    • slt2021 2 days ago

      Dropbox had a cutting-edge file synchronization algorithm; they solved the problem of syncing large files over an unreliable network. There was clear engineering IP they developed. (https://dropbox.tech/infrastructure/rewriting-the-heart-of-o...)

      I looked over your source code and just saw a bash wrapper with a webserver, so no significant IP. Potential innovations, like distributed transcoding or sharding/partitioning the transcoding pipeline for speed-ups, are missing.

      It's just a bash wrapper; that's why I commented about the bash alias.

      I don't mean to sound like a jerk, but I was honestly looking for some innovation around ffmpeg

      • steelbrain 2 days ago

        This was not meant to offend. I appreciate you explaining your message further.

        There's no significant IP in this utility; it's something I wrote for a usecase and it works well for that usecase. I ran the server side on a windows machine, and I did not want to set up a full-blown ssh server and expose it over the network for this usecase.

        Another thing was logging. The way logging is currently set up really hits the sweet spot of debuggability for me. Lastly, it's the rewrites. I've used the config to rewrite incoming codecs to something the machine supports.

        This is a purpose built utility that does one job and IMO does it fairly well. It's definitely not as complex as Dropbox but also not as simple as an ssh alias. I appreciate you sharing the alias code (not just the comment) so if some of our peers have usecases that could be solved by it, they are welcome to use that as well!

    • KolmogorovComp 2 days ago

      > My usecase required a bit more so I wrote this utility and am sharing it with my peers

      Can you expand on that?

      • steelbrain 2 days ago

        For sure! One piece of software I was working with hardcoded which codecs it would use based on the operating system it was running on. The rewrites section of the configuration allows rewriting more than just file paths; I've used it to rewrite incoming codec requests

  • asveikau 2 days ago

    I would add screen or tmux to that because you may run a long job that you may want to get back to after a connection drop.

  • amelius 2 days ago

    I suppose this only works if you have some shared filesystem. Or does this work with piping too?

    • slt2021 2 days ago

      the original poster's project also requires a shared filesystem.

      As for the bash/ssh solution, you don't need a shared FS if you don't need intermediate results. You can use scp to fetch the final result after transcoding has finished. Something like:

        # arguments are appended at use time; write output under /tmp/output on the server
        alias ffmpeg-over-ip='ssh myserver ffmpeg'
        # quote the glob so it expands on the remote side, not locally
        alias download-results='scp "myserver:/tmp/output/*" .'
      
        ffmpeg-over-ip <args> && download-results
      
      
      
      my meta point is: before engineering something in a programming language, hand-rolling webservers with auth and workers, first try to implement your system with bash scripts.

      Martin Kleppmann sketched an entire database with just a couple of bash functions in his book "Designing Data Intensive Applications"

      • whoopdedo a day ago

        Even the temporary file is optional. Ffmpeg supports a number of network protocols. For instance, you could read from one port and write to another with

            ffmpeg -f webm -i tcp://[::]:55601?listen -c copy -f webm tcp://[::]:55602?listen
        
        There's also UDP, SFTP, and more convenient protocols such as SRT or the long-in-the-tooth RTMP. I expect it will eventually add WHIP/WHEP as well.
        • amelius a day ago

          I experimented a bit with piping, but it turned out that mp4 doesn't always like to be streamed (the moov index is written at the end of the file by default; fragmented output via -movflags helps).

maxlin a day ago

I wrote something similar to this. Instead of requiring a shared network drive, dependent files are automatically detected and transferred over HTTP, and as an advanced feature, it splits the video into chunks to allow 20x+ encode speeds, concurrently using multiple machines to encode a single input file with NVENC. The video is then concatenated back together, and the API for that mode is identical.

I wrote it in C# and it runs on both Windows and Linux. The original need was to accelerate encodes for a system I run on a cloud VM: when my desktop or laptop is available, they work as encode slaves that pick up jobs, run them, and send the output files back. If no slaves are available, the system falls back to a local CPU encode. Later I ended up using a local Windows server machine as the "client" the slaves connect to to ask for jobs (the system actually runs inside a Unity project in that case, because C# is awesome).

A probably rare problem I hit with this is that different NVENC generations produce bitstreams different enough that they can't be concatenated together. From my pool of machines I found that only my RTX 2070 and RTX 3080 Mobile are compatible. The GTX 970 and Quadro P5000 I had lying around were incompatible with that pool and with each other.

Interestingly, I found no software similar to the chunked encoding in my system, but I'm chalking that up to me not searching hard enough, or to these being integrated deep into commercial/private systems. It would make sense for big players like YouTube to have something like this in place, since reducing the latency of an individual upload's transcode is beneficial, instead of limiting its speed to a single hardware or software encoding node. It's the same amount of processing in the end anyway, so you might as well use 10 nodes to complete single jobs really quickly, one after another
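
A rough sketch of that split/encode/concat flow with stock ffmpeg (assuming keyframe-aligned splits and identical encoder settings, per the bitstream-compatibility caveat above):

  # split on keyframes without re-encoding
  ffmpeg -i input.mp4 -c copy -f segment -segment_time 60 chunk_%03d.mp4
  # encode each chunk, potentially in parallel on different machines
  for f in chunk_*.mp4; do ffmpeg -i "$f" -c:v h264_nvenc "enc_$f"; done
  # stitch the encoded chunks back together
  for f in enc_chunk_*.mp4; do echo "file '$f'"; done > list.txt
  ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4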

steelbrain 2 days ago

The latest release[1] on Github should have binaries for almost every platform combination here. If you don't find a binary for your environment, you can probably just run the javascript files and it'll be fine.

If you are wondering why the binaries are so large, it's because they are packaged-up node.js binaries. I tried to learn a compile-to-native language to rewrite this in so you wouldn't have to download such bloated binaries, but didn't get far. I learned Swift and still have a WIP branch up for it[2]. I gave up after learning that there's no well-maintained windows http server for swift.

I'm currently on my journey to learn Rust. So maybe one day when I do, you'll see the binary sizes drop.

[1]:https://github.com/steelbrain/ffmpeg-over-ip/releases/tag/v3... [2]:https://github.com/steelbrain/ffmpeg-over-ip/tree/swift-lang

  • Cyph0n 2 days ago

    Go would be a good fit for this kind of application. But Rust is a great choice too.

    Keep up the good work!

  • chrisldgk a day ago

    You might be able to switch to Bun or Deno with relatively little effort and create binaries that are probably a lot smaller (and maybe also faster) with each of their compile commands [1][2]. In the end, though, I do believe a compiled language like Swift, Rust or Go might be a better fit.

    [1] https://docs.deno.com/runtime/reference/cli/compiler/ [2] https://bun.sh/docs/bundler/executables

    • steelbrain 3 hours ago

      FWIW, `bun` produced larger output files

          ~/P/s/ffmpeg-over-ip  bun build ./src/client.ts --compile --minify --sourcemap --bytecode --outfile ffmpeg-over-ip-client ; ls -lah | grep ffmpeg-over-ip-client
            [6ms]  minify  -90.92 KB (estimate)
            [3ms]  bundle  6 modules
            [44ms] compile  ffmpeg-over-ip-client
          -rwxrwxrwx    1 steelbrain  staff    56M Oct  6 16:53 ffmpeg-over-ip-client
      
      
      and `deno` produced even bigger outputs

          ~/P/s/ffmpeg-over-ip  deno compile --allow-read --allow-net ./lib/client.js ; ls -lah | grep client
          Compile file:///Users/steelbrain/Projects/steelbrain/ffmpeg-over-ip/lib/client.js to client
          -rwxr-xr-x    1 steelbrain  staff    65M Oct  6 16:56 client
LeoPanthera 2 days ago

As other comments have suggested, it's difficult to imagine that this has many advantages over simply using pipes over ssh. And pipes don't need a shared filesystem either.

I suppose ssh would be tricky if you're combining multiple input files.

  • _hyn3 2 days ago

       xargs
    • LeoPanthera 2 days ago

      That doesn't help send multiple files over a single pipe. You can't just cat them.

      • bqmjjx0kac a day ago

        You can `tar` them though :)

        • LeoPanthera a day ago

          That's true, although that requires storage on the other end. Not too much of a problem in most cases, perhaps.

          But with single input, single output, you can just do:

            ssh destination "ffmpeg -i - -switches -f matroska -" <infile >outfile
          • _hyn3 a day ago

            you can xargs your files and even parallelize (read the xargs man page) before you ssh.

            • bqmjjx0kac 20 hours ago

              Great point! Something like this could work, modulo ffmpeg arguments which I didn't check.

                  find . -name '*.mkv' -exec sh -c \
                    'ssh server "ffmpeg -i - -f matroska -" < "$1" > "$1.transcoded.mkv"' _ {} \;
mdrzn a day ago

"Your server has a weak GPU so you want to use the GPU from your gaming machine" would be my usecase for this, but it seems overly complicated, having to setup a shared filesystem.

I have a VPS on OVH which uses ffmpeg to convert mp3 files from 320kbps to 128kbps, it runs at 32x real speed on the server, but it could run 100x faster on my desktop pc. If there was an easy way to let the VPS "run" ffmpeg from my machine, that'd be great.
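
Since mp3 streams cleanly through a pipe, something like this is what I'm imagining, assuming the VPS could reach my desktop (e.g. over a VPN):

  ssh desktop 'ffmpeg -i - -b:a 128k -f mp3 -' < input.mp3 > output-128k.mp3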

  • 9dev a day ago

    Use Tailscale (or really just WireGuard) to connect the server to your desktop, run it using this tool or ssh?

  • sulandor a day ago

    what is stopping you from taking the ovh-vps out of the equation?

leshokunin 2 days ago

Sounds super interesting. Maybe the people currently using Tdarr would prefer something like this. I could also imagine something like Plex or Jellyfin making use of this tech and offloading transcoding. Hope this takes off.

  • steelbrain 2 days ago

    Thanks! I developed this primarily for plex & jellyfin after struggling with Tdarr myself. For people running plex/jellyfin in containers, it's as simple as mounting the client binary at the ffmpeg path (using docker -v) and adding the config somewhere accessible (also using docker -v? lots of options here).
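
    For example, with Jellyfin, something along these lines (the in-container ffmpeg path and the config filename are illustrative; check your image and the README):

      docker run -d \
        -v /srv/media:/media \
        -v ./ffmpeg-over-ip-client:/usr/lib/jellyfin-ffmpeg/ffmpeg \
        -v ./ffmpeg-over-ip.config.json:/etc/ffmpeg-over-ip.config.json \
        jellyfin/jellyfin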

    • blue_cookeh 2 days ago

      Does this work well with Plex and if so, what binary are you replacing? Last I looked they used a customised fork of ffmpeg which meant replacing it was more awkward. It would be a nice way to avoid passing a GPU through to a virtual machine.

    • leshokunin 2 days ago

      Maybe you could write guides and post them on the various Synology and self-hosting subreddits. I could see this getting traction

jauntywundrkind 2 days ago

Not to steal thunder (nice! Well done!), but this reminded me to go check in on https://kyber.media (currently a landing page), an ffmpeg streaming project from Mr. FFmpeg himself (I think?), Jean-Baptiste Kempf. He had a LinkedIn update two weeks ago mentioning the effort! Yay! https://www.linkedin.com/posts/jbkempf_playruo-the-worlds-fi...

Submission from 6 months ago, https://news.ycombinator.com/item?id=39929602 https://www.youtube.com/watch?v=0RvosCplkCc

  • steelbrain 2 days ago

    Very cool! Thank you for sharing!

VWWHFSfQ 2 days ago

I used to use dvd::rip [1] (written in perl), which was a similar concept: deploy transcode jobs onto a cluster of servers accessing a shared filesystem (nfs, smb, etc.). Worked really well. I think it used gstreamer though. I set up a homelab of a bunch of pentium 3s that I salvaged from a PC recycler behind my work. They just had a big pile of obsolete computers covered by a tarp. I grabbed a few chassis with working motherboards, then scrounged around for the best intel CPUs and memory sticks I could find. I put together a fun little DVD-ripping factory with those machines.

[1] https://www.exit1.org/dvdrip/

Am4TIfIsER0ppos 2 days ago

I can encode faster than I can upload. Might be useful if you have a gigabit link to a computer more powerful than the one in your home.

ptspts 2 days ago

What problem does it solve?

How to use it? Do you have example commands?

How is video data transferred between the client and the server?

Would it be possible to connect with SSH to a Linux server, using SFTP with FUSE or Samba with port forwarding for file sharing? This way the server could be zero-configuration (except that ffmpeg has to be installed, but the executable can also be transferred over SSH).

  • steelbrain 2 days ago

    > What problem does it solve?

    For different people it's going to solve different problems. For me, most recently, I wanted to use the powerful GPU in my gaming machine for transcoding on my plex server, which only has an integrated GPU.

    > How to use it? Do you have example commands?

    The Github repository should have instructions on how to use it. Client usage (once you set up the configuration) is the same as ffmpeg, so anything ffmpeg ... becomes ffmpeg-over-ip-client ... -- you need the server running on the machine with the GPU, and the client anywhere with network access to it.
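
    For example (paths are illustrative; both machines must see the same share):

      ffmpeg-over-ip-client -i /mnt/media/input.mkv -c:v hevc_nvenc /mnt/media/output.mkv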

    > How is video data transferred between the client and the server?

    The server and client only transfer commands, stdout/stderr, etc. The data of the transcoded files themselves is transferred over the network mount. The README of the repository has more details, but essentially you'll want a shared filesystem between the two.

    > Would it be possible to connect with SSH to a Linux server, using SFTP with FUSE or Samba with port forwarding for file sharing? This way the server could be zero-configuration (except that ffmpeg has to be installed, but the executable can also be transferred over SSH).

    Configuration should be pretty straightforward, but let me know if you try it and find it difficult. A template configuration file is provided and you can edit your way from it. You can absolutely do this with port forwarding, even over the internet, provided the filesystem mount over the network can keep up.