CHANGELOG

AllDebrid 100% supported (if you BYO VPN)

We've had AllDebrid support for our debrid-related apps in testing for a few days, and our elf-a-testers have confirmed it's good to go (thanks elf-a-testers!).

The trick with AllDebrid, it turns out, is that they don't allow use of their API or WebDAV endpoints from "datacenter IPs", which includes our Hetzner IP ranges. (Confusingly, they allow listing of WebDAV files from anywhere, but not actual downloading!)

However, if you configure your AllDebrid account to specify that you're using a VPN, then you can access their endpoints via that VPN provider. Soo.. the workaround for using AllDebrid with ElfHosted apps is to... BYO VPN credentials (as we do with our torrent clients).

Currently this only works with PrivateInternetAccess (PIA), but we can easily add more VPN flavors in the future (let me know if you need one ASAP).

Here are the AllDebrid / VPN-enhanced versions of our regular debrid apps...

If you've been using your Real-Debrid account outside of the RDTClient+Arrs symlink solution (say, with Debrid Media Manager, or Stremio), you'll have lots of content sitting in /storage/realdebrid-zurg/{movies,shows} which is not symlinked to /storage/elfstorage, and thus not sorted into libraries managed by Radarr / Sonarr.

You can work around this by adding additional folders to your Plex libraries, but it's counter-intuitive and messy, and it can make it especially hard to track watched vs unwatched TV episodes, etc.

For media which did arrive via the Arrs / RDTClient, the symlink design feature of RDTClient is genius: you end up with a 2KB file instead of a 60GB file, which you can rename and move around your filesystem, sort into folders using the Arrs, and delete and replace when an upgrade becomes available, etc.
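The mechanics are simple enough to sketch in a few lines of Python (the paths and filenames below are made up for illustration, not our actual layout):

```python
import os
import tempfile

root = tempfile.mkdtemp()

# Stand-ins for the real mounts: the debrid mount holds the (huge) source
# file, and the library folder holds only symlinks pointing back at it.
debrid = os.path.join(root, "realdebrid-zurg", "movies")
library = os.path.join(root, "library", "movies")
os.makedirs(debrid)
os.makedirs(library)

# Pretend this 1KB file is a 60GB video on the debrid mount
source = os.path.join(debrid, "Some.Movie.2023.1080p.mkv")
with open(source, "wb") as f:
    f.write(b"\0" * 1024)

# The "import" is just a symlink: a few bytes on disk, whatever the target's size
link = os.path.join(library, "Some Movie (2023).mkv")
os.symlink(source, link)

# Radarr / Sonarr can rename and re-sort the link freely, without ever
# touching the data on the debrid mount
renamed = os.path.join(library, "Some Movie (2023) - 1080p.mkv")
os.rename(link, renamed)
print(os.lstat(renamed).st_size)  # size of the link itself: tiny
```

Deleting the symlink leaves the debrid-side copy intact, which is what makes "delete and replace when an upgrade arrives" so cheap.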

We can now replicate the symlink design using ElfBot, to import all your existing Debrid content (not sourced via RDTClient) into Radarr / Sonarr for better organizing / managing!

For details, check out these pages:

Scannarr added to Radarr / Sonarr

You might have noticed your Radarr / Sonarr complaining about the lack of RSS-capable indexers. This is because, unlike some popular trackers, the Prowlarr knightcrawler / stremio indexer doesn't (and can't) include an RSS feed.

Sonarr especially relies on an RSS feed to alert it when new episodes are out and ready to search. Without an RSS source, you have to manually trigger searches for your TV series in order to grab new episodes as they're released.
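To illustrate what an RSS-capable indexer gives Sonarr, here's a toy feed check in Python (the feed contents and function are invented for this sketch; real indexer feeds carry much more metadata):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up indexer RSS feed
FEED = """<rss version="2.0"><channel>
  <item><title>Some.Show.S01E04.1080p.WEB</title></item>
  <item><title>Some.Show.S01E05.1080p.WEB</title></item>
</channel></rss>"""

def new_releases(feed_xml, seen):
    """Return titles we haven't seen yet - roughly what an RSS sync does."""
    titles = [item.findtext("title") for item in ET.fromstring(feed_xml).iter("item")]
    return [t for t in titles if t not in seen]

# Poll-style check: compare the feed against what was already grabbed
seen = {"Some.Show.S01E04.1080p.WEB"}
print(new_releases(FEED, seen))  # only the new episode shows up
```

Without a feed to poll, there's nothing to diff against, so every "is there anything new?" check becomes a manual search.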

Thanks to Scannarr, we now have a solution!

Stremio Scaling Stresses

In our previous episode, your brave (perhaps foolhardy?) elves had scaled the infrastructure and the humans, preparing to announce our tiered torrentio-like service (free-public, paid-subscription, and free-internal), https://torrentio.elfhosted.com, to the rampaging streaming hordes at r/StremioAddons.

The next morning iPromKnight PR'd a huge (and hugely more efficient) replacement of the original scrapers, so I scrambled to YOLO-implement these scrapers into our instance before launching.

The announcement post went flawlessly, except in my rush I misspelled "ElfHosted" in the title! 🤦‍♂️

In the 3 ½ days since the announcement, we've added 39 more (free ElfBuckz) subscribers, and 35 of those are torrentio subscribers.

The extra attention has added stress in a few areas, which I'll detail below...

Supporting Stremio

So we've scaled our infrastructure for growth, and have started to grow the Elf-Vengers team to help new elves get started.

We've spent the last few maintenance windows applying bugfixes, including:

  • RDTClient wasn't symlink-downloading properly due to the tightened restrictions we place on our pods - we don't let them run as root (yay!), but RDTClient is doing something which requires root (boo!), and it's not logging it, so (for now), we're choosing the lesser of the two weevils, and preferring a working pod over a secured, non-working one!

  • ElfBot wouldn't properly claim or restart Plex under some conditions, resulting in users thinking they'd claimed a server, when in fact they hadn't.

  • rclone + zurg was consuming lots of ephemeral space (very quickly), causing Kubernetes to restart the rclone CSI pod and briefly disrupt connectivity to your RD mounts. This one we finished tonight; it basically required the reinstallation of all of our worker nodes with a larger drive partitioning scheme.

  • I've been replacing our 64GB nodes with 128GB ones - they seem less susceptible to slow I/O issues, perhaps because ceph has more RAM to work with.
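For context on the ephemeral-space issue: caches like rclone's VFS cache grow on the node's local disk, and Kubernetes evicts or restarts pods that blow past ephemeral-storage limits. A rough, generic way to measure a cache directory's footprint (this is a sketch, not ElfHosted tooling):

```python
import os
import tempfile

def dir_usage_bytes(path):
    """Sum the sizes of regular files under path, skipping symlinks."""
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if not os.path.islink(full):
                total += os.lstat(full).st_size
    return total

# Demo against a throwaway "cache" directory
cache = tempfile.mkdtemp()
with open(os.path.join(cache, "chunk-0001"), "wb") as f:
    f.write(b"\0" * 4096)

print(dir_usage_bytes(cache))  # 4096
```

Run something like this periodically against a cache mount and you can see the growth long before the kubelet starts evicting pods.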

Now that we've done some housekeeping, we're ready for some new houseguests...

Scaling the humans (Elf-vengers, Assemble!)

Yesterday's blog post was heavy on the technical details re scaling our infrastructure for growth. Ironically, after bragging about how we're "ready" for growth, today we had an issue affecting multiple users, which looks like it was caused by utilization / saturation 🤦

However, I can't confirm this, because... the latest Grafana / Kube Prometheus Stack update broke graphing, and we've been flying blind for the last few days as I try to fix it while also adding new apps, welcoming new users, and handling routine support!

So the real problem here is the human (me) isn't scaling under load, which is why I've assembled you all to present my new initiative...

Scaling the gigabytes

We saw some stability issues earlier this week, as increased load impacted our ceph cluster, which provides the backend to the application config folders, as well as to ElfStorage.

It turned out that the 1Gbps nodes which run our SSD-backed config storage were also running the Ceph metadata servers, whose job is to coordinate your view of your volumes' filesystems. The combination of these two roles (storage and metadata) was saturating the 1Gbps NICs, causing slowdowns and the occasional corruption as the fault cascaded.
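Some back-of-envelope arithmetic shows how little headroom a 1Gbps NIC leaves when it's doing double duty (the replication figure below is an illustrative guess, not a measurement from our cluster):

```python
# 1 Gbps expressed in MB/s - the hard ceiling, before protocol overhead
nic_mb_s = 1 * 1000 / 8  # 125.0

# If storage replication already eats, say, ~80 MB/s, then metadata chatter
# and client I/O have to squeeze into whatever is left
replication_mb_s = 80  # illustrative only
headroom_mb_s = nic_mb_s - replication_mb_s

print(headroom_mb_s)  # 45.0
```

With that little slack, any burst in client traffic pushes the NIC to 100% and the metadata servers start timing out, which is exactly the cascade we saw.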

In parallel, all the fun we've been having with Real-Debrid streaming was impacting our app nodes, in some cases creating so much incoming traffic that the nodes were unable to respond timeously to communications with Ceph, again resulting in slowdowns and corruption.

Here are a few recent changes we've made to address growth:

Autoscan Unbundled

Last month, we added Autoscan for free, bundled with Plex, Jellyfin or Emby.

With the Jan 2024 influx of "infinite streaming" users, Autoscan has taken a front seat, since it's key to updating your streamer libraries when new content is added to your remote library.

A hiccup has emerged re our deployment strategy though - it turns out if you have both Plex and Jellyfin, then you'll get two copies of Autoscan, and they'll fight with each other for access to their SQLite database 🤦‍♂️
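The SQLite fight is easy to reproduce: put two writers on the same database file, and the second one hits "database is locked" immediately (a minimal Python demonstration, not Autoscan's actual code):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "autoscan.db")

# "Instance 1" grabs a write lock and holds it mid-transaction
one = sqlite3.connect(db, isolation_level=None, timeout=0)
one.execute("CREATE TABLE scans (path TEXT)")
one.execute("BEGIN IMMEDIATE")
one.execute("INSERT INTO scans VALUES ('/storage/movies')")

# "Instance 2" tries to start its own write transaction and fails instantly,
# because timeout=0 means it won't wait for the lock to clear
two = sqlite3.connect(db, isolation_level=None, timeout=0)
try:
    two.execute("BEGIN IMMEDIATE")
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)

print(error)  # database is locked
```

SQLite allows exactly one writer at a time, so two long-running Autoscan copies sharing one database file will trip over each other constantly - hence the refactor into a single standalone product.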

The workaround here has been simply to refactor Autoscan into its own product, and make it available in the store (for free).

TL;DR: If you lost Autoscan and you want it back, add it to your subscription via the link above, and then "renew now" your subscription to force the store to update it immediately.

Mooar debrid toys

It's too soon to tell whether it's "just a passing phase", but the "infinite" streaming app bundles seem to be a popular alternative to traditional torrenting / usenet. It's a good fit for us, since (a) the apps can be complicated to interconnect (and automating complexity is our jam), and (b) they rely primarily on compute and network resources, which are cheap and easily expandable, rather than storage, which is a stone-age PITA 🪨.

So for your instant-streaming enjoyment, I present the following "instant streaming" guides:

We've also got an AllDebrid bundle ready for a trial, if that's your flavor.

Read more for capacity expansion, and changes to Tdarr and WebDAV access to meet resourcing constraints...