Reimplementation of the adaptive_grain mask as a VapourSynth plugin. For a description of the math and the general idea, see the article.
core.adg.Mask(clip, luma_scaling: float)
You must call std.PlaneStats() before this plugin (or fill the PlaneStatsAverage frame property using some other method).
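For reference, PlaneStatsAverage is just the mean of the plane's pixel values normalized to [0, 1]. A minimal pure-Python sketch of the idea (not the VapourSynth implementation; the function name is mine):

```python
# Sketch: what the PlaneStatsAverage frame property holds conceptually.
# For an integer format with `bits` of precision, the peak value is 2**bits - 1;
# float luma is already in [0, 1].

def plane_stats_average(pixels, bits=8):
    """Mean of the plane, normalized to [0, 1]."""
    peak = (1 << bits) - 1
    return sum(pixels) / (len(pixels) * peak)

# A flat half-grey 8-bit plane averages to 128/255, roughly 0.502:
flat = [128] * 16
print(plane_stats_average(flat, bits=8))
```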
Supported formats are YUV with 8-32 bit precision, integer or single-precision float.
Half precision float input is not supported since no one seems to be using that anyway.
Since the output is grey and only luma is processed, the subsampling of the input does not matter.
To replicate the original behaviour of adaptivegrain, a wrapper is provided in kagefunc. It behaves exactly like the original implementation (except for the performance, which is about 3x faster on my machine).
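Under the hood, the wrapper applies the grain through std.MaskedMerge, which is a per-pixel linear interpolation weighted by the mask. A simplified sketch with normalized values (illustrative only; the function and variable names are mine):

```python
def masked_merge(a, b, mask):
    """Per-pixel lerp: where the mask is 0 keep a, where it is 1 take b."""
    return [x * (1 - m) + y * m for x, y, m in zip(a, b, mask)]

clip    = [0.2, 0.5, 0.8]
grained = [0.3, 0.6, 0.9]
mask    = [1.0, 0.5, 0.0]   # dark areas get full grain, bright areas none
print(masked_merge(clip, grained, mask))  # [0.3, 0.55, 0.8]
```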
clip: clip
the input clip to generate a mask for.
luma_scaling: float = 10.0
the luma_scaling factor as described in the blog post. Lower values will make the mask brighter overall.
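To get a feel for the direction of the parameter, here is a simplified model of the per-pixel weight. This is a stand-in for the actual polynomial described in the article, not the real formula; only the shape of the exponent (frame average squared times luma_scaling) matches the idea:

```python
# Assumption: (1 - luma) stands in for the real brightness polynomial.
def mask_weight(luma, frame_avg, luma_scaling=10.0):
    # base in (0, 1]: darker pixels -> larger base -> brighter mask
    base = 1.0 - luma
    return base ** (frame_avg ** 2 * luma_scaling)

# Lowering luma_scaling raises every weight, i.e. brightens the mask overall:
w_low  = mask_weight(0.5, 0.5, luma_scaling=5.0)
w_high = mask_weight(0.5, 0.5, luma_scaling=10.0)
print(w_low > w_high)  # True
```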
If you’re on Arch Linux, there’s an AUR package for this plugin. Otherwise you’ll have to build and install the package manually.
cargo build --release
That’s it. This is Rust, after all. No idea what the minimum supported version is, but it works with stable Rust 1.41. Binaries for Windows and Linux are in the release tab.
What’s the no-fma dll? Which one do I need?
There are two Windows builds of the plugin: one for CPUs that support FMA instructions and one for those that don’t.
If your CPU is a Haswell (for Intel) or Piledriver (for AMD) or newer, you can use the regular version (which is about 20% faster). Otherwise, grab no-fma.
The Linux build uses FMA instructions. I trust that if you’re a Linux user on older hardware, you know how to compile your own binaries.
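If you’re unsure what your CPU supports, on Linux you can check the flags in /proc/cpuinfo directly (Linux-only sketch; on other systems use a tool like CPU-Z):

```shell
# Print which Windows build matches this CPU's feature set.
if grep -qw fma /proc/cpuinfo; then
    echo "fma"      # FMA supported: use the regular (faster) build
else
    echo "no-fma"   # no FMA: grab the no-fma build
fi
```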
Why do I have to call std.PlaneStats() manually?
~~Because I didn’t want to reimplement it.~~ Because I was too dumb to realize this exists. I’ll fix that at some point.™
kagefunc.adaptive_grain(clip, show_mask=True) does that for you and then just returns the mask.
Why doesn’t this also add grain?
I was going to do that originally, but I didn’t want to reimplement grain when we already have a working grain filter.