Reimplementation of the masking logic of adaptivegrain in Rust. It’s almost safe :^)


Reimplementation of the adaptive_grain mask as a Vapoursynth plugin. For a description of the math and the general idea, see the article.


core.adg.Mask(clip, luma_scaling: float)

You must call std.PlaneStats() before this plugin (or fill the PlaneStatsAverage frame property by some other means). Supported formats are YUV with 8–32 bit integer precision or single-precision float. Half-precision float input is not supported, since no one seems to be using that anyway. Because the output is grey and only luma is processed, the subsampling of the input does not matter.
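A minimal usage sketch (hypothetical script; assumes VapourSynth and this plugin are installed, and uses a blank clip as a stand-in for your source):

```python
import vapoursynth as vs

core = vs.core

# Placeholder clip for illustration; in practice this is your source clip.
clip = core.std.BlankClip(format=vs.YUV420P8)

# Fill the PlaneStatsAverage frame property, as required by the plugin.
clip = clip.std.PlaneStats()

# Generate the adaptive grain mask.
mask = core.adg.Mask(clip, luma_scaling=10.0)
```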

To replicate the original behaviour of adaptivegrain, a wrapper is provided in kagefunc. It behaves exactly like the original implementation (except for the performance, which is about 3x faster on my machine).
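The wrapper can be used roughly like this (a sketch; assumes kagefunc and this plugin are installed and `clip` is an existing YUV clip):

```python
import kagefunc as kgf

# Apply adaptive grain to the clip (handles PlaneStats internally).
grained = kgf.adaptive_grain(clip)

# Return only the mask instead of graining the clip.
mask = kgf.adaptive_grain(clip, show_mask=True)
```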


clip: vapoursynth.VideoNode

the input clip to generate a mask for.

luma_scaling: float = 10.0

the luma_scaling factor as described in the blog post. Lower values will make the mask brighter overall.

Build instructions

If you’re on Arch Linux, there’s an AUR package for this plugin. Otherwise, you’ll have to build and install the plugin manually.

cargo build --release

That’s it. This is Rust, after all. No idea what the minimum version is, but it works with stable Rust 1.41. That’s all I know. Binaries for Windows and Linux are in the release tab.
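If you build manually, the resulting shared library has to be placed where VapourSynth autoloads plugins. A hypothetical sketch for Linux (the library filename and plugin directory are assumptions; check `target/release/` and your distribution’s VapourSynth plugin path):

```shell
# Build the release binary.
cargo build --release

# Copy the shared library into the VapourSynth plugin directory.
# The exact filename and destination may differ on your system.
cp target/release/libadaptivegrain_rs.so /usr/lib/vapoursynth/
```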


Why do I have to call std.PlaneStats() manually?

Because I didn’t want to reimplement it, and because I was too dumb to realize that kagefunc.adaptive_grain(clip, show_mask=True) already does that for you and just returns the mask. I’ll fix that at some point.

Why doesn’t this also add grain?

I was originally going to, but it comes back to the same point about not wanting to reimplement something that already exists.