MergeRGB

From Avisynth wiki
Revision as of 18:11, 24 January 2016

MergeARGB(clip clipA, clip clipR, clip clipG, clip clipB)
MergeRGB(clip clipR, clip clipG, clip clipB [, string pixel_type ] )

Merge the alpha (transparency) and color channels from the source video clips into the output video clip.

clipA provides the alpha data to merge into the output clip. For a YUV format clip, the data is taken from the Y (luma) channel. For an RGB32 format clip, the data is taken from the A (alpha) channel. RGB24 clips cannot be used.

clipR, clipG and clipB provide the R, G and B data respectively to merge into the output clip. For YUV format clips, the data is taken from the Y (luma) channel. For RGB format clips, the data is taken from the respective source channel, i.e. R to R, G to G, B to B. The unused chroma or color channels of the input clips are ignored.

All YUV luma pixel data is assumed to be pc-range (0..255); there is no tv-range (16..235) scaling. Chroma data from YUV clips is ignored. Input clips may be a mixture of all formats. YV12 is the most efficient format for transporting single channels through any required filter chains.
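
As a minimal sketch of the YV12-transport idea (the source filename is a placeholder): extract one channel into a YV12 clip's luma, filter it there, and merge it back.

# Sketch: blur only the green channel, carrying it
# through the filter chain as YV12 luma.
c = AviSource("main.avi")
g = c.ShowGreen("YV12").Blur(1.0)  # G extracted into the luma channel, then filtered
MergeRGB(c, g, c)                  # R and B taken from c, filtered G taken from g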

pixel_type (default "RGB32", optionally "RGB24") specifies the output pixel format.
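
For example (clip names are placeholders):

MergeRGB(clipR, clipG, clipB, "RGB24")  # output RGB24 instead of the default RGB32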

Audio, FrameRate and FrameCount are taken from the first clip.

Examples

# Blur the Green channel only.
MPEG2Source("main.d2v")
ConvertToRGB24()
MergeRGB(Last, Blur(0.5), Last)

# Swap the red and blue channels;
# load the alpha from a second source.
vid1 = AviSource("main.avi")
vid2 = AviSource("alpha.avi")
MergeARGB(
\    vid2, 
\    vid1.ShowBlue("YV12"), 
\    vid1, 
\    vid1.ShowRed("YV12")
\ )
AudioDub(vid1)

Changelog

v2.56 Initial release.