video.tools¶
This module gathers advanced, useful (and less useful) functions for editing videos, listed in alphabetical order.
Credits¶
This module contains different functions for making end and opening credits, even though it is difficult to meet everyone's needs in this matter.
- moviepy.video.tools.credits.credits1(creditfile, width, stretch=30, color='white', stroke_color='black', stroke_width=2, font='Impact-Normal', fontsize=60, gap=0)[source]¶
Parameters: - creditfile
A text file whose content must be as follows:
# This is a comment
# The next line says : leave 4 blank lines
.blank 4

..Executive Story Editor
MARCEL DURAND

..Associate Producers
MARTIN MARCEL
DIDIER MARTIN

..Music Supervisor
JEAN DIDIER
- width
Total width of the credits text in pixels
- gap
Horizontal gap in pixels between the jobs and the names
- color
Color of the text. See TextClip.list('color') for a list of acceptable names.
- font
Name of the font to use. See TextClip.list('font') for the list of fonts you can use on your computer.
- fontsize
Size of font to use
- stroke_color
Color of the stroke (=contour line) of the text. If
None
, there will be no stroke.- stroke_width
Width of the stroke, in pixels. Can be a float, like 1.5.
Returns: - image

An ImageClip instance that looks like this (jobs on the left, names on the right, separated by gap) and can be scrolled to make some credits:

Executive Story Editor    MARCEL DURAND
Associate Producers       MARTIN MARCEL
                          DIDIER MARTIN
Music Supervisor          JEAN DIDIER
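For instance, a minimal sketch of rolling credits (the file name, width, duration and speed below are illustrative assumptions; the scroll effect is assumed to come from moviepy.video.fx.scroll):

>>> from moviepy.video.tools.credits import credits1
>>> from moviepy.video.fx.scroll import scroll
>>> credits = credits1('credits.txt', width=600, gap=120)
>>> rolling = scroll(credits.set_duration(20), h=720, y_speed=30)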
Drawing¶
This module deals with making images (np arrays). It provides drawing methods that are difficult to do with the existing Python libraries.
- moviepy.video.tools.drawing.blit(im1, im2, pos=None, mask=None, ismask=False)[source]¶ Blit an image over another.
Blits im1 on im2 at position pos=(x,y), using the mask if provided. If im1 and im2 are mask pictures (2D float arrays) then ismask must be True.
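A minimal sketch with plain Numpy arrays (the sizes and position below are illustrative):

>>> import numpy as np
>>> from moviepy.video.tools.drawing import blit
>>> background = np.zeros((100, 100, 3))            # im2: a 100x100 black RGB picture
>>> patch = 255 * np.ones((20, 20, 3))              # im1: a 20x20 white square
>>> result = blit(patch, background, pos=(40, 40))  # paste the square at x=40, y=40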
- moviepy.video.tools.drawing.circle(screensize, center, radius, col1=1.0, col2=0, blur=1)[source]¶ Draw an image with a circle.
Draws a circle of color col1, on a background of color col2, on a screen of size screensize, at the position center=(x,y), with a radius radius, slightly blurred on the border by blur pixels.
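For instance, a sketch of a circular mask clip (the sizes below are illustrative assumptions):

>>> from moviepy.video.tools.drawing import circle
>>> from moviepy.video.VideoClip import ImageClip
>>> # 640x480 mask: 1.0 inside a disc of radius 150, 0 outside, 4-pixel blurred border
>>> disc = circle((640, 480), center=(320, 240), radius=150, col1=1.0, col2=0, blur=4)
>>> mask_clip = ImageClip(disc, ismask=True)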
- moviepy.video.tools.drawing.color_gradient(size, p1, p2=None, vector=None, r=None, col1=0, col2=1.0, shape='linear', offset=0)[source]¶ Draw a linear, bilinear, or radial gradient.
The result is a picture of size size, whose color varies gradually from color col1 in position p1 to color col2 in position p2. If it is an RGB picture, the result must be transformed into a ‘uint8’ array to be displayed normally:
Parameters: - size
Size (width, height) in pixels of the final picture/array.
- p1, p2
Coordinates (x,y) in pixels of the limit points for col1 and col2. The color ‘before’ p1 is col1 and it gradually changes in the direction of p2 until it is col2 when it reaches p2.
- vector
A vector [x,y] in pixels that can be provided instead of p2. p2 is then defined as (p1 + vector).
- col1, col2
Either floats between 0 and 1 (for gradients used in masks) or [R,G,B] arrays (for colored gradients).
- shape
‘linear’, ‘bilinear’, or ‘circular’. In a linear gradient the color varies in one direction, from point p1 to point p2. In a bilinear gradient it also varies symmetrically from p1 in the other direction. In a circular gradient it goes from col1 to col2 in all directions.
- offset
Real number between 0 and 1 indicating the fraction of the vector at which the gradient actually starts. For instance, if offset is 0.9 in a gradient going from p1 to p2, the gradient will only occur near p2 (before that, everything is of color col1). If the offset is 0.9 in a radial gradient, the gradient will occur in the region located between 90% and 100% of the radius; this creates a blurry disc of radius d(p1,p2).
Returns: - image
A Numpy array of dimensions (W,H,ncolors) of type float representing the image of the gradient.
Examples
>>> grad = color_gradient(blabla).astype('uint8')
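A more concrete sketch (the sizes and colors below are illustrative assumptions):

>>> from moviepy.video.tools.drawing import color_gradient
>>> # horizontal gradient, black on the left to white on the right, on a 256x100 mask
>>> grad = color_gradient((256, 100), p1=(0, 50), p2=(255, 50), col1=0, col2=1.0, shape='linear')
>>> # red-to-blue colored gradient, cast to 'uint8' before displaying
>>> rgb = color_gradient((256, 100), p1=(0, 50), p2=(255, 50), col1=[255, 0, 0], col2=[0, 0, 255]).astype('uint8')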
- moviepy.video.tools.drawing.color_split(size, x=None, y=None, p1=None, p2=None, vector=None, col1=0, col2=1.0, grad_width=0)[source]¶ Make an image split into two colored regions.
Returns an array of size size divided into two regions, called 1 and 2 in what follows, which will have colors col1 and col2 respectively.

Parameters: - x: (int)
If provided, the image is split horizontally at x, the left region being region 1.
- y: (int)
If provided, the image is split vertically at y, the top region being region 1.
- p1,p2:
Positions (x1,y1), (x2,y2) in pixels, where the numbers can be floats. Region 1 is defined as the whole region on the left when going from p1 to p2.
- p1, vector:
p1 is (x1,y1) and vector is (v1,v2), where the numbers can be floats. Region 1 is then the region on the left when starting at position p1 and going in the direction given by vector.
- grad_width
If not zero, the split is not sharp, but gradual over a region of width grad_width (in pixels). This is preferable in many situations (for instance for antialiasing).
Examples
>>> size = [200,200]
>>> # an image with all pixels with x<50 =0, the others =1
>>> color_split(size, x=50, col1=0, col2=1)
>>> # an image with all pixels with y<50 red, the others green
>>> color_split(size, y=50, col1=[255,0,0], col2=[0,255,0])
>>> # an image split along an arbitrary line (see below)
>>> color_split(size, p1=[20,50], p2=[25,70], col1=0, col2=1)
Segmenting¶
Subtitles¶
Experimental module for subtitles support.
- class moviepy.video.tools.subtitles.SubtitlesClip(subtitles, make_textclip=None)[source]¶ Bases: moviepy.video.VideoClip.VideoClip
A Clip that serves as “subtitle track” in videos.
One particularity of this class is that the images of the subtitle texts are not generated beforehand, but only if needed.
Parameters: - subtitles
Either the name of a subtitle file, or a list of [(t1,t2), txt] entries.
Examples
>>> from moviepy.video.tools.subtitles import SubtitlesClip
>>> from moviepy.video.io.VideoFileClip import VideoFileClip
>>> from moviepy.editor import TextClip, CompositeVideoClip
>>> generator = lambda txt: TextClip(txt, font='Georgia-Regular', fontsize=24, color='white')
>>> sub = SubtitlesClip("subtitles.srt", generator)
>>> myvideo = VideoFileClip("myvideo.avi")
>>> final = CompositeVideoClip([myvideo, sub])
>>> final.write_videofile("final.mp4", fps=myvideo.fps)
- add_mask(self)¶ Add a mask VideoClip to the VideoClip.
Returns a copy of the clip with a completely opaque mask (made of ones). This makes computations slower compared to having a None mask but can be useful in many cases. Set constant_size to False for clips with a moving image size.
- afx(self, fun, *a, **k)¶ Transform the clip’s audio.
Return a new clip whose audio has been transformed by fun.
- blit_on(self, picture, t)¶ Returns the result of the blit of the clip’s frame at time t on the given picture, the position of the clip being given by the clip’s pos attribute. Meant for compositing.
- close(self)¶ Release any resources that are in use.
- copy(self)¶ Shallow copy of the clip.
Returns a shallow copy of the clip whose mask and audio will be shallow copies of the clip’s mask and audio if they exist.
This method is intensively used to produce new clips every time there is an outplace transformation of the clip (clip.resize, clip.subclip, etc.)
- cutout(self, ta, tb)¶ Returns a clip playing the content of the current clip but skips the extract between ta and tb, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If the original clip has a duration attribute set, the duration of the returned clip is automatically computed as duration - (tb - ta).

The resulting clip’s audio and mask will also be cut out if they exist.
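For instance (the times are illustrative):

>>> # remove the segment between t=5s and t=10s from the clip
>>> newclip = clip.cutout(5, 10)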
- fl(self, fun, apply_to=None, keep_duration=True)¶ General processing of a clip.
Returns a new Clip whose frames are a transformation (through function fun) of the frames of the current clip.

Parameters: - fun
A function with signature (gf, t -> frame) where gf will represent the current clip’s get_frame method, i.e. gf is a function (t -> image). Parameter t is a time in seconds, frame is a picture (= Numpy array) which will be returned by the transformed clip (see examples below).
- apply_to
Can be either 'mask', or 'audio', or ['mask','audio']. Specifies if the filter fl should also be applied to the audio or the mask of the clip, if any.
- keep_duration
Set to True if the transformation does not change the duration of the clip.
Examples
In the following, newclip is a 50-pixel-high clip whose video content scrolls from the top to the bottom of the frames of clip.
>>> fl = lambda gf, t: gf(t)[int(t):int(t)+50, :]
>>> newclip = clip.fl(fl, apply_to='mask')
- fl_image(self, image_func, apply_to=None)¶ Modifies the images of a clip by replacing the frame get_frame(t) by another frame, image_func(get_frame(t)).
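For example, a minimal sketch:

>>> # invert the colors of every frame of the clip
>>> newclip = clip.fl_image(lambda frame: 255 - frame)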
- fl_time(self, t_func, apply_to=None, keep_duration=False)¶ Returns a Clip instance playing the content of the current clip but with a modified timeline, time t being replaced by another time t_func(t).

Parameters: - t_func:
A function t -> new_t
- apply_to:
Can be either ‘mask’, or ‘audio’, or [‘mask’,’audio’]. Specifies if the filter fl should also be applied to the audio or the mask of the clip, if any.
- keep_duration:
False (default) if the transformation modifies the duration of the clip.
Examples
>>> # plays the clip (and its mask and sound) twice as fast
>>> newclip = clip.fl_time(lambda t: 2*t, apply_to=['mask', 'audio'])
>>>
>>> # plays the clip starting at t=3, and backwards:
>>> newclip = clip.fl_time(lambda t: 3-t)
- fx(self, func, *args, **kwargs)¶ Returns the result of func(self, *args, **kwargs). For instance
>>> newclip = clip.fx(resize, 0.2, method='bilinear')
is equivalent to
>>> newclip = resize(clip, 0.2, method='bilinear')
The motivation of fx is to keep the name of the effect near its parameters, when the effects are chained:
>>> from moviepy.video.fx import volumex, resize, mirrorx
>>> clip.fx(volumex, 0.5).fx(resize, 0.3).fx(mirrorx)
>>> # is equivalent to, but clearer than
>>> mirrorx(resize(volumex(clip, 0.5), 0.3))
- get_frame(self, t)¶ Gets a numpy array representing the RGB picture of the clip at time t, or the (mono or stereo) value for a sound clip.
- in_subclip(self, t_start=None, t_end=None)[source]¶ Returns a sequence of [(t1,t2), txt] covering all the given subclip from t_start to t_end. The first and last times will be cropped so as to be exactly t_start and t_end if possible.
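For instance (the times are illustrative):

>>> # [(t1,t2), txt] entries of the subtitle track between t=10s and t=20s
>>> entries = sub.in_subclip(10, 20)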
- is_playing(self, t)¶ If t is a time, returns true if t is between the start and the end of the clip. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t is a numpy array, returns False if none of the t is in the clip, else returns a vector [b_1, b_2, b_3…] where b_i is true iff t_i is in the clip.
- iter_frames(self, fps=None, with_times=False, logger=None, dtype=None)¶ Iterates over all the frames of the clip.
Returns each frame of the clip as a HxWxN np.array, where N=1 for mask clips and N=3 for RGB clips.
This function is not really meant for video editing. It provides an easy way to do frame-by-frame treatment of a video, for fields like science, computer vision…
The fps (frames per second) parameter is optional if the clip already has an fps attribute.

Use dtype="uint8" when using the pictures to write video, images…
Examples
>>> # prints the maximum of red that is contained
>>> # on the first line of each frame of the clip.
>>> from moviepy.editor import VideoFileClip
>>> myclip = VideoFileClip('myvideo.mp4')
>>> print([frame[0,:,0].max() for frame in myclip.iter_frames()])
- on_color(self, size=None, color=(0, 0, 0), pos=None, col_opacity=None)¶ Place the clip on a colored background.
Returns a clip made of the current clip overlaid on a color clip of a possibly bigger size. Can serve to flatten transparent clips.
Parameters: - size
Size (width, height) in pixels of the final clip. By default it will be the size of the current clip.
- color
Background color of the final clip ([R,G,B]).
- pos
Position of the clip in the final clip. ‘center’ is the default
- col_opacity
Parameter in 0..1 indicating the opacity of the colored background.
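For example (the size, color and opacity are illustrative):

>>> # center the clip on a 1280x720 white background at 60% background opacity
>>> newclip = clip.on_color(size=(1280, 720), color=(255, 255, 255), pos='center', col_opacity=0.6)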
- save_frame(self, filename, t=0, withmask=True)¶ Save a clip’s frame to an image file.
Saves the frame of the clip corresponding to time t in ‘filename’. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

If withmask is True, the mask is saved in the alpha layer of the picture (only works with PNGs).
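For instance (the filename and time are illustrative):

>>> # save the frame at t=2 seconds as a PNG, with the mask in the alpha layer
>>> myclip.save_frame("frame_at_2s.png", t=2, withmask=True)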
- set_audio(self, audioclip)¶ Attach an AudioClip to the VideoClip.
Returns a copy of the VideoClip instance, with the audio attribute set to audioclip, which must be an AudioClip instance.
- set_duration(self, t, change_end=True)¶ Returns a copy of the clip, with the duration attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip. If change_end is False, the start attribute of the clip will be modified according to the duration and the preset end of the clip.
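For instance, a minimal sketch:

>>> # keep the start and move the end so that the clip lasts 10 seconds
>>> newclip = clip.set_duration(10)
>>> # keep the end and move the start instead
>>> newclip = clip.set_duration(10, change_end=False)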
- set_end(self, t)¶ Returns a copy of the clip, with the end attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.
- set_fps(self, fps)¶ Returns a copy of the clip with a new default fps for functions like write_videofile, iterframe, etc.
- set_ismask(self, ismask)¶ Says whether the clip is a mask or not (ismask is a boolean).
- set_make_frame(self, mf)¶ Change the clip’s get_frame.
Returns a copy of the VideoClip instance, with the make_frame attribute set to mf.
- set_mask(self, mask)¶ Set the clip’s mask.
Returns a copy of the VideoClip with the mask attribute set to mask, which must be a greyscale (values in 0-1) VideoClip.
- set_memoize(self, memoize)¶ Sets whether the clip should keep the last frame read in memory.
- set_opacity(self, op)¶ Set the opacity/transparency level of the clip.
Returns a semi-transparent copy of the clip where the mask is multiplied by op (any float, normally between 0 and 1).
- set_pos(*a, **kw)¶ The function set_pos is deprecated and is kept temporarily for backwards compatibility. Please use the new name, set_position, instead.
- set_position(self, pos, relative=False)¶ Set the clip’s position in compositions.
Sets the position that the clip will have when included in compositions. The argument pos can be either a couple (x,y) or a function t -> (x,y). x and y mark the location of the top left corner of the clip, and can be of several types.

Examples
>>> clip.set_position((45,150)) # x=45, y=150
>>>
>>> # clip horizontally centered, at the top of the picture
>>> clip.set_position(("center","top"))
>>>
>>> # clip is at 40% of the width, 70% of the height:
>>> clip.set_position((0.4,0.7), relative=True)
>>>
>>> # clip's position is horizontally centered, and moving up !
>>> clip.set_position(lambda t: ('center', 50+t))
- set_start(self, t, change_end=True)¶ Returns a copy of the clip, with the start attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

If change_end=True and the clip has a duration attribute, the end attribute of the clip will be updated to start+duration.

If change_end=False and the clip has an end attribute, the duration attribute of the clip will be updated to end-start.

These changes are also applied to the audio and mask clips of the current clip, if they exist.
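For instance (the time is illustrative):

>>> # in a composition, make the clip start playing at t=5s;
>>> # with change_end=True (the default) its end becomes 5 + clip.duration
>>> newclip = clip.set_start(5)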
- subclip(self, t_start=0, t_end=None)¶ Returns a clip playing the content of the current clip between times t_start and t_end, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t_end is not provided, it is assumed to be the duration of the clip (potentially infinite). If t_end is a negative value, it is reset to clip.duration + t_end. For instance:
>>> # cut the last two seconds of the clip:
>>> newclip = clip.subclip(0, -2)
If t_end is provided or if the clip has a duration attribute, the duration of the returned clip is set automatically.

The mask and audio of the resulting subclip will be subclips of the mask and audio of the original clip, if they exist.
- subfx(self, fx, ta=0, tb=None, **kwargs)¶ Apply a transformation to a part of the clip.
Returns a new clip in which the function fx (clip -> clip) has been applied to the subclip between times ta and tb (in seconds).

Examples
>>> # The scene between times t=3s and t=6s in ``clip`` will be
>>> # played twice slower in ``newclip``
>>> newclip = clip.subfx(lambda c: c.speedx(0.5), 3, 6)
- to_ImageClip(self, t=0, with_mask=True, duration=None)¶ Returns an ImageClip made out of the clip’s frame at time t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.
- to_RGB(self)¶ Return a non-mask video clip made from the mask video clip.
- to_gif(*a, **kw)¶ The function to_gif is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_gif, instead.
- to_images_sequence(*a, **kw)¶ The function to_images_sequence is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_images_sequence, instead.
- to_mask(self, canal=0)¶ Return a mask video clip made from the clip.
- to_videofile(*a, **kw)¶ The function to_videofile is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_videofile, instead.
- without_audio(self)¶ Remove the clip’s audio.
Return a copy of the clip with audio set to None.
- write_gif(self, filename, fps=None, program='imageio', opt='nq', fuzz=1, verbose=True, loop=0, dispose=False, colors=None, tempfiles=False, logger='bar')¶ Write the VideoClip to a GIF file.
Converts a VideoClip into an animated GIF using ImageMagick or ffmpeg.
Parameters: - filename
Name of the resulting gif file.
- fps
Number of frames per second (see note below). If it isn’t provided, then the function will look for the clip’s fps attribute (VideoFileClip, for instance, has one).
- program
Software to use for the conversion, either ‘imageio’ (this will use the library FreeImage through ImageIO), or ‘ImageMagick’, or ‘ffmpeg’.
- opt
Optimization to apply. If program=’imageio’, opt must be either ‘wu’ (Wu) or ‘nq’ (Neuquant). If program=’ImageMagick’, either ‘optimizeplus’ or ‘OptimizeTransparency’.
- fuzz
(ImageMagick only) Compresses the GIF by considering that the colors that are less than fuzz% different are in fact the same.
- tempfiles
Writes every frame to a file instead of passing them in the RAM. Useful on computers with little RAM. Can only be used with ‘ImageMagick’ or ‘ffmpeg’.
- progress_bar
If True, displays a progress bar
Notes
The gif will play the clip in real time (you can only change the frame rate). If you want the gif to play slower than the clip, you can for instance use:
>>> # slow down clip 50% and make it a gif
>>> myClip.speedx(0.5).write_gif('myClip.gif')
- write_images_sequence(self, nameformat, fps=None, verbose=True, withmask=True, logger='bar')¶ Writes the videoclip to a sequence of image files.
Parameters: - nameformat
A filename specifying the numbering format and extension of the pictures. For instance “frame%03d.png” for filenames indexed with 3 digits and PNG format. Also possible: “some_folder/frame%04d.jpeg”, etc.
- fps
Number of frames per second to consider when writing the clip. If not specified, the clip’s fps attribute will be used if it has one.
- withmask
Will save the clip’s mask (if any) as an alpha channel (PNGs only).
- verbose
Boolean indicating whether to print information.
- logger
Either ‘bar’ (progress bar) or None or any Proglog logger.
Returns: - names_list
A list of all the files generated.
Notes
The resulting image sequence can be read using e.g. the class ImageSequenceClip.
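For instance (the filename pattern and fps are illustrative):

>>> # write one PNG per frame: frame000.png, frame001.png, ...
>>> myclip.write_images_sequence("frame%03d.png", fps=24)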
- write_videofile(self, filename, fps=None, codec=None, bitrate=None, audio=True, audio_fps=44100, preset='medium', audio_nbytes=4, audio_codec=None, audio_bitrate=None, audio_bufsize=2000, temp_audiofile=None, rewrite_audio=True, remove_temp=True, write_logfile=False, verbose=True, threads=None, ffmpeg_params=None, logger='bar')¶ Write the clip to a videofile.
Parameters: - filename
Name of the video file to write in. The extension must correspond to the “codec” used (see below), or simply be ‘.avi’ (which will work with any codec).
- fps
Number of frames per second in the resulting video file. If None is provided, and the clip has an fps attribute, this fps will be used.
- codec
Codec to use for image encoding. Can be any codec supported by ffmpeg. If the filename has extension ‘.mp4’, ‘.ogv’ or ‘.webm’, the codec will be set accordingly, but you can still set it if you don’t like the default. For other extensions, the output filename must be set accordingly.
Some examples of codecs are:
- 'libx264' (default codec for file extension .mp4) makes well-compressed videos (quality tunable using ‘bitrate’).
- 'mpeg4' (other codec for extension .mp4) can be an alternative to 'libx264', and produces higher quality videos by default.
- 'rawvideo' (use file extension .avi) will produce a video of perfect quality, of possibly very huge size.
- 'png' (use file extension .avi) will produce a video of perfect quality, of smaller size than with rawvideo.
- 'libvorbis' (use file extension .ogv) is a nice video format, which is completely free/open source. However not everyone has the codecs installed by default on their machine.
- 'libvpx' (use file extension .webm) is a tiny video format well indicated for web videos (with HTML5). Open source.
- audio
Either True, False, or a file name. If True and the clip has an audio clip attached, this audio clip will be incorporated as a soundtrack in the movie. If audio is the name of an audio file, this audio file will be incorporated as a soundtrack in the movie.
- audio_fps
frame rate to use when generating the sound.
- temp_audiofile
The name of the temporary audiofile to be generated and incorporated in the movie, if any.
- audio_codec
Which audio codec should be used. Examples are ‘libmp3lame’ for ‘.mp3’, ‘libvorbis’ for ‘.ogg’, ‘libfdk_aac’ for ‘.m4a’, ‘pcm_s16le’ for 16-bit wav and ‘pcm_s32le’ for 32-bit wav. Default is ‘libmp3lame’, unless the video extension is ‘ogv’ or ‘webm’, in which case the default is ‘libvorbis’.
- audio_bitrate
Audio bitrate, given as a string like ‘50k’, ‘500k’, ‘3000k’. Will determine the size/quality of audio in the output file. Note that it is mainly an indicative goal; the bitrate won’t necessarily be reached in the final file.
- preset
Sets the time that FFMPEG will spend optimizing the compression. Choices are: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo. Note that this does not impact the quality of the video, only the size of the video file. So choose ultrafast when you are in a hurry and file size does not matter.
- threads
Number of threads to use for ffmpeg. Can speed up the writing of the video on multicore computers.
- ffmpeg_params
Any additional ffmpeg parameters you would like to pass, as a list of terms, like [‘-option1’, ‘value1’, ‘-option2’, ‘value2’].
- write_logfile
If true, will write log files for the audio and the video. These will be files ending with ‘.log’ with the name of the output file in them.
- logger
Either “bar” for progress bar or None or any Proglog logger.
- verbose (deprecated, kept for compatibility)
Formerly used for toggling messages on/off. Use logger=None now.
Examples
>>> from moviepy.editor import VideoFileClip
>>> clip = VideoFileClip("myvideo.mp4").subclip(100,120)
>>> clip.write_videofile("my_new_video.mp4")
>>> clip.close()