video.tools

This module gathers advanced functions for editing videos (some more useful than others), listed in alphabetical order.

Credits

This module contains different functions to make end and opening credits, even though it is difficult to meet everyone's needs in this matter.

moviepy.video.tools.credits.credits1(creditfile, width, stretch=30, color='white', stroke_color='black', stroke_width=2, font='Impact-Normal', fontsize=60)[source]
Parameters:

creditfile :

A text file whose content must be as follows:

# This is a comment
# The next line says: leave 4 blank lines
.blank 4

..Executive Story Editor
MARCEL DURAND

..Associate Producers
MARTIN MARCEL
DIDIER MARTIN

..Music Supervisor
JEAN DIDIER

width :

Total width of the credits text in pixels

gap :

Gap in pixels between the jobs and the names.

**txt_kw :

Additional arguments passed to TextClip (font, colors, etc.)

Returns:

image :

An ImageClip instance that looks like this and can be scrolled to make credits:

Executive Story Editor    MARCEL DURAND
   Associate Producers    MARTIN MARCEL
                          DIDIER MARTIN
      Music Supervisor    JEAN DIDIER
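The credits file format above (comments, `.blank` directives, `..Job` headers, name lines) is simple enough to parse with plain Python. The following `parse_creditfile` helper is a hypothetical sketch (it is not part of MoviePy) that shows the structure `credits1` expects:

```python
def parse_creditfile(text):
    """Parse the credits file format: '#' starts a comment,
    '.blank n' requests n blank lines, '..Job' starts a job section,
    and any other non-empty line is a name under the current job."""
    entries = []            # list of (job, [names]) tuples
    blanks = 0              # total blank lines requested (layout only)
    job, names = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue                          # skip comments / empty lines
        if line.startswith('.blank'):
            blanks += int(line.split()[1])
        elif line.startswith('..'):
            if job is not None:
                entries.append((job, names))  # close the previous section
            job, names = line[2:], []
        else:
            names.append(line)
    if job is not None:
        entries.append((job, names))
    return entries
```

Feeding it the sample file above yields `[('Executive Story Editor', ['MARCEL DURAND']), ('Associate Producers', ['MARTIN MARCEL', 'DIDIER MARTIN']), ...]`.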

Drawing

This module deals with making images (np arrays). It provides drawing methods that are difficult to do with the existing Python libraries.

moviepy.video.tools.drawing.blit(im1, im2, pos=[0, 0], mask=None, ismask=False)[source]

Blit an image over another.

Blits im1 onto im2 at position pos=(x,y), using the mask if provided. If im1 and im2 are mask pictures (2D float arrays), then ismask must be True.
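For mask pictures (2D float arrays), the core of a blit is a per-pixel blend between the pasted image and the background. This is an illustrative pure-Python sketch over nested lists, not MoviePy's actual (NumPy-based) implementation:

```python
def blit_sketch(im1, im2, pos=(0, 0), mask=None):
    """Paste mask-picture im1 (2D list of floats) onto a copy of im2
    at position pos=(x, y); where a mask is given, blend with it."""
    x0, y0 = pos
    out = [row[:] for row in im2]          # work on a copy of the background
    for j, row in enumerate(im1):
        for i, v in enumerate(row):
            x, y = x0 + i, y0 + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                m = mask[j][i] if mask is not None else 1.0
                # mask value m weights im1 against the background pixel
                out[y][x] = m * v + (1 - m) * out[y][x]
    return out
```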

moviepy.video.tools.drawing.circle(screensize, center, radius, col1=1.0, col2=0, blur=1)[source]

Draw an image with a circle.

Draws a circle of color col1 on a background of color col2, on a screen of size screensize, at the position center=(x,y), with radius radius, slightly blurred at the border over blur pixels.

moviepy.video.tools.drawing.color_gradient(size, p1, p2=None, vector=None, r=None, col1=0, col2=1.0, shape='linear', offset=0)[source]

Draw a linear, bilinear, or radial gradient.

The result is a picture of size size, whose color varies gradually from color col1 in position p1 to color col2 in position p2.

If it is an RGB picture, the result must be converted to a ‘uint8’ array to be displayed normally:

Parameters:

size :

Size (width, height) in pixels of the final picture/array.

p1, p2 :

Coordinates (x,y) in pixels of the limit point for col1 and col2. The color ‘before’ p1 is col1 and it gradually changes in the direction of p2 until it is col2 when it reaches p2.

vector :

A vector [x,y] in pixels that can be provided instead of p2. p2 is then defined as (p1 + vector).

col1, col2 :

Either floats between 0 and 1 (for gradients used in masks) or [R,G,B] arrays (for colored gradients).

shape :

‘linear’, ‘bilinear’, or ‘circular’. In a linear gradient the color varies in one direction, from point p1 to point p2. In a bilinear gradient it also varies symmetrically from p1 in the other direction. In a circular gradient it goes from col1 to col2 in all directions.

offset :

Real number between 0 and 1 indicating the fraction of the vector at which the gradient actually starts. For instance, if offset is 0.9 in a gradient going from p1 to p2, the gradient only occurs near p2 (before that, everything has color col1). If the offset is 0.9 in a radial gradient, the gradient occurs in the region located between 90% and 100% of the radius; this creates a blurry disc of radius d(p1,p2).

Returns:

image :

A NumPy array of dimensions (W,H,ncolors) of type float representing the image of the gradient.

Examples

>>> grad = color_gradient(...).astype('uint8')
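To make the ‘linear’ shape concrete, here is a minimal pure-Python sketch (no NumPy, grayscale only) of a linear gradient: each pixel is projected onto the p1→p2 axis, the projection is clipped to [0, 1], and col1 is interpolated toward col2. The name `linear_gradient` is hypothetical; MoviePy's `color_gradient` handles the other shapes and colored arrays as well.

```python
def linear_gradient(size, p1, p2, col1=0.0, col2=1.0, offset=0):
    """Linear-gradient sketch: project each pixel onto the p1->p2
    axis, clip to [0, 1], and interpolate col1 -> col2."""
    w, h = size
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    norm2 = vx * vx + vy * vy              # squared length of the axis
    img = []
    for y in range(h):
        row = []
        for x in range(w):
            # normalized projection of (x, y) onto the gradient axis
            t = ((x - p1[0]) * vx + (y - p1[1]) * vy) / norm2
            t = (t - offset) / (1 - offset)  # shift where the gradient starts
            t = min(1.0, max(0.0, t))
            row.append(col1 * (1 - t) + col2 * t)
        img.append(row)
    return img
```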
moviepy.video.tools.drawing.color_split(size, x=None, y=None, p1=None, p2=None, vector=None, col1=0, col2=1.0, grad_width=0)[source]

Make an image split into two colored regions.

Returns an array of size size divided into two regions, called 1 and 2 in what follows, which have colors col1 and col2 respectively.

Parameters:

x: (int) :

If provided, the image is split horizontally at x, the left region being region 1.

y: (int) :

If provided, the image is split vertically at y, the top region being region 1.

p1,p2: :

Positions (x1,y1),(x2,y2) in pixels, where the numbers can be floats. Region 1 is defined as the whole region on the left when going from p1 to p2.

p1, vector: :

p1 is (x1,y1) and vector (v1,v2), where the numbers can be floats. Region 1 is then the region on the left when starting in position p1 and going in the direction given by vector.

grad_width :

If not zero, the split is not sharp but gradual, over a region of width grad_width (in pixels). This is preferable in many situations (for instance for antialiasing).

Examples

>>> size = [200,200]
>>> # an image with all pixels with x<50 = 0, the others = 1
>>> color_split(size, x=50, col1=0, col2=1)
>>> # an image with all pixels with y<50 red, the others green
>>> color_split(size, y=50, col1=[255,0,0], col2=[0,255,0])
>>> # an image split along an arbitrary line (see below)
>>> color_split(size, p1=[20,50], p2=[25,70], col1=0, col2=1)
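The arbitrary-line case comes down to a sign test: the 2D cross product of the p1→p2 direction with the vector from p1 to each pixel tells which side of the line the pixel is on. This hypothetical `color_split_line` sketch shows only the sharp grayscale case (no grad_width blending); note that which side counts as "left" depends on the y-axis orientation (here, y grows downward as in image coordinates).

```python
def color_split_line(size, p1, p2, col1=0, col2=1.0):
    """Split along the line through p1 and p2: pixels on one side of
    the p1->p2 direction get col1, the others col2 (sharp split)."""
    w, h = size
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    img = []
    for y in range(h):
        row = []
        for x in range(w):
            # sign of the 2D cross product gives the side of the line
            cross = vx * (y - p1[1]) - vy * (x - p1[0])
            row.append(col1 if cross > 0 else col2)
        img.append(row)
    return img
```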

Segmenting

moviepy.video.tools.segmenting.findObjects(clip, rem_thr=500, preview=False)[source]

Returns a list of ImageClips, each representing a separate object on the screen.

rem_thr : all objects found with size < rem_thr will be considered false positives and will be removed

Subtitles

Experimental module for subtitles support.

class moviepy.video.tools.subtitles.SubtitlesClip(subtitles, make_textclip=None)[source]

Bases: moviepy.video.VideoClip.VideoClip

A Clip that serves as “subtitle track” in videos.

One particularity of this class is that the images of the subtitle texts are not generated beforehand, but only if needed.

Parameters:

subtitles :

Either the name of a file, or a list

Examples

>>> from moviepy.editor import TextClip, VideoFileClip, CompositeVideoClip
>>> from moviepy.video.tools.subtitles import SubtitlesClip
>>> generator = lambda txt: TextClip(txt, font='Georgia-Regular',
                                     fontsize=24, color='white')
>>> sub = SubtitlesClip("subtitles.srt", generator)
>>> myvideo = VideoFileClip("myvideo.avi")
>>> final = CompositeVideoClip([myvideo, sub])
>>> final.to_videofile("final.mp4", fps=myvideo.fps)
in_subclip(t_start=None, t_end=None)

Returns a sequence of [(t1,t2), txt] covering the given subclip from t_start to t_end. The first and last times are cropped so as to be exactly t_start and t_end, if possible.

moviepy.video.tools.subtitles.file_to_subtitles(filename)[source]

Converts an .srt file into subtitles.

The returned list is of the form [((ta,tb),'some text'),...] and can be fed to SubtitlesClip.

Only works for ‘.srt’ format for the moment.
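The .srt format itself is simple: numbered blocks separated by blank lines, each with a `start --> end` timestamp line followed by the text. This illustrative sketch (not MoviePy's implementation; the name `srt_to_subtitles` is made up) produces the `[((ta,tb),'some text'),...]` form described above:

```python
import re

def srt_to_subtitles(srt_text):
    """Parse SRT-formatted text into [((ta, tb), 'text'), ...],
    with ta and tb expressed in seconds."""
    def tosec(ts):
        # SRT timestamps look like '00:00:01,000' (comma before millis)
        h, m, s = ts.replace(',', '.').split(':')
        return int(h) * 3600 + int(m) * 60 + float(s)

    result = []
    for block in re.split(r'\n\s*\n', srt_text.strip()):
        lines = block.splitlines()
        # lines[0] is the block index, lines[1] the 'start --> end' stamp
        start, end = [tosec(t.strip()) for t in lines[1].split('-->')]
        result.append(((start, end), '\n'.join(lines[2:])))
    return result
```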

Tracking

This module contains different functions for tracking objects in videos, manually or automatically. The tracking functions return results in the form (txy, (fx,fy)), where txy is of the form [(ti, xi, yi), ...] and (fx(t), fy(t)) gives the position of the track at any time t. If t is outside the bounds of the tracking time interval, fx and fy return the position of the object at the start or at the end of that interval.
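A pair of functions (fx, fy) with that clamping behavior can be built from txy by linear interpolation. The helper below is an illustrative sketch under that assumption (the name `make_track_functions` is hypothetical, and txy is assumed sorted by time):

```python
from bisect import bisect_right

def make_track_functions(txy):
    """Given txy = [(t0, x0, y0), (t1, x1, y1), ...] sorted by time,
    return (fx, fy): linear interpolation inside the tracked interval,
    clamped to the first/last position outside it."""
    ts = [p[0] for p in txy]

    def interp(t, idx):          # idx 1 -> x coordinate, idx 2 -> y
        if t <= ts[0]:
            return txy[0][idx]   # before the interval: first position
        if t >= ts[-1]:
            return txy[-1][idx]  # after the interval: last position
        i = bisect_right(ts, t) - 1
        t0, t1 = ts[i], ts[i + 1]
        a = (t - t0) / (t1 - t0)
        return (1 - a) * txy[i][idx] + a * txy[i + 1][idx]

    return (lambda t: interp(t, 1)), (lambda t: interp(t, 2))
```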

moviepy.video.tools.tracking.autoTrack(clip, pattern, tt=None, fps=None, radius=20, xy0=None)[source]

Tracks a given pattern (small image array) in a video clip. Returns [(x1,y1),(x2,y2)...] where (xi,yi) are the coordinates of the pattern in the clip on frame i. To select the frames you can either specify a list of times with tt or select a frame rate with fps. This algorithm assumes that the pattern’s aspect does not vary much, and that the distance between two occurrences of the pattern in two consecutive frames is smaller than radius (if you set radius to -1, the pattern is searched on the whole screen at each frame). You can also provide the original position of the pattern with xy0.

moviepy.video.tools.tracking.findAround(pic, pat, xy=None, r=None)[source]

Finds the image pattern pat in pic[x +/- r, y +/- r]. If xy is None, the whole picture is searched.
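The underlying idea is an exhaustive template match: try every candidate position (or only those within xy +/- r) and keep the one minimizing a distance such as the sum of squared differences. This pure-Python sketch over nested lists (the name `find_pattern` is made up; MoviePy's version works on image arrays) illustrates the search:

```python
def find_pattern(pic, pat, xy=None, r=None):
    """Return the top-left (x, y) where pat (2D list) best matches
    pic (2D list), minimizing the sum of squared differences. If xy
    and r are given, only positions within xy +/- r are tried."""
    H, W = len(pic), len(pic[0])
    h, w = len(pat), len(pat[0])
    if xy is not None and r is not None:
        xs = range(max(0, xy[0] - r), min(W - w, xy[0] + r) + 1)
        ys = range(max(0, xy[1] - r), min(H - h, xy[1] + r) + 1)
    else:
        xs, ys = range(W - w + 1), range(H - h + 1)
    best, best_xy = None, None
    for y in ys:
        for x in xs:
            ssd = sum((pic[y + j][x + i] - pat[j][i]) ** 2
                      for j in range(h) for i in range(w))
            if best is None or ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy
```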

moviepy.video.tools.tracking.manual_tracking(clip, t1=None, t2=None, fps=None, nobjects=1, savefile=None)[source]

Allows manual tracking of one or several objects in the video clip between times t1 and t2. The clip is displayed frame by frame and you must click on the object(s) in each frame. If t2=None, only the frame at t1 is taken into account.

Returns a list [(t1,x1,y1),(t2,x2,y2), ...] if there is one object per frame; otherwise returns a list whose elements are of the form (ti, [(xi1,yi1), (xi2,yi2), ...]).

Parameters:

t1,t2: :

Times between which to track (defaults are the start and end of the clip). t1 and t2 can be expressed in seconds like 15.35, as (min, sec), as (hour, min, sec), or as a string: ‘01:03:05.35’.

fps: :

Number of frames per second to freeze on. If None, the clip’s fps attribute is used instead.

nobjects: :

Number of objects to click on each frame.

savefile: :

If provided, the result is saved to a file, which makes it easier to edit and re-use later.
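The accepted time formats for t1 and t2 all reduce to a number of seconds. A hypothetical `to_seconds` converter (sketched here; MoviePy has its own internal conversion) shows how the four forms relate:

```python
def to_seconds(t):
    """Convert 15.35, (min, sec), (hour, min, sec) or '01:03:05.35'
    into a number of seconds."""
    if isinstance(t, str):
        t = [float(part) for part in t.split(':')]
    if isinstance(t, (list, tuple)):
        # rightmost element is seconds, then minutes, then hours
        return sum(x * 60 ** i for i, x in enumerate(reversed(t)))
    return t
```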

Examples

>>> from moviepy.editor import VideoFileClip
>>> from moviepy.tools.tracking import manual_tracking
>>> clip = VideoFileClip("myvideo.mp4")
>>> # manually indicate 3 trajectories, save them to a file
>>> trajectories = manual_tracking(clip, t1=5, t2=7, fps=5,
                                   nobjects=3, savefile="track.txt")
>>> # ...
>>> # LATER, IN ANOTHER SCRIPT, RECOVER THESE TRAJECTORIES
>>> from moviepy.tools.tracking import Trajectory
>>> traj1, traj2, traj3 = Trajectory.load_list('track.txt')
>>> # If ever you only have one object being tracked, recover it with
>>> traj, =  Trajectory.load_list('track.txt')