video.tools

This module gathers advanced, useful (and less useful) functions for editing videos, in alphabetical order.

Credits

This module contains different functions to make end and opening credits, even though it is difficult to fill everyone's needs in this matter.

moviepy.video.tools.credits.credits1(creditfile, width, stretch=30, color='white', stroke_color='black', stroke_width=2, font='Impact-Normal', fontsize=60)[source]
Parameters:

creditfile

A text file whose content must be as follows:

# This is a comment
# The next line says : leave 4 blank lines
.blank 4

..Executive Story Editor
MARCEL DURAND

..Associate Producers
MARTIN MARCEL
DIDIER MARTIN

..Music Supervisor
JEAN DIDIER

width

Total width of the credits text in pixels

gap

Gap in pixels between the jobs and the names.

**txt_kw

Additional arguments passed to TextClip (font, colors, etc.)

Returns:

image

An ImageClip instance that looks like this and can be scrolled to make credits:

Executive Story Editor    MARCEL DURAND

Associate Producers       MARTIN MARCEL
                          DIDIER MARTIN

Music Supervisor          JEAN DIDIER
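The credit-file syntax shown above can be parsed with a few lines of plain Python. The sketch below is illustrative only, not MoviePy's actual parser; the function name parse_creditfile is hypothetical.

```python
def parse_creditfile(text):
    # Illustrative parser for the credit-file syntax shown above
    # (an assumption about the format, not MoviePy's implementation).
    entries = []      # list of ('blank', n) or (job, [names]) entries
    current = None    # the (job, [names]) pair currently being filled
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue                      # empty line or comment
        if line.startswith('..'):         # a job title
            current = (line[2:], [])
            entries.append(current)
        elif line.startswith('.blank'):   # explicit vertical spacing
            entries.append(('blank', int(line.split()[1])))
            current = None
        else:                             # a name under the current job
            current[1].append(line)
    return entries

sample = """\
# This is a comment
.blank 4

..Executive Story Editor
MARCEL DURAND

..Associate Producers
MARTIN MARCEL
DIDIER MARTIN
"""
print(parse_creditfile(sample))
```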

Drawing

This module deals with making images (np arrays). It provides drawing methods that are difficult to do with the existing Python libraries.

moviepy.video.tools.drawing.blit(im1, im2, pos=[0, 0], mask=None, ismask=False)[source]

Blit an image over another.

Blits im1 on im2 at position pos=(x,y), using the mask if provided. If im1 and im2 are mask pictures (2D float arrays), then ismask must be True.
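A minimal sketch of this blending logic for the 2D mask case, using plain Python lists for clarity (the real function operates on numpy arrays; blit_mask is a hypothetical name):

```python
def blit_mask(im1, im2, pos=(0, 0), mask=None):
    # Sketch of the blit logic for 2D mask arrays (values in 0-1).
    # im1 is pasted onto a copy of im2 at position pos=(x, y);
    # where a mask is given, the result interpolates between the two.
    x0, y0 = pos
    out = [row[:] for row in im2]         # copy of the background
    for j, row in enumerate(im1):
        for i, v in enumerate(row):
            x, y = x0 + i, y0 + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                m = mask[j][i] if mask is not None else 1.0
                out[y][x] = m * v + (1 - m) * out[y][x]
    return out

background = [[0.0] * 4 for _ in range(3)]
patch = [[1.0, 1.0], [1.0, 1.0]]
print(blit_mask(patch, background, pos=(1, 1)))
```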

moviepy.video.tools.drawing.circle(screensize, center, radius, col1=1.0, col2=0, blur=1)[source]

Draw an image with a circle.

Draws a circle of color col1, on a background of color col2, on a screen of size screensize, at the position center=(x,y), with radius radius, slightly blurred at the border over blur pixels.

moviepy.video.tools.drawing.color_gradient(size, p1, p2=None, vector=None, r=None, col1=0, col2=1.0, shape='linear', offset=0)[source]

Draw a linear, bilinear, or radial gradient.

The result is a picture of size size, whose color varies gradually from color col1 in position p1 to color col2 in position p2.

If it is an RGB picture, the result must be transformed into a ‘uint8’ array to be displayed normally (see the example below).

Parameters:

size

Size (width, height) in pixels of the final picture/array.

p1, p2

Coordinates (x,y) in pixels of the limit point for col1 and col2. The color ‘before’ p1 is col1 and it gradually changes in the direction of p2 until it is col2 when it reaches p2.

vector

A vector [x,y] in pixels that can be provided instead of p2. p2 is then defined as (p1 + vector).

col1, col2

Either floats between 0 and 1 (for gradients used in masks) or [R,G,B] arrays (for colored gradients).

shape

‘linear’, ‘bilinear’, or ‘circular’. In a linear gradient the color varies in one direction, from point p1 to point p2. In a bilinear gradient it also varies symmetrically from p1 in the other direction. In a circular gradient it goes from col1 to col2 in all directions.

offset

Real number between 0 and 1 indicating the fraction of the vector at which the gradient actually starts. For instance, if offset is 0.9 in a gradient going from p1 to p2, then the gradient will only occur near p2 (before that, everything is of color col1). If the offset is 0.9 in a radial gradient, the gradient will occur in the region located between 90% and 100% of the radius; this creates a blurry disc of radius d(p1,p2).

Returns:

image

A Numpy array of dimensions (W,H,ncolors) of type float representing the image of the gradient.

Examples

>>> grad = color_gradient(blabla).astype('uint8')
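One plausible per-pixel rule behind a linear gradient with an offset, consistent with the description above (this is an assumption about the implementation, written for a single point; linear_gradient_value is a hypothetical name):

```python
def linear_gradient_value(p, p1, p2, col1=0.0, col2=1.0, offset=0):
    # Project p onto the p1->p2 axis, normalize to [0, 1], then apply
    # the offset so the transition only starts after `offset` of the way.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    norm2 = vx * vx + vy * vy
    t = ((p[0] - p1[0]) * vx + (p[1] - p1[1]) * vy) / norm2
    t = (t - offset) / (1 - offset)    # shift the start of the gradient
    t = min(1.0, max(0.0, t))          # clamp: col1 before p1, col2 after p2
    return col1 + t * (col2 - col1)

# halfway between p1=(0,0) and p2=(10,0) -> halfway between the colors
print(linear_gradient_value((5, 0), (0, 0), (10, 0)))   # 0.5
```

With offset=0.9, a point 90% of the way along the vector still gets col1, matching the "blurry disc" behaviour described for radial gradients.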
moviepy.video.tools.drawing.color_split(size, x=None, y=None, p1=None, p2=None, vector=None, col1=0, col2=1.0, grad_width=0)[source]

Make an image split into 2 colored regions.

Returns an array of size size divided into two regions, called 1 and 2 in what follows, which have colors col1 and col2 respectively.

Parameters:

x: (int)

If provided, the image is split horizontally at x, the left region being region 1.

y: (int)

If provided, the image is split vertically at y, the top region being region 1.

p1,p2:

Positions (x1,y1),(x2,y2) in pixels, where the numbers can be floats. Region 1 is defined as the whole region on the left when going from p1 to p2.

p1, vector:

p1 is (x1,y1) and vector (v1,v2), where the numbers can be floats. Region 1 is then the region on the left when starting in position p1 and going in the direction given by vector.

grad_width

If not zero, the split is not sharp but gradual, over a region of width grad_width (in pixels). This is preferable in many situations (for instance for antialiasing).

Examples

>>> size = [200,200]
>>> # an image with all pixels with x<50 =0, the others =1
>>> color_split(size, x=50, col1=0, col2=1)
>>> # an image with all pixels with y<50 red, the others green
>>> color_split(size, y=50, col1=[255,0,0], col2=[0,255,0])
>>> # an image split along an arbitrary line (see below)
>>> color_split(size, p1=[20,50], p2=[25,70], col1=0, col2=1)
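The p1/p2 case reduces to a point-in-half-plane test: region 1 is everything on the left of the oriented line p1 -> p2. A sketch of that geometric rule, for a single point (the sign convention below assumes screen coordinates with the y axis pointing down; check it against actual output before relying on it):

```python
def in_region_1(p, p1, p2):
    # Sketch of the rule behind color_split with p1/p2: region 1 is
    # the side on the left when walking from p1 to p2. In screen
    # coordinates (y pointing down), "left" corresponds to a negative
    # cross product here -- this sign convention is an assumption.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # direction of travel
    wx, wy = p[0] - p1[0], p[1] - p1[1]     # vector to the tested point
    return vx * wy - vy * wx < 0

# walking straight down the screen from (0,0) to (0,10),
# a point at x=5 is on the walker's left
print(in_region_1((5, 5), (0, 0), (0, 10)))
```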

Segmenting

moviepy.video.tools.segmenting.findObjects(clip, rem_thr=500, preview=False)[source]

Returns a list of ImageClips, each representing a separate object on the screen.

rem_thr : all objects found with size < rem_thr will be
considered false positives and will be removed

Subtitles

Experimental module for subtitles support.

class moviepy.video.tools.subtitles.SubtitlesClip(subtitles, make_textclip=None)[source]

Bases: moviepy.video.VideoClip.VideoClip

A Clip that serves as “subtitle track” in videos.

One particularity of this class is that the images of the subtitle texts are not generated beforehand, but only if needed.

Parameters:

subtitles

Either the name of a file, or a list

Examples

>>> from moviepy.editor import TextClip, CompositeVideoClip
>>> from moviepy.video.tools.subtitles import SubtitlesClip
>>> from moviepy.video.io.VideoFileClip import VideoFileClip
>>> generator = lambda txt: TextClip(txt, font='Georgia-Regular',
                                    fontsize=24, color='white')
>>> sub = SubtitlesClip("subtitles.srt", generator)
>>> myvideo = VideoFileClip("myvideo.avi")
>>> final = CompositeVideoClip([myvideo, sub])
>>> final.write_videofile("final.mp4", fps=myvideo.fps)
add_mask()

Add a mask VideoClip to the VideoClip.

Returns a copy of the clip with a completely opaque mask (made of ones). This makes computations slower than with mask=None, but can be useful in many cases.

Set constant_size to False for clips whose image size varies over time.

afx(fun, *a, **k)

Transform the clip’s audio.

Return a new clip whose audio has been transformed by fun.

blit_on(picture, t)

Returns the result of the blit of the clip’s frame at time t on the given picture, the position of the clip being given by the clip’s pos attribute. Meant for compositing.

copy()

Shallow copy of the clip.

Returns a shallow copy of the clip whose mask and audio will be shallow copies of the clip's mask and audio if they exist.

This method is intensively used to produce new clips every time there is an out-of-place transformation of the clip (clip.resize, clip.subclip, etc.)

cutout(ta, tb)

Returns a clip playing the content of the current clip, but skipping the extract between ta and tb, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If the original clip has a duration attribute set, the duration of the returned clip is automatically computed as duration - (tb - ta).

The resulting clip’s audio and mask will also be cutout if they exist.
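The flexible time formats accepted throughout this API (seconds, (min, sec), (hour, min, sec), or a string like '01:03:05.35') can all be normalized to seconds. MoviePy has its own internal helper for this; the sketch below, with the hypothetical name to_seconds, just illustrates the conversion:

```python
def to_seconds(t):
    # Normalize the time formats accepted by the API to seconds.
    # Accepts a number, a (min, sec) or (hour, min, sec) tuple,
    # or an 'HH:MM:SS.xx' string.
    if isinstance(t, str):
        parts = [float(p) for p in t.split(':')]
    elif isinstance(t, (tuple, list)):
        parts = list(t)
    else:
        return float(t)
    # pad on the left so parts is always [hours, minutes, seconds]
    parts = [0] * (3 - len(parts)) + parts
    hours, minutes, seconds = parts
    return 3600 * hours + 60 * minutes + seconds

print(to_seconds('01:03:05.35'))   # 3785.35
print(to_seconds((1, 3)))          # 63  (1 min 3 s)
```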

fl(fun, apply_to=[], keep_duration=True)

General processing of a clip.

Returns a new Clip whose frames are a transformation (through function fun) of the frames of the current clip.

Parameters:

fun

A function with signature (gf, t -> frame), where gf represents the current clip's get_frame method, i.e. gf is a function (t -> image). Parameter t is a time in seconds; frame is a picture (= Numpy array) which will be returned by the transformed clip (see examples below).

apply_to

Can be either 'mask', or 'audio', or ['mask','audio']. Specifies if the filter fl should also be applied to the audio or the mask of the clip, if any.

keep_duration

Set to True if the transformation does not change the duration of the clip.

Examples

In the following, newclip is a 50-pixel-high clip whose video content scrolls from the top to the bottom of the frames of clip.

>>> fl = lambda gf,t : gf(t)[int(t):int(t)+50, :]
>>> newclip = clip.fl(fl, apply_to='mask')
fl_image(image_func, apply_to=[])

Modifies the images of a clip by replacing the frame get_frame(t) with another frame, image_func(get_frame(t)).

fl_time(t_func, apply_to=[], keep_duration=False)

Returns a Clip instance playing the content of the current clip but with a modified timeline, time t being replaced by another time t_func(t).

Parameters:

t_func:

A function t-> new_t

apply_to:

Can be either ‘mask’, or ‘audio’, or [‘mask’,’audio’]. Specifies if the filter fl should also be applied to the audio or the mask of the clip, if any.

keep_duration:

False (default) if the transformation modifies the duration of the clip.

Examples

>>> # plays the clip (and its mask and sound) twice faster
>>> newclip = clip.fl_time(lambda t: 2*t, apply_to=['mask','audio'])
>>>
>>> # plays the clip starting at t=3, and backwards:
>>> newclip = clip.fl_time(lambda t: 3-t)
fx(func, *args, **kwargs)

Returns the result of func(self, *args, **kwargs). For instance

>>> newclip = clip.fx(resize, 0.2, method='bilinear')

is equivalent to

>>> newclip = resize(clip, 0.2, method='bilinear')

The motivation of fx is to keep the name of the effect near its parameters, when the effects are chained:

>>> from moviepy.video.fx.all import resize, mirrorx
>>> from moviepy.audio.fx.all import volumex
>>> clip.fx(volumex, 0.5).fx(resize, 0.3).fx(mirrorx)
>>> # Is equivalent to, but clearer than,
>>> mirrorx(resize(volumex(clip, 0.5), 0.3))
get_frame(t)

Gets a numpy array representing the RGB picture of the clip at time t, or the (mono or stereo) sound value for an audio clip.

in_subclip(t_start=None, t_end=None)[source]

Returns a sequence of [(t1,t2), txt] covering all the given subclip from t_start to t_end. The first and last times will be cropped so as to be exactly t_start and t_end if possible.

is_playing(t)

If t is a time, returns True if t is between the start and the end of the clip. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t is a numpy array, returns False if none of the times in t is in the clip, else returns a vector [b_1, b_2, b_3...] where b_i is True iff t_i is in the clip.

iter_frames(fps=None, with_times=False, progress_bar=False, dtype=None)

Iterates over all the frames of the clip.

Returns each frame of the clip as a HxWxN np.array, where N=1 for mask clips and N=3 for RGB clips.

This function is not really meant for video editing. It provides an easy way to do frame-by-frame processing of a video, in fields like science, computer vision...

The fps (frames per second) parameter is optional if the clip already has a fps attribute.

Use dtype=”uint8” when using the pictures to write video, images...

Examples

>>> # prints the maximum of red that is contained
>>> # on the first line of each frame of the clip.
>>> from moviepy.editor import VideoFileClip
>>> myclip = VideoFileClip('myvideo.mp4')
>>> print ( [frame[0,:,0].max()
             for frame in myclip.iter_frames()])
on_color(size=None, color=(0, 0, 0), pos=None, col_opacity=None)

Place the clip on a colored background.

Returns a clip made of the current clip overlaid on a color clip of a possibly bigger size. Can serve to flatten transparent clips.

Parameters:

size

Size (width, height) in pixels of the final clip. By default it will be the size of the current clip.

color

Background color of the final clip ([R,G,B]).

pos

Position of the clip in the final clip. ‘center’ is the default.

col_opacity

Parameter in 0..1 indicating the opacity of the colored background.

save_frame(filename, t=0, withmask=True)

Save a clip’s frame to an image file.

Saves the frame of clip corresponding to time t in ‘filename’. t can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

If withmask is True the mask is saved in the alpha layer of the picture (only works with PNGs).

set_audio(audioclip)

Attach an AudioClip to the VideoClip.

Returns a copy of the VideoClip instance, with the audio attribute set to audio, which must be an AudioClip instance.

set_duration(t, change_end=True)

Returns a copy of the clip, with the duration attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip. If change_end is False, the start attribute of the clip will be modified instead, computed from the duration and the preset end of the clip.

set_end(t)

Returns a copy of the clip, with the end attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. Also sets the duration of the mask and audio, if any, of the returned clip.

set_fps(fps)

Returns a copy of the clip with a new default fps for functions like write_videofile, iter_frames, etc.

set_ismask(ismask)

Sets whether the clip is a mask or not (ismask is a boolean).

set_make_frame(mf)

Change the clip’s get_frame.

Returns a copy of the VideoClip instance, with the make_frame attribute set to mf.

set_mask(mask)

Set the clip’s mask.

Returns a copy of the VideoClip with the mask attribute set to mask, which must be a greyscale (values in 0-1) VideoClip

set_memoize(memoize)

Sets whether the clip should keep the last frame read in memory.

set_opacity(op)

Set the opacity/transparency level of the clip.

Returns a semi-transparent copy of the clip where the mask is multiplied by op (any float, normally between 0 and 1).

set_pos(*a, **kw)

The function set_pos is deprecated and is kept temporarily for backwards compatibility. Please use the new name, set_position, instead.

set_position(pos, relative=False)

Set the clip’s position in compositions.

Sets the position that the clip will have when included in compositions. The argument pos can be either a couple (x,y) or a function t-> (x,y). x and y mark the location of the top left corner of the clip, and can be of several types.

Examples

>>> clip.set_pos((45,150)) # x=45, y=150
>>>
>>> # clip horizontally centered, at the top of the picture
>>> clip.set_pos(("center","top"))
>>>
>>> # clip is at 40% of the width, 70% of the height:
>>> clip.set_pos((0.4,0.7), relative=True)
>>>
>>> # clip's position is horizontally centered, and moving up !
>>> clip.set_pos(lambda t: ('center', 50+t) )
set_start(t, change_end=True)

Returns a copy of the clip, with the start attribute set to t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

If change_end=True and the clip has a duration attribute, the end attribute of the clip will be updated to start + duration.

If change_end=False and the clip has an end attribute, the duration attribute of the clip will be updated to end - start.

These changes are also applied to the audio and mask clips of the current clip, if they exist.

subclip(t_start=0, t_end=None)

Returns a clip playing the content of the current clip between times t_start and t_end, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’. If t_end is not provided, it is assumed to be the duration of the clip (potentially infinite). If t_end is a negative value, it is reset to clip.duration + t_end. For instance:

>>> # cut the last two seconds of the clip:
>>> newclip = clip.subclip(0,-2)

If t_end is provided or if the clip has a duration attribute, the duration of the returned clip is set automatically.

The mask and audio of the resulting subclip will be subclips of mask and audio the original clip, if they exist.

subfx(fx, ta=0, tb=None, **kwargs)

Apply a transformation to a part of the clip.

Returns a new clip in which the function fx (clip -> clip) has been applied to the subclip between times ta and tb (in seconds).

Examples

>>> # The scene between times t=3s and t=6s in ``clip`` will be
>>> # played twice slower in ``newclip``
>>> newclip = clip.subfx(lambda c: c.speedx(0.5), 3, 6)
to_ImageClip(t=0, with_mask=True)

Returns an ImageClip made out of the clip’s frame at time t, which can be expressed in seconds (15.35), in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

to_RGB()

Returns a non-mask video clip made from the mask video clip.

to_gif(*a, **kw)

The function to_gif is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_gif, instead.

to_images_sequence(*a, **kw)

The function to_images_sequence is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_images_sequence, instead.

to_mask(canal=0)

Returns a mask video clip made from the clip.

to_videofile(*a, **kw)

The function to_videofile is deprecated and is kept temporarily for backwards compatibility. Please use the new name, write_videofile, instead.

without_audio()

Remove the clip’s audio.

Return a copy of the clip with audio set to None.

write_gif(filename, fps=None, program='imageio', opt='nq', fuzz=1, verbose=True, loop=0, dispose=False, colors=None, tempfiles=False)

Write the VideoClip to a GIF file.

Converts a VideoClip into an animated GIF using ImageMagick or ffmpeg.

Parameters:

filename

Name of the resulting gif file.

fps

Number of frames per second (see note below). If it isn't provided, then the function will look for the clip's fps attribute (a VideoFileClip, for instance, has one).

program

Software to use for the conversion, either ‘imageio’ (this will use the library FreeImage through ImageIO), or ‘ImageMagick’, or ‘ffmpeg’.

opt

Optimization to apply. If program=’imageio’, opt must be either ‘wu’ (Wu) or ‘nq’ (Neuquant). If program=’ImageMagick’, either ‘optimizeplus’ or ‘OptimizeTransparency’.

fuzz

(ImageMagick only) Compresses the GIF by considering that the colors that are less than fuzz% different are in fact the same.

Notes

The gif will play the clip in real time (you can only change the frame rate). If you want the gif to play slower than the clip, you can use, for instance:

>>> # slow down clip 50% and make it a gif
>>> myClip.speedx(0.5).write_gif('myClip.gif')
write_images_sequence(nameformat, fps=None, verbose=True, withmask=True, progress_bar=True)

Writes the videoclip to a sequence of image files.

Parameters:

nameformat

A filename specifying the numbering format and extension of the pictures. For instance “frame%03d.png” for filenames indexed with 3 digits and PNG format. Also possible: “some_folder/frame%04d.jpeg”, etc.

fps

Number of frames per second to consider when writing the clip. If not specified, the clip’s fps attribute will be used if it has one.

withmask

Will save the clip's mask (if any) as an alpha channel (PNGs only).

verbose

Boolean indicating whether to print information.

progress_bar

Boolean indicating whether to show the progress bar.

Returns:

names_list

A list of all the files generated.

Notes

The resulting image sequence can be read using e.g. the class ImageSequenceClip.

write_videofile(filename, fps=None, codec=None, bitrate=None, audio=True, audio_fps=44100, preset='medium', audio_nbytes=4, audio_codec=None, audio_bitrate=None, audio_bufsize=2000, temp_audiofile=None, rewrite_audio=True, remove_temp=True, write_logfile=False, verbose=True, threads=None, ffmpeg_params=None, progress_bar=True)

Write the clip to a videofile.

Parameters:

filename

Name of the video file to write in. The extension must correspond to the “codec” used (see below), or simply be ‘.avi’ (which will work with any codec).

fps

Number of frames per second in the resulting video file. If None is provided, and the clip has an fps attribute, this fps will be used.

codec

Codec to use for image encoding. Can be any codec supported by ffmpeg. If the filename has extension ‘.mp4’, ‘.ogv’, or ‘.webm’, the codec will be set accordingly, but you can still set it if you don’t like the default. For other extensions, the output filename must be set accordingly.

Some examples of codecs are:

'libx264' (default codec for file extension .mp4) makes well-compressed videos (quality tunable using ‘bitrate’).

'mpeg4' (other codec for extension .mp4) can be an alternative to 'libx264', and produces higher quality videos by default.

'rawvideo' (use file extension .avi) will produce a video of perfect quality, of possibly very huge size.

'png' (use file extension .avi) will produce a video of perfect quality, of smaller size than with 'rawvideo'.

'libvorbis' (use file extension .ogv) is a nice video format, which is completely free/ open source. However not everyone has the codecs installed by default on their machine.

'libvpx' (use file extension .webm) is a tiny video format well suited for web videos (with HTML5). Open source.

audio

Either True, False, or a file name. If True and the clip has an audio clip attached, this audio clip will be incorporated as a soundtrack in the movie. If audio is the name of an audio file, this audio file will be incorporated as a soundtrack in the movie.

audio_fps

Frame rate to use when generating the sound.

temp_audiofile

The name of the temporary audiofile to be generated and incorporated in the movie, if any.

audio_codec

Which audio codec should be used. Examples are ‘libmp3lame’ for ‘.mp3’, ‘libvorbis’ for ‘.ogg’, ‘libfdk_aac’ for ‘.m4a’, ‘pcm_s16le’ for 16-bit wav and ‘pcm_s32le’ for 32-bit wav. Default is ‘libmp3lame’, unless the video extension is ‘ogv’ or ‘webm’, in which case the default is ‘libvorbis’.

audio_bitrate

Audio bitrate, given as a string like ‘50k’, ‘500k’, ‘3000k’. Will determine the size/quality of audio in the output file. Note that it is mainly an indicative goal; the bitrate won’t necessarily be attained in the final file.

preset

Sets the time that FFMPEG will spend optimizing the compression. Choices are: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo. Note that this does not impact the quality of the video, only the size of the video file. So choose ultrafast when you are in a hurry and file size does not matter.

threads

Number of threads to use for ffmpeg. Can speed up the writing of the video on multicore computers.

ffmpeg_params

Any additional ffmpeg parameters you would like to pass, as a list of terms, like [‘-option1’, ‘value1’, ‘-option2’, ‘value2’]

write_logfile

If true, will write log files for the audio and the video. These will be files ending with ‘.log’ with the name of the output file in them.

verbose

Boolean indicating whether to print information.

progress_bar

Boolean indicating whether to show the progress bar.

Examples

>>> from moviepy.editor import VideoFileClip
>>> clip = VideoFileClip("myvideo.mp4").subclip(100,120)
>>> clip.write_videofile("my_new_video.mp4")
moviepy.video.tools.subtitles.file_to_subtitles(filename)[source]

Converts a srt file into subtitles.

The returned list is of the form [((ta,tb),'some text'),...] and can be fed to SubtitlesClip.

Only works for ‘.srt’ format for the moment.
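The essence of converting an .srt file into the [((ta,tb),'some text'),...] form can be sketched in plain Python. This is illustrative only, not MoviePy's actual implementation; parse_srt is a hypothetical name, and real .srt files may need more robust handling (encodings, stray blank lines):

```python
import re

def parse_srt(text):
    # Illustrative .srt parser: returns [((ta, tb), 'text'), ...]
    # with times converted to seconds.
    time_re = re.compile(
        r'(\d+):(\d+):(\d+),(\d+) --> (\d+):(\d+):(\d+),(\d+)')
    subs = []
    for block in text.strip().split('\n\n'):
        lines = block.splitlines()
        m = time_re.match(lines[1])       # lines[0] is the subtitle index
        g = [int(x) for x in m.groups()]
        ta = 3600 * g[0] + 60 * g[1] + g[2] + g[3] / 1000
        tb = 3600 * g[4] + 60 * g[5] + g[6] + g[7] / 1000
        subs.append(((ta, tb), '\n'.join(lines[2:])))
    return subs

sample = """\
1
00:00:01,000 --> 00:00:03,500
Hello!

2
00:00:04,000 --> 00:00:06,000
Goodbye.
"""
print(parse_srt(sample))
```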

Tracking

This module contains different functions for tracking objects in videos, manually or automatically. The tracking functions return results in the form (txy, (fx, fy)), where txy is of the form [(ti, xi, yi), ...] and (fx(t), fy(t)) gives the position of the track for any time t. If t is outside the bounds of the tracking time interval, fx and fy return the position of the object at the start or at the end of that interval.
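The (fx, fy) interpolators described above can be sketched as piecewise-linear interpolation over txy, clamped at both ends of the tracked interval. A minimal illustration (make_interpolator is a hypothetical name, not part of the module):

```python
def make_interpolator(txy):
    # Given txy = [(t0, x0, y0), (t1, x1, y1), ...], build fx and fy
    # that interpolate linearly between samples and clamp to the first
    # or last position outside the tracked time interval.
    ts = [t for t, x, y in txy]

    def interp(t, values):
        if t <= ts[0]:
            return values[0]     # before tracking: first position
        if t >= ts[-1]:
            return values[-1]    # after tracking: last position
        for i in range(len(ts) - 1):
            if ts[i] <= t <= ts[i + 1]:
                a = (t - ts[i]) / (ts[i + 1] - ts[i])
                return values[i] + a * (values[i + 1] - values[i])

    xs = [x for t, x, y in txy]
    ys = [y for t, x, y in txy]
    return (lambda t: interp(t, xs)), (lambda t: interp(t, ys))

fx, fy = make_interpolator([(0, 10, 20), (1, 30, 40)])
print(fx(0.5), fy(2))   # 20.0 40
```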

moviepy.video.tools.tracking.autoTrack(clip, pattern, tt=None, fps=None, radius=20, xy0=None)[source]

Tracks a given pattern (small image array) in a video clip. Returns [(x1,y1),(x2,y2)...] where (xi, yi) are the coordinates of the pattern in the clip on frame i. To select the frames you can either specify a list of times with tt or select a frame rate with fps. This algorithm assumes that the pattern’s aspect does not vary much and that the distance between two occurrences of the pattern in two consecutive frames is smaller than radius (if you set radius to -1 the pattern will be searched in the whole screen at each frame). You can also provide the original position of the pattern with xy0.

moviepy.video.tools.tracking.findAround(pic, pat, xy=None, r=None)[source]

Finds the image pattern pat in pic[x +/- r, y +/- r]. If xy is None, the whole picture is considered.
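One plausible way to implement the full-picture case is an exhaustive search minimizing the sum of squared differences between the pattern and each window. The sketch below is an assumption about the approach, not the module's actual code, and uses plain Python lists of grey levels instead of numpy arrays (find_pattern is a hypothetical name):

```python
def find_pattern(pic, pat):
    # Exhaustive pattern search by sum of squared differences (SSD):
    # slide pat over pic and return the (x, y) of the best match.
    ph, pw = len(pat), len(pat[0])
    best, best_xy = None, None
    for y in range(len(pic) - ph + 1):
        for x in range(len(pic[0]) - pw + 1):
            ssd = sum((pic[y + j][x + i] - pat[j][i]) ** 2
                      for j in range(ph) for i in range(pw))
            if best is None or ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy

pic = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
pat = [[9, 8],
       [7, 9]]
print(find_pattern(pic, pat))   # (1, 1)
```

Restricting the search to pic[x +/- r, y +/- r] amounts to running the same loop over a cropped window around xy.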

moviepy.video.tools.tracking.manual_tracking(clip, t1=None, t2=None, fps=None, nobjects=1, savefile=None)[source]

Allows manual tracking of one or several objects in the video clip between times t1 and t2. This displays the clip frame by frame and you must click on the object(s) in each frame. If t2=None only the frame at t1 is taken into account.

Returns a list [(t1, x1, y1), (t2, x2, y2), ...] if there is one object per frame, else returns a list whose elements are of the form (ti, [(xi1, yi1), (xi2, yi2), ...]).

Parameters:

t1,t2:

times during which to track (defaults are start and end of the clip). t1 and t2 can be expressed in seconds like 15.35, in (min, sec), in (hour, min, sec), or as a string: ‘01:03:05.35’.

fps:

Number of frames per second to freeze on. If None, the clip’s fps attribute is used instead.

nobjects:

Number of objects to click on each frame.

savefile:

If provided, the result is saved to a file, which makes it easier to edit and re-use later.

Examples

>>> from moviepy.editor import VideoFileClip
>>> from moviepy.video.tools.tracking import manual_tracking
>>> clip = VideoFileClip("myvideo.mp4")
>>> # manually indicate 3 trajectories, save them to a file
>>> trajectories = manual_tracking(clip, t1=5, t2=7, fps=5,
                                   nobjects=3, savefile="track.txt")
>>> # ...
>>> # LATER, IN ANOTHER SCRIPT, RECOVER THESE TRAJECTORIES
>>> from moviepy.video.tools.tracking import Trajectory
>>> traj1, traj2, traj3 = Trajectory.load_list('track.txt')
>>> # If ever you only have one object being tracked, recover it with
>>> traj, =  Trajectory.load_list('track.txt')