How to Build a Video Editor in JavaScript

1 September 2023 | 23 min read
Casper Kloppenburg

In this tutorial, I'll show you how to build a web-based video editor with JavaScript that renders videos in the browser and exports them as MP4. While I'll be using React, the approach works with any frontend library, such as Svelte, Vue, or Angular.

In the old days, video production meant expensive equipment and large software suites like Adobe Premiere Pro. Today, as web-based applications become more and more versatile with technologies like WebGL and WebAssembly, the line between desktop and web is blurring – so much so that even resource-hungry applications such as video editors now run smoothly in any modern browser.

Though the capabilities are there, it wasn't easy to take advantage of them without spending months on software development. With Creatomate, we're building a software platform to change that. Creatomate is a video platform made by developers, for developers. With its JavaScript library, you can quickly build web-based video editing apps, and its API makes it easy to produce videos through a simple REST interface without thinking about server infrastructure. Therefore, adding video rendering to your apps is now just a matter of hours instead of months.

This guide shows you how to use Creatomate's Preview SDK for implementing video editing using React and Next.js, but the SDK is compatible with any web library, including Svelte, Vue, Angular, and even plain JavaScript. Because of this, it's easy to integrate into any existing app. This tutorial aims to develop an online video editing tool where visitors can customize text, background pictures, and scenes. The entire source code of this app is on GitHub, along with a live demo.

Although this article will focus on building a basic video editor, the SDK also allows for full-featured video editors with a timeline and elements that can be moved around. We'll discuss how to do that in step 6: "Adding user interactivity". This allows you to build even more advanced video editing applications like this one, whose source code is also available on GitHub.

For a taste of what's possible, in this tutorial we'll use the same video library that powers our online video editor and form-to-video feature. This means you'll be able to develop everything from simple video editing tools to sophisticated video applications.


To build a web-based video editor, we need only two NPM packages: the @creatomate/preview frontend library for rendering video previews in the browser, and the creatomate backend library for producing the final MP4 video.
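Assuming you already have a Node.js project set up, both packages can be installed from NPM:

```shell
# Frontend library: renders video previews in the browser
npm install @creatomate/preview

# Backend library: produces the final MP4 through the API
npm install creatomate
```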

It is worth noting that, for this demo project, we will be using Next.js and TypeScript, but that is not a requirement. As the code we're writing is almost entirely JavaScript, you can use any frontend or backend library that you're already familiar with. Remember, TypeScript is just a syntactic superset of JavaScript, which adds static typing to the language. If you are developing a complex application such as a video editor, using TypeScript is highly recommended, as it will result in clearer code and make it easier to locate bugs in the early stages of development.

In this tutorial, I am going to demonstrate how to handle the actual video rendering from the backend as well. This is the only reason why I chose Next.js. Because React is primarily a frontend framework, Next.js helps to develop the server-side features. However, for the first part of this tutorial, this doesn't really matter, since we're only going to cover the frontend. Only in one of the last steps will I demonstrate the backend part; step 8: "Rendering the final video".

Let's get started!

1. Setting up a project

This tutorial repository is organized into several folders, titled 1-new-project, 2-basic-composition, etc., each corresponding to a step in this tutorial. This allows you to follow this tutorial step-by-step and experiment with the code before it reaches its final form. You can clone the entire repository using the following command:

$ git clone

In each step, I'll link to the appropriate folder in the repository. I will also embed a live demo so you can see it in action.

For reference, I'm including the empty project I used to bootstrap this demo application, which you can find under the 1-new-project folder. It only contains the Next.js boilerplate code and package dependencies. You are welcome to use this as a starting point for your own video editor application.

2. Creating a basic video composition

Let's take it from the beginning with a really simple video composition. First of all, install the @creatomate/preview package from NPM. It is here that we are introduced to the Preview class, which provides access to Creatomate's video library.

The Preview class can operate in two modes: 'player' and 'interactive'. With the 'player' mode, you get a video player that looks just like a normal HTML5 video element. The component comes with a play/pause button and a progress bar for displaying the current playback time. By contrast, the 'interactive' mode is used when you want to create an interactive editor where the user can select elements within the composition, resize them, and move them around. This guide will mostly cover the player mode; however, I'll give an example of the interactive mode further down.

Let's see what the player mode looks like:

The code for this demo can be found under the folder 2-basic-scene on GitHub.

The code below shows how to set up the video preview using the Preview class constructor. We need to specify three parameters here: an HTML element, the SDK mode, and a public token:

  • The HTML element is the container node where the player is to be created. It can be any DIV element in your application. The player will automatically scale according to the available size in the container.
  • The second parameter can be either 'player' or 'interactive'.
  • Lastly, you'll need to provide the public token from your Creatomate account. It can be found under Project Settings in your dashboard. For the purposes of these examples, I will use my own public token, however, it is important that you use your own public token when building your own video apps.
const setUpPreview = (htmlElement: HTMLDivElement) => {
  // Clean up an older instance of the preview SDK
  if (previewRef.current) {
    previewRef.current.dispose();
    previewRef.current = undefined;
  }

  // Initialize a preview. Make sure you provide your own public token
  const preview = new Preview(htmlElement, 'player', 'public-0x6hcqpfhrhw16d67ogth7ry');

  preview.onReady = async () => {
    // Once the SDK is ready, create a basic video scene
    await preview.setSource(getBasicComposition());

    // Skip to 2 seconds into the video
    await preview.setTime(2);

    setIsLoading(false);
  };

  previewRef.current = preview;
};

As the SDK takes a few seconds to load, you need to subscribe to the onReady event to be notified when the library is ready to use. Here I'm setting the video composition using the setSource function, then skipping 2 seconds into the video using the setTime function.

For the source, we will use a very basic composition composed of two text elements and a background image, which is provided by the function getBasicComposition. In essence, Creatomate works with a JSON-structured format to create videos. Through simple JSON instructions, you can generate any kind of video scene, from the most basic to the most complex. This format is explained in detail in the developer documentation. To give you an idea, here is what it looks like:

// Trimmed for brevity
export function getBasicComposition() {
  return {
    output_format: 'mp4',
    width: 1920,
    height: 1080,
    elements: [
      {
        id: '48734f5c-8c90-41ac-a059-e949e733936b',
        name: 'Main-Image',
        type: 'image',
        time: 0,
        source: '',
        color_overlay: 'rgba(0,0,0,0.25)',
        animations: [
          // ...
        ],
      },
      {
        id: '72ec46a3-610c-4b46-86ef-c9bbc337f012',
        name: 'Tagline',
        type: 'text',
        time: 1,
        duration: 2.5,
        text: 'Enter your tagline here',
        font_family: 'Oswald',
        font_weight: '600',
        // ...
      },
      // ...
    ],
  };
}

At its most basic level, that is all it takes to add a dynamic video player to your application. At this point, we have created a simple video based on a small piece of JSON. Next, we'll see how to edit the video programmatically. Let's proceed!

3. Implementing live editing

Now that we have set up a video player, we can modify it by calling the setModifications function. For example, let's say we want to provide the user with the ability to customize the headline. As the headline element is named 'Title', we can apply the following modification:

const applyTextValue = async (value: string) => {
  // Change the 'Title' element to the provided text value
  // For more information, refer to the developer documentation
  await previewRef.current?.setModifications({
    Title: value,
  });
};

Now we just need to add an input field for the user to type in a text:

<div className={styles.controls}>
  <input
    type="text"
    placeholder="Enter your text here..."
    onChange={async (e) => {
      await applyTextValue(;
    }}
  />
</div>

Here's what we'll get. You can see that when you type in a text value, the headline in the video changes in real-time:

The code for this demo can be found under the folder 3-live-editing on GitHub.

Modifications aren't just limited to changing text; the video can be altered in any way you want. You can change the text color, font properties, background image, or even add entire scenes. I'll show you how in step 7: "Advanced video mutation". For now, let's keep the composition simple and proceed to the next step.
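For instance, a modifications object that swaps both a text value and a background image could look like the sketch below. Treat the dotted `Main-Image.source` key as illustrative; check the developer documentation for the exact modification syntax and the properties your template supports:

```typescript
// Builds a modifications object targeting elements by name.
// The dotted 'Main-Image.source' key is an assumption for illustration;
// verify the exact property names against the developer documentation.
function buildModifications(headline: string, imageUrl: string) {
  return {
    // Shorthand: setting an element by name replaces its main value
    Title: headline,
    // Dotted form: set a specific property of a named element
    'Main-Image.source': imageUrl,
  };
}
```

You would then pass the result to the SDK just like the text-only example above, e.g. `await previewRef.current?.setModifications(buildModifications('New title', url));`.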

4. Controlling the playback state

Although the video player has a default play button and timeline, sometimes you may need to change the playback state programmatically. This can be done using the async play() and pause() functions:

const playVideo = async () => {
  await previewRef.current?.play();
};

const pauseVideo = async () => {
  await previewRef.current?.pause();
};

<div className={styles.controls}>
  <button
    onClick={async () => {
      await playVideo();
    }}
  >
    Play video
  </button>
  &nbsp;
  <button
    onClick={async () => {
      await pauseVideo();
    }}
  >
    Pause video
  </button>
</div>

You can see it in action here:

The code for this demo can be found under the folder 4-play-and-pause on GitHub.

5. Composition state management

To keep your application in sync after making changes to the video, the Preview class provides the onStateChange event handler. This function is called whenever the source of the video changes. It is especially useful for reacting to edits made by the user in 'interactive' mode, but it also comes in handy when implementing undo and redo functionality. Let's put this into action with the following event handler. In this example, whenever the state changes, we look up the element named 'Title' and display its current text value:

preview.onStateChange = async (state) => {
  // Find the title element
  const element = preview.findElement((element) => === 'Title');
  if (element) {
    setValue(element.source.text);
  }
};

The Preview SDK automatically tracks the state of the video and provides the ability to undo and redo any modifications that have been made to the video. In order to demonstrate this functionality, we will add an undo and redo button along with a 'Change Title' button that allows us to programmatically change the headline:

{value && <div>State: {value}</div>}

<div className={styles.controls}>
  <button
    onClick={async () => {
      await changeTitle(`Title ${counter}`);
      setCounter(counter + 1);
    }}
  >
    Change Title
  </button>
  &nbsp;
  <button
    onClick={async () => {
      await previewRef.current?.undo();
    }}
  >
    Undo
  </button>
  &nbsp;
  <button
    onClick={async () => {
      await previewRef.current?.redo();
    }}
  >
    Redo
  </button>
</div>

Now run this example and click 'Change Title' a few times. After that, click 'Undo'. You can see that the last modification was reverted. Additionally, any time a modification is made to the video, the state is reported via the onStateChange event handler:

The code for this demo can be found under the folder 5-state-management on GitHub.

6. Adding user interactivity (optional)

So far, we have created a simple video player and made edits to the video using the setModifications interface. There are, however, applications in which you would like the user to interact directly with the canvas, as is the case with video editors such as Adobe Premiere, iMovie, or Creatomate's template editor. This is where the 'interactive' mode comes in:

const preview = new Preview(htmlElement, 'interactive', 'public-0x6hcqpfhrhw16d67ogth7ry');

By passing the interactive mode to the Preview constructor, the SDK creates a fully interactive drag-and-drop editor that includes a selection rectangle, snap guidelines, and zoom capabilities. And as it is designed as a modular component, it's easy to add your own UI and functionality to it.

Check out our GitHub for more advanced examples like the open source video editor demo. Here's the live version. Feel free to explore the source code and use it however you like.

Here, we'll keep it simple and just demonstrate a few key features. The demo application is best viewed in full screen, so follow this link to check it out. For demonstration purposes, I added two buttons: "Seek to Start" and "Seek to End". In both cases, we can use the setTime function to set the playback time. To jump to the beginning of the video, simply set the time to 0. In order to get to the end of the video, you need to know its length, which you can find in the state property:

<button
  onClick={async () => {
    await previewRef.current?.setTime(0);
  }}
>
  Seek to Start
</button>
&nbsp;
<button
  onClick={async () => {
    const preview = previewRef.current;
    if (preview && preview.state) {
      // Move the time state right before the end
      await preview.setTime(preview.state.duration - 0.001);
    }
  }}
>
  Seek to End
</button>

The interactive mode offers several zoom options. By default, the user can freely pan the canvas by using the mouse and trackpad. If you don't want this, you can use the setZoom function, which allows you to choose a behavior (free, auto, fixed, or center), as well as a zoom scale (1.0 being 100%):

<button
  onClick={async () => {
    await previewRef.current?.setZoom('fixed', 0.5);
  }}
>
  Zoom 50%
</button>

Now let's look at how to implement a playback track that allows users to skip through the video. To implement this, you will need a library to capture mouse dragging. I will use the NPM library react-draggable, but there are many gesture libraries available for other frameworks. If you're using Angular, there's angular2-draggable. What's important is that you can capture mouse events. With react-draggable, you can use the onStart, onDrag, and onStop events.

An example of a minimalistic playhead is below. As you can see, the video's current time is received through the onTimeChange event. As the user drags the playhead around, the time is updated using the setTime function:

export const ProgressControl: React.FC<ProgressControlProps> = (props) => {
  const trackRef = useRef<HTMLDivElement>(null);
  const handleRef = useRef<HTMLDivElement>(null);

  // The current time of the video
  const [currentTime, setCurrentTime] = useState(0);
  const currentTrackProgress = props.preview.state?.duration
    ? currentTime / props.preview.state.duration
    : 0;

  // Listen for time changes
  useEffect(() => {
    props.preview.onTimeChange = (time) => {
      setCurrentTime(time);
    };
  }, [props.preview]);

  // Sets the current time
  const setTime = async (time: number) => {
    await props.preview.setTime(time);
  };

  // Throttle the 'setTime' function to 15 milliseconds as mouse events
  // are not throttled by default
  const setTimeThrottled = useCallback(throttle(setTime, 15), []);

  // This is where data is stored while dragging the mouse
  const [dragContext, setDragContext] = useState<{ startX: number; startTime: number }>();

  return (
    <div className={styles.progressControl}>
      <div ref={trackRef} className={styles.progressControlTrack} />
      <DraggableCore
        nodeRef={handleRef}
        onStart={(e, data) => {
          // Handle drag start...
        }}
        onDrag={(e, data) => {
          // Handle drag movement...
        }}
        onStop={() => {
          // Handle drag stop...
        }}
      >
        <div
          ref={handleRef}
          className={styles.progressControlHandle}
          style={{ left: `${currentTrackProgress * 100}%` }}
        />
      </DraggableCore>
    </div>
  );
};

The onStart event handler is straightforward. We just store the current video time and the playhead's X position:

onStart={(e, data) => {
  // Set the current X pixel position and current time when dragging starts
  setDragContext({
    startX: data.x,
    startTime: currentTime,
  });
}}

In the onDrag event handler, we can then calculate the current position based on the current mouse position (passed as data.x), the start position (dragContext.startX and dragContext.startTime), and the width of the timeline:

onDrag={(e, data) => {
  if (props.preview.state && trackRef.current && dragContext) {
    // Width of the track element in pixels
    const trackWidth = trackRef.current.clientWidth;

    // Track progress from 0 to 1
    const trackProgress = (data.x - dragContext.startX) / trackWidth;

    // Track progress in seconds, clamped to the video's duration
    const trackProgressSeconds = Math.min(
      Math.max(
        dragContext.startTime + props.preview.state.duration * trackProgress,
        0,
      ),
      props.preview.state.duration - 0.001,
    );

    // Set the time progress
    setTimeThrottled(trackProgressSeconds);
  }
}}

7. Advanced video mutation

Let's go back to the application we're building in this tutorial and use the 'player' mode. At the beginning of this article, we talked about how to change content in videos using the setModifications function. While this approach is sufficient for most operations, there are times when you need to do more advanced editing.

Say, for example, that you are developing a video editor to create slideshows. You would like your users to be able to add, remove, and edit slides as they see fit. In these apps, you'd like more control over the composition than the setModifications function provides, and it might be more convenient to edit the JSON directly. Let me give you an example:

The code for this demo can be found under the folder 7-advanced-mutation on GitHub.

In the example above, clicking the 'Add Slide' button adds an entirely new slide to the video. Additionally, the video automatically jumps to where the slide was added.

To accomplish this, a few things need to be done. Take a look at the code below. First, we need to get the source of the video using the getSource function. This is the JSON-structured source code of the video we discussed earlier. This source can now be modified as we like. As we're adding a slide at the end, we first need to figure out where the last slide is. We do this by searching through the source using findLastIndex. The new slide is then added right after the existing slide. Finally, we can apply the source code using setSource:

export async function addSlide(preview: Preview) {
  // Get the video source
  // Refer to the developer documentation for the JSON source format
  const source = preview.getSource();

  // Delete the 'duration' and 'time' property values
  // to make each element (Slide-1, Slide-2, etc.) autosize on the timeline
  delete source.duration;
  for (const element of source.elements) {
    delete element.time;
  }

  // Find the last slide element (e.g. Slide-3)
  const lastSlideIndex = source.elements.findLastIndex(
    (element: any) =>'Slide-'),
  );

  if (lastSlideIndex !== -1) {
    const slideName = `Slide-${lastSlideIndex}`;

    // Create a new slide
    const newSlideSource = createSlide(
      slideName,
      `This is the text caption for newly added slide ${lastSlideIndex}.`,
    );

    // Insert the new slide right after the existing one
    source.elements.splice(lastSlideIndex + 1, 0, newSlideSource);

    // Update the video source
    await preview.setSource(source);

    // Jump to the time at which the text element is visible
    await ensureElementVisibility(preview, `${slideName}-Text`, 1.5);
  }
}

Using the ensureElementVisibility function, we automatically jump to the new slide's time location. You can find the source code for that function below. In the same way as earlier in this tutorial, we find the newly added slide by its name, then set the current playback time to its appearance time:

1// Jumps to a time position where the provided element is visible
2export async function ensureElementVisibility(preview: Preview, elementName: string,
3                                              addTime: number) {
4  // Find element by name
5  const element = preview.getElements().find(
6    (element) => === elementName,
7  );
9  if (element) {
10    // Set playback time
11    await preview.setTime(element.globalTime + addTime);
12  }

8. Rendering the final MP4 video

So far, we've only talked about rendering a preview of the video in the browser. How would we go about creating an actual MP4 file that can be downloaded or posted on social media? Unfortunately, there is currently no reliable way to produce MP4 files in the browser. While there are web-based ports of FFmpeg such as FFmpeg.wasm, they are incomplete and perform poorly on less capable devices.

Moreover, longer videos can take considerable time to process and the user must keep their browser open during this time, provided their device is capable of handling such a resource-intensive task to begin with.

As a solution, you can make use of Creatomate's video API: a cloud infrastructure designed specifically for rendering video. As the infrastructure is completely managed, you do not need to worry about setting up, operating, and scaling a video infrastructure yourself. That becomes particularly challenging as your application grows in popularity – during peak hours, large numbers of videos may need to be processed concurrently, and your infrastructure has to scale with demand. With Creatomate's API, all of this is taken care of for you.

Because Creatomate's API is fully compatible with the Preview SDK, it is straightforward to use. It's just a matter of sending the generated JSON to the REST API, and it'll give you the produced MP4 back:

import { NextApiRequest, NextApiResponse } from 'next';
import { Client } from 'creatomate';

const client = new Client(process.env.CREATOMATE_API_KEY!);

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  return new Promise<void>((resolve) => {
    if (req.method === 'POST') {
      // Return an HTTP 401 response when the API key was not provided
      if (!process.env.CREATOMATE_API_KEY) {
        res.status(401).end();
        resolve();
        return;
      }

      /** @type {import('creatomate').RenderOptions} */
      const options = {
        // outputFormat: 'mp4' as RenderOutputFormat,
        source: req.body.source,
      };

      client
        .render(options)
        .then((renders) => {
          res.status(200).json(renders[0]);
          resolve();
        })
        .catch(() => {
          res.status(400).end();
          resolve();
        });
    } else {
      res.status(404).end();
      resolve();
    }
  });
}

Make sure this code runs on the server side, as we will authenticate with Creatomate's infrastructure using a secret API key, and we do not want to expose the API key to the frontend. Also note that this demo shows a very basic Next.js API route, but in a real app you'd want better error handling and security practices. Consider this code solely as a demonstration.

Remember, Creatomate's API has a simple REST interface, so you can call it from any language, such as PHP, Ruby, Python, or .NET. For a quick introduction to Creatomate's API, see this tutorial: Get Started with Creatomate's Video Generation API.
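To make the REST shape concrete, here is a rough sketch of how such a request could be assembled. The endpoint URL and header layout below are assumptions for illustration only; consult the developer documentation for the authoritative interface:

```typescript
// Sketch of the payload for a render call to the REST API.
// The endpoint URL and auth header are illustrative assumptions;
// check the developer documentation for the exact interface.
function buildRenderRequest(apiKey: string, source: object) {
  return {
    url: '',
    method: 'POST' as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ source }),
  };
}
```

The `source` field carries the same JSON composition the Preview SDK works with, which is what makes the two fully compatible.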

You can find more information about Creatomate's API in the developer documentation.

9. Putting it all together

Now that we've covered all the basics of the Preview SDK, we can put it all together and make the final video editor application. You can check out the live demo here. The source code is in the 8-final-project folder on GitHub.

Taking a look at the code, you'll see that most of the logic is user interface related. Therefore, we don't need to go into too much detail, since you'd probably want to build your own UI anyway. There are a few things I still want to mention about this demo before I conclude this tutorial.

Firstly, we haven't talked about the loadTemplate function yet. Until now, we have loaded videos using the setSource function, but if you prefer, you can also load a video by the ID of a template in your Creatomate account:

// Once the SDK is ready, load a template from our project
preview.onReady = async () => {
  await preview.loadTemplate(process.env.NEXT_PUBLIC_TEMPLATE_ID!);
  setIsReady(true);
};

You might also want to know about the width and height properties in the state object. If you want the video player to fit a specific width or height, you'll need to know its dimensions. As part of the demo application, the aspect ratio of the video is required to ensure that the user interface scales properly on mobile devices. Here is how you can do that:

// Listen for state changes of the preview
preview.onStateChange = (state) => {
  setCurrentState(state);
  setVideoAspectRatio(state.width / state.height);
};

Additionally, while the video player is loading, you may want to display a loading indicator to the user, for example, when the video is buffering. You can do this using the onLoad and onLoadComplete event handlers:

// Show a loading indicator while the video is loading or buffering
preview.onLoad = () => {
  setIsLoading(true);
};

preview.onLoadComplete = () => {
  setIsLoading(false);
};

Final thoughts

So there you have it. In this walkthrough, I've shown you how to create a web-based video editor using JavaScript.

In general, it takes months to build the foundation for a video editor. One part of this work involves developing a system for rendering video in the browser. Another part involves building a scalable video infrastructure capable of creating the final MP4 files.

Fortunately, there is no need to reinvent the wheel. Using Creatomate, you can get all of the work done for you without sacrificing control over the type of video editing experience you want to provide – the only thing left is to build the UI. By using the API, you can programmatically produce any kind of video through code without managing any rendering infrastructure. And by adding the Preview SDK as well, you'll be able to build video editor apps faster than ever before.

In case you have any questions regarding Creatomate, please do not hesitate to contact me directly at [email protected].
