A distinctive feature of LTC is that it is the only synchronization protocol whose data can be recorded and played back as an ordinary audio signal; all other synchronization protocols are generated by software in real time.

This capability allows us to place LTC data directly on an audio track and play the audio and the timecode together. Let's take a closer look at how to do this and which details deserve attention for this method of synchronization to work.

We will look at two ways to create an audio track for synchronization. The first method is more universal: almost any multitrack audio editor is suitable for implementing it. The second method is based on Reaper and its built-in synchronization tools.

In the first example I will use Adobe Audition, but, as mentioned, any multitrack audio editor will do.

The first step is to insert an audio track into our project and assign its channels to the required outputs of the sound card. In my example, I assigned the two audio channels to the first two analog outputs of a MOTU 828x sound card.

The second step is to insert the LTC track into the project and assign it to an analog output of the sound card; I used the third analog output.

Now the important question: where do we get this LTC audio track?

As we remember, LTC is a digital timecode transmitted over an audio channel, and it can be recorded as an audio track on magnetic tape or in a WAV file. If the audio editor does not support generating its own LTC timecode (in our example, Audition does not), we can use a third-party service: on such a website, you can set all the necessary parameters of the desired timecode and download a WAV audio file, which we can then insert into our audio project.
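To make the "timecode as audio" idea concrete, here is a minimal sketch of an LTC WAV generator. It is not the author's method and not production code; it assumes 25 fps, 48 kHz, 16-bit mono, zeroed user bits and flags, and all names are illustrative. Each SMPTE frame is 80 bits (BCD time fields plus a fixed sync word), biphase-mark encoded into the audio signal.

```python
import wave, struct

FPS, RATE, AMP = 25, 48000, 12000      # assumed frame rate, sample rate, level

def ltc_frame_bits(h, m, s, f):
    """Build the 80-bit LTC frame for one timecode address (user bits zero)."""
    bits = [0] * 80
    def put(start, value, nbits):       # BCD field, least significant bit first
        for i in range(nbits):
            bits[start + i] = (value >> i) & 1
    put(0,  f % 10, 4); put(8,  f // 10, 2)   # frames
    put(16, s % 10, 4); put(24, s // 10, 3)   # seconds
    put(32, m % 10, 4); put(40, m // 10, 3)   # minutes
    put(48, h % 10, 4); put(56, h // 10, 2)   # hours
    bits[64:80] = [0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1]  # sync word
    return bits

def biphase_mark(bits, level):
    """Transition at every bit boundary; an extra mid-bit transition for a 1."""
    spb = RATE // (FPS * 80)            # samples per bit (24 here)
    out = []
    for b in bits:
        level = -level
        if b:
            out += [level * AMP] * (spb // 2)
            level = -level
            out += [level * AMP] * (spb - spb // 2)
        else:
            out += [level * AMP] * spb
    return out, level

def write_ltc_wav(path, seconds, start=(0, 0, 0, 0)):
    h, m, s, f = start
    samples, level = [], 1
    for _ in range(seconds * FPS):
        chunk, level = biphase_mark(ltc_frame_bits(h, m, s, f), level)
        samples += chunk
        f += 1
        if f == FPS:
            f, s = 0, s + 1
            if s == 60:
                s, m = 0, m + 1
                if m == 60:
                    m, h = 0, h + 1
    with wave.open(path, "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

write_ltc_wav("ltc_test.wav", 3)        # 3 s of LTC starting at 00:00:00:00
```

The resulting file can be dropped onto the LTC track of the project just like a file downloaded from an online generator.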

When our project contains both the audio and the timecode track, everything is ready and we can play it: the first two outputs carry the audio track, and the third carries the LTC timecode. This method of synchronization is quite simple, and we could move on, but I want to draw attention to a couple of peculiarities of this method and also touch on the typical mistakes made with it.

Depending on the audio editor, we assign each track of the project to a specific audio channel of the sound card. If we have a stereo soundtrack and one mono LTC track, the total number of audio channels is three; if there are more audio or timecode channels, the number of output channels of the project grows accordingly. This means that to play such a project we need a multichannel sound card with the required number of audio outputs.

The next point is volume. Some people mistakenly change the volume of the audio track with the main master fader, which is not acceptable: lowering the volume of the entire project also lowers the level of the LTC track, and the output level of the linear timecode becomes too low for receiving devices to decode. If we still need to reduce the volume of the audio in the project, this should be done with the individual volume fader on each audio track. Also keep in mind that the volume control in the computer's operating system changes the overall level of all outputs of the sound card; some professional sound cards block volume adjustment through the operating system, giving priority to their own specialized software.
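The difference between a master fader and per-track faders can be shown in a few lines. This is a hypothetical sketch (channel layout and function name are my assumptions, not part of the original project): samples are scaled per channel, so the music channels can be attenuated while the LTC channel stays at unity gain.

```python
# Assumed channel layout per frame: (music L, music R, LTC), 16-bit samples.
def apply_gains(frames, gains):
    """Scale each channel by its own gain.
    A master fader would be equivalent to one gain applied to all channels,
    which would also attenuate the LTC signal."""
    return [tuple(int(s * g) for s, g in zip(fr, gains)) for fr in frames]

# Halve the music, leave the timecode channel untouched:
quiet = apply_gains([(10000, -10000, 20000)], (0.5, 0.5, 1.0))
print(quiet)   # [(5000, -5000, 20000)] — LTC level unchanged
```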

The next mistake is copying the same LTC track to extend the timecode. Since each SMPTE frame contains an absolute timecode address, a generated LTC track covers one specific span of time. If, for example, we have a one-minute LTC track and a three-minute audio track, and we try to extend the timecode by copying the LTC track several times along the length of the audio track, the timecode in our project will start from zero three times! To have a unique timecode throughout the entire audio track, we need to generate a new LTC track three minutes long.
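A little arithmetic makes the problem obvious. The sketch below (an illustration with assumed names, not a decoder) compares the timecode address heard at a given playback position for a looped one-minute file versus a freshly generated three-minute file, both starting at 00:00:00:00:

```python
def address_at(playback_s, file_len_s, looped):
    """Timecode address (hh:mm:ss:ff) heard at playback_s seconds, for a
    track generated from 00:00:00:00 and either looped/copied or not."""
    t = playback_s % file_len_s if looped else playback_s
    return "%02d:%02d:%02d:00" % (t // 3600, t % 3600 // 60, t % 60)

# One-minute LTC copied end to end vs. a single three-minute track:
print(address_at(61, 60, looped=True))    # 00:00:01:00 — time restarted
print(address_at(61, 180, looped=False))  # 00:01:01:00 — unique throughout
```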

It is also necessary to remember the rule for cutting timecode. If the timecode track is longer than necessary, we can trim it from the end to match the length of the audio track. But if we cut timecode from the beginning, the start address of the timecode shifts forward by exactly the amount we cut off.
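The head-cut rule can be expressed as simple frame arithmetic. A small sketch, assuming 25 fps (the function name is illustrative): the new start address is the old one advanced by the trimmed duration.

```python
FPS = 25   # assumed frame rate

def start_after_head_cut(start_hmsf, cut_seconds):
    """New start address (h, m, s, f) after trimming cut_seconds
    from the beginning of an LTC track."""
    h, m, s, f = start_hmsf
    total = (h * 3600 + m * 60 + s) * FPS + f + cut_seconds * FPS
    f, rest = total % FPS, total // FPS
    return (rest // 3600, rest % 3600 // 60, rest % 60, f)

# Trimming 10 s off the head of a track that began at 00:00:00:00:
print(start_after_head_cut((0, 0, 0, 0), 10))   # (0, 0, 10, 0)
```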

After we have prepared and checked the project and are satisfied with the result, we can either play it from the audio editor in which we created it or export it to a multichannel audio file. By default I always export to WAV, since this format preserves quality well and allows saving up to eight audio tracks in one file. Depending on which software you use, the multichannel export procedure may differ.
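For software that lacks a convenient multichannel export, the interleaving itself is straightforward. A minimal sketch, assuming 16-bit samples at 48 kHz and the three-channel layout from our example (stereo music plus mono LTC); the function name and file name are illustrative:

```python
import wave, struct

def export_multichannel(path, music_l, music_r, ltc, rate=48000):
    """Merge three lists of 16-bit samples into one 3-channel WAV file."""
    channels = (music_l, music_r, ltc)
    n = min(len(c) for c in channels)          # truncate to the shortest track
    interleaved = [c[i] for i in range(n) for c in channels]
    with wave.open(path, "wb") as w:
        w.setnchannels(3)
        w.setsampwidth(2)                      # 16-bit
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(interleaved), *interleaved))

# One second of silence on the music channels plus a placeholder LTC channel:
export_multichannel("project_3ch.wav", [0] * 48000, [0] * 48000, [1000] * 48000)
```

A player that maps the file's channels one-to-one onto the sound card's outputs will then reproduce the same routing we set up in the editor.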

Above, we discussed a universal approach to creating an audio track with embedded timecode. In the next article, we will look at creating such a track using Reaper's own functionality, which allows working with playback synchronization without third-party services.