Years ago, the FCC began issuing rules to address captioning, and other government entities around the world have done the same. In the U.S., the Twenty-First Century Communications and Video Accessibility Act regulated closed captioning for anyone delivering content to viewers in the United States, whether by standard over-the-air distribution or over IP. A more recent FCC ruling took captions into the Internet realm, requiring TV networks and video websites to provide closed captions for any TV content made available online. The ruling means that, with certain exceptions, any video content that has aired on TV must also carry closed captions when streamed online. FCC regulations have also evolved to include requirements for caption accuracy, completeness, and timing, and additional rules covering video clips distributed over streaming services are pending review.
Several issues can arise with closed captions, including display delay, incomplete captions, and incorrect captions. Any of these can affect content that should be captioned, but there are ways to address them. By monitoring your captions, you gain the upper hand in the event that something does go wrong. Monitoring a program properly takes significant time and resources, and the traditional way to do it is to have humans watch and listen in real time, which is cumbersome and prone to errors.
Ideally, an automated compliance tool would continuously monitor content within a complete, broadcast-specific quality control workflow. The key to continuous monitoring is audio analysis: automatically transcribing the program's speech and comparing that transcript to the caption file to determine whether captions are missing, incorrect, or misaligned.
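To make the comparison step concrete, here is a minimal sketch of how a transcript could be checked against caption cues. It assumes an external speech-to-text stage has already produced word-level timestamps; the data structures, threshold, and names are illustrative, not part of any particular product.

```python
# Minimal caption-vs-audio comparison sketch. Assumes a separate
# speech-to-text step has produced a word-level transcript with timestamps.
# Structures, names, and the 3-second threshold are illustrative only.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class CaptionCue:
    start: float  # seconds
    end: float    # seconds
    text: str


@dataclass
class SpokenWord:
    start: float  # seconds
    word: str


def normalize(text: str) -> list[str]:
    """Lowercase and strip punctuation so the comparison focuses on words."""
    cleaned = (w.strip(".,!?;:\"'").lower() for w in text.split())
    return [w for w in cleaned if w]


def compare_captions(cues: list[CaptionCue], spoken: list[SpokenWord],
                     max_delay: float = 3.0):
    """Report spoken words missing from the captions and cues that lag the audio."""
    caption_words = [w for cue in cues for w in normalize(cue.text)]
    spoken_words = [w.word.lower() for w in spoken]

    # Word-level diff: anything spoken but absent from the captions is "missing".
    matcher = SequenceMatcher(None, spoken_words, caption_words)
    missing = []
    for op, i1, i2, _, _ in matcher.get_opcodes():
        if op in ("delete", "replace"):
            missing.extend(spoken_words[i1:i2])

    # Timing check: compare each cue's start time to the earliest spoken word
    # that appears in that cue.
    late_cues = []
    for cue in cues:
        cue_words = set(normalize(cue.text))
        spoken_times = [w.start for w in spoken if w.word.lower() in cue_words]
        if spoken_times and cue.start - min(spoken_times) > max_delay:
            late_cues.append(cue)

    return missing, late_cues


if __name__ == "__main__":
    cues = [CaptionCue(5.2, 8.0, "Good evening and welcome")]
    spoken = [SpokenWord(1.0, "Good"), SpokenWord(1.3, "evening"),
              SpokenWord(1.6, "and"), SpokenWord(1.9, "welcome"),
              SpokenWord(2.4, "everyone")]
    missing, late = compare_captions(cues, spoken)
    print("Missing words:", missing)              # ['everyone']
    print("Late cues:", [c.text for c in late])   # cue starts ~4 s after the audio
```

A production tool would of course work against real caption formats and a continuous audio feed, but the core idea is the same: diff the words to catch omissions, and diff the timestamps to catch drift.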
But despite the tools built for closed captions, live captioning still presents problems. With prerecorded content, the audio has already been captured and can be run through software to place captions where needed. Live captioning is far harder, for a couple of reasons. First, captions are hard to line up with the audio: the speech must be processed and then encoded into captions, which introduces a delay and, ultimately, captions that lag the broadcast. Second, and tied to the first, not everything is scripted, so it is hard to accurately transcribe everything that will be said during a broadcast. A rough latency model is sketched below.
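The delay is additive: every stage between the microphone and the screen pushes the caption further behind the audio. The sketch below models that reasoning with made-up placeholder durations, not measured values.

```python
# Rough, illustrative latency model for live captioning, assuming a simple
# pipeline of audio chunking, speech recognition, caption encoding, and
# transmission. Stage durations are placeholders, not measurements.

def caption_latency(chunk_seconds: float, asr_seconds: float,
                    encode_seconds: float, transmit_seconds: float) -> float:
    """Minimum delay between a word being spoken and its caption appearing:
    the audio must fill a chunk, then pass through recognition, caption
    encoding, and transmission."""
    return chunk_seconds + asr_seconds + encode_seconds + transmit_seconds


if __name__ == "__main__":
    # Example: 2 s audio chunks, 1.5 s recognition, 0.5 s encoding, 1 s transit
    delay = caption_latency(2.0, 1.5, 0.5, 1.0)
    print(f"Captions trail the audio by at least {delay:.1f} seconds")  # 5.0
```

Shrinking any one stage helps, but none of them can be reduced to zero, which is why even good live captions trail the speaker.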
Prerecorded closed captions may soon have their solution; the harder problem lies in live captioning. Accurately transcribing everything for a hearing-impaired viewer, without delay or confusion from incomplete captioning, is a difficult task. There are live captioning tools that address smaller parts of the whole issue, but until these viewers can use the services in the same manner as everyone else, the problem will remain a major concern for users with disabilities.