Fox's Merlin Video AI Uses Trailers To Predict Movie Attendance


20th Century Fox has developed an AI, currently referred to as Merlin Video, that draws on trailers to predict movie attendance. Alongside confirming the AI's existence, Fox recently released a paper that describes the fundamentals of the deep learning model in detail and provides results showing that the AI works as intended, and may have the potential to shape the way movies are marketed in the future.

Essentially, Merlin Video works on two levels, with the AI drawing on either text-based or visual-based information gleaned from a trailer. The AI was previously fed text and visual data from a large number of movie trailers, along with attendance figures and some user demographic details, and these independent pieces of data collectively form the AI's baseline knowledge bank. From there, Merlin Video is said to find relevant correlations between the data sets and use them to predict attendance for the movies behind the trailers. What's more, the paper states the model has since made similarly accurate predictions for movies that had yet to be released at the time of testing.
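The core idea can be illustrated with a minimal sketch, though this is not Fox's actual model: represent each trailer as a feature vector (the feature names below are hypothetical stand-ins for the kinds of cues described), then estimate attendance for a new trailer from the attendance of the most similar trailers already seen.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict_attendance(new_trailer, known_trailers, attendances, k=2):
    """Similarity-weighted average of attendance over the k nearest trailers."""
    scored = [(cosine_similarity(new_trailer, t), att)
              for t, att in zip(known_trailers, attendances)]
    scored.sort(reverse=True)
    top = scored[:k]
    total = sum(sim for sim, _ in top)
    return sum(sim * att for sim, att in top) / total if total else 0.0

# Hypothetical features per trailer: [faces, landscapes, action_objects, dialogue_cues]
known = [[8, 1, 9, 2], [2, 7, 1, 8], [7, 2, 8, 3]]
attendance = [5_000_000, 1_200_000, 4_500_000]

# An unseen action-heavy trailer should land near the two action-heavy examples.
estimate = predict_attendance([9, 1, 8, 2], known, attendance)
```

The real system learns these correlations with a deep model rather than a nearest-neighbour lookup, but the input/output shape is the same: trailer features in, an attendance estimate out.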

As a result, there is the suggestion that such a model could be used in the future not only to predict a movie's likely attendance based on traits shown in its trailer, but also to shape the trailer itself so it better appeals to the demographic and type of movie-goer the movie is aimed at. For example, landscapes, colors, faces, and objects represent some of the recurring video-based stimuli, while aspects such as the movie's plot act as text-based cues. Building on these suggestions, the paper also highlights how much the trailer informs the wealth of other marketing materials and tools used prior to a movie's release. Therefore, in addition to shaping trailers, the results could just as easily be used to shape the rest of the pre-release marketing materials. In the meantime, however, the paper treats Merlin Video as a starting block for future research. For example, while the text-based and video-based methods seem to work well enough on their own, the researchers believe they could prove even more beneficial if used together as part of a hybrid model.
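The hybrid idea the researchers point toward can be sketched as late fusion: run a text-based predictor and a video-based predictor independently, then blend their estimates. Everything below — the keyword weights, the coefficients, and the blend factor — is an assumed illustration, not the paper's actual models.

```python
def text_model(plot_keywords):
    """Toy text-based predictor: score plot keywords against a lookup table."""
    weights = {"heist": 2.0, "romance": 1.2, "sequel": 1.8}  # assumed values
    base = 1_000_000
    return base * sum(weights.get(w, 1.0) for w in plot_keywords)

def video_model(visual_features):
    """Toy video-based predictor: linear score over visual cue counts."""
    # Hypothetical features: [faces, landscapes, action_objects]
    coeffs = [300_000, 100_000, 250_000]  # assumed coefficients
    return sum(c * f for c, f in zip(coeffs, visual_features))

def hybrid_predict(plot_keywords, visual_features, alpha=0.5):
    """Late fusion: weighted blend of the two independent estimates."""
    return (alpha * text_model(plot_keywords)
            + (1 - alpha) * video_model(visual_features))

estimate = hybrid_predict(["heist", "sequel"], [6, 2, 7])
```

The appeal of this design is that each branch can be developed and validated on its own, exactly as the paper reports doing, before the two are combined.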