The problem: the content inside a video stream is currently very hard to search and index. It also raises accessibility issues for people with language, hearing, or vision differences.
The actual content stream could include dialogue (the script track), sound effects (both salient and ambient), character activities (again salient and ambient), set and location information, and finally camera framing, shot length, etc. You could also make notes on colour palette, lighting, and effects (slo-mo, fast motion, cutting, montage, etc.).
There is a lot that could be borrowed from an animation director's work notes, I guess. The point being that with a common and open language specification it would be possible to reverse-engineer any piece of video and apply this metadata to it. This would be useful for film restoration as well as for feeding a whole slew of useful data into search engines.
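Just to make the idea concrete, here is a minimal sketch of what one such description file might look like, modelled as TypeScript interfaces. Every name here (the track kinds, DescriptionEvent, VideoDescription) is invented purely for illustration; a real specification would obviously need far more thought.

```typescript
// Hypothetical data model for an open video description language.
// All names are invented for illustration only.
// Each event annotates a span of the video timeline.

type TrackKind =
  | "dialogue"        // the script track
  | "soundEffect"     // salient or ambient sounds
  | "characterAction" // salient or ambient activities
  | "setting"         // set & location information
  | "camera"          // framing, shot length, movement
  | "style";          // colour palette, lighting, slo-mo, montage, etc.

interface DescriptionEvent {
  track: TrackKind;
  start: number;            // seconds from the start of the video
  end: number;
  salience?: "salient" | "ambient";
  text: string;             // human-readable description, indexable by search engines
  tags?: string[];          // machine-friendly keywords
}

interface VideoDescription {
  title: string;
  duration: number;         // seconds
  events: DescriptionEvent[];
}

// Example: a few annotations for an imaginary scene.
const example: VideoDescription = {
  title: "Example short film",
  duration: 120,
  events: [
    { track: "dialogue", start: 3, end: 7, salience: "salient",
      text: "ANNA: We shouldn't be here.", tags: ["anna"] },
    { track: "soundEffect", start: 0, end: 120, salience: "ambient",
      text: "Rain against a window", tags: ["rain"] },
    { track: "camera", start: 0, end: 12,
      text: "Slow push-in, medium shot", tags: ["push-in", "medium-shot"] },
  ],
};
```

Because every event is just plain, time-stamped text, the same file could in principle drive a screen reader, a subtitle track, or a search index.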
All it needs is a catchy name. Something like OpenVideoDescriptionLanguage (OVDL). Or eXtensibleVideoDescriptionLanguage (XVDL). Everyone likes four-letter acronyms....