Introducing: Coactive Dynamic Tags (Beta)
Create metadata at the speed of search.
If you want to understand the content of a video, you likely rely on humans to watch it and manually add metadata highlighting its different attributes. This is a slow, expensive process, and the metadata goes out of date as soon as the next episode or season is released. Today, we're excited to introduce Coactive Dynamic Tags. With this release, you can create metadata at the speed of search.
Dynamic Tags are customizable labels that use Coactive’s multimodal AI to automatically categorize and classify video content based on various attributes. These tags can represent moods, activities, locations, or personalities, making it easy to organize and search through large video libraries efficiently.
Putting Dynamic Tags to Work
Consider a global streaming platform or national media company aiming to personalize content for its users. With Coactive Dynamic Tags, you can quickly assign moods to each video. For example, you might search “Trolls” for scenes that are “brainy.” When Coactive returns the results, you can automatically apply the “brainy” tag to all relevant clips. This precise tagging delivers highly relevant content recommendations. As this feature matures, we expect it could be used in the following ways:
- Create cut sheets: Quickly generate summaries and highlight reels by identifying key moments in your videos minute-by-minute. This means you can put together summaries that capture all the exciting parts without spending hours watching and cutting videos.
- Assign advertisement tags: Categorize videos with relevant advertising tags to optimize targeted marketing campaigns. This helps advertisers place their ads more effectively, ensuring they reach the viewers most likely to be interested.
- Personalized content recommendations: Tailor viewing experiences by automatically tagging videos with moods or themes. This makes movie night a lot more engaging by helping your platform show users videos that match their preferences, so they're more likely to stick around and watch more content.
- Identify spoken languages: Automatically detect and tag the languages spoken in different segments of your videos. This helps you better manage and deliver multilingual content and ensures users can always find shows or movies in their language. You can see how easy it is to identify Portuguese and Italian in seconds.
- Metadata enrichment: Use multimodal AI to add detailed, context-specific metadata to videos for improved searchability and asset management. No need to wait for humans to watch every minute of footage and manually add metadata. You can see how easy it is to add metadata based on the mood of the movie.
How it works: Navigating Dynamic Tags
- Upload a list of tags: Start by uploading tags, which can include moods like "angry" or "funny," activities such as "swimming," locations like "countryside," or personalities such as "Jennifer Lopez." Then, specify the table where this data will be stored, with each tag representing a category you want to classify your videos into.
- Processing videos: Coactive processes each video by splitting it into keyframes: snapshots that represent distinct scenes or moments within the video.
- Computing similarity scores: Coactive's Multimodal Application Platform (MAP) computes similarity scores between each keyframe and your dynamic tags. This involves comparing visual and contextual features of the keyframes to the attributes defined by the tags.
- Querying the data: In the query tab, you’ll find raw data detailing the dynamic tags and similarity scores for each video and keyframe. This data includes Coactive Image IDs for keyframes and the corresponding similarity scores with the tags.
- Aggregating data: By running a query, you can aggregate this data to get video-level analytics. The query identifies the top three matching tags for each keyframe and aggregates these to determine the most frequent tags for each video. This helps in summarizing the dominant themes or moods in each video.
- Analyzing results: Within minutes, you’ll have comprehensive analytics for your entire video collection. The results show the top three tags for each video, providing insights into the video's content and themes.
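To make the steps above concrete, here is a minimal sketch of the scoring-and-aggregation logic in Python. This is not Coactive's implementation: the embedding vectors are random stand-ins for the real multimodal embeddings, and all names (`cosine_similarity`, `top_tags_per_video`) are hypothetical. It only illustrates the flow of comparing each keyframe to each tag, taking the top three tags per keyframe, and aggregating the most frequent tags per video.

```python
import numpy as np
from collections import Counter

# Stand-in data: in practice, tag and keyframe embeddings would come from
# a multimodal model; random vectors are used here purely for illustration.
rng = np.random.default_rng(0)
tags = ["angry", "funny", "brainy", "swimming", "countryside"]
tag_embeddings = rng.normal(size=(len(tags), 512))   # one vector per tag
keyframes = {                                        # video_id -> keyframe vectors
    "video_a": rng.normal(size=(6, 512)),
    "video_b": rng.normal(size=(4, 512)),
}

def cosine_similarity(a, b):
    """Cosine similarity between each row of a and each row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def top_tags_per_video(keyframes, tag_embeddings, tags, k=3):
    """Top-k tags per keyframe, then the most frequent tags per video."""
    results = {}
    for video_id, frames in keyframes.items():
        scores = cosine_similarity(frames, tag_embeddings)  # (n_frames, n_tags)
        counts = Counter()
        for row in scores:
            for idx in np.argsort(row)[-k:]:  # indices of the k highest scores
                counts[tags[idx]] += 1
        # The most frequent top-k tags summarize the video's dominant themes.
        results[video_id] = [tag for tag, _ in counts.most_common(k)]
    return results

print(top_tags_per_video(keyframes, tag_embeddings, tags))
```

In production this aggregation would run as a query over the stored similarity-score table rather than in application code, but the logic is the same: per-keyframe top matches rolled up into video-level tags.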
The power of dynamic tags
Dynamic Tags are powered by the Coactive Multimodal Application Platform (MAP). Coactive’s MAP relies on multimodal AI to search the actual content of a video file rather than searching metadata that has been added manually. For users, this approach makes the process of searching and metadata creation feel like a single, effortless action. The benefits of this approach are significant:
- Rapid, AI-powered classification: Upload your keyword list and let Coactive’s Multimodal Application Platform do the rest. No human tagging required.
- No-code model tuning: Users tune the Coactive classification model with natural language and a few simple reviews. No engineer required. With every search, tag, and review, the Coactive engine gets smarter about how your business thinks.
- Flexibility and customization: Define tags or keywords tailored to your specific enterprise needs, whether for mood analysis, content moderation, advertisement targeting, or other applications.
What's Coming Next for Dynamic Tags?
The next release of Dynamic Tags will be even smarter at identifying what's in a video by using a combination of keywords and images as prompts, so each segment of a video can be classified more accurately. We're also making it easier to add large batches of new tags quickly and organize them into groups for more customization. This update will further simplify the categorization process and help you analyze your video content.
This is an exciting release, and we can’t wait to show it off. To see how Dynamic Tags can work for your business, reach out to us at sales@coactive.ai.