Project and Submission Requirements

Embark on a captivating journey at the Human Agent Multimodal Mini Hackathon, powered by Unseen Identity's neuroscience-driven generative AI and supported by the Twelve Labs Video Understanding API and Petkeley, a social app crafted by Berkeley students.

In this unique hackathon, the focus is on ideation and meticulous planning for technical implementation, providing participants with a platform to unleash their creativity and AI prowess.

Ensure your final project is submitted on Devpost by 16:30 on 27 Nov. Use Devpost for team formation; teams may include up to five members. In-person participation on 27 Nov is required.

While registration is FREE for our hackathons, please note that all attendees must be registered on Devpost and must join the hackathon Telegram group (https://t.me/+9rrBBRr42iIxMjRl) to participate.

Also register on Luma for check-in: https://lu.ma/5zb6ofgh

The winning project will be featured in LinkedIn posts celebrating the first real-time human agent social simulation quest. Participants will be granted exclusive API access and API credits for Unseen Identity and Twelve Labs, fostering the creation of projects that seamlessly blend creativity and AI innovation. Expect hands-on support from Unseen Identity and Twelve Labs as you harness the power of multimodal video understanding with Twelve Labs' cutting-edge technology.

 

The pitch and judging process will be conducted primarily in person at Berkeley, starting at 16:30 (exact venue to be announced). This Devpost serves as the hub for documenting your submission. When the clock runs out, submit your 3-slide pitch deck, a video and/or GitHub repo, and, most importantly, a demo or mock-up showcasing the essence of your project. Be part of this immersive experience, where collaboration, innovation, and the fusion of AI and creativity take center stage. Stay tuned for an email detailing the venue, agenda, and other essential information for approved participants. Join us in shaping the future of AI and human interaction!

 

Mini Hackathon Challenges - Teams can choose from the following challenges:

  1. Intuitive Prompt Refinement: Create a prompt-revision augmentor that improves multimedia content outputs using the human agent's persona insights about the user.
  2. Background Action/Event Discovery: Build a human agent tool that works in the background to surface event recommendations matched to the user's interests and persona preferences.
  3. Image/Video Engagement: Create an AI model that detects and summarizes what people find most interesting in images or videos, using the human agent's emotion predictions to individualize engagement.
  4. LinkedIn Expert Tracker/Insight Companion (or other social media): Create a tool that enables human agents to track and update users on the latest multimedia insights shared by industry experts on platforms like LinkedIn.
  5. Personalized Learning Navigator: Build an AI assistant that tailors multimedia learning resources to the cognitive-thinking preferences captured by the user's human agent, for adaptive learning.
  6. Narrative Prompt Generator: Design a tool that generates prompts for social-simulation narratives, drawing on the engagement preferences of the user's human agent for storytelling realism.
  7. Customized Visual Tags: Design a tool that generates personalized image and video tags based on the emotion and reasoning preferences of the user's human agent.
  8. Multimodal Communication Accelerator: Develop a system that lets users communicate faster with human agents through a combination of text, audio, and video.
  9. Emotion Prediction in Video: Build an Emotion Vision AI with human agents that detects and predicts individuals' emotions in video content.
  10. Multimodal Storytelling: Develop a platform where users create human agents that share and interact with multimedia narratives and stories.
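To make Challenge 1 concrete, here is a minimal Python sketch of a persona-aware prompt augmentor. The persona schema (`preferred_tone`, `interests`, `prefers_visual`) and the refinement rules are illustrative assumptions for this hackathon idea, not part of the Unseen Identity or Twelve Labs APIs:

```python
# Minimal sketch of a persona-aware prompt augmentor (Challenge 1).
# The persona fields and refinement rules below are hypothetical
# assumptions, not a provided API.

def refine_prompt(prompt: str, persona: dict) -> str:
    """Append persona-derived style hints to a raw media-generation prompt."""
    hints = []
    if tone := persona.get("preferred_tone"):
        hints.append(f"tone: {tone}")
    if interests := persona.get("interests"):
        hints.append("themes: " + ", ".join(interests))
    if persona.get("prefers_visual"):
        hints.append("favor vivid visual detail")
    # With no persona signal, leave the prompt untouched.
    if not hints:
        return prompt
    return f"{prompt} ({'; '.join(hints)})"

if __name__ == "__main__":
    persona = {"preferred_tone": "playful", "interests": ["pets", "campus life"]}
    print(refine_prompt("A short video about study breaks", persona))
```

A real entry would replace the hard-coded rules with persona insights fetched from the human agent before the prompt is sent to a generation model.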