About the challenge
Invitation to the Future: Join the Future of Human-Level AI Interaction
AGI Weekend
Nov 27th 9:30am-6:00pm
1. 10:00 - 11:00 Build-Your-Human-Agent Workshop: the first-ever real-time human agent social simulation brought to life, integrated into a dynamic Berkeley campus interface via Petkeley, a Berkeley student-built social app
2. 11:00 - 18:00 Human Agent Multimodal Mini Hackathon
· powered by Unseen Identity Neuroscience Generative AI and supported by the Twelve Labs Video Understanding API and Petkeley, a Berkeley student-built social app.
An email with venue details and the full agenda will be sent to approved participants.
Also register on Luma for check-in: https://lu.ma/5zb6ofgh
Join the Telegram group (https://t.me/+9rrBBRr42iIxMjRl) to participate in the hackathon.
More Event Updates on LinkedIn and Twitter later in November:
- Twitter: https://twitter.com/UnseenID
- Join the Unseen waiting list to get early access: https://unseenidentity.xyz/soulme-ai-generative-instant-communication-tool/
A groundbreaking event aimed at bringing Real-Time Human-Level AI Interaction to a new dimension
- Connect with your very own AI Twin Human Agent, instantly customized from a generative AI mental model built within seconds.
- Watch as your very own AI Twin Human Agent not only understands your words but also reasons with you, grasps your emotions within seconds, and transforms you into an expert across any knowledge domain, all in real-time.
Step into the era of reasoning-linked, emotionally attuned Human-AI interaction.
This breakthrough brings the generative agents social simulation concept to life: users' generative AI human agents act as their AI clones, generating scalable, personalized 1:1 interactions within seconds and dynamically mirroring users' emotional and cognitive thinking traits, achieved through Unseen Identity's innovative 30-second generative cognitive screening.
Without requiring user info or prior prompts/instructions, users' human agents replicate genuine social interactions with instantly personalized behaviors and narratives, accelerating communication and human-experience simulation by surfacing highly individualized choices that deeply resonate with each user's needs.
Requirements
What to Build
What to Submit
Embark on a captivating journey at the Human Agent Multimodal Mini Hackathon, powered by Unseen Identity Neuroscience Generative AI and supported by Twelve Labs Video Understanding API and Petkeley, a social app crafted by Berkeley students.
In this unique hackathon, the focus is on ideation and meticulous planning for technical implementation, providing participants with a platform to unleash their creativity and AI prowess.
Ensure your final projects are submitted on Devpost by 15:45 on Nov 27th. Use Devpost for team formation. In-person participation on Nov 27th is required.
While registration is FREE for our hackathons, please note that all attendees must be registered on both Devpost and Luma (for check-in: https://lu.ma/5zb6ofgh), and must join the Telegram group (https://t.me/+9rrBBRr42iIxMjRl) to participate in the hackathon.
The winning project will be featured in LinkedIn posts celebrating the first real-time human agent social simulation quest. Participants will be granted exclusive API access and API credits for Unseen Identity and Twelve Labs, fostering the creation of projects that seamlessly blend creativity and AI innovation. Expect hands-on support from Unseen Identity and Twelve Labs as you harness the power of multimodal video understanding using Twelve Labs' cutting-edge technology.
The pitch and judging process is conducted primarily in person at Berkeley, starting at 16:00. This Devpost serves as a hub for documentation of your submission. When the clock runs out, submit your 3-slide pitch deck, a video and/or GitHub link, and a demo or mock-up showcasing the essence of your project. Be part of this immersive experience, where collaboration, innovation, and the fusion of AI and creativity take center stage. Stay tuned for an email detailing the venue, agenda, and other essential information for approved participants. Join us in shaping the future of AI and human interaction!
Mini Hackathon Challenges - Teams can choose from the following challenges:
- Intuitive Prompt Refinement: Create a prompt-revision augmentor tool that enhances intuitive multimedia content outputs based on the human agent's persona insights about the user.
- Background Action/Event Discovery: Build a human agent tool that works in the background to find relevant event recommendations based on the user's interests and persona preferences.
- Image/Video Engagement: Create an AI model that can detect and summarize what people find most interesting in images or videos, based on the human agent's emotion prediction, for individualized user engagement.
- LinkedIn Expert Tracker/Insight Companion (or other social media): Create a human agent tool that enables human agents to track and update users on the latest multimedia insights shared by industry experts on platforms like LinkedIn.
- Personalized Learning Navigator: Build an AI assistant that customizes multimedia learning resources based on individual user’s human agent cognitive thinking preferences for adaptive learning.
- Narrative Prompt Generator: Design a tool that generates prompts for social simulation narratives, using user’s human agent personalized engagement preference for storytelling realism.
- Customized Visual Tags: Design a tool that generates personalized image and video tags based on user’s human agent emotion and reasoning preferences.
- Multimodal Communication Accelerator: Develop a system that enables users to communicate faster with human agents through a combination of text, audio, and video.
- Emotion Prediction in Video: Build an Emotion Vision AI with human agents that can detect and predict individuals’ emotions in video content.
- Multimodal Storytelling: Develop a platform for users to create human agents that share and interact with multimedia narratives and stories.
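As a starting point for the Intuitive Prompt Refinement challenge, here is a minimal, self-contained sketch of a prompt augmentor. The persona fields (`emotional_tone`, `cognitive_style`, `interests`) and the phrasing rules are illustrative assumptions, not the Unseen Identity persona schema; a real entry would pull these traits from the Unseen Identity Human Agent API.

```python
# Toy sketch of a prompt-revision augmentor: append persona-derived style
# hints to a base media-generation prompt. Persona fields are hypothetical.

def refine_prompt(prompt: str, persona: dict) -> str:
    """Return the prompt enriched with hints drawn from persona traits."""
    hints = []
    if tone := persona.get("emotional_tone"):
        hints.append(f"tone: {tone}")
    if style := persona.get("cognitive_style"):
        hints.append(f"framing: {style}")
    for interest in persona.get("interests", []):
        hints.append(f"theme: {interest}")
    # Leave the prompt untouched when the persona offers no usable traits.
    return prompt if not hints else f"{prompt} ({'; '.join(hints)})"

persona = {
    "emotional_tone": "calm",
    "cognitive_style": "visual-first",
    "interests": ["astronomy"],
}
print(refine_prompt("A short video intro for my profile", persona))
```

A hackathon team would replace the dictionary lookup with real persona output from the 30-second cognitive screening and feed the refined prompt to their multimedia generation model of choice.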
Unseen Identity Neuroscience Generative AI has been building human agent cloning for cognitive thinking and emotion personas. Without requiring user info or prompt/instruction, in 30 seconds users can build their very own AI Twin Human Agent that immersively learns through interaction with the environment (multimodal input data) and helps the AI system acquire intuitive knowledge about user preferences. This enables online interactions, accelerating communication and task automation.
https://unseenidentity.xyz/soulme-ai-generative-instant-communication-tool/
The Twelve Labs Video Understanding API complements this by bridging human agent behavior simulation with video and image understanding, extending the reach of simulated thinking and feeling in generating individualized responses. Whether you have terabytes or petabytes of video, Twelve Labs can help you make sense of it all: it transforms that information into vector representations, enabling fast and scalable semantic search. "Harness the power of multimodal video understanding" with Twelve Labs (https://twelvelabs.io/).
Petkeley: a Berkeley student-built social app featuring a virtual pet that roams a dynamic campus, connecting with others and syncing with real-time events. Discover which buddies you are the best fit for, and connect and converse with friends to deepen your relationships. Expect to encounter both familiar and new faces.
Don't miss out on the most exciting event in the field of Artificial Intelligence! We are thrilled to announce the first AGI event in the Bay Area, where experts and enthusiasts will gather to explore the future of human-level AI interaction.
Prizes
Human Agent Multimodal Mini Hackathon AGI 1st Place
1st Prize: $1000 Twelve Labs Video Hours Credits + 1000 Credits in Unseen Identity Human Agent Model
Judges
James Le
Head of Developer Experience, Twelve Labs
Eva Ngai
Founder & CEO of Unseen Identity Neuroscience Generative AI
Matthew Murrie
The creator of Curiosity-Based Thinking, and the author of The Screaming Hairy Armadillo and The Book of What If...?
Judging Criteria
- Innovative Fusion of Multimodal Understanding and Human-Like Agents
✨Evaluate the project's proficiency in the innovative fusion of multimodal understanding and Human-Like Agents, focusing on their seamless interaction in predicting AGI human senses and deciphering actions.
- Impact and Future Potential
✨Evaluate the project's potential for long-term success, growth, and impactful contributions to the field.
- Initial Technical Implementation
✨Assess the team's proficiency in implementing the project, considering the technical intricacies and execution.
- Creativity
✨Analyze the project's conceptual innovation and uniqueness in addressing challenges.
- Pitches
✨Evaluate how effectively the team presents their project, assessing clarity, engagement, and communication skills.
Questions? Email the hackathon manager
