Imagine a boombox that tracks your every movement and suggests music to match your personal dance style. That is the idea behind “Be the Beat,” one of the projects from MIT course 4.043/4.044 (Interaction Intelligence) presented at the 38th annual Conference on Neural Information Processing Systems (NeurIPS) in December 2024. NeurIPS is a premier conference in artificial intelligence and machine learning; with more than 16,000 attendees gathering in Vancouver, it is the leading venue for presenting cutting-edge developments.
The course explores the emerging field of large language objects and how artificial intelligence can extend into the physical world. While “Be the Beat” transforms the creative possibilities of dance, other student projects reach into music, storytelling, critical thinking, and memory, creating new forms of creative experience and human-computer interaction. Taken together, these projects point to a broader vision of artificial intelligence: one that goes beyond automation to catalyze creativity, reshape education, and reimagine social interaction.
Be the Beat
“Be the Beat,” by Ethan Chang, an MIT mechanical engineering and design student, and Zhixing Chen, an MIT mechanical engineering and music student, is an AI-powered boombox that suggests music from a dancer’s movement. Throughout history and across cultures, dance has been guided by music, yet the idea of creating music through dance has rarely been explored.
“Be the Beat” creates a space for freestyle dancers to collaborate with the system and rethink the traditional dynamic between dance and music. It uses PoseNet to describe a dancer’s movements to a large language model, which analyzes the dance style and queries music APIs to find tracks with a similar style, energy, and tempo. Dancers who interacted with the boombox reported feeling more in control of their artistic expression, describing it as a novel way to discover dance genres and create choreography.
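The article describes this pipeline only at a high level. A minimal sketch of that flow, assuming hypothetical helper names (describe_keypoints, search_music), the OpenAI chat API as the large language model, and an unspecified music-search step, might look like this; it is an illustration, not the project’s actual implementation:

```python
# Hypothetical sketch of the "Be the Beat" flow described above:
# pose keypoints -> text description -> LLM style analysis -> music query.
# Function names, the model choice, and the music-search step are assumptions,
# not details confirmed by the article.
from openai import OpenAI

client = OpenAI()  # requires an OPENAI_API_KEY in the environment

def describe_keypoints(keypoints: list[dict]) -> str:
    """Turn PoseNet-style keypoints into a plain-text movement description."""
    # A real system would summarize joint trajectories over time; this is a stub.
    parts = [f"{kp['part']} at ({kp['x']:.2f}, {kp['y']:.2f})" for kp in keypoints]
    return "Detected pose keypoints: " + "; ".join(parts)

def analyze_dance_style(movement_text: str) -> str:
    """Ask the LLM to characterize the dance style, energy, and tempo."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the article does not name one
        messages=[
            {"role": "system",
             "content": "You describe dance styles from movement summaries. "
                        "Reply with a style, an energy level, and an approximate tempo in BPM."},
            {"role": "user", "content": movement_text},
        ],
    )
    return response.choices[0].message.content

def search_music(style_summary: str) -> list[str]:
    """Placeholder for querying a music API for tracks matching the style summary."""
    # The article says the system queries APIs but does not name which one.
    return [f"track matching: {style_summary[:60]}..."]

keypoints = [{"part": "leftWrist", "x": 0.31, "y": 0.42},
             {"part": "rightAnkle", "x": 0.66, "y": 0.91}]
print(search_music(analyze_dance_style(describe_keypoints(keypoints))))
```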
A Mystery for You
“A Mystery for You,” by Mrinalini Singha SM ’24, a recent graduate of the Art, Culture, and Technology program, and Haoheng Tang, a recent graduate of the Harvard University Graduate School of Design, is an educational game designed to cultivate critical thinking and fact-checking skills in young learners. The game uses a large language model (LLM) and a tangible interface to create an immersive investigative experience. Players act as citizen fact-checkers responding to AI-generated “news alerts” printed by the game interface. By inserting combinations of cartridges to prompt follow-up “news updates,” they navigate ambiguous scenarios, analyze evidence, weigh conflicting information, and make informed decisions.
This human-computer interaction experience does away with touchscreen interfaces, replacing endless scrolling and skimming with a comfortable, haptically rich analog device. By combining the affordances of slow media with new-generation media, the game fosters thoughtful, embodied interaction and leaves players better prepared to understand, and to challenge, misinformation and manipulative narratives.
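The article does not detail how cartridge combinations drive the generated updates. A minimal sketch, assuming each cartridge maps to a named investigative lead and reusing the same OpenAI chat API as above (the cartridge IDs, prompt wording, and printer step are all hypothetical), could look like this:

```python
# Hypothetical sketch of the cartridge-driven "news update" step in "A Mystery for You".
# The cartridge IDs, prompt wording, and output handling are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

# Assumed mapping from physical cartridges to investigative leads.
CARTRIDGE_LEADS = {
    "cartridge_witness": "interview a local witness",
    "cartridge_archive": "check the newspaper archive",
    "cartridge_photo": "examine the photo's metadata",
}

def generate_news_update(alert: str, inserted_cartridges: list[str]) -> str:
    """Given the original alert and the inserted cartridges, ask the LLM for a follow-up update."""
    leads = ", ".join(CARTRIDGE_LEADS[c] for c in inserted_cartridges)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "You write short, deliberately ambiguous news updates for a "
                        "fact-checking game. Include one verifiable detail and one dubious claim."},
            {"role": "user",
             "content": f"Original alert: {alert}\nThe player chose to: {leads}.\n"
                        "Write the follow-up news update they receive."},
        ],
    )
    return response.choices[0].message.content

update = generate_news_update(
    "BREAKING: City reservoir reportedly contaminated overnight.",
    ["cartridge_witness", "cartridge_archive"],
)
print(update)  # a real build would route this to the game's printed interface instead
```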
Memorscope
“Memorscope,” by MIT Media Lab research collaborator Keunwook Kim, is a device that creates collective memories by fusing deeply interactive human experience with advanced AI technologies. Inspired by the way we use microscopes and telescopes to examine and reveal otherwise invisible details, Memorscope allows two users to “look into” each other’s faces.
The device uses AI models such as those from OpenAI and Midjourney to introduce different aesthetic and emotional interpretations, producing a dynamic, collective memory space. This space transcends the limits of traditional shared albums: rather than a collection of static snapshots, it offers a living, evolving narrative shaped by the ongoing relationship between the users.
Narratron
“Narratron,” by Harvard Graduate School of Design students Xiying (Aria) Bao and Yubo Zhao, is an interactive projector that uses a large language model to co-create and co-perform children’s stories through shadow puppetry. Users press a shutter to “capture” the protagonist they want in the story, providing hand shadows (such as animal shapes) as the main-character input. The system then develops the plot as new shadow characters are introduced, projecting the story as the backdrop of a shadow puppet show while narrating it through speakers; turning a crank advances the story in real time. By combining visual, auditory, and physical interaction in a single system, the project aims to foster creativity through shadow-play storytelling and to enable multimodal human-AI collaboration.
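The article only outlines this capture-then-narrate loop. A minimal sketch of one story step, assuming a hypothetical shadow classifier and the same OpenAI chat API (neither is confirmed as Narratron’s actual stack), might be:

```python
# Hypothetical sketch of one Narratron story step: capture a hand shadow,
# label it as a character, and ask an LLM to advance the plot.
# The classifier, model choice, and function names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def classify_shadow(image_bytes: bytes) -> str:
    """Placeholder for recognizing a hand shadow (e.g., 'rabbit', 'bird', 'wolf')."""
    # A real build would run a vision model on the captured frame; this is a stub.
    return "rabbit"

def advance_story(story_so_far: str, new_character: str) -> str:
    """Ask the LLM for the next plot beat after a new shadow character appears."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "You co-write a children's shadow-puppet story. "
                        "Continue the tale in two or three gentle sentences."},
            {"role": "user",
             "content": f"Story so far: {story_so_far}\n"
                        f"A new character has appeared: a {new_character}. What happens next?"},
        ],
    )
    return response.choices[0].message.content

story = "Once upon a time, a small fox wandered into a moonlit meadow."
character = classify_shadow(b"")  # a shutter press would supply a real camera frame
story += " " + advance_story(story, character)
print(story)  # the device would project the scene and read this aloud as the crank turns
```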
Perfect Syntax
“Perfect Syntax,” by Karyn Nakamura ’24, is a video art piece examining the syntactic logic underlying motion and video. Using AI to manipulate video fragments, the project explores how machines simulate and reconstruct the fluidity of movement and time. Drawing on both philosophical inquiry and artistic practice, Nakamura’s work interrogates the relationship among perception, technology, and the motion that shapes our experience of the world. By reimagining video through computational processes, Nakamura investigates the complexity of how machines understand and represent the passage of time and movement.