Applied Computer Science Students Create Real-Time Immersive Art at Getty Center College Night

Creating art on the fly requires both talent and native intelligence, and, as "Dreaming in Color" vividly demonstrated, a little artificial intelligence doesn't hurt, either.

This year's gathering at the Getty Center on April 29 celebrated color: color in art and in imagination, the science of color, and the possibilities of a more colorful future. Eleven of Woodbury's Applied Computer Science (ACS) students joined students from other schools in an eclectic series of mind-expanding projects and presentations.

In collaboration with the Getty, and with concept and project management by program chair Ana Herruzo and faculty member Nikita Pashenkov, Woodbury's ACS students created an immersive, interactive installation merging AI and the visual arts. Drawing on data from users' facial expressions and their interactions with the screen, members of the group generated graphics in real time.

Students from Professor Herruzo's Media Environments class led the project's design and execution, while students in Professor Pashenkov's Artificial Intelligence course led the machine learning development.

Dubbed "WISIWYG: What I See Is What You Get," the experiential installation produced live, interactive works of art with accompanying text descriptions by analyzing users' facial expressions, enlisting machine learning algorithms to train artificial neural network models on the Getty Museum's art collection.

"With sensors tracking user movements, our students applied that data to design silhouettes, creating a reflection of the user on the screen," Professor Herruzo explains. "Graphics were then displayed on a vertical video wall. The video wall, in turn, had an embedded sensor that obtained live data from the user and displayed real-time generated graphics and machine-learning-generated text. It proved to be a remarkable blend of art and science."
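The silhouette step can be sketched as a simple threshold over a sensor frame. This is an illustrative toy, not the installation's actual code: the brightness threshold, the frame values, and the highlight color are all hypothetical choices standing in for whatever the students' sensor pipeline used.

```python
import numpy as np

def silhouette_mask(frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary silhouette: pixels brighter than the threshold count as the user."""
    return (frame > threshold).astype(np.uint8)

def colorize(mask: np.ndarray, rgb=(255, 64, 128)) -> np.ndarray:
    """Paint the silhouette in a single color on a black background."""
    h, w = mask.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    out[mask == 1] = rgb
    return out

# Toy 4x4 "sensor frame": a bright blob in the middle is the user.
frame = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.1, 0.9, 0.8, 0.1],
    [0.2, 0.9, 0.9, 0.1],
    [0.0, 0.1, 0.2, 0.1],
])
mask = silhouette_mask(frame)
print(int(mask.sum()))  # 4 pixels exceed the 0.5 threshold
```

In the real installation this mask would be redrawn every frame on the video wall, so the "reflection" follows the visitor's movements.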

"In analyzing the Getty's art collection, students experimented with the 'deep learning' language model GPT-2, released by the non-profit OpenAI in February of this year," explains Professor Pashenkov. "The students trained the language model on existing descriptions of artworks on display at the Getty Center; it was then prompted by computer vision algorithms detecting participants' facial expressions to generate new synthetic descriptions to accompany the real-time visualizations."
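The generation step Professor Pashenkov describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's code: the expression labels and prompt templates are hypothetical, and the Hugging Face Transformers `pipeline` with the stock `gpt2` checkpoint stands in for the students' model, which was fine-tuned on the Getty's own artwork descriptions.

```python
def build_prompt(expression: str) -> str:
    """Map a detected facial expression (labels are illustrative)
    to an opening phrase that seeds the language model."""
    templates = {
        "happy": "A joyful scene rendered in warm, saturated color,",
        "surprised": "A dramatic burst of light and movement,",
        "neutral": "A quiet study in muted tones,",
    }
    return templates.get(expression, "An abstract composition,")

def describe(expression: str) -> str:
    """Generate a synthetic art description from a detected expression.
    Assumes the `transformers` package is installed; the stock "gpt2"
    weights stand in for the fine-tuned checkpoint."""
    from transformers import pipeline  # imported lazily: heavy dependency
    generator = pipeline("text-generation", model="gpt2")
    result = generator(build_prompt(expression), max_new_tokens=40)
    return result[0]["generated_text"]
```

In the installation, the computer vision stage would supply the expression label each frame, and the generated caption would be rendered alongside the live graphics.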

The installation proved a great success, with hundreds of students interacting and engaging with Woodbury's Applied Computer Science - Media Arts project throughout the night.

Learn more about the Applied Computer Science program