The Cincinnati-based rock band LIFE AFTER THIS has released the first full-length music video to use an artificial-intelligence model to generate all of its animations and visual effects. The model, called "Stable Diffusion," was used to generate the video for the song "Exigent," from the band's 2022 album, The Countdown.
"We had this idea to let a computer design the video, based on our lyrics. We weren't sure it had been done before, so I just had to know if it was even possible. The more I dug into it, the more fascinating it became," said Ray Vitatoe, bassist for LIFE AFTER THIS and software engineer for 84.51˚/Kroger.
The song was recorded at Sonic Lounge Studio in Columbus, OH, and features Elyssa Girtman, lead singer of another popular Cincinnati-based band, Spearpoint.
How it works:
The Stable Diffusion machine-learning model is available under an open-source license, meaning anyone can use it or contribute to its development.
“There was a bit of a learning curve, and a lot of trial and error, but the results are astounding,” Ray said.
Stable Diffusion works by starting from a canvas of random noise. The model then reverses the noising process, gradually refining the image step by step until no noise remains. This denoising process ran 250 times on each frame of the video.
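The iterative-denoising idea can be sketched in a few lines. This is a toy illustration, not the actual Stable Diffusion code: the real model uses a trained neural network (conditioned on a text prompt) to predict the noise to remove at each step, whereas here the "prediction" is just a fixed blend toward a stand-in target image.

```python
import numpy as np

def toy_denoise(steps=250, size=(8, 8), seed=0):
    """Toy sketch of iterative denoising: start from pure noise and
    repeatedly nudge the canvas toward a predicted clean image."""
    rng = np.random.default_rng(seed)
    target = np.zeros(size)           # stand-in for the model's predicted clean image
    canvas = rng.normal(size=size)    # begin with pure Gaussian noise
    for _ in range(steps):
        # each step removes a small fraction of the remaining noise,
        # mirroring how diffusion models refine an image gradually
        canvas = canvas + 0.05 * (target - canvas)
    return canvas

# after 250 steps, almost no noise remains
residual = np.abs(toy_denoise()).max()
```

In the toy version each step shrinks the remaining noise by a fixed factor, so after 250 steps the canvas is essentially indistinguishable from the target; the real model instead learns what to subtract at every step.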
The process took about three weeks (roughly 100 hours of compute time) to generate and render all the images: approximately 2,700 individual computer-generated frames, not counting the thousands of "trial" images.
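Those two figures imply a per-frame cost worth spelling out. Both numbers come from the article; the per-frame rate below is derived, not stated.

```python
# Back-of-the-envelope check: 100 hours of compute over ~2,700 frames.
hours = 100
frames = 2700
seconds_per_frame = hours * 3600 / frames
print(round(seconds_per_frame, 1))  # about 133.3 seconds (~2.2 minutes) per frame
```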