The Google Magenta team recently released a project called Lo-Fi Player, which lets visitors interact with various objects in a two-dimensional room to manipulate and customize their own lo-fi hip-hop beats. Beyond the settings a given person tinkers with inside the player, two different machine learning models are working behind the pixels to help mix the sounds together in unique ways.

Magenta is an open-source research project from Google that explores the role of machine learning as a tool in both art and the creative process. The Magenta team has previously released a machine learning model that uses AI to finish people's digital doodles for them, and another, a variational autoencoder, that blends two melodies into one. Similar machine learning powers the Lo-Fi Player, which was created by Vibert Thio, a technologist and artist who recently interned with the Magenta team.

Related: Nvidia Broadcast App Uses AI To Turn Rooms Into Home Studios

According to the MIT Technology Review, there are two different machine learning models at work behind the scenes of Lo-Fi Player's '90s-looking webpage. One lives in the clock radio and generates a new melody whenever it is clicked. The other lives in the TV and, much like Magenta's previous MusicVAE model, interpolates between two melodies to combine them as best it can. The rest of the objects in the room, however, are entirely controlled by the person on the page, who can adjust different elements and instruments like the drums, guitar, tone, and melody. Sure, it's a cool page to mess around and make music on, but the underlying technology holds even more promise.
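Magenta publishes its models in an open-source JavaScript library, @magenta/music, which makes it possible to sketch roughly how those two behaviors could be wired up. This is a minimal sketch only: the checkpoint URL, the function names, and the assumption that both the radio and the TV could be driven by a MusicVAE model are illustrative, not Lo-Fi Player's actual internals.

```typescript
import * as mm from '@magenta/music';

// A Magenta-hosted MusicVAE checkpoint for 2-bar melodies.
// (An assumed stand-in; Lo-Fi Player's exact models may differ.)
const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

const musicVae = new mm.MusicVAE(CHECKPOINT);

// "Clock radio" behavior: sample a brand-new melody from the model's
// learned latent space each time the user clicks.
async function generateNewMelody(): Promise<mm.INoteSequence> {
  if (!musicVae.isInitialized()) {
    await musicVae.initialize();
  }
  const [melody] = await musicVae.sample(1, /* temperature */ 1.0);
  return melody;
}

// "TV" behavior: interpolate between two melodies, returning a series
// of in-between sequences that morph from melodyA into melodyB.
async function blendMelodies(
  melodyA: mm.INoteSequence,
  melodyB: mm.INoteSequence,
  steps = 5
): Promise<mm.INoteSequence[]> {
  if (!musicVae.isInitialized()) {
    await musicVae.initialize();
  }
  return musicVae.interpolate([melodyA, melodyB], steps);
}
```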

The Lo-Fi Player Is The Start Of A Larger Opus

What is most intriguing about Thio's Lo-Fi Player is the decision to use machine learning as a complementary tool within the webpage rather than as the main focus. While previous Magenta models used AI to finish what a person starts, the Lo-Fi Player instead takes whatever custom settings a person has chosen and adapts the melodies in real time. Throughout development, Thio collaborated with musicians to capture bass lines and other background vibes that not only sound pleasant but fit the lo-fi hip-hop genre. Thio himself also wrote four different melodies that anyone can choose from. So while someone plugs away at their preferred sound, tempo, and so on, the machine learning adds just enough of a unique spin to give each person their own one-of-a-kind mix.
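To make that real-time adaptation concrete, here is a hypothetical bit of glue code building on the blendMelodies() sketch above. The mix knob, tempo parameter, and playUserMix() helper are all assumptions for illustration, not how Lo-Fi Player is actually implemented.

```typescript
// Map a user's knob position onto the interpolation steps, then play
// the chosen blend at the user's selected tempo via Magenta's Player.
const player = new mm.Player();

async function playUserMix(
  melodyA: mm.INoteSequence,
  melodyB: mm.INoteSequence,
  mixKnob: number, // 0.0 = all melodyA, 1.0 = all melodyB
  tempoQpm: number // user-selected tempo, in quarter notes per minute
): Promise<void> {
  const steps = await blendMelodies(melodyA, melodyB, 7);
  const index = Math.round(mixKnob * (steps.length - 1));
  player.stop();
  player.start(steps[index], tempoQpm);
}
```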

For now, the project is still in its first version, but there is certainly room for further possibilities. Thio has said the goal of the Lo-Fi Player was to make the music-mixing experience as simple and friendly as possible. To that end, he dreams of building a music-creation interface similar to TikTok, one that makes it easier for non-musicians to experiment with music editing, share their work, and express themselves. Thio may very well be onto something there. As AI technologies like Google's assist people more and more in their daily lives and in the workplace, it's merely a matter of time before machine learning takes a prominent role in the creative process as well, and that should be music to artists' ears.

More: Google Plans To Test 6GHz Networks In Multiple States, According To FCC Filings

Source: MIT Technology Review, Lo-Fi Player