Futuristic robot DJ pointing and playing music on turntables. (© AspctStyle - stock.adobe.com)

SYDNEY, Australia — A new computer algorithm could revolutionize music mashups, the blends of vocals and instrumentals from multiple tracks that DJs create. Producing a seamless blend has traditionally demanded expert skill in selecting and merging tracks. The new software streamlines that process, focusing on integrating drum tracks from one song with the vocals and instrumentals of another.

“Imagine trying to make a gourmet meal with only a microwave — that’s sort of what automated mashup software is up against compared to a pro chef, or in this case, a professional music composer,” says algorithm creator Xinyang Wu, from the Hong Kong University of Science and Technology, in a media release. “These pros can get their hands on the original ingredients of a song — the separate vocals, drums, and instruments, known as stems — which lets them mix and match with precision.”

Creating music using artificial intelligence. (© YarikL – stock.adobe.com)

Like a professional chef working with fresh ingredients, the algorithm accesses a song's original components, allowing it to merge different musical elements with greater precision. Wu's software identifies dynamic segments, adjusts the tempo of the instrumental track, and weaves in the drum beats from the other song.
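The tempo-matching step described above can be loosely sketched in code. This is an illustrative assumption about how such a pipeline might work, not Wu's actual implementation: the function names, BPM values, and beat timestamps below are all hypothetical, and it presumes the stems and per-track tempos have already been extracted.

```python
def stretch_ratio(source_bpm: float, target_bpm: float) -> float:
    """Playback-rate factor that brings the source tempo in line with the target.

    A ratio above 1.0 means the source stem must play faster.
    """
    if source_bpm <= 0 or target_bpm <= 0:
        raise ValueError("tempos must be positive")
    return target_bpm / source_bpm


def align_beats(beat_times: list[float], ratio: float, offset: float = 0.0) -> list[float]:
    """Rescale source beat timestamps (in seconds) onto the target track's timeline.

    Speeding playback up by `ratio` compresses the timestamps by the same factor;
    `offset` shifts the whole stem so its first beat lands on a target downbeat.
    """
    return [offset + t / ratio for t in beat_times]


# Hypothetical example: a 90 BPM drum stem mashed onto a 120 BPM vocal track.
ratio = stretch_ratio(90.0, 120.0)   # 4/3: play the drums about 33% faster
beats = align_beats([0.0, 2.0, 4.0], ratio)
```

A real system would apply the ratio with a pitch-preserving time-stretch rather than simple resampling, so the drums speed up without sounding chipmunked.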

“From what I’ve observed, there’s a clear trend in what listeners prefer in mashups,” explains Wu. “Hip-hop drumbeats are the crowd favorite — people seem to really enjoy the groove and rhythm that these beats bring to a mashup.”

Having proven effective with drum tracks, Wu’s team aims to extend the algorithm’s capability to bass mashups.

“Our ultimate goal is creating an app where users can pick any two songs and choose how to mash them up — whether it’s switching out the drums, bass, instrumentals, or everything together with the other song’s vocals,” concludes Wu.

The research was presented at Acoustics 2023 in Sydney.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better than) field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article whenever possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor
