A3E 2025 Anaheim
A3E is excited to be back LIVE at the NAMM Show 2025! A3E + ARAS will once again provide their forward-thinking educational programming examining the evolutionary process and the Future of Music, Audio + Entertainment Technology™.

A3E 2025 Anaheim will continue to explore the evolving intersection of music, technology, and creativity. This year’s program will emphasize two critical facets for artists and content creators:

1. Tactical: understanding and mastery of day-to-day technologies, especially the rising influence of Artificial Intelligence (AI), from ideation and creation, to IP protection, to fan engagement and new experiences, to monetization opportunities and strategies;

2. Strategic: gaining critical insights, understandings, and perspectives on technologies that are emerging and will continue to emerge, and that will present both amazing opportunities and profound challenges, not only to their profession but to their unique gift of creativity. A3E clearly sees that AI will continue to set its learning sights on, and apply its transformative computational powers to, human creativity and its associated data sets (neurodata); this is what A3E calls Artificial Creativity (AC).

The two graphs below reinforce A3E’s mission to create the programs, topics, and discussions that musicians, content creators, and the entertainment industry need as they evolve through and with the co-existence of AI and AC.

The LEFT graph shows research conducted in 2020/2021 by A3E and Sitarian Corporation, in which their communities of music technologists gave insights on AI and AC, plotted along a Y axis of Interest and an X axis of emergence, adoption, adaptation, and optimization: the “A3E Technology Evolution + Impact Curve™” (the “ARAS Wave™”).

We then analyzed and plotted (the RIGHT graph) where those same insights and associated data points stand now and are forecasted for 2025. Many tactical, day-to-day aspects of AI have accelerated to the point of adaptation and are being optimized into content creation (noted between the two vertical dashed red lines). AI in this case has become a valuable part of creators’ professions and efforts.

The crucial aspect of AC is still dormant, low in interest on the ARAS Wave, and this is the profound new frontier that A3E will be injecting into the A3E 2025 Anaheim educational program and going forward. AC powered by AI needs to be discussed and planned for right now!


New Sessions, Speakers, Panelists, and Moderators are being added daily. Please check back for updates.

Friday, January 24th, 2025; 10:00AM – 11:00AM Pacific Time; Hilton Anaheim; Level 2, Hilton California Ballroom A

A3E & TEC Tracks Special Keynote Session: The Evolution of Content and the Industry: A Conversation with Rick Beato


Please join this distinguished panel, along with NAMM, TEC Tracks and A3E for this crucial and fascinating keynote and industry discussion.

Friday, January 24th, 2025; 11:00AM – 12:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Artificial Creativity (AC): the Intersection of Content Creation with AI, Neuroscience, and Human-Centric Design & Creativity: Part 1

As artificial intelligence technology delves deeper into neuroscience, it opens up new possibilities through neuroscience-driven insights into brain activity and physiological data. Content creators (as well as “other parties”) now have the opportunity to harness this neurodata to craft personalized and adaptive experiences like never before. In parallel, AI will inevitably use this neurodata, and even create its own “synthetic” neurodata, and this is what A3E deems Artificial Creativity (AC). This evolution is transforming music, video, gaming, film, and social media content. From AI-generated music that adapts to the listener’s mood, and video content that responds to real-time emotional cues, to social media algorithms designed to maximize engagement using neurodata, and immersive games that react to players’ brainwaves, AI and AC are revolutionizing the future of content creation. The opportunities for innovation are vast, especially when applied to health-focused content that adjusts based on real-time stress or mood.

However, these advancements also present significant challenges, including concerns over data sets/data ownership, intellectual property, data privacy risks, potential manipulation, the risk of formulaic content, commercialization of human expression, and the devaluation of human creativity in favor of data-driven design. The ethical dilemmas content creators must navigate are profound. Join this A3E discussion to explore these possibilities and challenges, leaving you with critical new insights and questions about the future of content creation and entertainment technology.

Friday, January 24th, 2025; 12:00PM – 1:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

TBA 1

Session description TBA1

Friday, January 24th, 2025; 1:00PM – 2:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Top Producers: Managing the Ever-Growing Impact of AI Tools on Music Production & Engineering + An Open Discussion on the Future of Artificial Creativity

AI has become an essential part of the modern music producer’s toolkit, offering creative, technical, and organizational benefits. From automated mastering to intelligent sound design and live performance optimization, AI-driven tools are enabling producers to work more efficiently while expanding creative possibilities. These tools address the need for faster workflows, the demand for high-quality production, and the desire for more dynamic and personalized music experiences. Learning to leverage these technologies not only saves time but also enhances the creative process, allowing producers and engineers to focus on more strategic and artistic decisions.

The session will also explore how these engineering experts view the evolution of AI into Artificial Creativity (AC), examining the blurred lines between human creativity and machine-generated content, its potential impact on originality, and what this shift means for the roles of musicians, producers, and engineers in the creative process.

A3E Music Entertainment Technology

Rob Christie (Moderator) | Republic Records* Studios Director | Universal Music Group

2 x Grammy Winning A&R Producer, former Staff Producer for EMI-Capitol Records.

*(Republic Records has been recognized by Billboard as the industry’s #1 label over the last 10 years. It is home to an all-star roster of multi-platinum, award-winning superstar artists such as The Weeknd, Taylor Swift, Drake, Ariana Grande, John Legend, Post Malone, Metro Boomin, and Stevie Wonder, among others.)

Friday, January 24th, 2025; 2:00PM – 3:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

A Legal Perspective & Discussion on the Future of Music, Audio and Vocal Creation in the Age of Deepfake Technologies

The legal landscape around deepfakes in the United States is rapidly evolving, but currently there is no comprehensive federal law explicitly banning or regulating deepfakes. Instead, various efforts are being made at both the federal and state levels to address the threats posed by this technology. Overall, deepfake laws in the U.S. remain fragmented, with states leading the charge while federal legislation is still in development. Many of the laws target specific issues like election interference, fraud, and non-consensual sexual content. A few noted efforts are the DEEPFAKES Accountability Act and the DEFIANCE Act of 2024, as well as laws in states like Texas, Florida, California, and New York, which are also mainly focused on elections and sexual content. Of note, Tennessee passed the ELVIS Act, which protects individuals’ likenesses from being used without consent, particularly targeting AI-generated audio mimicking voices.

For musicians, audio and content creators, emerging laws regulating deepfakes and AI-generated content and voices could have significant implications, both positive and negative. Here are some potential impacts to discuss: Protection of Likeness and Creative Works; Restrictions on AI-Generated Content; Watermark and Disclosure Requirements; and Economic Impact, Revenue, and Licensing. These developments will bring new layers of legal compliance that musicians and content creators will need to navigate. Join this A3E session for a provocative look at this emerging challenge.

Rachel Stilwell | Owner/Founder | Stilwell Law

Rachel Stilwell is the Owner/Founder of Stilwell Law, a music and intellectual property firm based in Los Angeles. Rachel’s law practice focuses on entertainment, intellectual property, licensing, and commercial transactions. She advises and negotiates on behalf of businesses and creative professionals on complex intellectual property matters and business transactions. Stilwell Law serves businesses and individuals from all industries, with special expertise in entertainment and professional audio. Rachel is proud to have been named to Billboard’s Top Music Lawyers List for the last six consecutive years. Prior to becoming an attorney, Rachel held executive positions with several top record labels, including Verve Music Group, where she ran Verve’s multi-format radio promotional department, supervising all radio activities and promotional tours. Rachel is Co-Chair of the Recording Academy Los Angeles Chapter’s Advocacy Committee. She also serves on the national advocacy committee of the Songwriters of North America.

Kevin D. Jablonski | Partner, Chair Electrical/Computer Science Patent Practice Group | FisherBroyles, LLP

Kevin Jablonski serves as chair of the FisherBroyles Electrical/Computer Science Patent Practice Group. Kevin is a registered patent attorney with a focus on patent portfolio development and patent prosecution. He works with a number of clients across myriad technologies, with a lean toward electrical engineering and computer science. His practice involves additional aspects of intellectual property law, including IP licensing, copyrights, and trademarks. He has experience in technology areas such as semiconductor design, analog and digital circuits, and computer architecture solutions. Kevin has also developed a technical focus in the audio arts and sciences and audio-related technologies. He owns a professional-grade recording studio called Hollywood and Vines, and manages and plays drums in two bands: Whiskey Gaels and The Soul Proprietors. This music arts background allows Kevin to branch out from patent prosecution into entertainment law, helping fledgling artists and music technology companies navigate a confusing landscape.

Friday, January 24th, 2025; 3:00PM – 4:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Deepfakes and AI: Emerging Solutions for Artists, Musicians, and the Entertainment Industry

Join A3E as we explore the rapidly advancing field of deepfake technology, where artificial intelligence is used to create hyper-realistic audio and visual content that is nearly indistinguishable from reality. This groundbreaking innovation, while showcasing the potential of AI, also poses significant challenges to the entertainment industry, particularly for artists, musicians, and content creators.

This session will delve into the ethical implications of deepfake technology, the legal challenges in combating it, and strategies for protecting original content in an age where digital manipulation is more accessible than ever. Join us to better understand the growing risks posed by deepfakes and discuss emerging solutions to safeguard the integrity of artistic expression and entertainment in the AI and AC era.

Roman Molino Dunn | Music Technologist | AudioIntell.ai

Roman is a film composer, music technologist, Billboard-charting producer, and entrepreneur. He has scored films and shows for Netflix, HBO, Disney, Paramount, and Hulu, and founded Mirrortone Recording Studios in NYC. As a music software developer, Roman founded The Music Transcriber (now Composer’s Tech), creating innovative music algorithms acquired by leading companies.

At AudioIntell.ai, he leads the development of AI systems for audio detection and analysis, focusing on detecting AI-generated audio and providing insights across industries like security and entertainment. He also serves as a Senior Operating Partner at Ulysses Management’s Private Equity Group, specializing in music technology investments, and is a co-owner of Antares Audio Technologies.

Friday, January 24th, 2025; 4:00PM – 5:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

A3E from Studio to Stage: Advanced Remote Collaboration, Real-Time Content Creation and the Ability to Deliver Next-Generation Entertainment Experiences

The evolution of low-latency synchronized audio technologies, coupled with the growing demand for real-time online collaboration in music, audio, and other creative content, has made these capabilities a crucial objective for musicians and artists. This A3E session explores the exciting and powerful partnerships between emerging companies and technological giants aimed at advancing and improving the creative process, from studio production all the way to live performances.

The applications are clear: in gaming and eSports, real-time audio synchronization enhances team communication, giving players a competitive edge. In music collaboration, it enables real-time songwriting and online jam sessions. For broadcasting, it streamlines live production by facilitating faster group communication. And in the emerging frontier of the Metaverse and online concerts, it supports real-time group audio interactions between performers and audiences. Discover how music technologists are collaborating to usher entertainment into the next era.

Julian McCrea | CEO | Open Sesame Media

Julian leads Open Sesame Media, a B2B CPaaS (communication platform as a service) venture providing low-latency synchronized audio over 5G to application developers via its patent-pending SyncStage platform. In short, they ‘sync’ a group of up to 8 devices at extremely low latency (8 times faster than Google Meet, 5 times faster than Discord) to enable instant synchronized digital audio experiences. His company just completed the world’s first music collaboration between London, New York, and Toronto using SyncStage with Verizon, Rogers, and Vodafone, called the ‘5G Music Festival’. They are working with major telecoms across the globe to bring their technology to market and are receiving application developer demand in the music, broadcast, gaming, metaverse, faith, and music education verticals. They are headquartered in Los Angeles, CA.

Friday, January 24th, 2025; 5:00PM – 6:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

A3E from Studio to Stage: “Dynamic Audio” – Utilizing AI and Artificial Creativity to Capture Music’s Infinite Spectrum

The future of music creation, performance, and distribution is expanding exponentially due to AI and AC technologies. These tools allow artists to take a single track and transform it into countless versions that evolve based on the context, live performance, or even audience interaction. The use of AI in meta-tagging, live remixing, adaptive composition, and real-time feedback enables a creative fluidity between studio recordings and live performances, offering a dynamic spectrum for both artists and fans to explore. As these technologies continue to advance, artists will have unprecedented freedom to create in the space between live performance and traditional recording; to innovate new products and experiences that blend the excitement of live performance with the accessibility and reach of conventional recording.

James Hill | Co-founder | ever.fm

James Hill is the co-founder and Artistic Director of ever.fm, the platform where songs change every time they’re played. Hill is an award-winning performer and songwriter, a producer and an educator based in Atlantic Canada. His latest musical project is Champagne Weather, whose semi-improvised style inspired the creation of the ever.fm audio engine. Hill and his collaborators are innovating audio formats that thrive in the space between live performance and traditional recording.

Saturday, January 25th, 2025; 10:00AM – 11:00AM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Navigating AI’s Transformative Impact on Content Creation in Gaming and Game Audio

As AI continues to evolve, content creators in the video game and game audio industries face both unprecedented opportunities and looming threats. This session explores how AI tools are streamlining content production, from procedural world generation to adaptive audio design, while also raising concerns over job displacement, creative control, and intellectual property. A3E and our panel of experts will help you gain insights into leveraging AI to enhance creativity and efficiency without compromising artistic integrity, and strategies for staying competitive in an increasingly automated landscape.

Joe Kataldo | Composer & Sound Designer | Mad Wave Audio, LLC

Joe Kataldo is a composer and sound designer with a rich career spanning over a decade. He boasts a prolific portfolio of work that includes contributions to notable projects such as “Archer: Danger Phone,” “H3VR” and the Team Fortress 2 VR parody, “Terminator Salvation VR Ride”, the recent remaster of “XIII”, “The Last Night”, and “X8”. Notably, his score for “Blind Fate: Edo No Yami” earned a nomination for Best Video Game Score at the Hollywood Music in Media Awards in 2022.

As the owner of Mad Wave Audio, LLC, Joe has crafted audio for countless Toyota commercials and provided immersive VR audio for theme parks like Songcheng’s “Dreamland” in China. He holds degrees from Berklee College of Music in Boston and the Music Conservatory of Napoli in Italy. In addition to his prolific work, Joe Kataldo shares his expertise by teaching a post-graduate masterclass on game audio for ADSUM.it.

Saturday, January 25th, 2025; 11:00AM – 12:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Artificial Creativity (AC): the Intersection of Content Creation with AI, Neuroscience, and Human-Centric Design & Creativity: Part 2

As artificial intelligence technology delves deeper into neuroscience, it opens up new possibilities through neuroscience-driven insights into brain activity and physiological data. Content creators (as well as “other parties”) now have the opportunity to harness this neurodata to craft personalized and adaptive experiences like never before. In parallel, AI will inevitably use this neurodata, and even create its own “synthetic” neurodata, and this is what A3E deems Artificial Creativity (AC). This evolution is transforming music, video, gaming, film, and social media content. From AI-generated music that adapts to the listener’s mood, and video content that responds to real-time emotional cues, to social media algorithms designed to maximize engagement using neurodata, and immersive games that react to players’ brainwaves, AI and AC are revolutionizing the future of content creation. The opportunities for innovation are vast, especially when applied to health-focused content that adjusts based on real-time stress or mood.

However, these advancements also present significant challenges, including concerns over data sets/data ownership, intellectual property, data privacy risks, potential manipulation, the risk of formulaic content, commercialization of human expression, and the devaluation of human creativity in favor of data-driven design. The ethical dilemmas content creators must navigate are profound. Join this A3E discussion to explore these possibilities and challenges, leaving you with critical new insights and questions about the future of content creation and entertainment technology.

Maya Ackerman, PhD | CEO/Co-Founder | Wave AI

Dr. Maya Ackerman is a globally recognized researcher in generative AI and the CEO/co-founder of Wave AI, a company at the forefront of equipping musicians and technologists with advanced AI systems for music creation. A pioneering figure in this domain, Dr. Ackerman has been building generative AI for text, music, and art since 2014, boasting more than 50 peer-reviewed research publications to her name. Her insights have been sought after by media outlets such as NBC News, New Scientist, NPR, Grammy.com, SiriusXM, and international television channels worldwide. Under her leadership, Wave AI launched both LyricStudio and MelodyStudio, tools embraced by millions of artists and creators around the world, producing chart-topping hits and viral sensations.

Daniel Rowland | Head of Strategy and Partnerships | LANDR

Daniel Rowland is a unique combination of audio engineer, music producer, tech executive, and educator. He is Head of Strategy and Partnerships at LANDR Audio, co-founder of online DAW Audiotool, and longtime instructor of Recording Industry at MTSU. For nearly a decade, his primary focus has been on the empowerment of music creators via LANDR’s ethical AI-driven tools and accessible design, sharing that message on dozens of panels and podcasts.

As a creative, he has worked on Emmy/Oscar-winning and Grammy-nominated music projects totaling 13+ billion streams for Disney/Pixar, John Wick, Star Wars, Marvel, Jason Derulo, Seal, Nine Inch Nails, Nina Simone, Flo Rida, Adrian Belew, Burna Boy, and hundreds of others.

Saturday, January 25th, 2025; 12:00PM – 1:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Leveraging New Technologies and Practices for Music, Content and Data Asset Management During the Creative Process

Incorporating AI tools for meta-tagging, copyright protection, asset organization, and data security early in the creative process allows musicians and content creators to manage their data more effectively. AI not only improves workflow efficiency but also ensures the long-term protection and traceability of creative assets. By embedding AI in your creative process from the start, you’re safeguarding your work and streamlining collaboration, all while staying aligned with ethical data practices.

Saturday, January 25th, 2025; 1:00PM – 2:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

Technologies, Trends and Forces Shaping the Future of Music and Creative Education

Emerging technologies, such as AI and AC, are transforming the way musicians and content creators learn and innovate, opening up new avenues for personalized and adaptive instruction. This A3E session will explore how innovative tools are enhancing education by providing real-time feedback, generating creative content, and offering immersive learning experiences through cutting-edge programs and techniques. We will examine the key factors driving these changes, including the growing demand for scalable, individualized learning, the integration of cross-disciplinary skills, cultural competence, and the shift toward experiential learning. Expert discussions will provide insights into both the opportunities and challenges—such as data privacy and the potential devaluation of human creativity—that these advancements bring to creative education, as well as trends shaping the future.

Michele Darling | Chair, Electronic Production and Design (EPD) | Berklee College of Music, Boston

Michele Darling is an accomplished sound designer, composer, recording engineer, and educator. Her work – always rooted in the possibilities of sound – aligns futuristic thinking with music production and sound design. From national television shows and video games to electronic toys and music software, Michele shares her expertise across a range of media types and platforms, and her work can be heard among Emmy-winning entertainment, on the top 100 most influential toys list, and throughout multi-genre music productions worldwide. Michele is currently the Chair of the Electronic Production and Design Department at Berklee College of Music in Boston, MA. She leads the EPD department in educating students in electronic music production, sound design, sound for immersive platforms such as video games and mixed reality, new musical instrument design, and electronic music performance.

A3E Entertainment Technology

Jonathan Wyner | President | M Works Studios

Jonathan Wyner is an experienced leader and subject matter expert in audio technology. As Director of Education at iZotope Inc., he participates in product development consultation and design strategy, supports inbound marketing with the creation of public- and inward-facing educational and training assets, and serves as a blogger, web-series host, and moderator on topics related to music and audio production. He conceived, directed, and implemented iZotope’s technical facility design to serve product development, competitive testing, and the creation of marketing assets, and he develops and delivers presentations for domestic and international audiences on new audio technologies, audio signal processing, mixing, mastering, and production. Fluent in most musical idioms, with credits from many major artists, he focuses on effective and compelling depictions of artistic intent and is experienced in immersive audio production. Jonathan is a Professor at the Berklee College of Music, a writer and editor on media and audio production issues and techniques, Past President of the AES, and founder of its technical committee on AI/ML in audio.

Saturday, January 25th, 2025; 2:00PM – 3:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

An A3E DeepDive – “Three Masters: One World”: How MIDI is Driving Global Content Creation and Collaboration Through the Universal Language of Music

As both an association and a global standard, MIDI continues to drive innovation in products, technologies, and instruments, elevating artistic creativity to new heights. Join this special A3E and MIDI Association session, “Three Masters: One World”, for an in-depth, tactical exploration of how they are leading a global collaboration among musicians from diverse cultures, united by the universal language of music, and presented by these world-class master musicians:

A3E Music Entertainment Technology

Athan Billias | President | The MIDI Association

Athan Billias has dedicated his whole life to music, technical innovation, and MIDI. He played keyboard with Herb Reed and the Platters and with Jerry Martini from Sly and the Family Stone. He then moved into the musical instrument industry as Product Planning Manager at Korg, where he helped turn the Korg M1 into the largest-selling music workstation of all time. He then joined Yamaha as a Director of Marketing, where he was involved with the product planning and marketing of Yamaha products including the Motif Series and the Montage. He has been on the Executive Board of The MIDI Association for over 25 years.

Saturday, January 25th, 2025; 3:00PM – 4:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

TBA 1

Session description TBA1

Saturday, January 25th, 2025; 4:00PM – 5:00PM Pacific Time; Hilton Anaheim; 4th Floor, Hilton Huntington Room

TBA 1

Session description TBA1

A3E, A3E Research, and A3E Summits + Events operate as an independent, unbiased, objective, and agnostic industry resource, with the primary goal of providing realistic, actionable, and practicable business insights and information, as well as educational programming and content.