The issues coming from “AI” actually began long before the notion of AI even existed, with the advent of recordings. Up until then, music meant people in a room making sounds for folks to enjoy hearing. I still believe that is the purpose of music.
But with recordings came music preserved as if in amber. Suddenly every pianist is compared to Horowitz, every composer to Beethoven. At the same time, we’re blessed with getting to hear Horowitz (I never had the privilege to hear him in person), and I’ve only ever heard a select few of Beethoven’s symphonies performed live.
So there are these two parallel streams of musical existence: recordings and live performance. They intersect. I first heard Stravinsky’s Rite of Spring on records. When I was a kid, you could go into a record store and listen to any record in the store in listening booths. I spent hours in the St. Louis Stix, Baer, and Fuller department store listening to that piece. Had records not existed, I would never have heard it. Much, much later I heard it live. There’s no comparison. I heard things in the live performance—little inner voices wiggling around—that, even though I’d studied the score, I never really “heard” before.
Then came electronic music: the Hammond Organ, musique concrète, the Theremin, Oskar Sala, Karlheinz Stockhausen, the Columbia-Princeton electronic music studio, Bob Moog. All of which progressed to keyboards in almost every home, and then home studios, now all residing in a laptop that plays sampled sound in convincing combinations.
How does this relate to AI and the schlock AI-generated (I won’t say “composed”) songs on streaming services? It leaves humans out of the equation.
There’s an ethical dilemma for me. Where do I draw the line? I created this demo of a song I wrote to my sister’s lyrics using a piece of software called “Cantai.” It sings the words. It bothers me that there’s a group of singers who won’t be singing that song. There’s a group of instrumentalists who won’t be playing violins, a piano, a glockenspiel, and fretless bass. A recording studio won’t have the income from my renting time and paying for musicians.
But then…I can’t afford to hire a studio, vocalists, or instrumentalists. I salve my conscience by telling myself I’m making demos to hopefully convince some real human to perform my music. The pretend world of sampled sounds is as close as I’m going to come to getting to experience most of my music.
A good friend who devoted his life to designing and building important pipe organs in Texas and throughout the South despaired when he heard sampled versions of the organ sounds he’d spent his life perfecting. What of the organs and organ builders whose sounds were robbed (sampled) for me to use on my laptop?
Fortunately, AI-“composed” music and synthesized/sampled electronic sounds do not compare to the real thing. It’d be nice to think they never will, but someday in a Star Trek world they may. I’m glad Data realized that making music meant folks (including androids), in a room (perhaps on a starship), making sounds in the physical world for folks (and androids) to hear and enjoy, and to enjoy performing! My sister’s words paint a truth that aligns with that notion of what real music really is.
I guess I’m still not convinced we’re lucky the automobile was invented to replace the horse and buggy…for that matter, maybe the invention of the wheel wasn’t such a good idea, either.

The title of this movement from Time Grown Old – Images of the Mahabharata, which I composed back in 1995, is a literal translation of an actual phrase from the Mahabharata. The entire four movements form a concerto for pipe organ, percussion, and electronic sound. This recording features me as organist, with the University of South Florida Percussion Ensemble, Robert McCormick, Director. It was recorded at the Bayshore Baptist Church, Tampa, Florida.
