Jeri Palumbo Discusses Her Impressive Career, eSports and the Future of Audio Mixing

Q: Can you provide some background on who you are and what you do?

A: I have nearly 30 years of technical experience, ranging from live audio engineering for major league sports and entertainment broadcasts to being a recording engineer/producer and arranger, which is where it all started. I also have a background in IT and dabbled as an Avid editor. My first real job, though, was in marketing and merchandising for Radio City Music Hall.

Q: How did you get into the broadcast audio business?

A: As a composition and orchestration major at Juilliard, I was hired to do an orchestration/arrangement for an artist who was making a small indie record in New York. As the musical director on the project, I would sit with the engineer for the mix-down and part of the tracking. The engineer on that project was using a Fairlight — an audio wave manipulator that was something of a precursor to Pro Tools — and it really just rocked my world. It was the first time I ever encountered that type of tool and I was fascinated by it. I started hanging out with the engineers and watching what they were doing — how they were tracking and capturing sound — and just kind of following them around. That’s how I ended up on what I call “the other side of the glass.”

To be fair, I grew up recording and “hacking and splicing” tape before I even knew what I was doing. Being from a family of musicians, we always had various pieces of recording equipment around, one being a reel-to-reel Revere tape machine, which I still own today. I used it for every single recording I made from about age six until I went off to college at 18.

Q: What challenges come along with being a woman in audio/broadcast?

A: You really have to know your stuff – and even when you know your stuff, I feel women get questioned more than our male counterparts because engineering is still a male-dominated industry. When I walk into a new environment, even though I have 27 years of experience, I often face the same misconceptions. The best answer I have found to this is to do my job well, surround myself with good people and never second-guess my abilities. Every time someone asks me about this topic, I reference the Paul F. Davis quote: “Go where you are celebrated, not where you are merely tolerated.” I truly live my life by that viewpoint — I make my career decisions by it and I stand by it.

Q: How have you seen audio technologies evolve over your career? How has this changed sports programming and the way in which you work?

A: It’s gotten far more complicated; there are far more responsibilities on the A1 than there used to be. We have progressed significantly in the way we handle our source material and the way we disperse it — and that’s all come through the digital age and products like Calrec’s. Brands that could foresee what was coming, and that found ways not only to condense it but also to matrix it out seamlessly, have really helped the day-to-day for A1s. There is just so much that we’re now doing for domestic feeds, or any other feeds, that we didn’t do years ago. We’ve added 5.1 mixes where it used to be a stereo out. We’ve added multiple simultaneous stereo outputs for music cuts. We’ve also condensed a lot of elements that would have been impossible for one person to handle. One thing that has not changed for the A1 in the sports broadcast world is that we are still responsible for all the comms as well. In the entertainment world, you rarely see an A1 mixing their own comms, but we still do that in sports broadcast. So now our job is far more complicated — but the technology, and we as engineers, have also come a long, long way.
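To make concrete what adding a 5.1 deliverable on top of a stereo out implies, here is a minimal sketch of the textbook ITU-style fold-down from a 5.1 mix to a stereo pair. The roughly -3 dB (0.707) coefficients on the center and surround channels, and the dropped LFE, are the standard starting values, not necessarily what any given truck or console preset uses.

```python
import numpy as np

def downmix_51_to_stereo(fl, fr, c, lfe, ls, rs,
                         center_gain=0.707, surround_gain=0.707):
    """Fold a 5.1 mix down to a Lo/Ro stereo pair using textbook
    ITU-style coefficients. Inputs are equal-length sample arrays for
    front L/R, center, LFE and surround L/R; the LFE is commonly
    omitted from a broadcast fold-down, so it is ignored here."""
    lo = fl + center_gain * c + surround_gain * ls
    ro = fr + center_gain * c + surround_gain * rs
    return lo, ro

# One second of (silent) 48 kHz buffers, just to show the call shape.
sr = 48_000
fl, fr, c, lfe, ls, rs = (np.zeros(sr) for _ in range(6))
lo, ro = downmix_51_to_stereo(fl, fr, c, lfe, ls, rs)
```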

Q: What Calrec consoles do you use during broadcasts and what are some of the standout features or functions that have been helpful?

A: I use Calrec’s Artemis console – the 5.1 direct out is extremely helpful and I use it all the time, along with the auto-mixer. For me, that feature is not so much about the automatic mixing itself, but rather the fact that I never have an issue with volume wars between commentators/reporters, and I know that has a lot to do with the auto-mixer on the console. The parametric EQs can be copied from fader to fader, and the Artemis’ compressors are dynamite and work seamlessly for me. I also use the cloning functionality all the time in my workflow: I make homes for my original sources and I clone everything from those top sources. So cloning, the 5.1 direct out, the auto-mixer, the parametric EQs and great-sounding compressors are all invaluable tools from Calrec; I use them all the time.
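Calrec’s auto-mixer is its own implementation, but the “no volume wars” behavior she describes is characteristic of gain-sharing automixers in general. The sketch below is a hypothetical illustration of that idea only, not Calrec’s algorithm: each open mic gets a share of a constant total gain, so the loudest talker comes up smoothly while the others duck, with no gating.

```python
import numpy as np

def gain_sharing_automix(levels_db, total_gain_db=0.0):
    """Classic gain-sharing idea (popularized by Dan Dugan): each
    channel's gain tracks its share of the summed level of all
    channels, so the combined gain stays constant and talkers never
    fight each other for level."""
    lin = 10.0 ** (np.asarray(levels_db, dtype=float) / 20.0)
    shares = lin / lin.sum()              # each mic's share of the total
    return 20.0 * np.log10(shares) + total_gain_db

# Three commentators: one on-mic, one quieter, one barely talking.
print(gain_sharing_automix([-20.0, -35.0, -60.0]))
```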

Q: How does mixing for eSports differ from traditional broadcast sports?

A: Mixing for eSports is dramatically different from live sports. For one, there is a significant difference in air-time between the two; eSports can sometimes be live for 12 hours or more, depending on the length of a round. The other significant difference is that live sports is somewhat predictable, whereas eSports is like the Wild West. But that’s the fun part.

eSports is mostly web-based, with a lot of audio elements being sent to and stored online while you’re doing the broadcast, versus traditional live sports, which have direct audio runs. In eSports, a typical audio workflow includes direct outputs from between four and eight players’ computers, and that can expand to literally hundreds of players. Or the audio sources are stored in the cloud, and we access them in real time. This sourcing is ever-changing between games and needs. One way we might have to work is feeding into an observer position/computer matrix, a room full of people who are watching all the screens and cutting between those sources, including the independent audio feeds. At other times we source through TriCasters (switcher-like devices) for game output, or even directly from the web.
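As a rough picture of how much sourcing that adds up to for the A1, here is a hypothetical sketch of an eSports source inventory being patched onto a single program bus. The names, counts and categories are invented for the example; they are not from any particular show or console.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    kind: str          # "player_pc", "player_headset", "observer", "tricaster", "web"
    gain_db: float = 0.0

@dataclass
class ProgramBus:
    sources: list[Source] = field(default_factory=list)

    def patch(self, src: Source) -> None:
        self.sources.append(src)

    def inventory(self) -> dict[str, int]:
        # Group the patch list by kind so it is obvious what still needs a home.
        by_kind: dict[str, int] = {}
        for s in self.sources:
            by_kind[s.kind] = by_kind.get(s.kind, 0) + 1
        return by_kind

bus = ProgramBus()
for i in range(1, 9):                        # four to eight player positions is typical here
    bus.patch(Source(f"player_{i}_game", "player_pc"))
    bus.patch(Source(f"player_{i}_comms", "player_headset"))
bus.patch(Source("observer_matrix", "observer"))
bus.patch(Source("tricaster_pgm", "tricaster"))
print(bus.inventory())
```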

As the A1, you are tasked with figuring out a way to collect all the audio into one source. In addition to each player’s game audio, you have the main audio from the gameplay itself – the sounds and music built into the AI of the game – as well as that from each of the players’ headsets, because they are also interacting verbally.

Added to that are the sheer number of rounds throughout the competition and the multitude of players at one time. In these cases, we would sometimes “demux” the audio and split it from its embedded source. Sometimes I use the observer position audio feeds, but even that can be complicated. We can choose to focus on just one observer or utilize the pooled clips and audio that are being collected rather than the separated tracks.
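She doesn’t name a tool for that step, but as one illustration of what “demuxing” means in practice, the sketch below uses FFmpeg (an assumption for the example, not necessarily what her crews use) to pull an embedded audio stream out of a recorded program file so it can be treated as its own source.

```python
import subprocess

def demux_embedded_audio(program_file: str, out_audio: str) -> None:
    """Copy the first embedded audio stream out of a program recording
    without re-encoding, so it can be handled independently of the video.
    FFmpeg is used here purely to illustrate the demux step."""
    subprocess.run(
        ["ffmpeg", "-i", program_file, "-map", "0:a:0", "-c", "copy", out_audio],
        check=True,
    )

# Hypothetical example: split the audio out of an observer-position record.
demux_embedded_audio("observer_feed.ts", "observer_feed_audio.mka")
```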

In addition to the complicated nature of all the audio inputs, we also traditionally mix to both 5.1 and live broadcast stereo formats. During a recent gaming event, I was tasked with the most complicated scenario I've ever encountered in my career: to mix a live broadcast while simultaneously mixing and sending an immersive 5.1 surround feed to the house...with no front-of-house. To be clear, this workflow is not something I would ever choose, because there is so much that could go wrong, but that was the option I was given at the time of this particular launch. We hit the Meyer Galaxies direct from the Calrec Artemis and had the feeds broken down by zone. Doing FOH alone with the immersive surround would be complicated enough; the only reason this format worked was that the audio crew and the support from the Calrec LA team were stellar.

Q: What are some of the important elements to mixing for eSports?

A: I typically build my layers early in the game, knowing that I will likely change my layer layout each time a new game comes on board. For my most recent eSports project, I had 12 layers of 32 channels on the Artemis, and I laid out every single element that was in the studio. I put libraries on the bottom layer so that I could clone them at any point in time and always have them available while I was mixing. Once I had everything stacked, everything would go to the bottom two layers. I had one layer dedicated strictly to transmission, because we also had 5.1 going to the floor. When it came time for transmission, I went to that layer and got it locked up quickly. The cloning aspect of my workflow saved me invaluable time because I always had the elements there, and it kept me from accidentally deleting something.
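One way to picture the layer discipline she describes (12 layers of 32 faders, libraries parked on the bottom, a dedicated transmission layer) is as a simple fader map. The labels below are hypothetical stand-ins; the Artemis itself is configured from its own surface and software, not from code.

```python
# Hypothetical picture of a 12-layer x 32-fader layout like the one described:
# game-specific sources on the top layers, libraries and transmission parked
# on the bottom layers so they stay put from game to game.
FADERS_PER_LAYER = 32
NUM_LAYERS = 12

layers = {n: [None] * FADERS_PER_LAYER for n in range(1, NUM_LAYERS + 1)}

def assign(layer: int, fader: int, label: str) -> None:
    layers[layer][fader - 1] = label

# Top layers get rebuilt for each new game...
for fader, label in enumerate(
        ["player_1_game", "player_1_comms", "caster_A", "caster_B"], start=1):
    assign(1, fader, label)

# ...while the bottom layers never move for the whole event.
assign(11, 1, "music_library")      # cloned up to a working layer when needed
assign(12, 1, "TX_stereo")          # dedicated transmission layer
assign(12, 2, "TX_5.1_to_floor")
```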

Additionally, since this was an unusual situation where we were running a 5.1 mix to the floor with no front of house, I built in a redundant system. I had one layer of stems, with the 5.1 to the floor broken out completely, and then I had all my stems collapsed in another section, so all I had to do was move one fader and I would still be hitting all of my 5.1 speakers via the Calrec 5.1 direct output feature. In other words, I could stem it out to the floor individually or go 5.1 direct-out in seconds, if needed. I had a backup plan, so anytime somebody asked me for a particular stem on a different speaker send, I would immediately have it.
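Read one way, that redundancy is simply two parallel paths to the same six speaker feeds: individual stem faders on one layer, and the same stems collapsed behind a single fader feeding the 5.1 direct output. The sketch below is only an analogy for that routing idea, with invented labels, not a description of a Calrec feature.

```python
SPEAKERS = ["L", "R", "C", "LFE", "Ls", "Rs"]

# Path 1: every stem broken out to the floor on its own fader.
stem_faders = {spk: f"stem_{spk}_to_floor" for spk in SPEAKERS}

# Path 2: the same stems collapsed behind one fader feeding the 5.1
# direct output, so a single move still hits all six speaker zones.
collapsed_51 = {"fader": "5.1_direct_out", "feeds": SPEAKERS}

def route(request: str) -> str:
    """Hand out an individual stem when someone asks for one speaker
    send; otherwise everything rides the collapsed 5.1 group."""
    return stem_faders.get(request, collapsed_51["fader"])

print(route("Ls"))        # -> stem_Ls_to_floor
print(route("program"))   # -> 5.1_direct_out
```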

Q: Can you share any unique aspects about your workflow using the Artemis?

A: There are some really unique things about the Artemis that are brilliant. That includes the auto-mixer (and not being afraid to play with it) and the great-sounding parametric EQs and compressors, but for me, the biggest “a-ha” is the way Calrec does its matrix out. I was able to get really creative with how many IFB sends I had, using the cloning feature and the 5.1 direct outs. I set up as many IFBs and audio feeds as I could, and I’d only matrix each one out according to the game requirements. For me, no matter the application, I build my shows with every aspect that I can possibly conjure up and have them on the board and ready for recall, whether it’s IFBs, floor sends or any other request.

In eSports, I often do a direct-out 5.1 from the Calrec, which allows me to send a 5.1 mix down one path instead of having to break it out into individual stems, although I keep a layer of 5.1 stems ready at all times as well. For me, that’s a brilliant option, because you don’t have time to do a 5.1 immersive mix while you’re doing a broadcast. I keep that structure throughout all the shows, and I only have to adjust the top layers between each broadcast.

Q: Where do you see the future of sports mixing going in the next five years? Any trends that will change how people mix audio?

A: There’s a very clear shift to IP, similar to the analog-to-digital changeover. The technology and the workflows are becoming more and more advanced. Localized IP networking is now being set up at the site itself for cameras, audio and various other components, eliminating the need for a full staff on-site. The technology has been around, but it’s now being used at a far greater, more sophisticated and intelligent level. There are all kinds of ways that delivery is transitioning to IP, and people are now taking advantage of that to the point where I think it’s going to change the television landscape forever. In fact, look at the way we access content now – the change is already here.