Coordinating the Super Bowl's visual fidelity with Elixir
187 comments
March 26, 2025
laserbeam
Of course! Of course you have to do color correction on all the different cameras pointed from different angles at a sports event.
I absolutely love reading about hard problems that are invisible to most people.
abrookewood
Yes, it's one of those super-niche and super-important functions that are obvious once you know about them, but that you would never think about otherwise.
myst
Why is it super-important?
GiorgioG
Because without this type of setup, events using many cameras would have jarring visual differences when switching from one camera/view to another during the broadcast.
lo_zamoyski
I don't think he's saying the end this tech is serving is super important (televised professional sports), only that in order to televise professional sports and other events requiring similar camera work, it is important to do this kind of stuff.
pdntspa
If you've ever watched a poorly-produced porno where the colors change with every camera cut, you'll know why....
rekttrader
It’s the basis for the 99% Invisible podcast... I loved the one about elevators.
johnisgood
[flagged]
dugmartin
You would be surprised. When my brother was a movie theater manager he got invites to a lot of pre-screenings of movies. I was able to go with him a few times and I'll never forget seeing my first movie print before it went through color correction and ADR/sound balancing. Without those steps (and I'm sure others I'm not aware of) the movie experience was very jarring (and somewhat funny).
sagacity
Your comment is dismissing the entire field of color correction. That is not just a thing for this project, it is a part of literally every movie and TV show you watch and has been since the inception of colour film.
PaulHoule
I got into color grading still photos last summer. In my case it is not "correction" to the truth but rather making a set of images conform to a brand image. (I had a day when I went out to a beauty spot and packed the wrong lens; I made up a story about another photographer who had a camera from an alternate timeline and developed a method to take distinctive pictures with a cheap lens.)
Funny, the only kind of picture I don't color grade is sports photos, because I don't want to mess up the color of the jerseys, though if I was careful about how I did it, it would be OK.
I have been struggling to develop a reliable process for making red-cyan anaglyphs and one step of the process would be a color grade that moves colors away from reds and cyans that would all be in one eye or the other eye. I've got to figure out how to make my own LUT cubes to do it.
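For what it's worth, a .cube file is just a text header plus N³ RGB triples with red varying fastest. Here's a minimal Elixir sketch (hypothetical, not the poster's process; the shift/1 grade is a placeholder where a real red/cyan-avoiding transform would go):

    defmodule CubeLut do
      @size 17  # common small LUT resolution; grading tools interpolate between points

      def write(path) do
        step = 1.0 / (@size - 1)

        body =
          # .cube convention: red varies fastest, then green, then blue
          for b <- 0..(@size - 1), g <- 0..(@size - 1), r <- 0..(@size - 1) do
            {r1, g1, b1} = shift({r * step, g * step, b * step})
            "#{Float.round(r1, 6)} #{Float.round(g1, 6)} #{Float.round(b1, 6)}\n"
          end

        File.write!(path, ["LUT_3D_SIZE #{@size}\n" | body])
      end

      # Placeholder grade: pull strong reds slightly toward their luma.
      # A real anaglyph-friendly LUT would rotate hues away from red/cyan here.
      defp shift({r, g, b}) do
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
        mix = 0.3 * max(r - max(g, b), 0.0)
        {r + (luma - r) * mix, g + (luma - g) * mix, b + (luma - b) * mix}
      end
    end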
jjulius
Are they dismissing it, or are they just ignorant (which is totally OK) and need to be shown the way? They've literally asked if it's "really important"; perhaps we could answer that question?
johnisgood
I did not intend to dismiss color correction as a whole, my bad if it came across as such.
PaulHoule
It's about more than color correction. The software they have lets people in the control room set all the parameters on the cameras, so instead of having a camera operator do it behind the camera they do it from the control room, which might even be on another continent.
andruby
With all the switching between camera angles during a sports broadcast, the difference in white balance, brightness and color grading would be really distracting and annoying.
mikedelfino
Perhaps it’s so important that you take it for granted, even though it took a great deal of effort from others to make sure you don’t notice the problem in the first place.
YesBox
This lovely color correction article was posted on HN years ago: https://prolost.com/blog/2010/2/15/memory-colors.html
dagi3d
If it wasn't that important, no one would buy it. It doesn't matter how good your salespeople are: if the product doesn't solve a real problem, it's very unlikely you will sell it in a sustainable way.
johnisgood
> if the product doesn't solve a real problem, it's very unlikely you will sell it in a sustainable way.
I do not believe this. If you look around, there are many non-issues being sold as real problems[1], and people buy it. People buy all sorts of crap; that is just consumerism in effect. If you did sales, you probably know this. Same thing with "bullshit jobs". Perhaps "sustainable" is the keyword here, but I am not so sure about that either.
[1] Snake oil comes to mind. Pretty flourishing business.
chamomeal
This is sorta beside the point about color-grading, but I don't entirely agree about a product needing to solve a real problem.
I worked at a startup that had decent tech but a shit product. It wasn't focused enough to really solve clients' issues. Maybe it alleviated some issues, but it also introduced more. It was disliked by the people who actually had to use it. But our sales guy was really good at convincing those people's bosses that it would make the company more money.
It was a total top-down sales approach. Throw a bunch of buzzwords at the founder/CFO/boss, and they force it on the people actually doing the work. I hated it, and it worked so well that fixing the product was never a priority. It was always new "features" to slap more buzzwords onto the sales pitch. I really think it could've been a good product, too!
laserbeam
The article doesn't go into detail about how they solve that, but that's the key problem they highlight as being solved. It's a product which manages multiple cameras for events, and color correction is one of those "obvious in hindsight" problems to be solved.
johnisgood
Managing multiple cameras is definitely something I would consider important, but keep in mind I am not knowledgeable at all about the entertainment industry.
anttiai
My reading is that Cyanview controls color settings in the cameras, but the video doesn't run through their product. I wonder if an AI model could efficiently balance colors after the video mixer, especially if the incoming feed was in 10-bit color depth and the outgoing feed 8-bit.
sschueller
Someone tracked every single camera shot during the halftime show: https://www.youtube.com/watch?v=YXNWfFtgbNI
jcalx
You can also see what it looks like from the control room on Hamish Hamilton's YouTube channel, with the AD calling out shots and all: https://m.youtube.com/watch?v=gfjWjkTP4p8. (Hamish Hamilton has directed every Super Bowl halftime show since 2010.)
oplav
John DeMarsico directs the SNY broadcasts for the NY Mets and sometimes posts behind-the-scenes videos of how all the cameras come together into a production. I think they are pretty interesting to watch.
ZeWaka
> Without any marketing, it earned a reputation among seasoned professionals and became a staple at the world’s top live events.
Sounds like the entertainment industry. Everyone truly knows everyone, especially when you're working on the same show with the same crew year after year.
It's definitely a family of sorts.
pharrington
"Without any marketing" is also an obvious lie, since Cyanview has a storefront website and marketing posts on Linkedin.
davidbou
I don't think it was meant to be taken literally (we didn't write the article). We'd actually love to do more marketing; we barely have time for it, though. We don't have a storefront website, just a basic site with outdated product info, but we dedicate all our efforts to the support section. We post on LinkedIn a couple of times a year to reassure everyone that we're still alive, but that's hardly a real marketing strategy. Currently our sales come from word of mouth and industry connections, not much from marketing. Hopefully, we'll find the time to step it up in the future!
pharrington
Yeah, reflecting on it, the article was obviously just being hyperbolic. I think I'm just on a hair trigger for anything bordering on falsehood because of the current state of my country (USA). Also, "storefront" was a poor word choice; I was originally going to say "professional," but decided against it for some reason.
Regardless, just keep making quality software that sells itself!
lawik
I admit to hyperbole.
The interesting part is that the main marketing and sales is by word of mouth and quality of product. Not even all of the hardware is on the website, which was very confusing to me when writing. It makes sense under the resource constraints.
caseyohara
Also this article seems more like an ad for Cyanview than for Elixir. Smells like content marketing to me.
mtndew4brkfst
Most of the Elixir in Production posts on the official blog (including this one) come off this way, IMO:
https://elixir-lang.org/blog/categories.html#Elixir%20in%20P...
It's pretty much just appeal to authority. "These people are successful and they used Elixir, why don't you?"
lawik
Really?
I can spill all the juicy details as the main author and instigator.
Cyanview reached out to me to help find a dev a while back. Hearing about their customers, I knew it would be a decently big splash for Elixir. I was surprised that they were unknown and had this success with big household-name clients.
I like them. I like their whole deal. Small team, punching above their weight. Hardware, software, FPGAs and live broadcasts. The story has so much to it. David and team have been great sports in sharing their story.
Fundamentally I care more about Elixir adoption though, I reached out to the Elixir team and offered to interview them and write something up.
A case study about successful Elixir production deployments is definitely content marketing. But for Elixir. It is a very common question when mentioning a less common language. "Who uses this?" I thought it was a very interesting case. Glad to have it documented. The style of a case study won't suit everyone.
I suppose "without any marketing, before _this_" would have been funny.
ram_rar
Great to see Elixir gaining traction in mission-critical broadcast systems! I wonder: how much of Cyanview's reliability comes from Elixir specifically versus just a good implementation of MQTT? And are there any specific Elixir features that were essential and couldn't be replicated in other languages?
ghislainle
Main developer here.
We use MQTT a lot; it is really a central piece of our architecture. But Elixir brings a lot of benefits regarding the handling of many processes which are often loosely coupled. The BEAM and OTP offer a sane approach to concurrency, and Elixir is a nice language on top. Here are what I find to be the most important benefits:
- good process isolation; even the heap is per process. This allows us to have robust and mature code running alongside more experimental features without the fear of everything going down. And you still have easy communication between processes
- the supervision tree allows easy process management. I also created a special supervisor with different restart strategies. The language allows this, and it then integrates like any other supervisor. With network connections being broken and later reconnected, the resilience of our system is tested regularly, like a physical chaos monkey
- the immutability as implemented by the BEAM greatly simplifies writing concurrent code. Inside a process, you don't need to worry about the data changing under you; no other process can change your state. So no more mutexes/critical sections (or very little need). You can still have deadlocks though, so it is not a silver bullet
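As a rough illustration of the supervision point (a minimal sketch, not Cyanview's code; CameraConnection is a hypothetical worker module):

    defmodule CameraSupervisor do
      use Supervisor

      def start_link(cameras), do: Supervisor.start_link(__MODULE__, cameras, name: __MODULE__)

      @impl true
      def init(cameras) do
        children =
          for cam <- cameras do
            # One isolated process per camera link: if a connection crashes or
            # drops, only that child restarts; the rest keep running.
            Supervisor.child_spec({CameraConnection, cam}, id: {:camera, cam.id})
          end

        Supervisor.init(children, strategy: :one_for_one, max_restarts: 10, max_seconds: 5)
      end
    end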
somethingsome
Hey, it's nice to see a very successful business in Belgium in this space!
I work at the university and we build acquisition systems with exotic cameras and screens. Do you think we could meet sometime to discuss possible (commercial and research) projects?
dist-epoch
Have you looked at stuff like NATS/Jetstream instead of raw MQTT?
davidbou
MQTT is used for messaging between processes on the embedded device itself, which can be the remote control panel, or a camera node. The panel itself is driven by a microcontroller which gets all the parameters to display and request changes through MQTT. If the camera is controlled locally, like on a LAN, then another process picks up the action and handles the communication with the camera. If the camera is remote (over cellular for example), we don't rely on the bridging functionality that some MQTT brokers provide but rather use Elixir sockets to send the data over. Typically parameter changes would be sent towards the camera and new status would be populated back to everyone. In most cases it's been a single control room, sometimes 2 at different locations, and one camera site so the needs for a wide distributed architecture hasn't been felt yet.
One of the next steps would be to have a real cloud portal where we could remotely access cameras, and manage and shade them from the portal itself. In this context we have been advised to look at NATS. Remote production, or REMI, is now getting more traction, and some of our clients handle 60+ games at the same time from a central location. That definitely creates new challenges: centralizing control is a need, but keeping processes and hardware distributed is key to keeping the whole system up if one part fails.
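A hedged sketch of the routing described above, written against the Tortoise MQTT client that comes up later in this thread (whether Cyanview actually uses Tortoise isn't stated; the topic layout and helper functions are invented for illustration):

    defmodule ParamRouter do
      use Tortoise.Handler

      # Tortoise delivers the topic as a list of levels.
      @impl true
      def handle_message(["camera", cam_id, "set", param], payload, state) do
        case lookup_route(cam_id) do
          # Camera on the LAN: a local process speaks its native protocol.
          {:lan, pid} ->
            send(pid, {:set_param, param, payload})

          # Remote camera (e.g. over cellular): forward on a plain socket
          # instead of relying on MQTT broker bridging, as described above.
          {:remote, socket} ->
            :gen_tcp.send(socket, encode(cam_id, param, payload))
        end

        {:ok, state}
      end

      def handle_message(_topic, _payload, state), do: {:ok, state}

      # Placeholders: real code would keep a routing table and wire framing.
      defp lookup_route(_cam_id), do: {:lan, self()}
      defp encode(cam_id, param, payload), do: [cam_id, " ", param, " ", payload]
    end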
travisgriggs
Which MQTT library are you using? Did you roll your own?
travisgriggs
I ask because we’ve taken up Elixir and we use MQTT (also with a custom RPC on top) to coordinate ag irrigation. But I’ve been very frustrated with the state of MQTT implementations on Elixir (or lack of good documentation). I’m wondering if I’ve just missed an obvious one. We currently use a fork of Tortoise, but it has some issues.
Feel free to contact me, details in profile.
jerf
This is Elixir/Erlang/BEAM's core use case, the thing it was designed to do: coordinating and routing a large number of realtime feeds, with failover and fallbacks. The original use case was phone calls, but other than the fact that these video streams are much, much larger per second, most of the principles carry over.
As much as I am a critic of the system, if this is your use case, this is out-of-the-box a very strong foundation for what you need to get done.
davidbou
Yes, this was one of our initial considerations when we first started, and the telecom analogy of the original Erlang development application was one of the main reasons we took this approach. Now, we only "stream" metadata, control data, and status. Even though we manage video pipelines and color correctors, the video stream itself is always handled separately.
For anyone interested in the video stream itself, here's a summary. On-site, everything is still SDI (HD-SDI, 3G-SDI, or 12G-SDI), which is a serial stream ranging from 1.5Gbps (HD) to 12Gbps (UHD) over coax or fiber, with no delay. Wireless transmission is typically managed via COFDM with ultra-low latency H.264/H.265 encoders/decoders, achieving less than 20ms glass-to-glass latency and converting from/to SDI at both ends, making it seamless.
SMPTE 2110 is gaining traction as a new standard for transmitting SDI data over IP, uncompressed, with timing comparable to SDI, except that video and audio are transmitted as separate independent streams. To work with HD, you need at least 10G network ports, and for UHD, 25G is required. Currently, only a few companies can handle this using off-the-shelf IT servers.
Anything streamed over the public internet is compressed below 10 Mbps and comes with multiple seconds of latency. Most cameras output SDI, though some now offer direct streaming. However, SDI is still widely used at the end of the chain for integration with video mixers, replay servers, and other production equipment.
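A quick back-of-the-envelope check of those port sizes (active 4:2:2 10-bit video only; SMPTE 2110-20 drops SDI blanking, so real rates sit a bit under the SDI line rates quoted above):

    defmodule Bitrate do
      # 4:2:2 averages 2 samples per pixel (full-rate Y, half-rate Cb and Cr)
      def uncompressed_gbps(w, h, fps, bits_per_sample) do
        w * h * 2 * bits_per_sample * fps / 1.0e9
      end
    end

    Bitrate.uncompressed_gbps(1920, 1080, 60, 10)  # ~2.49 Gbps -> fits a 10G port
    Bitrate.uncompressed_gbps(3840, 2160, 60, 10)  # ~9.95 Gbps -> hence 25G for UHD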
jerf
I was tempted to go into the fact that the video streams wouldn't pass through BEAM, because that would be crazy, but I cut it out.
AIUI, technically, the old phone switches worked the same way. BEAM handled all the metadata and directed the hardware that handled the phone call data itself, rather than the phone call data directly passing through BEAM. In 2025 it would be perfectly reasonable to handle the amount of data those switches dealt with in 2000 through BEAM, but even in 2025, and even with voice data, if you want to maximize your performance for modern times you'd still want actual voice data to be handled similarly to how you handle your video streams, for latency reliability reasons. By great effort and the work of tons of smart people, the latency sensitivity of speech data is somewhat less than it used to be, but one still does not want to "spend" your latency budget carelessly, and BEAM itself is only best-effort soft realtime.
zaik
> couldn't be replicated in other languages?
All programming languages can do any task. It's about how easy they make that task for you.
zwnow
Yeah, and with that I'd think it would be a pain in the ass trying to replicate BEAM behavior in different langs.
thibaut_barrere
This is true in general, but only until it isn't.
For instance, Elixir supports compilation targeting GPUs (within exactly the same language, not a fork).
Most languages do not allow that (and for most it would be fairly hard to implement).
goatlover
For any finite, computable task, as long as the language has access to the hardware that can perform the task in practical time, assuming the language doesn't present any compilation or memory issues to take advantage of said hardware in practical time for the task to be worth computing.
jdufawdfas
[flagged]
dorian-graph
> are there any specific Elixir features that were essential and couldn't be replicated in other languages?
From the article:
> “Yes. We’ve seen what the Erlang VM can do, and it has been very well-suited to our needs. You don’t appreciate all the things Elixir offers out of the box until you have to try to implement them yourself.
innocentoldguy
I have used Elixir in critical financial applications, B2B growth intelligence applications, fraud detection applications, scan-and-go shopping applications, and several others.
In every case, like the engineering team in this article demonstrates, the developer experience and end results have exceeded expectations. If you haven’t used Elixir, you should give it a try.
Edit: Fixed an editing error.
roughly
Elixir and Erlang have always garnered a lot of respect and praise - I’m always curious why they’re not more widely used (I’m no exception - despite hearing great things for literal decades, I’ve never actually picked it up to try for a project).
solid_fuel
I've thought about this a lot, and I think that part of what hurts Erlang/Elixir adoption is the scale of OTP. It brings a ton of fantastic tools, like supervision trees, process linking, ETS, application environments and config management, releases, and more. In some ways it's closer to adopting a new OS than a new programming language.
That's what I love about Elixir, but it means that selling it is more like convincing a developer who knows and uses CSV files to switch to Postgres. There are a ton of advantages to storing data in a relational DB instead of flat files, but now you have to define a schema up front, deal with table and row locking, figure out that VACUUM thing, etc.
When you're just setting out to learn a new language, trying to understand a new OS on top hurts adoption.
AlchemistCamp
I think most people tend to stick with what they learn first or hop to very similar languages. Schools generally taught Java and then more recently Python and JS, all of which are relatively similar.
Unless someone who knows those three languages is curious or encounters a particular problem that motivates them to explore, they're unlikely to pick up an immutable, functional language.
innocentoldguy
I think you’re right. I only picked up Elixir about 10 years ago, after getting frustrated with Python’s GIL and Java’s cumbersomeness, and feeling that object-oriented programming overcomplicates things and never lived up to its hype.
I have never looked back.
Elixir is an absolute joy to use. It simplifies multi-threaded programming, pattern matching makes code easier to understand and maintain, and it is orders of magnitude faster to code in than Java. For me, Elixir’s version of functional programming provides the ease of development that OOP promised and failed to deliver.
In my opinion, Elixir is software engineering’s best kept secret.
joehosteny
We use it in our robotics startup, and I wholeheartedly agree.
As an example, we just rolled out a feature in our cloud offering that allows a user to remotely call a robot to a specified waypoint inside a facility, and show real-time updates of the robot's position on its map of the world as it navigates there. We did this with just MQTT, LiveView, Phoenix PubSub, and a very small amount of JS for map controls. The cloud portion of this feature was basically built in 2-3 weeks by one person (minus some pre-existing code for handling the display of raw map PNGs from S3, existing MQTT ingress handling, etc.).
Of course you _can_ do things like this with other languages. However, the core language features are just so good that, for our use cases, they blow the other choices out of the water.
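For readers who haven't seen the pattern, a minimal sketch of an MQTT-fed LiveView of the kind described (module names, topic, and message shapes are invented, not the poster's code):

    defmodule RobotWeb.MapLive do
      use Phoenix.LiveView

      def mount(%{"robot_id" => id}, _session, socket) do
        # Subscribe to the position updates the MQTT ingress broadcasts.
        if connected?(socket), do: Phoenix.PubSub.subscribe(Robot.PubSub, "robot:#{id}")
        {:ok, assign(socket, position: nil, robot_id: id)}
      end

      # Ingress broadcasts {:position, %{x: ..., y: ...}} on the topic;
      # LiveView pushes the new position straight to the browser.
      def handle_info({:position, pos}, socket) do
        {:noreply, assign(socket, position: pos)}
      end

      def render(assigns) do
        ~H"""
        <div id="map" data-x={@position && @position.x} data-y={@position && @position.y}>
          Robot <%= @robot_id %>
        </div>
        """
      end
    end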
_rs
Would you be open to dropping an email address? I'd love to chat about your experience with Elixir for financial applications if you have any time
eggy
Would Gleam be practical for a similar application aside from the OTP/BEAM runtime? I am guessing you'd have to leverage Elixir libraries that are not present for Gleam yet, and you might have slower compile times due to static typing, but you'd catch runtime errors sooner. Would it be more of a debugging vs. fast dynamic iteration trade-off? I am looking to settle on either Gleam or Elixir. I liked Gleam's original ML syntax before, but I like static typing. Thoughts? I am replacing C with Zig, and I am brushing up on my assembly by adding ARM to my x64 skill set.
AlchemistCamp
> you'd catch runtime errors sooner
I don’t think there’s any evidence whatsoever that you would catch runtime bugs sooner with Gleam than with Elixir (or Erlang). Erlang’s record for reliability is stronger than many statically typed languages, including even Java.
There is a certain class of errors static types can prevent but there’s a much larger set of those it can’t. To make the case for a language like TS/Java/Swift/Golang or Gleam actually resulting in fewer runtime defects than Erlang or Elixir, I’d want to see some real world data.
eggy
It depends on what “sooner” means to you. Gleam catches more before the code runs; Elixir catches them when they happen but recovers gracefully. If you’re paranoid about bugs reaching users, I would think Gleam’s your pick, no? If you trust your tests and love dynamic freedom, Elixir should be fine. I don't have much experience with either language. I did more in Erlang 8 years ago, but not much. I am on the edge of choosing Gleam over Elixir. It's mainly subjective: I prefer the syntax in Gleam, although I liked the original ML-like syntax when it first came out.
__jonas
> There is a certain class of errors static types can prevent but there’s a much larger set of those it can’t
Maybe you can go into this more, but I don't really understand what that means, what is this larger set of runtime errors that can't be prevented by static typing?
I use a bit of Elixir, and I'd say most of the errors I'm facing at runtime are things like "(FunctionClauseError) no function clause matching", which is not only avoidable in Gleam, but actually impossible to write without dipping into FFI.
I'm excited for more static typing to come into Elixir, as it stands I'm only really confident about my Elixir code when it has good test coverage, and even then I feel uneasy when refactoring. Still a fun language to use though.
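A two-line illustration of that error class: the clause covers only the :ok shape, and the gap surfaces at runtime rather than at compile time.

    defmodule Fetch do
      # No clause for {:error, reason}; nothing flags this before runtime.
      def unwrap({:ok, value}), do: value
    end

    Fetch.unwrap({:error, :timeout})
    # ** (FunctionClauseError) no function clause matching in Fetch.unwrap/1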
AlchemistCamp
Depending on the language and the static type system, they typically can't prevent errors related to:
- Logic errors
- Null or Undefined values (prevented in many newer languages)
- Out-of-bounds errors
- Concurrency-related issues
- Arithmetic errors (undefined operations, integer overflow, etc)
- Resource management errors
- I/O errors
- External system failures
- Unhandled exceptions (e.g., RuntimeException in Java)
If you use a language like Rust, you can get help from the type system on several of these points, but ultimately there's a limit to what type systems can do before becoming too complex.
ghislainle
One criticism I have of Elixir is the lack of typing (they are working on it now, but I have yet to use it). So yes, I think Gleam would be nice. But when we started, it was not even at version 0.1 (and I had not heard of it).
I suppose we could have a mixed-language project, with Erlang, Elixir, and Gleam. Not sure about the practicality of it, though.
eggy
Amazing work, and certainly for such a tentacled project, good enough is good enough. I only brought up Gleam vs. Elixir because I am going to pick one to learn this year. I've played with LFE too, and, as I wrote earlier, I played with Erlang for a bit.
widdershins
Gleam has a subset of OTP functionality already [1]. It also compiles extremely quickly. I haven't made any huge projects yet, but I've used some fairly chunky libraries and everything compiles super quickly.
nesarkvechnep
It’s subpar at the moment.
JSR_FDED
It’s always surprised me how the world of digital video is a cousin of IT yet is impenetrable to people outside the video industry. How they refer to resolutions, colors, networking, storage is (almost deliberately?) different.
davidbou
This gives an idea of the parameters we cover for the roughly 200 different models of broadcast camera we support so far. These are only for tweaking the image quality, which is the job of the video engineer (vision engineer in the UK). We usually don't cover all the other functions a camera has, which may be more intended for the camera operator himself. The difficulty is bringing some consistency to so many different cameras and protocols.
noisy_boy
Do you "normalize" the parameters to some intermediate config so that everything behind that just needs to work with that uniform intermediate config? What about settings that are unique to a given device?
davidbou
That was the idea—we started by normalizing all the standard parameters found in most cameras. The challenge came when we had to incorporate brand-specific parameters, many of which are only used by a single manufacturer. Operators also weren’t keen on having values changed from what the camera itself provided, as some settings serve as familiar reference points. For example, they know the right detail enhancement values to use for football or studio work. So, we kept normalization for the key functions where it made sense, but for other parameters, we now try to stay as close as possible to the camera’s native values.
As for the topics on MQTT, they function as a kind of universal API—at least internally. Some partners and customers are already using them to automate certain functions. However, we haven’t officially released anything yet, as we wouldn’t be able to guarantee stability or prevent changes at this stage.
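A minimal sketch of that compromise (the vendor module, parameter names, and ranges are all hypothetical): common functions get one normalized scale, everything else passes through in the camera's native values.

    defmodule CameraParams do
      # Normalized keys every backend must support.
      @common [:iris, :black_level, :white_balance]

      def set(backend, param, value) when param in @common do
        backend.to_native(param, value)   # e.g. 0..100 -> vendor units
      end

      # Brand-specific parameter: no translation, keep native reference points.
      def set(backend, param, value), do: backend.passthrough(param, value)
    end

    defmodule SonyBackend do
      # Hypothetical mapping of a normalized 0..100 scale onto raw ranges.
      def to_native(:iris, pct), do: {:iris, round(pct / 100 * 4095)}
      def to_native(:black_level, pct), do: {:black, round(pct / 100 * 255)}
      def to_native(:white_balance, kelvin), do: {:wb, kelvin}
      def passthrough(param, value), do: {param, value}
    end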
walrus01
People who only ever work with 'consumer' video equipment need extra training and a back-to-basics set of reading material to understand things like the difference between 4:2:0 and 4:2:2 chroma subsampling, or why serious cinema cameras record video ungraded, or what the color grading process in a post-production workflow looks like (and the different aesthetic choices of grading that might be possible). That's before even getting into things like raw yuv/y4m uncompressed video, or very-high-bitrate barely-compressed video, or generating proxy footage to work with in an editor because the raw is too much of a firehose of data to handle even on a serious workstation...
I would say that unless you have a professional reason, there's very little benefit to the average end-user to do a deep dive into it. If your intention is to spend $7000 on a RED camera and then $13,000 on lenses, gimbal, cage, follow focus, matte box, memory cards etc to make a small and cost effective single camera production package, then by all means, dig into it.
keane
4:2:0 vs 4:2:2 for anyone curious: https://youtu.be/7JYZDnenaGc?feature=shared&t=101 and https://www.red.com/red-101/video-chroma-subsampling
dist-epoch
Grading is abused so much these days, it's like a curse. You have a pristine video chain, only to turn it all into yellow/blue at the end.
davidbou
There's a notable difference between shading and grading.
Shading is essential in the TV industry, where the goal is to ensure all cameras are perfectly matched in exposure, tone curve, and colors. This ensures seamless transitions between camera angles, maintaining consistency in skin tones, fine details, and the color of the grass and sky. A crucial aspect of shading is accurately reproducing sponsor logos' colors, which can sometimes be the starting point, as that's where the money comes from. Creativity plays a lesser role here, as the focus is on following industry standards such as ITU-R BT.709 for SDR, or ITU-R BT.2020 and HLG for HDR.
Grading, on the other hand, is a creative process meant to give a distinctive look to a production. Traditionally done in post-production, it can now also be applied in real time using tools similar to those found in post-production software. Despite this, it is often still refined further in post. Live grading is commonly used for events such as concerts and fashion shows, where you want to look different from TV productions.
markb139
30-odd years ago, part of my role was to colour balance cameras in a studio environment. We didn’t need computers - but at most there were only 5 cameras :)
frankfrank13
Really cool piece, this jumped out to me:
> The devices in a given location communicate and coordinate on the network over a custom MQTT protocol. Over a hundred cameras without issue on a single Remote Control Panel (RCP), implemented on top of Elixir’s network stack.
Makes sense! MQTT is, if I understand right, built on TCP. Idk if I would have found the same solution, but it's seemingly a good one.
notepad0x90
What is being used in similar broadcast setups outside of this Superbowl?
davidbou
Major events use it for all kinds of specialty cameras, as they already have the technology for the main studio cameras. So we had to develop solutions for everything that was not working. And major productions have budgets for all kinds of new toys: mini-cams, drones, cable cams, now a cinematic look from small mirrorless cameras, slow motion, etc. That opened up a whole lot of possibilities to be creative, but you have to be as reliable as the main cameras and aim for the best image quality.
Now the same products are used for very small productions that don't have the budget for any studio camera (typically 50k+ for a camera without a lens). In that case we try to provide a similar user experience and functions, but with much more affordable cameras.
Finally, more and more live productions are now handled using cine-style cameras which don't have the standard broadcast remote panels, and that's another area we cover, by combining camera control with control of many external boxes, like external motors to drive manual lenses or 3D LUT video processors. Applications are fashion shows, concerts, theater, churches, studio shows, even corporate.
In the end, Elixir is used for a lot of small processes which handle very low-level control protocols, and then on top of that it adds a high level of communication between devices, either on local networks or over the cloud.
mcintyre1994
> Now the same products are used for very small productions that don't have the budget for any studio camera
Just out of curiosity, what would be examples of very small productions here? Would an independent YouTube channel with great production quality be using this?
davidbou
Typically 4-camera setups where a single remote can control all of the cameras. For a classical concert, they would use 2 PTZ robotic cameras and 2 mini-cams on some artists and instruments. There is no camera operator at the camera side (for cost reasons), so a single operator has to do it all.
One important point: if you are not live, then there's usually the possibility to adjust everything manually on the camera and then finish in post-production, so our remotes are nearly never used outside the constraints of live productions.
In the opposite direction, I heard that they had around 250 cameras on Love Island, but you can pretty much control everything from one or 2 remotes, as there isn't a need for a lot of changes at a single time. The action only happens in front of a few of them. That said, we still have 250 processes running and controlling these cameras continuously.
lawik
The extreme upper range of YouTube channels sometimes use a RED camera. I've not seen a lot of ARRI in YouTuber behind-the-scenes. Usually they go with a high-end prosumer full-frame mirrorless Sony, Canon, or equivalent. Those are probably below what Cyanview's stuff is intended for, or just on the edge of what gets used.
I suppose the FX30, FX3, and FX6 are in Sony's cinema line and may have all the color stuff that these systems want to tweak, but I'm not sure. These cameras do get a fair bit of play on YouTube.
imjonse
According to the article this software is used for all major sporting events.
brcmthrowaway
Wait, is Elixir actually accessing color pixel data in realtime?
davidbou
No, it deals with metadata: control and status as explained in a previous reply https://news.ycombinator.com/item?id=43479094#43482362
Elixir does some computations as well, but when we had to compute 3D LUTs based on video processing algorithms, Ghislain had to write them in C to be fast enough for our needs on embedded hardware.
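The usual Elixir-side shape for that kind of split is a NIF stub; a hypothetical sketch (the thread doesn't show Cyanview's actual code, and the C side is omitted):

    defmodule Lut3D do
      @on_load :load_nif

      def load_nif do
        # Loads the compiled C library (priv/lut3d_nif.so); returns :ok on success.
        :erlang.load_nif(~c"./priv/lut3d_nif", 0)
      end

      # Stub replaced by the C implementation once the NIF loads; keeps the
      # hot per-pixel 3D LUT math out of the BEAM while Elixir handles control.
      def interpolate(_lut_binary, _r, _g, _b), do: :erlang.nif_error(:nif_not_loaded)
    end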