
Launch HN: Enhanced Radar (YC W25) – A safety net for air traffic control

54 comments

·March 4, 2025

Hey HN, we’re Eric and Kristian of Enhanced Radar. We’re working on making air travel safer by augmenting control services in our stressed airspace system.

Recent weeks have put aviation safety on everyone’s mind, but we’ve been thinking about this problem for years. Both of us are pilots — we have 2,500 hours of flight time between us. Eric flew professionally and holds a Gulfstream 280 type rating and both FAA and EASA certificates. Kristian flies recreationally, and before this worked on edge computer vision for satellites.

We know from our flying experience that air traffic management is imperfect (every pilot can tell stories of that one time…), so this felt like an obvious problem to work on.

Most accidents are the result of an overdetermined “accident chain” (https://code7700.com/accident_investigation.htm). The popular analogy here is the Swiss cheese model, where holes in every slice line up perfectly to cause an accident. Often, at least one link in that chain is human error.

We’ll avoid dissecting this year’s tragedies and take a close call from last April at DCA as an example:

The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare (https://www.youtube.com/watch?v=yooJmu30DxY).

Importantly, the error that caused this incident occurred approximately 23 seconds before the conflict became obvious. In this scenario, a good solution would be a system that understands when an aircraft has been cleared to depart from a runway, and then makes sure no aircraft are cleared to cross (or are in fact crossing) that runway until the departing aircraft is wheels-up. And so on.
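The rule described above can be sketched as a small piece of state-tracking logic. The class, method, and Southwest callsign below are hypothetical illustrations (JetBlue 1554 and runway 04 come from the incident); the real system would fuse ATC transcripts with ADS-B rather than hand-fed events:

```python
# Toy sketch of the "no crossings while a departure holds the runway" rule.
# All names are illustrative, not Enhanced Radar's actual system.

class RunwayGuard:
    def __init__(self):
        # runway -> callsign of the aircraft cleared for takeoff
        self.departing = {}

    def on_takeoff_clearance(self, runway, callsign):
        self.departing[runway] = callsign

    def on_wheels_up(self, runway):
        # Departure is airborne; the runway is free again.
        self.departing.pop(runway, None)

    def on_crossing_clearance(self, runway, callsign):
        # Flag a crossing clearance issued while a departure holds the runway.
        holder = self.departing.get(runway)
        if holder is not None:
            return f"CONFLICT: {callsign} cleared to cross runway {runway} while {holder} is departing"
        return None

guard = RunwayGuard()
guard.on_takeoff_clearance("04", "JBU1554")          # tower clears JetBlue 1554
alert = guard.on_crossing_clearance("04", "SWA123")  # ground clears a crossing (hypothetical callsign)
```

Here `alert` carries the conflict the instant the second clearance is issued, roughly 23 seconds before the aircraft would converge.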

To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio. It’s trained on a large amount of our own labeled ATC audio collected from our VHF receivers located at airports around the US. We improve performance by injecting context such as airport layout details, nearby/relevant navaids, and information on all relevant aircraft captured via ADS-B.

Our product piggy-backs on the raw signal in the air (VHF radio from towers to pilots) by having our own antennas, radios, and software installed at the airport. This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.

Building models for processing ATC speech is our first step toward building a safety net that detects human error (by both pilots and ATC). The latest system transcribes the VHF control audio at ~1.1% WER (Word Error Rate), down from a previous record of ~9%. We’re using these transcripts with NLP and ADS-B (the system that tracks aircraft positions in real time) for readback detection (ensuring pilots correctly repeat ATC instructions) and command compliance.
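Readback detection can be illustrated with a toy version: extract the key items from the controller's instruction and check that each one reappears in the pilot's readback. The patterns and function names below are hypothetical simplifications for illustration, not the actual NLP pipeline:

```python
import re

# Toy readback check: pull out runway, heading, and altitude from the
# controller's instruction and verify the pilot's readback matches.
# Real phraseology is far messier; these patterns are illustrative only.

PATTERNS = {
    "runway": r"runway (\w+)",
    "heading": r"heading (\d{3})",
    "altitude": r"(?:climb|descend|maintain)\D*(\d+)",
}

def extract(text):
    items = {}
    for key, pat in PATTERNS.items():
        m = re.search(pat, text.lower())
        if m:
            items[key] = m.group(1)
    return items

def readback_errors(instruction, readback):
    # Return every instructed item the readback got wrong or omitted.
    expected = extract(instruction)
    heard = extract(readback)
    return {k: v for k, v in expected.items() if heard.get(k) != v}

errs = readback_errors(
    "Turn left heading 270, maintain 3000, cleared to land runway 04",
    "Left heading 270, maintain 2000, cleared to land runway 04",
)
# errs flags the altitude mismatch (3000 instructed, 2000 read back)
```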

There are different views about the future of ATC. Our product is naturally based on our own convictions and experience in the field. For example, it’s sometimes said that voice comms are going away — we think they aren’t (https://www.ericbutton.co/p/speech). People also point out that airplanes are going to fly themselves — in fact they already do. But passenger airlines, for example, will keep a pilot onboard (or on the ground) with ultimate control for a long time to come; the economics, the politics, and the mind-boggling safety and legal standards of aviation make this inevitable. Also, while next-gen ATC systems like ASDE-X are already in place, they don’t eliminate the problem. The April 2024 scenario mentioned above occurred at DCA, an ASDE-X-equipped airport.

America has more than 5,000 public-use airports, but only 540 of these have control towers (due to cost). As a result, there are over 100 commercial airline routes that fly into uncontrolled airports, and 4.4M landings at these fields. Air traffic control from first principles looks significantly more automated, more remote-controlled, and much cheaper — and as a result, much more widespread.

We’ve known each other for 3 years, and decided independently that we needed to work on air traffic. Having started on this, we feel like it’s our mission for the next decade or two.

If you’re a pilot or an engineer who’s thought about this stuff, we’d love to get your input. We look forward to hearing everyone’s thoughts, questions, ideas!

csours

The current rash of airline incidents reminds me of the assembly instruction: Torque fastener until you hear expensive sounds and then back off a quarter turn.

We've accelerated past our capabilities and need to slow down. ATC has incentive to slot takeoffs and landings as close as possible, but that is in tension with the goal of safety.

> Air traffic control from first principles looks significantly more automated.

We have a system 'designed' by history, not by intention. The ATC environment is implemented in the physical world; everyone has to work around physical limitations.

Automation works best in controlled environments with limited scope. The more workarounds you have to add, the noisier things get, and that's why we use humans to filter the noise by picking the important things to say. Humans can physically experience changes in the environment, and our filters are built from our experiences in the physical world.

Anyway, sorry that isn't a question.

RockyMcNuts

Maybe the future is structured electronic messaging with the humans in the loop.

Like, check in with the controller but most messages are sent electronically and acknowledged manually.

"I have your clearance, advise when ready to copy," then writing everything down on a kneeboard with a pencil and manually entering it into the navigation system, is a little archaic.

Certainly speech to text is a useful transition, but in the long run the controller could click on an aircraft and issue the next clearance with a keyboard shortcut. Then the pilot would get a visual and auditory alert in the cockpit and click to acknowledge.

I would hope someone at NASA or DARPA or somewhere is working on it. And then of course the system can detect conflicts, an aircraft not following the clearance etc.

EMM_386

> Maybe the future is structured electronic messaging with the humans in the loop.

There already is: Controller Pilot Data Link Communications (CPDLC).

Get an instruction, press to confirm.

At the moment, this is only used for certain types of things (clearances, frequency changes, speed assignments, etc.) along with voice ATC.

https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

imadethis

Look up CPDLC - https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

This is how big operations handle clearances today, complete with integration into the FMS. The pilot simply reviews the clearance and accepts it.

noahnoahnoah

This already exists and is used in much of the US and extensively in Europe for airlines. Look up Controller Pilot Data Link Communications (CPDLC).

RachelF

I have two questions:

1. Good overview of the technologies you are using, but what product are you planning on building or have built? I understand what you are doing and it's "extra safety over existing systems" but how does it work for the end user? Is the end user an ATC or a pilot?

2. You will find that introducing new systems into this very conservative field is hard. I've built avionics and ATC electronics. The problem isn't normally technology, it's the paperwork. How do you plan on handling this?

kristian1109

1. Our first product is post-op review at airports. We're selling that to airport managers who use our system for training and incident review. Today, when a ground ops vehicle (for example) makes a mistake, the airport manager has to note the incident, call the tower, wait a week for them to burn a CD of the audio, scrub through to find the relevant comms, go to a separate source to pull the ADS-B track (if available), fuse all that together, and review with the offending employee. Our product just delivers all that data at their fingertips. For training, we also flag clips where the phraseology isn't quite right, etc. Obviously this isn't the long term product, but it gets us to revenue quickly and side-steps regulation for now.

2. Agree

[edit] (oops, sorry, seeing your edit)

2. The regulation allows for escalating assurance levels. We'll start with low assurance (advisory) and climb that ladder. We're definitely not naive about it; this will be hard and annoying. But it's inconceivable that someone won't do this in the next 10 years. Too important.

ryandrake

> The latest system transcribes the VHF control audio at about ~1.1% WER (Word Error Rate), down from a previous record of ~9%.

I'd be curious about what happens when the ASR fails. This is not the place to guess or AI-hallucinate. As a pilot, I can always ask "Say Again" over the radio if I didn't understand. ASR can't do that. Also, it would be pretty annoying if my readback was correct, but the system misunderstood either the ATC clearance or my readback and said NO.

kristian1109

Good and fair questions.

In the very short term, we're deploying this tech more in a post-operation/training role. Imagine being a student pilot, getting in from your solo cross country, and pulling up the debrief with all your comms laid out and transcribed. In this setting, it's helpful for the student to have immediate feedback such as "your readback here missed this detail...", etc. Controllers also have phraseology and QA reviews every 30 days where this is helpful. This will make human pilots and controllers better.

Next, we'll step up to active advisory (mapping to low assurance levels in the certification requirements). There's always a human in the loop that can respond to rare errors and override the system with their own judgement. We're designing with observability as a first-class consideration.

Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

Of course, that's a long ways from now. And there will always be a human in the loop to make a final judgement as needed.

btown

One of the challenges I imagine you'll face as you move towards active advisory is that the more an alerting tool is relied upon, the more an absence of a flag from it is considered a positive signal that things are fine. "I didn't hear from Enhanced Radar, so we don't need to worry about ___" is a situation where a hallucinated silence of the alerting tool could contribute to danger, even if it's billed as an "extra" safety net.

I imagine that aviation regulatory bodies have high standards for this - a tool being fully additive to existing tools does not necessarily mean that it's cleared for use in a cockpit or in an ATC tower, right? Do you have thoughts about how you'll approach this? Also curious from a broader perspective - how do you sell any alerting tool into a niche that's highly conscious of distractions, and of not just false positive alerts but false negatives as well?

kristian1109

Yes, fair points. In talking to controllers, this has already come up. There are a few systems that do advisory alerting, and controllers have expressed some frustration because each alert triggers a bunch of paperwork and the alerts are not always relevant.

There are lots of small steps on this ladder.

The first is post-operational. You trigger an alert async and someone reviews it after the fact. Tools like this help bring awareness to hot spots or patterns of error that can be applied later in real time by the human controller.

A step up from that is real-time alerting, but not to the main station controller. There's always a manager in the tower who's looking over everyone's shoulder and triaging anything that comes up. That person is not as focused on any single area as the main controllers. There's precedent for tools surfacing alerts to the manager, who then decides whether it's worth stepping in. This will probably be where our product sits for a while.

The bar to get in front of an active station controller is extremely high. But it's also not necessary for a safety net product like this to be helpful in real time.

ryandrake

Thanks for that. It must be exciting to be applying software skills to aviation. Life goals!

To me, speech to text and back seems like an incremental solution, but the holy grail would be the ability to symbolically encode the meaning of the words and translate to and from that meaning. People's phraseology varies wildly (even though it often shouldn't). For example, if I'm requesting VFR flight following, I can do it many different ways, and give the information ATC needs in any order. A system that can convert my words to "NorCal Approach Skyhawk one two three sierra papa is a Cessna one seventy two slant golf, ten north-east of Stockton, four thousand three hundred climbing six thousand five hundred requesting flight following to Palo Alto at six thousand five hundred," is nice, but wouldn't it be amazing if it could translate that audio into structured data:

    {
        "atc": "NORCAL",
        "requester": "N123SP",
        "request": "VFR",
        "type": "CESSNA_172",
        "equipment": ["G"],
        "location": "<approx. lat/lon>",
        "altitude": 4300,
        "cruise_altitude": 6500,
        "destination": "KPAO"
    }
...for ingestion into potentially other digital-only analysis systems. You could structure all sorts of routine and non-routine requests like this, and check them for completeness, use it for training later, and so on. Maybe one day, display it in real time on ATC's terminal and in the pilot's EFIS. With structured data, you could associate people's spoken tail numbers with info broadcast over ADS-B and match them up in real time, too. I don't know, maybe this already exists and I just re-invented something that's already 20 years old, no idea. IMO there's lots of innovation possible bringing VHF transmissions into the digital world!
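One concrete sub-problem in that transcript-to-structured-data idea is normalizing spoken numbers like "six thousand five hundred" into integers. A toy converter (hypothetical; it handles only simple "thousand"/"hundred" altitude phrases, not the full range of ATC number formats) might look like:

```python
# Toy converter for spoken altitude phrases -> integers.
# Handles only "<digits> thousand <digits> hundred" style phrases;
# a real system needs group-form numbers, flight levels, etc.

WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def spoken_altitude(phrase):
    total = 0
    current = 0
    for w in phrase.lower().split():
        if w in WORDS:
            # Accumulate spoken digits ("two five" -> 25)
            current = current * 10 + WORDS[w]
        elif w == "thousand":
            total += current * 1000
            current = 0
        elif w == "hundred":
            total += current * 100
            current = 0
    return total + current

spoken_altitude("four thousand three hundred")  # -> 4300
spoken_altitude("six thousand five hundred")    # -> 6500
```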

kristian1109

Who gave you our event schema!? ;)

Kidding aside, yes, you're exactly right. We're already doing this to a large degree and getting better. Lots of our own data labeling and model training to make this good.

threeseed

> Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

This is effectively AGI.

And I've not seen anyone reputable suggest that our current LLM track will get us to that point. In fact there is no known path to AGI. It requires another breakthrough in pure research, in an environment where funding is being pulled out of universities.

fartfeatures

It isn't AGI, it is domain specific intelligence.

kristian1109

AGI is a moving target, but agreed, lots more research to be done.

beebaween

IMO one of the only interesting things "block chain" tech ever produced that had real world value and potential to save lives.

https://aviationsystems.arc.nasa.gov/publications/2019/SciTe...

Curious if OP has seen this paper / project before?

nitin_j11

Your system fuses ATC speech recognition, NLP, and ADS-B signals to detect and mitigate human error in air traffic control. Given the rapid advancements in multimodal AI, have you explored integrating visual data sources (e.g., satellite imagery, radar feeds, or airport surveillance cameras) to further improve situational awareness and error detection? What challenges do you foresee in making Yeager more contextually aware using additional modalities?

lxe

I was thinking along the same exact lines:

Why do we still rely on analog narrowband AM voice over VHF to do vital comms like air traffic control? Same way as we did in the 1940s!

We should be transcribing the messages and relaying them digitally with signatures and receipt acknowledgement.

jimnotgym

Isn't it because AM audio is still understandable under very suboptimal conditions where digital might not get through? Digital narrowband data modes tend to pass very small amounts of data.

dharmab

I'm the developer of a speech-to-speech tool for tactical radar control for a combat flight simulator (https://github.com/dharmab/skyeye). My users have often asked to expand it to ATC as well, usually under the impression that it could be done trivially with ChatGPT. I love that I can now link to your post to explain how difficult this problem is! :)

mh-

Being unfamiliar with DCS' architecture, I expected this repo to be in Lua or something. I was surprised to find a very polished, neatly structured, well-documented Go service, haha. Very cool!

kristian1109

Post product market fit :D

Animats

> To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio.

Listening on multiple channels might help at busier airports. Ground, ramp, approach, departure, and enroute are all on different channels. Military aircraft have their own system. (That may have contributed to the DCA accident.)

Something like this was proposed in the late 1940s. The FAA was trying to figure out how to do air traffic control, and put out a request for proposals. General Railroad Signal responded.[1] They proposed to adapt railroad block signalling to the sky. Their scheme involved people listening to ATC communications and setting information about plane locations into an interlocking machine. The listeners were not the controllers; they just did data entry. The controllers then had a big board with lights showing which blocks of airspace were occupied, and could then give permission to aircraft to enter another block.

Then came radar for ATC, which was a much better idea.

[1] https://searchworks.stanford.edu/view/1308783

kristian1109

It's been interesting to see that a product as simple as combining data from multiple frequencies at once has been really compelling to folks. Can't tell you the number of times we've heard "wait, can you compile ground, tower, and approach in one place?"... "... yes, of course."

Military aircraft are typically equipped with UHF radios (in addition to civilian VHF). Many of the same systems apply, just a different RF band. And we're in the process of adding UHF capabilities to our product as a lot of these military aircraft land at civilian airports for training exercises.

I can't imagine what would've happened if we adopted block signaling for ATC ...

vednig

I've watched a lot of aircraft investigation stories. Sadly, even with incidents like this happening, there are rarely people interested in the intersection of both technologies who can find a properly functioning solution, so I think this is pretty interesting stuff you guys are doing. If you'd been working on reinventing the wheel with new software to automate flight trajectory management, I'd not be as amazed. I think you guys have really taken the time to understand the problem and worked on a potential solution that could have a major impact.

> This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.

The part where you explain that it integrates into the existing chain of command at airports is proof enough.

Wishing you all the best for your venture.

kristian1109

Thank you — yes, it's super important to us to find the wedge where we can actually ship quickly. Nothing beats feedback from real products in the real world. As much experience as we have being pilots, there's so much to learn on the control side.

qdotme

I'm loving it. As a private pilot working (slowly) towards my CFI rating - this is also such an opportunity to integrate it into training devices.

The bulk of instrument flight training is "mindgames" anyway: you see nothing other than the instruments, and your "seat of the pants" is likely to cheat you.

Possibly going a step further: the state of teaching aids available to CFIs is pretty sad, with online quizzes and pre-recorded videos being the pinnacle of what I have experienced. This would be an awesome opportunity to try to build an "automatic CFI" — not counting for AATD time under current rules, but better than chair-flying (the process of imagining a flight and one's reactions).

kristian1109

I have a bit of an ax to grind with the state of pilot training tools. There's some interesting new work being done here, but agreed lots to do in this realm. What does a compelling "automatic CFI" look like to you?

subhro

> Bulk of the instrument flight training is "mindgames" anyways - you see nothing other than instruments, your "seat in the pants" is likely to cheat you..

Eh, I guess I can flex a little. Living in the Pacific North West, I do not have to play mind games. I can almost get IMC delivered on demand. :P

ammar2

Any plans on open-sourcing your ATC speech models? I've long wanted a system to take ATIS broadcasts and do a transcription to get sort of an advisory D-ATIS since that system is only available at big commercial airports. (And apparently according to my very busy local tower, nearly impossible to get FAA to give to you).

Existing models I've tried just do a really terrible job at it.

kristian1109

I've thought about the same thing; transparently, we were trying to get a reliable source of ATIS to inject into our model context and had the same issue with D-ATIS. What airport are you at? Maybe we whip up a little ATIS page as a tool for GA folks.

ammar2

That would be awesome! My airport is KPDK (sadly it doesn't have a good liveatc stream for its ATIS frequency).

I did collect a bunch of ATIS recordings and hand-transcribed ground-truth data for it a while ago. I can put it up if that might be handy for y'all.

kristian1109

If you're willing, that'd be great. I think our model will do well out of the box, but more data is more better as they say.

I spent a lot of time out at PDK when I worked briefly in aircraft sales. Nice airport!

Let me work on this and come back! I think we can ship you an API for ATIS there...