
Launch HN: Enhanced Radar (YC W25) – A safety net for air traffic control

117 comments

· March 4, 2025

Hey HN, we’re Eric and Kristian of Enhanced Radar. We’re working on making air travel safer by augmenting control services in our stressed airspace system.

Recent weeks have put aviation safety on everyone’s mind, but we’ve been thinking about this problem for years. Both of us are pilots — we have 2,500 hours of flight time between us. Eric flew professionally and holds a Gulfstream 280 type rating and both FAA and EASA certificates. Kristian flies recreationally, and before this worked on edge computer vision for satellites.

We know from our flying experience that air traffic management is imperfect (every pilot can tell stories of that one time…), so this felt like an obvious problem to work on.

Most accidents are the result of an overdetermined “accident chain” (https://code7700.com/accident_investigation.htm). The popular analogy here is the Swiss cheese model, where the holes in every slice line up perfectly to cause an accident. Often, at least one link in that chain is human error.

We’ll avoid dissecting this year’s tragedies and take a close call from last April at DCA as an example:

The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare (https://www.youtube.com/watch?v=yooJmu30DxY).

Importantly, the error that caused this incident occurred approximately 23 seconds before the conflict became obvious. In this scenario, a good solution would be a system that understands when an aircraft has been cleared to depart from a runway, and then makes sure no aircraft are cleared to cross (or are in fact crossing) that runway until the departing aircraft is wheels-up. And so on.
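As a concrete sketch of that rule, here is a minimal hypothetical Python example (not Enhanced Radar's actual code; the Southwest callsign is made up): a monitor that records takeoff clearances per runway and flags any crossing clearance issued before the departing aircraft is wheels-up.

```python
# Hypothetical illustration of the runway-occupancy rule described above.
class RunwayMonitor:
    def __init__(self):
        # runway -> callsign of the aircraft cleared for takeoff
        self.active_departures = {}

    def on_takeoff_clearance(self, runway, callsign):
        self.active_departures[runway] = callsign

    def on_wheels_up(self, runway):
        # departure complete; the runway is no longer reserved
        self.active_departures.pop(runway, None)

    def on_crossing_clearance(self, runway, callsign):
        """Return an alert string if the runway has an active departure."""
        departing = self.active_departures.get(runway)
        if departing is not None:
            return f"ALERT: {callsign} cleared to cross {runway} while {departing} is departing"
        return None

monitor = RunwayMonitor()
monitor.on_takeoff_clearance("04", "JBU1554")
print(monitor.on_crossing_clearance("04", "SWA2937"))
```

The real system would of course have to infer these events from noisy VHF transcripts and confirm wheels-up from ADS-B, which is where the hard problems live.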

To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio. It’s trained on a large amount of our own labeled ATC audio collected from our VHF receivers located at airports around the US. We improve performance by injecting context such as airport layout details, nearby/relevant navaids, and information on all relevant aircraft captured via ADS-B.

Our product piggy-backs on the raw signal in the air (VHF radio from towers to pilots) by having our own antennas, radios, and software installed at the airport. This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.

Building models for processing ATC speech is our first step toward building a safety net that detects human error (by both pilots and ATC). The latest system transcribes the VHF control audio at about ~1.1% WER (Word Error Rate), down from a previous record of ~9%. We’re using these transcripts with NLP and ADS-B (the system that tracks aircraft positions in real time) for readback detection (ensuring pilots correctly repeat ATC instructions) and command compliance.
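For illustration, the readback-detection step could be sketched like this: extract key elements (heading, altitude, runway) from the controller's instruction and verify they reappear in the pilot's readback. The regexes and phraseology handling below are simplified assumptions, not the actual NLP pipeline.

```python
# Toy readback checker; the patterns are simplified stand-ins for real NLP.
import re

KEY_PATTERNS = {
    "heading": r"heading\s+(\d{3})",
    "altitude": r"maintain\s+([a-z\s]+?)(?:,|$)",
    "runway": r"runway\s+([a-z0-9]+)",
}

def extract(transcript):
    """Pull out whichever key elements appear in a transcript."""
    found = {}
    for name, pattern in KEY_PATTERNS.items():
        m = re.search(pattern, transcript.lower())
        if m:
            found[name] = m.group(1).strip()
    return found

def check_readback(instruction, readback):
    """Return the elements the readback omitted or got wrong."""
    expected = extract(instruction)
    heard = extract(readback)
    return [k for k, v in expected.items() if heard.get(k) != v]

# A correct readback produces no flags; a wrong altitude gets flagged.
print(check_readback(
    "turn left heading 270, descend and maintain three thousand",
    "left heading 270, maintain three thousand",
))  # → []
```

A production version would also need number normalization ("two seven zero" vs "270") and fuzzy matching to tolerate the ~1% residual transcription error.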

There are different views about the future of ATC. Our product is naturally based on our own convictions and experience in the field. For example, it’s sometimes said that voice comms are going away — we think they aren’t (https://www.ericbutton.co/p/speech). People also point out that airplanes are going to fly themselves — in fact they already do. But passenger airlines, for example, will keep a pilot onboard (or on the ground) with ultimate control for a long time to come; the economics, politics, and mind-boggling safety and legal standards of aviation make this inevitable. Also, while next-gen ATC systems like ASDE-X are already in place, they don’t eliminate the problem. The April 2024 scenario mentioned above occurred at DCA, an ASDE-X-equipped airport.

America has more than 5,000 public-use airports, but only 540 of these have control towers (due to cost). As a result, there are over 100 commercial airline routes that fly into uncontrolled airports, and 4.4M landings at these fields. Air traffic control from first principles looks significantly more automated, more remote-controlled, and much cheaper — and as a result, much more widespread.

We’ve known each other for 3 years, and decided independently that we needed to work on air traffic. Having started on this, we feel like it’s our mission for the next decade or two.

If you’re a pilot or an engineer who’s thought about this stuff, we’d love to get your input. We look forward to hearing everyone’s thoughts, questions, ideas!

Animats

> To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio.

Listening on multiple channels might help at busier airports. Ground, ramp, approach, departure, and enroute are all on different channels. Military aircraft have their own system. (That may have contributed to the DCA accident.)

Something like this was proposed in the late 1940s. The FAA was trying to figure out how to do air traffic control, and put out a request for proposals. General Railroad Signal responded.[1] They proposed to adapt railroad block signalling to the sky. Their scheme involved people listening to ATC communications and setting information about plane locations into an interlocking machine. The listeners were not the controllers; they just did data entry. The controllers then had a big board with lights showing which blocks of airspace were occupied, and could then give permission to aircraft to enter another block.

Then came radar for ATC, which was a much better idea.

[1] https://searchworks.stanford.edu/view/1308783

kristian1109

It's been interesting to see that a product as simple as combining data from multiple frequencies at once has been really compelling to folks. Can't tell you the number of times we've heard "wait, can you compile ground, tower, and approach in one place?"... "... yes, of course."

Military aircraft are typically equipped with UHF radios (in addition to civilian VHF). Many of the same systems apply, just a different RF band. And we're in the process of adding UHF capabilities to our product as a lot of these military aircraft land at civilian airports for training exercises.

I can't imagine what would've happened if we adopted block signaling for ATC ...

VBprogrammer

> I can't imagine what would've happened if we adopted block signaling for ATC ...

You don't have to imagine. We already do in many places. The North Atlantic Tracks are essentially exactly that. Aircraft give position reports and estimates, and those position reports are used to decide whether an aircraft can climb through which levels, etc.

It's also used extensively in IFR non-radar environments, which is exactly why aircraft have to cancel IFR at uncontrolled airfields in the US or under a procedural ATC service in the UK. You hear it a lot around the Caribbean and Bahamas too.

dharmab

I'm the developer of a speech-to-speech tool for tactical radar control for a combat flight simulator (https://github.com/dharmab/skyeye). My users have often asked to expand it to ATC as well, usually under the impression that it could be done trivially with ChatGPT. I love that I can now link to your post to explain how difficult this problem is! :)

mh-

Being unfamiliar with DCS' architecture, I expected this repo to be in Lua or something. I was surprised to find a very polished, neatly structured, well-documented Go service, haha. Very cool!

kristian1109

Post product market fit :D

ryandrake

> The latest system transcribes the VHF control audio at about ~1.1% WER (Word Error Rate), down from a previous record of ~9%.

I'd be curious about what happens when the ASR fails. This is not the place to guess or AI-hallucinate. As a pilot, I can always ask "Say Again" over the radio if I didn't understand. ASR can't do that. Also, it would be pretty annoying if my readback was correct, but the system misunderstood either the ATC clearance or my readback and said NO.

kristian1109

Good and fair questions.

In the very short term, we're deploying this tech more in a post-operation/training role. Imagine being a student pilot, getting in from your solo cross country, and pulling up the debrief with all your comms laid out and transcribed. In this setting, it's helpful for the student to have immediate feedback such as "your readback here missed this detail...", etc. Controllers also have phraseology and QA reviews every 30 days where this is helpful. This will make human pilots and controllers better.

Next, we'll step up to active advisory (mapping to low assurance levels in the certification requirements). There's always a human in the loop that can respond to rare errors and override the system with their own judgement. We're designing with observability as a first-class consideration.

Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

Of course, that's a long ways from now. And there will always be a human in the loop to make a final judgement as needed.

btown

One of the challenges I imagine you'll face as you move towards active advisory is that the more an alerting tool is relied upon, the more an absence of a flag from it is considered a positive signal that things are fine. "I didn't hear from Enhanced Radar, so we don't need to worry about ___" is a situation where a hallucinated silence of the alerting tool could contribute to danger, even if it's billed as an "extra" safety net.

I imagine that aviation regulatory bodies have high standards for this - a tool being fully additive to existing tools does not necessarily mean that it's cleared for use in a cockpit or in an ATC tower, right? Do you have thoughts about how you'll approach this? Also curious from a broader perspective - how do you sell any alerting tool into a niche that's highly conscious of distractions, and of not just false positive alerts but false negatives as well?

kristian1109

Yes, fair points. In talking to controllers, this has already come up. There are a few systems that do advisory alerting, and controllers have expressed some frustration because each alert triggers a bunch of paperwork and the alerts are not 100% relevant.

There are lots of small steps on this ladder.

The first is post-operational. You trigger an alert async and someone reviews it after the fact. Tools like this help bring awareness to hot spots or patterns of error that can be applied later in real time by the human controller.

A step up from that is real-time alerting, but not to the main station controller. There's always a manager in the tower that's looking over everyone's shoulder and triaging anything that comes up. That person is not as focused on any single area as the main controllers. There's precedent for tools surfacing alerts to the manager, who then decides whether it's worth stepping in. This will probably be where our product sits for a while.

The bar to get in front of an active station controller is extremely high. But it's also not necessary for a safety net product like this to be helpful in real time.

ryandrake

Thanks for that. It must be exciting to be applying software skills to aviation. Life goals!

To me, speech to text and back seems like an incremental solution, but the holy grail would be the ability to symbolically encode the meaning of the words and translate to and from that meaning. People's phraseology varies wildly (even though it often shouldn't). For example, if I'm requesting VFR flight following, I can do it many different ways, and give the information ATC needs in any order. A system that can convert my words to "NorCal Approach Skyhawk one two three sierra papa is a Cessna one seventy two slant golf, ten north-east of Stockton, four thousand three hundred climbing six thousand five hundred requesting flight following to Palo Alto at six thousand five hundred," is nice, but wouldn't it be amazing if it could translate that audio into structured data:

    {
      atc: NORCAL,
      requester: "N123SP",
      request: "VFR",
      type: CESSNA_172,
      equipment: [G],
      location: <approx. lat/lon>,
      altitude: 4300,
      cruise_altitude: 6500,
      destination: KPAO,
    }
...for ingestion into potentially other digital-only analysis systems. You could structure all sorts of routine and non-routine requests like this, and check them for completeness, use it for training later, and so on. Maybe one day, display it in real time on ATC's terminal and in the pilot's EFIS. With structured data, you could associate people's spoken tail numbers with info broadcast over ADS-B and match them up in real time, too. I don't know, maybe this already exists and I just re-invented something that's already 20 years old, no idea. IMO there's lots of innovation possible bringing VHF transmissions into the digital world!
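That last idea, associating spoken tail numbers with ADS-B traffic, can be sketched with stdlib fuzzy matching. Everything here is a simplifying assumption: the suffix-abbreviation rule, the cutoff, and the premise that the spoken callsign ("three sierra papa") has already been normalized to "3SP" upstream.

```python
# Toy matcher: pair a spoken (possibly abbreviated) tail number with a
# callsign currently seen on ADS-B.
import difflib

def match_adsb(spoken, adsb_callsigns, cutoff=0.6):
    """Return the ADS-B callsign best matching a spoken tail number."""
    spoken = spoken.upper()
    # Exact match, or suffix match (controllers often abbreviate
    # registrations to the last three characters).
    for cs in adsb_callsigns:
        if cs == spoken or cs.endswith(spoken):
            return cs
    # Fall back to the closest fuzzy match, if any clears the cutoff.
    close = difflib.get_close_matches(spoken, adsb_callsigns, n=1, cutoff=cutoff)
    return close[0] if close else None

print(match_adsb("3SP", ["N123SP", "N733KT", "SWA2937"]))  # → N123SP
```

A real system would also need to handle airline callsigns ("Skyhawk" vs "JetBlue"), hear-back errors, and multiple aircraft with similar-sounding registrations on frequency.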

kristian1109

Who gave you our event schema!? ;)

Kidding aside, yes, you're exactly right. We're already doing this to a large degree and getting better. Lots of our own data labeling and model training to make this good.

threeseed

> Looking out 5-10 years, it's conceivable that the error rates on a lot of these systems will be super-human (non-zero, but better than a human). It's also conceivable that you could actually respond "Say Again" to a speech-to-speech model that can correct and repeat any mistakes as they're happening.

This is effectively AGI.

And I've not seen anyone reputable suggest that our current LLM track will get us to that point. In fact there is no path to AGI. It requires another breakthrough in pure research in an environment where money is coming out of universities.

fartfeatures

It isn't AGI, it is domain specific intelligence.

kristian1109

AGI is a moving target, but agreed, lots more research to be done.

Mikhail_K

> I'd be curious about what happens when the ASR fails.

When, not if. The "artificial intelligence" as it is presently understood is statistical in nature. To rely on it for air traffic control seems quite irresponsible.

ibejoeb

I think it would be handy to have it as a check. If I get an alert about a potentially incorrect readback, then I can call back for clarification.

raphting

I worked a few years for German air traffic control and I own a PPL.

From a non-commercial viewpoint, I like to see when people get enthusiastic about making airspace and flying safer. From a commercial perspective, I agree with others writing here that going into a highly regulated market such as air traffic is very hard, and I can tell you why I think so.

For example, German air traffic control (DFS) publishes tools which are not directly meant for ATCOs (https://stanlytrack3.dfs.de/st3/STANLY_Track3.html), so they are already covering part of this market. Then there are companies already specialised in tapping into the open data of the skies, such as https://www.skysquitter.com/en/home-2/ (or check https://droniq.de, which is specialised in integrating drones into airspace). They are all either governmental, or subsidiaries, or not directly involved in air traffic control itself.

I once built a 3D airspace app which I thought could become a commercial product, but I found it is too hard to compete with companies like DFS or Boeing (ForeFlight) and others. (I published the app for free to play around with: https://raphting.dev/confident/)

That said, I have thought a lot about the commercialisation of airspace products, and my conclusion is that most countries have good reasons to keep air traffic control governmentally owned and to continue gatekeeping new entries. These gates are very well protected, if only through the high fees you need to pay just to gain access to data (as when I purchased airspace data from Eurocontrol for the 3D app).

Focusing on training or "post-ops", what I think you plan to do, is probably the more viable direction.

RockyMcNuts

Maybe the future is structured electronic messaging with the humans in the loop.

Like, check in with the controller but most messages are sent electronically and acknowledged manually.

"I have your clearance, advise when ready to copy," then you write everything down on a kneeboard with a pencil and then manually put it into the navigation system, is a little archaic.

Certainly speech to text is a useful transition, but in the long run the controller could click on an aircraft and issue the next clearance with a keyboard shortcut. Then the pilot would get a visual and auditory alert in the cockpit and click to acknowledge.

I would hope someone at NASA or DARPA or somewhere is working on it. And then of course the system can detect conflicts, an aircraft not following the clearance etc.

kristian1109

The problem with datalink systems is they are poor substitutes for immediate control & confirmation. My co-founder Eric wrote a short piece about this: https://www.ericbutton.co/p/speech. This is why they are mainly relegated to low-urgency en-route & clearance delivery.

someguydave

He’s right about the bandwidth and latency of voice, but the problem is that you can’t immediately know who should react to instructions. “GO AROUND IMMEDIATE!” - now all the pilots on frequency are wondering who’s the addressee.

Also, AM voice on VHF is not full duplex and the blocking problem is very real and could be addressed potentially

RockyMcNuts

interesting! have PP but haven't flown really last couple of decades.

I feel like, with proper UX in the cockpit and on the controller console, making it easy to send/acknowledge the clearance, and intrusively demanding immediate acknowledgment for important messages, with the controller able to talk to the pilot if it isn't immediately acknowledged, structured messages would save time, be more accurate, allow automated checks, i.e. be a superior substitute.

UX needs a ton of work and human factors validation, and would take 20 years to implement. But if you were starting from a blank slate it seems like the way to go!

EMM_386

> Maybe the future is structured electronic messaging with the humans in the loop.

There already is: Controller Pilot Data Link Communications (CPDLC).

Get an instruction, press to confirm.

At the moment, this is only used for certain types of things (clearances, frequency changes, speed assignments, etc.) along with voice ATC.

https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

imadethis

Look up CPLDC - https://en.wikipedia.org/wiki/Controller%E2%80%93pilot_data_...

This is how big operations handle clearances today, complete with integration into the FMS. The pilot simply reviews the clearance and accepts it.

noahnoahnoah

This already exists and is used in much of the US and extensively in Europe for airlines. Look up Controller Pilot Data Link Communications (CPDLC).

fallingmeat

also requires fairly expensive equipment (FMS with FANS support)

vednig

I've watched a lot of aircraft investigation stories with incidents like this happening. Sadly, there are rarely people interested in the intersection of both technologies who can find a properly functioning solution, so I think this is pretty interesting stuff you guys are doing. If you'd been working on re-inventing the wheel with new software to automate flight trajectory management, I wouldn't be as amazed. I think you guys have really taken the time to understand the problem and worked on a potential solution that could have a major impact.

> This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.

The part where you explain that it is integrable in the existing chain of command at Airports is proof enough.

Wishing you all the best for your venture.

kristian1109

Thank you — yes, it's super important to us to find the wedge where we can actually ship quickly. Nothing beats feedback from real products in the real world. As much experience as we have being pilots, there's so much to learn on the control side.

RachelF

I have two questions:

1. Good overview of the technologies you are using, but what product are you planning on building or have built? I understand what you are doing and it's "extra safety over existing systems" but how does it work for the end user? Is the end user an ATC or a pilot?

2. You will find that introducing new systems into this very conservative field is hard. I've built avionics and ATC electronics. The problem isn't normally technology, it's the paperwork. How do you plan on handling this?

kristian1109

1. Our first product is post-op review at airports. We're selling that to airport managers who use our system for training and incident review. Today, when a ground ops vehicle (for example) makes a mistake, the airport manager has to note the incident, call the tower, wait a week for them to burn a CD of the audio, scrub through to find the relevant comms, go to a separate source to pull the ADS-B track (if available), fuse all that together, and review with the offending employee. Our product just delivers all that data at their fingertips. For training, we also flag clips where the phraseology isn't quite right, etc. Obviously this isn't the long term product, but it gets us to revenue quickly and side-steps regulation for now.

2. Agree

[edit] (oops, sorry, seeing your edit)

2. The regulation allows for escalating assurance levels. We'll start with low assurance (advisory) and climb that ladder. We're definitely not naive about it; this will be hard and annoying. But it's inconceivable that someone won't do this in the next 10 years. Too important.

RachelF

Thank you for the detailed reply. Your first product sounds like something that is needed. I wish your startup very good luck and will be watching your progress.

Do ground vehicles also have GPS trackers with a radio transmitter, or do they just use normal ADS-B?

qdotme

I'm loving it. As a private pilot working (slowly) towards my CFI rating - this is also such an opportunity to integrate it into training devices.

The bulk of instrument flight training is "mindgames" anyway - you see nothing other than instruments, and your "seat of the pants" sense is likely to cheat you...

Possibly going a step further, the state of teaching aids available to CFIs is pretty sad, with online quizzes and pre-recorded videos being the pinnacle of what I have experienced... this would be an awesome opportunity to try and build "automatic CFI" - not counting for AATD time under current rules but better than chair-flying (the process of imagining a flight and one's reactions).

zeroc8

The teaching tools nowadays available to CFIs are fantastic. Get yourself X-Plane 12, a Honeycomb yoke, throttles and rudders, and a decent aircraft like the Challenger 650 and you are good to go. All of this can be had for around 1000 bucks and is the best investment anyone can make to stay current. I used to be a CFII a long time ago (1995) when sims were expensive and only the best flight schools had them. Back then, instrument training was flying long and boring hours under the hood and, if you were lucky, one or two hours of actual IFR. Most people were afraid to fly real IFR after the rating, since they were aware that the training was so bad, myself included. A couple of hours flying a PC sim fixed this problem once and for all and made a huge difference in real-world flying.

kristian1109

I have a bit of an ax to grind with the state of pilot training tools. There's some interesting new work being done here, but agreed lots to do in this realm. What does a compelling "automatic CFI" look like to you?

subhro

> Bulk of the instrument flight training is "mindgames" anyways - you see nothing other than instruments, your "seat in the pants" is likely to cheat you..

Eh, I guess I can flex a little. Living in the Pacific Northwest, I do not have to play mind games. I can almost get IMC delivered on demand. :P

lxe

I was thinking along the same exact lines:

Why do we still rely on analog narrowband AM voice over VHF to do vital comms like air traffic control? Same way as we did in the 1940s!

We should be transcribing the messages and relaying them digitally with signatures and receipt acknowledgement.

DF1PAW

AM modulation is perfectly justified in this context: if two (or more) stations accidentally transmit at the same time, this will be noticed. Using FM, only the stronger signal wins and the other signal remains undetected. The advantage over digital transmission is the lack of coding overhead - the voice reaches the receiver without any time delay.

jlewallen

This is true, for anybody curious about this: https://en.wikipedia.org/wiki/Capture_effect

lxe

The justification still holds, but better tech with the same benefits exists nowadays.

As far as digital decoding delay is concerned, this is a negligible number if implemented correctly.

jimnotgym

Isn't it because AM audio is still understandable under very suboptimal conditions where digital might not get through? Digital narrowband data modes tend to pass very small amounts of data

lxe

Quite the opposite. For short messages digital modes can employ layers of redundancy, auto carrier recovery, error correction at all layers all while yielding lower power requirements, and longer distance.

EricButton

FAA actually tried moving to digital voice (has benefits wrt airband congestion) but it didn't go anywhere. I believe a lot came down to the minimal benefit over current solutions, plus the coordination and safety implications of actually making the switch. Tough for an FAA official to pull the trigger on a rollout that has even 0.1% chance of an aircraft crashing.

dehrmann

A lot of ATC seems to use lowest-common-denominator tech so that you can fly a Cessna into JFK.

kristian1109

But only after 1am when you're not fighting a 15 knot headwind with an A320 cleared number two.

someguydave

- I think the ultimate safety improvement would be to move the human out of the realtime control loops and getting him focused more on the big picture.

- there is opportunity for permissionless passive RF sensors like this startup shows. Imagine the pilots in the CRJ had received an immediate notification that an intercepting aircraft was getting blocked (transmitted over) on ATC comms on their UHF frequency. I think this could be done without decoding the voices.

- Passive radar combined with direction-finding of the VHF/UHF voice transmissions could also be integrated as another source of high-resolution tracking data

nitin_j11

Your system fuses ATC speech recognition, NLP, and ADS-B signals to detect and mitigate human error in air traffic control. Given the rapid advancements in multimodal AI, have you explored integrating visual data sources (e.g., satellite imagery, radar feeds, or airport surveillance cameras) to further improve situational awareness and error detection? What challenges do you foresee in making Yeager more contextually aware using additional modalities?

kristian1109

Yes, this is an excellent prompt and we're working on it. One problem is a lot of these visual sources require permission, integration, and regulation. That's going to move slower than something we can proceed directly with (VHF antennas).

I believe scaling laws will hold as we start to feed all of this context data into an integrated model. You could imagine a deep-q style reinforcement learning model that ingests layers of structured and visual data and outputs alerts and eventually commands. The main challenge I foresee here will be observability... it's easy enough to shove a ton of data into a black box and get a good answer 98% of the time. But regulation is likely to require such a system to be highly observable/explainable so the human can keep up with what's going on and step in as needed.

Looking further into the future, it's plausible the concrete structures of today with humans looking out windows will be replaced with sensor packages atop a long flagpole that stream high-res optical/ir camera data, surface radar, weather information, etc into a control room with VR layers that help controllers stay on top of busier and busier airspace.

tjlahr

I was thinking about this the other day. To me, the future is decreasing the amount of coordination that happens verbally over VHF.

Ignoring takeoff clearances for a moment, my limited understanding is that most traffic in and out of an airport follows a prescribed pattern: You take off, turn to some particular bearing, climb to some particular altitude, contact center on some particular frequency... etc. Listening to VASAviation, it seems like this accounts for > 80% of pilot-controller communication.

It's strange to me that, given the amount of automation in a modern airliner, these instructions aren't transmitted digitally directly to the autopilot. Instead of the controller verbally telling the pilot where to go, it seems feasible that the controller (or some coordinating software) could just digitally tell the plane where to go.

I feel like that's how you dramatically decrease workload on both ends, and then maybe there's more bandwidth to focus on those takeoff clearances (and eventually automate those as well?).

So many other aspects of flight safety have been handed over to the computer to solve, it's curious to me why a critical function like air traffic control still happens verbally over VHF.

hugh-avherald

There are STAR and SID procedures that are 90% of what you're proposing. A pilot at the top of descent is told "Descend via the STAR," which takes them to final approach.

As a pilot, I am surprised by how important audio communication is for retention and awareness. Given that my visual senses are (nearly) overwhelmed with information, I think there is a risk that moving ATC from audio to visual would simply saturate the "visual channel" of pilots.

In terms of automating coordination, it's obviously possible but it would take decades to prove its relative safety. (Aviation is extremely safe.) The system would be very fragile, unless you had 24/7 fully staffed backup human ATC, which rather defeats the purpose. Practically speaking too, planes take a long time to build, and the current system allows planes built 80 years ago to fly alongside brand new ones. The cost of abolishing the 'legacy fleet' (i.e. all current passenger aircraft) is pretty high!

kristian1109

Speaking as a pilot for a moment, I think your instincts are correct in theory but hard to actually implement.

In a critical function like control, you don't want to split a pilot's attention. You wouldn't want them to sometimes be monitoring a datalink system, but then also sometimes be listening to the radio for deviations. Even if it's less efficient 70% of the time, you reduce cognitive load by training a pilot to ALWAYS go to the radio for clearance and command.

Of course, there are edge cases these days where pilots use datalink for some clearance delivery before taxi and enroute, but you can see how these phases of flight (before you push back and after the auto pilot is on) are selected for very low competing load. In a live terminal environment, you want a pilot focused in one place for instructions.

Furthermore, you're correct that most pilot-controller communication falls largely within a tight set of procedures, but deviations are common enough (weather, traffic, emergencies, hold patterns, taxi routes, etc.) that you find yourself on the radio regularly to sort it out.

Last thing: pilots are allowed to say "unable" if they deem an instruction unsafe. I've personally had to do that many times (most common case for me is trying to comply with a vector instruction under VFR with a cloud in my way). VFR may seem like an edge case that commercial planes don't deal with, but again that's not always true in a terminal environment. Plenty of these planes fly visual approaches all the time. And if ATC is talking directly to the computer and not through the pilot, you lose the opportunity for the pilot to quickly and clearly decline an instruction.

jcrites

I think this is ultimately true, in a sense, but the challenge is correctly handling all of the edge-cases. It's a challenging problem tantamount to the self-driving car problem.

It happens by humans over VHF because a lot of unpredictable things happen in busy airspace, and it would require a massive investment for machines to automate all of it.

I'm also not sure that people would accept the safety risk of airplanes' autopilots being given automated instructions by ATC over the air. There's a large potential vulnerability and safety risk there. I think there's some potential for automation to replace the role of ATC currently, but I suspect it would still be by transmitting instructions to human pilots, not directly to the autopilot.

Lastly, for such a system to ever be bootstrapped, it would still need to handle all of the planes that didn't have this automation yet; it would still need to support communicating with pilots verbally over VHF. An entirely AI ATC system, that autonomously listens to and responds by voice over VHF seems like a plausible first step though.

stevage

>Instead of the controller verbally telling the pilot where to go, it seems feasible that the controller (or some coordinating software) could just digitally tell the plane where to go.

An intermediate step would at least be transmitting those instructions digitally and showing it on a map that the pilot can follow. There have been a number of incidents where pilots misunderstood where they were, and incorrectly followed instructions.

kristian1109

A lot of glass panels these days will do this in the PFD. You get your clearance via datalink from ATC, it loads everything right up, and you just keep the plane in the box or turn the autopilot on once you're in the air.

Of course, this still keeps the pilot in the loop and ideally they will notice if something seems weird.