GitHub Copilot Coding Agent

95 comments

·May 19, 2025

taurath

> Copilot excels at low-to-medium complexity tasks in well-tested codebases, from adding features and fixing bugs to extending tests, refactoring, and improving documentation.

Bounds bounds bounds bounds. The important part for humans seems to be maintaining boundaries for AI. If your well-tested codebase has tests that were themselves built through AI, it's probably not going to work.

I think it's somewhat telling that they can't share numbers for how they're using it internally. I want to know that Microsoft, the company famous for dogfooding, is using this day in and day out, with success. There's real stuff in there, and my brain has an insanely hard time separating the trillion dollars of hype from the usefulness.

timrogers

We've been using Copilot coding agent internally at GitHub, and more widely across Microsoft, for nearly three months. That dogfooding has been hugely valuable, surfacing tonnes of feedback (and bug bashing!) that has helped us get the agent ready to launch today.

So far, the agent has been used by about 400 GitHub employees in more than 300 of our repositories, and we've merged almost 1,000 pull requests contributed by Copilot.

In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

(Source: I'm the product lead at GitHub for Copilot coding agent.)

overfeed

> we've merged almost 1,000 pull requests contributed by Copilot

I'm curious to know how many Copilot PRs were not merged and/or required human take-overs.

sethammons

textbook survivorship bias https://en.wikipedia.org/wiki/Survivorship_bias

Every bullet hole in that plane is one of the 1k PRs contributed by Copilot. The missing dots, and the whole missing planes, are unaccounted for, i.e. "AI ruined my morning".

NitpickLawyer

> In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

Really cool, thanks for sharing! Would you perhaps consider implementing something like these stats that aider keeps on "aider writing itself"? - https://aider.chat/HISTORY.html

binarymax

So I need to ask: what is the overall goal of your project? What will you do in, say, 5 years from now?

timrogers

What I'm most excited about is allowing developers to spend more of their time on the work they enjoy, and less of it on mundane, boring or annoying tasks.

Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates, and I really think we're heading to a world where AI can shoulder that load and free me up to work on the most interesting and complex problems.

ilaksh

That's a completely nonsensical question given how quickly things are evolving. No one has a five-year project timeline.

ilaksh

What model does it use? gpt-4.1? Or can it use o3 sometimes? Or the new Codex model?

aaroninsf

Question you may have a very informed perspective on:

Where are we wrt the agent surveying open issues (say, via JIRA), evaluating which ones it would be most effective at handling, and taking them on, ideally with some check-in for confirmation?

Or, contrariwise, how far are we from having product management agents which track and assign work?

9wzYQbTYsAIc

Check out this idea: https://fairwitness.bot (https://news.ycombinator.com/item?id=44030394).

The entire website was created by Claude Sonnet through Windsurf Cascade, but with the “Fair Witness” prompt embedded in the global rules.

If you regularly guide the LLM to “consult a user experience designer”, “adopt the multiple perspectives of a marketing agency”, etc., it will make rather decent suggestions.

I’ve been having pretty good success with this approach, granted mostly at the scale of starting the process with “build me a small educational website to convey this concept”.

twodave

I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]

In any case, I think this is the best use case for AI in programming: as a force multiplier for the developer. It's in the best interest of both AI and humanity for AI to avoid diminishing the creativity, agency and critical thinking skills of its human operators. AI should be task oriented, but high-level decision-making and planning should always be a human task.

So I think our use of AI for programming should remain heavily human-driven for the long term. Ultimately, its use should involve enriching humans’ capabilities over churning out features for profit, though there are obvious limits to that.

[0] https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

DeepYogurt

> I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]

Similar to Google. MS now requires devs to use AI.

ilaksh

You might want to study the history of technology and how rapidly compute efficiency has increased as well as how quickly the models are improving.

In this context, assuming that humans will still be able to do high level planning anywhere near as well as an AI, say 3-5 years out, is almost ludicrous.

_se

Reality check time for you: people were saying this exact thing 3 years ago. You cannot extrapolate like that.

greatwhitenorth

How much was previously generated by intellisense and other code gen tools before AI? What is the delta?

tmpz22

How much of that is protobuf stubs and other forms of banal autogenerated code?

twodave

Updated my comment to include the link. As much as 30% specifically generated by AI.

ctkhn

That's great, our leadership is heavily pushing AI-generated tests! Lol

Scene_Cast2

I tried doing some vibe coding on a greenfield project (using Gemini 2.5 Pro + Cline). On one hand, it's super impressive: a major productivity booster (even compared to using a non-integrated LLM chat interface).

I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example is that I noticed them breaking abstractions (putting things where they don't belong). Unfortunately, there's not that much self-retrospection on these aspects if you ask about the quality of the code or if there are any better ways of doing it. Of course, if you pick up that something is in the wrong spot and prompt better, they'll pick up on it immediately.

I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user including coding tasks, I was averaging maybe $20 a month.)

candiddevmike

> I also ended up blowing through $15 of LLM tokens in a single evening.

This is a feature, not a bug. LLMs are going to be the next "OMG my AWS bill" phenomenon.

Scene_Cast2

Cline very visibly displays the ongoing cost of the task. Light edits are about 10 cents, and heavy stuff can run a couple of bucks. It's just that the tab accumulates faster than I expect.

PretzelPirate

> Cline very visibly displays the ongoing cost of the task

LLMs are now being positioned as "let them work autonomously in the background" which means no one will be watching the cost in real time.

Perhaps I can set limits on how much money each task is worth, but very few would estimate that properly.
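
The cap itself is at least mechanically simple to bolt on; the hard part, as you say, is picking the number. A minimal sketch, with a made-up limit and made-up per-step costs:

    # Minimal sketch of a per-task spend cap. Everything here is
    # illustrative: a real agent would report actual cost per model
    # call, and the $2.00 limit is an arbitrary example.
    class BudgetExceeded(Exception):
        pass

    class TaskBudget:
        def __init__(self, limit_usd):
            self.limit = limit_usd
            self.spent = 0.0

        def charge(self, cost_usd):
            # Accumulate spend and abort the task once the cap is hit.
            self.spent += cost_usd
            if self.spent > self.limit:
                raise BudgetExceeded(
                    f"spent ${self.spent:.2f} of ${self.limit:.2f} cap")

    budget = TaskBudget(2.00)
    for cost in [0.10, 0.45, 0.90, 0.80]:  # pretend per-step costs
        budget.charge(cost)                # raises on the fourth step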

eterm

> Light edits are about 10 cents

Some well-paid developers will excuse this with, "Well, if it saved me 5 minutes, it's worth an order of magnitude more than 10 cents".

Which is true, however there's a big caveat: Time saved isn't time gained.

You can "Save" 1,000 hours every night, but you don't actuall get those 1,000 hours back.

BeetleB

> I also ended up blowing through $15 of LLM tokens in a single evening.

Consider using Aider, and aggressively managing the context (via /add, /drop and /clear).

https://aider.chat/
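
For what it's worth, a tightly-managed session looks something like this (the /add, /drop and /clear commands are the ones mentioned above; the file names are made up):

    /add src/parser.py tests/test_parser.py   # only the files the task touches
    # ... iterate on the change ...
    /drop tests/test_parser.py                # shed context you no longer need
    /clear                                    # wipe chat history before the next task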

danenania

My tool Plandex[1] allows you to switch between automatic and manual context management. It can be useful to begin a task with automatic context while scoping it out and making the high level plan, then switch to the more 'aider-style' manual context management once the relevant files are clearly established.

1 - https://github.com/plandex-ai/plandex

Also, a bit more on auto vs. manual context management in the docs: https://docs.plandex.ai/core-concepts/context-management

jstummbillig

If you want to use Cline and are at all price sensitive (in these ranges), you have to do manual context management just for that reason. I find that too cumbersome, so I use Windsurf (currently with Gemini 2.5 Pro) instead.

falcor84

> LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt

I wonder if the next phase would be the rise of (AI-driven?) "linters" that check that the implementation matches the architecture definition.
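
Even without AI, a crude version of this already falls out of an import-boundary check. A toy sketch in Python, where the layer map stands in for the "architecture definition" (all names invented for illustration):

    # Toy architecture linter: flag imports that cross layer boundaries.
    # A real tool would read ALLOWED from an architecture definition file.
    import ast, pathlib

    # layer -> layers it is allowed to import from (invented example)
    ALLOWED = {"api": {"services"}, "services": {"db"}, "db": set()}

    def check(root="."):
        for path in pathlib.Path(root).rglob("*.py"):
            layer = path.parts[0] if len(path.parts) > 1 else None
            if layer not in ALLOWED:
                continue
            for node in ast.walk(ast.parse(path.read_text())):
                if isinstance(node, ast.Import):
                    tops = [a.name.split(".")[0] for a in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    tops = [node.module.split(".")[0]]
                else:
                    continue
                for top in tops:
                    if top in ALLOWED and top != layer and top not in ALLOWED[layer]:
                        print(f"{path}: layer '{layer}' may not import '{top}'")

    check()

The AI-driven part would presumably be inferring rules like these from the codebase rather than hand-writing them.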

dontlikeyoueith

And now we've come full circle back to UML-based code generation.

Everything old is new again!

tmpz22

While it's being touted for greenfield projects, I've noticed a lot of failures when it comes to bootstrapping a stack.

For example, it (Gemini 2.5) really struggles with newer ecosystems like FastAPI when wiring together libraries like SQLAlchemy, pytest, python-playwright, etc.

I find more value in bootstrapping myself, and then using it to help with boilerplate once an effective safety harness is in place.

nodja

I wish they optimized things before adding more crap that will slow things down even more. The only thing that's fast with Copilot is the autocomplete; it sometimes takes several minutes to make edits on a 100-line file, regardless of the model I pick (some are faster than others). If these models had a close-to-100% hit rate this would be somewhat fine, but going back and forth with something that takes this long is not productive. It's literally faster to open Claude/ChatGPT in a new tab, paste the question and code there, and paste the result back into VS Code than to use their ask/edit/agent tools.

I cancelled my Copilot subscription last week, and when it expires in two weeks I'll most likely shift to local models for autocomplete/simple stuff.

brushfoot

My experience has mostly been the opposite -- changes to several-hundred-line files usually only take a few seconds.

That said, months ago I did experience the kind of slow agent edit times you mentioned. I don't know where the bottleneck was, but it hasn't come back.

I'm on library WiFi right now, "vibe coding" (as much as I dislike that term) a new tool for my customers using Copilot, and it's snappy.

nodja

Here's a video of what it looks like with Sonnet 3.7.

https://streamable.com/rqlr84

The Claude and Gemini models tend to be the slowest (yes, including Flash). 4o is currently the fastest, but still not great.

BeetleB

Several minutes? Something is seriously wrong. For most models, it takes seconds.

nodja

2m27s for a partial response editing a 178-line file (it failed with an error, which seems to happen a lot with Claude, but that's another issue).

https://streamable.com/rqlr84

muglug

> Copilot excels at low-to-medium complexity tasks

Oh cool!

> in well-tested codebases

Oh ok never mind

lukehoban

As peer commenters have noted, the coding agent can be really good at improving test coverage when needed.

But also as a slightly deeper observation - agentic coding tools really do benefit significantly from good test coverage. Tests are a way to “box in” the agent and allow it to check its work regularly. While they aren’t necessary for these tools to work, they can enable coding agents to accomplish a lot more on your behalf.

(I work on Copilot coding agent)

CSMastermind

In my experience they write a lot of pointless tests that technically increase coverage while not actually adding much more value than a good type system/compiler would.

They also have a tendency to suppress errors instead of fixing them, especially when the right thing to do is throw an error on some edge case.

abraham

Have it write tests for everything and then you've got a well-tested codebase.

danielbln

Caveat emptor: I've seen some LLMs mock the living hell out of everything, to the point of not testing much of anything. Something to be aware of.
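
The degenerate case looks something like this (contrived example): the "unit under test" is itself a mock, so the test can never fail no matter how broken the real code is.

    from unittest.mock import MagicMock

    def test_save_report():
        # The function under test is replaced by a mock, so the real
        # save_report() never runs; both assertions only exercise the mock.
        save_report = MagicMock(return_value=True)
        assert save_report("q3.pdf") is True
        save_report.assert_called_once_with("q3.pdf")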

yen223

I've seen too many human operators do that too. Definitely a problem to watch out for

eikenberry

You forgot the /s

throwaway12361

In my experience it works well even without good testing, at least for greenfield projects. It just works best if there are already tests when creating updates and patches.

azhenley

Looks like their GitHub Copilot Workspace.

https://githubnext.com/projects/copilot-workspace

net01

On another note: https://github.com/github/dmca/pull/17700. GitHub's automated, auto-merged DMCA sync PRs get automated Copilot reviews for every single one.

AMAZING

shwouchk

I played around with it quite a bit. It is both impressive and scary. Most importantly, it tends to indiscriminately use dependencies from random tiny repos, and often enough not the correct ones, for major projects. Buyer beware.

boomskats

My buddy is at GH working on an adjacent project & he hasn't stopped talking about this for the last few days. I think I've been reminded to 'make sure I tune into the keynote on Monday' at least 8 times now.

I gave up trying to watch the stream after the third authentication timeout, but if I'd known it was this I'd maybe have tried a fourth time.

unshavedyak

What specific keynote are they referring to? I'm curious, but thus far my searches have failed

babelfish

MS Build is today

tmpz22

I’m always hesitant to listen to the line coders on projects because they’re getting a heavy dose of the internal hype every day.

I’d love for this to blow past Cursor. Will definitely tune in to see it.

dontlikeyoueith

>I’m always hesitant to listen to the line coders on projects because they’re getting a heavy dose of the internal hype every day.

I'm senior enough that I get to frequently see the gap between what my dev team thinks of our work and what actual customers think.

As a result, I no longer care at all what developers (including myself on my own projects) think about the quality of the thing they've built.

throwaway12361

Word of advice: just go to YouTube and skip the MS registration tax

jerpint

These kinds of patterns allow compute to take much more time than a single chat turn, since the work is asynchronous by nature, which I think is necessary to get to working solutions on harder problems.

lukehoban

Yes. This is a really key part of why Copilot coding agent feels very different to use than Copilot agent mode in VS Code.

In coding agent, we encourage the agent to be very thorough in its work, and to take time to think deeply about the problem. It builds and tests code regularly to ensure it understands the impact of changes as it makes them, and stops and thinks regularly before taking action.

These choices would feel too “slow” in a synchronous IDE-based experience, but feel natural in an “assign to a peer collaborator” UX. We lean into this to provide as rich a problem-solving agentic experience as possible.

(I’m working on Copilot coding agent)

sync

Anthropic just announced the same thing for Claude Code, same day: https://docs.anthropic.com/en/docs/claude-code/github-action...

asadm

In the early days of LLMs, I developed an "agent" using a GitHub Actions + Issues workflow[1], similar to how this works. It was very limited but kinda worked, i.e. you assigned it a bug, it fired an action, did some architect/editing tasks, validated the changes, and finally sent a PR.

Good to see an official way of doing this.

1. https://github.com/asadm/chota
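
The plumbing for that kind of workflow is pleasantly small. A skeleton of the driver step, assuming it runs inside an Action triggered on issue assignment (GITHUB_EVENT_PATH is the standard Actions env var; run_agent is a stand-in for the architect/edit/validate loop, not asadm's actual code):

    # Skeleton of an issue-driven agent step inside a GitHub Action.
    # run_agent() is a placeholder for the actual LLM editing loop.
    import json, os, subprocess

    def run_agent(title, body):
        raise NotImplementedError("architect/edit/validate loop goes here")

    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        issue = json.load(f)["issue"]

    branch = f"agent/issue-{issue['number']}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    run_agent(issue["title"], issue["body"])
    subprocess.run(["git", "commit", "-am", f"Fix #{issue['number']}"], check=True)
    subprocess.run(["git", "push", "origin", branch], check=True)
    # from here, open the PR with `gh pr create` or the REST API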

fvold

The biggest change Copilot has driven for me so far is to have me replace VS Code with VSCodium, to be sure it doesn't sneak any uploads of my code to a third party without my knowledge.

I'm all for new tech getting introduced and made useful, but let's make it all opt in, shall we?

qwertox

Care to explain? Where are they uploading code to?