Hacker News
Claude Code is suddenly everywhere inside Microsoft (theverge.com)
145 points by Anon84 4 hours ago | past | 195 comments

Microsoft really needs to get a better handle on its naming conventions.

There is Microsoft Copilot, which replaced Bing Chat and Cortana, and uses OpenAI’s GPT-4 and GPT-5 models.

There is Github Copilot, the coding autocomplete tool.

There is Microsoft 365 Copilot, which is what they now call Office with built-in GenAI features.

There is also a Copilot CLI that lets you use whatever agent/model backend you want.

Everything is Copilot. Laptops sell with Copilot buttons now.

It is not immediately clear which version of Copilot someone is talking about. 99% of my experience is with the Office one, and it 100% fails to do the thing it was advertised to do two years ago when work initially got the subscription: point it at a SharePoint/OneDrive location with a handful of Excel spreadsheets and PDFs/Word docs, and tell it to make a PowerPoint presentation based on that information.

It cannot do this. It will spit out nonsense. You have to hold its hand and tell it everything to do step by step, to the point that making the PowerPoint presentation yourself is significantly faster, because you don’t have to type out a bunch of prompts and edit its garbage output.

And now it’s clear they aren’t even dogfooding their own LLM products, so why should anyone pay for Copilot?


>Microsoft really needs to get a better handle on its naming conventions

Microsoft cannot and will not ever get better at naming things. It is said the universe will split open and an eldritch beast will consume the stars on the day Microsoft stops using inconsistent and overlapping names for different and conflicting products.

Isn't that right, .NET/dotnet?


I'm "I don't know what Xbox is" years old.

Cries in dapper dapr

Not that I disagree, but this is nothing compared to the ".NET" craze in the early 2000s. Everything had to have ".NET" in its name even if it had absolutely nothing to do with the actual .NET technology.

There was also "Active" before that, but .NET was next level crazy...


You are describing everything Microsoft has done since at least the late 90s.

> There is Github Copilot, the coding autocomplete tool.

No, there is Github Copilot, the AI agent tool that also has autocomplete, and a chat UI.

I understand your point about naming, but it's always helpful to know what the products do.


> No, there is Github Copilot, the AI agent tool that also has autocomplete, and a chat UI.

When it came out, Github Copilot was an autocomplete tool. That's it. That may be what the OP was originally using. That's what I used... 2 years ago. That they change the capabilities but don't change the name, yet change the names of services whose capabilities don't change, further illustrates the OP's point, I would say.


To be fair, Github Copilot has followed the same arc as Cursor, from AI-enhanced editor with smart autocomplete, to more of an IDE that now supports agentic "vibe coding" and "vibe editing" as well.

I do agree that conceptually there is a big difference between an editor, even with smart autocomplete, and an agentic coding tool, as typified by Claude Code and other CLI tools, where there is no editor necessarily involved at all.


That's silly. Gmail is a wildly different product than it was when it launched, but I guess it doesn't count since the name is the same?

Microsoft may or may not have a "problem" with naming, but if you're going to criticize a product, it's always a good starting place to know what you're criticizing.


Gmail is basically the same today as when I signed up for the beta. It’s a mail app.

Gmail is almost identical today as it was when it first launched. It just has fancier JavaScript

GP's point is that it is confusing, so I guess point well made?

Only if the naming confusion kept them from actually bothering to understand what the product is?

The confusion is when I say “I have a terrible time using Copilot, I don’t recommend using it” and someone chimes in about how great their experience with GitHub Copilot is (a completely different product) and how I must be “holding it wrong”, when that is not the same Copilot. Microsoft has like 5 different products all using Copilot in the name, and even people in this very comment section are only saying “Copilot”, so it is hard to know which product they are talking about!

I mean, sure. But aside from the fact that everything in AI gets reduced to a single word ("Gemini", "ChatGPT", "Claude") [1], it's clearly not an excuse for misrepresenting the functionality of the product when you're writing a post broadly claiming that their AI products don't work.

Github Copilot is actually a pretty good tool.

[1] Not just AI. This is true for any major software product line, and why subordinate branding exists.


I specifically mention that my experience is with the Office 365 Copilot and how terrible it is, and when I bring this up in online discussions, people jump out of the woodwork to talk about how great GitHub Copilot is. So thank you for demonstrating the exact experience I have every time I mention Copilot :)

Apparently, so yes.

Seems like there's another option.

...it gets better:

GitHub Copilot is a service; you can buy a subscription here: https://github.com/features/copilot.

GitHub Copilot is available from the website https://github.com/copilot together with services like Spark (not available anywhere else), Spaces, Agents, etc.

GitHub Copilot is a VSCode extension, which you can download at https://marketplace.visualstudio.com/items?itemName=GitHub.c... and use from VSCode.

The new version has native "Claude Code" integration for Anthropic models served via GitHub Copilot.

You can also use your own provider, e.g. a local llama.cpp-based one (if your GitHub Copilot subscription has it enabled / allows it at the enterprise level).

GitHub Copilot CLI is a command-line interface, available for download here: https://github.com/features/copilot/cli.

Copilot for Pull Requests https://githubnext.com/projects/copilot-for-pull-requests

Copilot Next Edit Suggestion https://githubnext.com/projects/copilot-next-edit-suggestion...

Copilot Workspace https://githubnext.com/projects/copilot-workspace/

Copilot for Docs https://githubnext.com/projects/copilot-for-docs/

Copilot Completions CLI https://githubnext.com/projects/copilot-completions-cli/

Copilot Voice https://githubnext.com/projects/copilot-voice/

GitHub Copilot Radar https://githubnext.com/projects/copilot-radar/

Copilot View https://githubnext.com/projects/copilot-view/

Copilot Labs https://githubnext.com/projects/copilot-labs/

This list doesn't include project names without Copilot in them like "Spark" or "Testpilot" https://githubnext.com/projects/testpilot etc.


I'm currently using GitHub Copilot via Zed and tbh I have no idea which of these this relates to. Perhaps a combination of

> GitHub Copilot is a service

and maybe, the api behind

> GitHub Copilot is VSCode extension

???

What an absolute mess.


> Laptops sell with Copilot buttons now.

Is it the context menu key? Or did they do another Ctrl+Alt+Shift+Win+L thing?


This is funny because everyone’s AI strategy should have been

“What do we actually need to be productive?”

Which is how Anthropic pulled ahead of Microsoft, which prioritized

checks notes

Taking screenshots of every Windows user’s desktop every few seconds. For productivity.


You were robbed last night. No way Jelly Roll should have won.

Recall actually sounds like it could be useful but there's a snowball's chance in hell that I would trust Microsoft to not spy on me.

On the contrary, you could trust it 100% to spy on you. That's the whole reason that functionality exists.

Anthropic has a model. Microsoft doesn't.

Microsoft can use OpenAI models but it's not the model that's the problem, it's the application of them. Anthropic simply knows how to execute better.

They should just acquire one of the many agentic coding harnesses. Something like opencode works just as well as claude-code and has only been around half as long.

A large language model, or a business model?

Microsoft has a model nearly as old as the company.

Attempt to build a product... Fail.

Buy someone else's product/steal someone else's product... Succeed.


For one reason or another everyone seems to be sleeping on Gemini. I have been exclusively using Gemini 3 Flash to code these days and it stands up right alongside Opus and others while having a much smaller, faster and cheaper footprint. Combine it with Antigravity and you're basically using a cheat code.

For all the hype I see about Gemini, we integrated it with our product (an AI agent) and it consistently performs worse[0] than Claude Sonnet, Opus, and ChatGPT 5.2

[0] based on user Thumbs up/Thumbs down voting


Oddly enough, as impressive as Gemini 3 is, I find myself using it infrequently. The thing Gemini 2.5 had over the other models was dominance in long context, but GPT5.2-codex-max and Opus 4.5 Thinking are decent at long context now, and collectively they're better at all the use cases I care about.

It's the opposite experience for me. Gemini mostly produces made up and outdated stuff.

I think, counter to what I (and many others) assumed, that for long-form agentic coding tasks, models are not as easily hot-swappable as I thought.

I have developed decent intuition on what kinds of problems Codex, Claude, Cursor(& sub-variants), Composer etc. will or will not be able to do well across different axes of speed, correctness, architectural taste, ...

If I had to reflect on why I still don't use Gemini, it's because they were late to the party and I would now have to be intentional about spending time learning yet another set of intuitions about those models.


Maybe it's the types of projects I work on but Gemini is basically unusable to me. Settled on Claude Code for actual work and Codex for checking Claude's work. If I try to mix in Gemini it will hallucinate issues that do not exist in code at very high rate. Claude and Codex are way more accurate at finding issues that actually exist.

Yeah I don't understand why everyone seems to have forgotten about the Gemini options. Antigravity, Jules, and Gemini CLI are as good as the alternatives but are way more cost effective. I want for nothing with my $20/mo Google AI plan.

Yeah I'm on the $20/mo Google plan and have been rate limited maybe twice in 2 months. Tried the equivalent Claude plan for a similar workload and lasted maybe 40 minutes before it asked me to upgrade to Max to continue.

I've never, ever had a good experience with Gemini (3 Pro). It's been embarrassingly bad every time I've tried it, and I've tried it lots of times. It overcomplicates almost everything, hallucinates with impressive frequency, and needs to be repeatedly nudged to get the task fully completed. I have no reason to continue attempting to use it.

For me it just depends on the project. Sometimes one or the other performs better. If I am digging into something tough and I think it's hallucinating or misunderstanding, I will typically try another model.

I think Gemini is an excellent model, it's just not a particularly great agent. One of the reasons is that its code output is often structured in a way that looks like it's answering a question, rather than generating production code. It leaves comments everywhere, which are often numbered (which not only is annoying, but also only makes sense if the numbering starts within the frame of reference of the "question" it's "answering").

It's also just not as good at being self-directed and doing all of the rest of the agent-like behaviors we expect, i.e. breaking down into todolists, determining the appropriate scope of work to accomplish, proper tool calling, etc.


Yeah, you may have nailed it. Gemini is a good model, but in the Gemini CLI, with a prompt like "I'd like to add <feature x> support. What are my options? Don't write any code yet", it will skip right past telling me my options and go ahead and implement whatever it feels like. Afterward it will print out a list of possible approaches and then tell you why it did the one it did.

Codex is the best at following instructions IME. Claude is pretty good too but is a little more "creative" than codex at trying to re-interpret my prompt to get at what I "probably" meant rather than what I actually said.


Eh, it's not near Opus at all, closer to Sonnet. It is nice though with Antigravity because it's free versus being paid in other IDEs like Cursor.

It's ok, but it too frequently edits WAY more than it needs to in order to accomplish the task at hand.

GPT-5.2 sometimes does this too. Opus-4.5 is the best at understanding what you actually want, though it is ofc not perfect.


Crazy to think that Github Copilot was the first mainstream AI coding tool. It had all the hype and momentum in the world, and Microsoft decided to do...absolutely nothing with it.

I use Copilot in VSCode at work, and it's pretty effective. You can choose from quite a few models, and it has the agentic editing you'd expect from an IDE based AI development tool. I don't know if it does things like browser integration because I don't do frontend work. It's definitely improved over the last 6 months.

There's also all the other Copilot branded stuff which has varying use. The web based chat is OK, but I'm not sure which model powers it. Whatever it is it can be very verbose and doesn't handle images very well. The Office stuff seems to be completely useless so far.


It was kinda cool for a demo, but Claude Code really was the first game changer in AI coding.

Microsoft is still Microsoft.

Did it have all the hype and momentum, though? It was pretty widely viewed as a low- to negative-value addition, and honestly when I see someone on here talking about how useless AI is for coding, I assume they were tainted by Github copilot and never bothered updating their priors.

just my experience of course, but it had a lot of hype. It got into a lot of people's workflow and really had a strong first mover advantage. The fact that they supported neovim as a first-class editor surely helped a ton. But then they released their next set of features without neovim support and only (IIRC) support VS Code. That took a lot of wind out of the sails. Then combined with them for some reason being on older models (or with thinking turned down or whatever), the results got less and less useful. If Co-pilot had made their agent stuff work with neovim and with a CLI, I think they'd be the clear leader.

It really says something that MS/Github has been trying to shovel Copilot down our throats for years, and Anthropic just builds a tool in a short period of time and it takes off.

It's interesting to think back, what did Copilot do wrong? Why didn't it become Claude Code?

It seems for one thing its ambition might have been too small. Second, it was tightly coupled to VS Code / Github. Third, a lot of dumb big org Microsoft politics / stakeholders overly focused on enterprise over developers? But what else?


Because Claude Code does it full-stack, you know: the model and the implementation, so the integration is seamless.

Meanwhile MS and GitHub are waiting for whatever breadcrumbs ChatGPT leaves behind.


So is GitHub copilot. They run their own models.

Well yeah, it is just better. At my work we have a Copilot license, but we use it to access the Claude Sonnet/Opus models in OpenCode.


Can't speak for copilot but Gemini cli is unbelievably bad compared to Gemini web.

CC has some magic secret sauce and I'm not sure what it is.

My company pays for both too, I keep coming back to Claude all-round


Claude Code is one of a very few AI tools where I genuinely think the people at the company who build it use it all the time.

They absolutely do; the CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take the code blindly; they would read it and refine it using Claude, while still saving hundreds of man-hours.


Watch the interviews with Boris. He absolutely uses it to build CC.


Agreed. I was an early adopter of Claude Code, and at work we only had Copilot. But the Copilot CLI isn't too bad now: you've got slash commands, AGENTS.md and skills.md files for controlling your context, and access to Sonnet & Opus 4.5.

Maybe Microsoft is just using it internally, to finish copying the rest of the features from Claude Code.

Much like the article states, I use Claude Code beyond just its coding capabilities....


The Copilot IntelliJ integration on the other hand is atrocious: https://plugins.jetbrains.com/plugin/17718-github-copilot--y...

I'm amazed that a company that's supposedly one of the big AI stocks seemingly won't spare a single QA position for a major development tool. It really validates Claude's CLI-first approach.


It's sluggish in GitHub Codespaces, as it has so many animations.

Kinda reminds of the time Microsoft used git internally but was pushing Team Foundation Server.


GitHub Copilot with Opus 4.5 as the model is great. I have not tried Claude Code, so maybe I don’t know what I’m missing.

I installed Claude Code yesterday after the quality of VSCode Copilot Chat kept getting worse with every release. I can't tell yet whether Claude Code is better, but VSCode Copilot Chat has become completely unusable. It would start making mistakes, which would double the requests to Claude Opus 4.5, which as of January is the only model that works at all for me. I spent $400 in tokens in January.

I'll know better in a week. Hopefully I can get better results with the $200 a month plan.


Not my experience at all. Copilot launched as a useless code complete, is now basically the same as anything. It's all converging. The features are converging, but the features barely matter anyway when Opus is just doing all the heavy lifting anyway. It just 1-shots half the stuff. Copilot's payment model where you pay by the prompt not by the token is highly abusable, no way this lasts.

I would agree. I've been using VSCode Copilot for the past (nearly) year, and it has gotten significantly better. I also use CC and Antigravity privately, and got access to Cursor (on top of VSCode) at work a month ago.

CC is, imo, the best. The rest are largely on par with each other. The benefit of VSCode and Antigravity is that they have the most generous limits. I ran through Cursor's $20 limits in 3 days, whereas the same-tier VSCode subscription can last me 2+ weeks.


Claude Code’s subscription pricing is pretty ridiculously subsidized compared to their API pricing if you manage to use anywhere close to the quota. Like 10x I think. Crazy value if you were using $400 in tokens.

I just upgraded to the $100 a month 5x plan 5 minutes ago.

Starting in October with VSCode Copilot Chat it was $150, $200, $300, $400 per month with the same usage. I thought they were just charging more per request without warning. The last couple of weeks it seemed that VSCode Copilot was just fucking up, making useless calls.

Perhaps it wasn't a dark, malicious pattern but rather incompetence that was driving up the price.


What were you spending on Copilot?

So Copilot is for customers, Claude is for getting actual work done?

Copilot isn't a model, you can use Claude via Copilot.

Neither is Claude. The title explicitly mentions Claude "Code".

Both use the same models. But Claude Code has something special that Microsoft doesn't have in Github Copilot CLI.

I don’t think that’s what they were insinuating. Claude Code internally, Copilot for customers.

Copilot is anything you want it to be inside Microsoft. Heck even Office is Copilot nowadays.

Seems to be their "Watson."

Copilot in the streets, Claude in the sheets.

And probably running on their macbooks...

True story: a lot of the Microsoft engineers I interact with actually do use Apple hardware. Admittedly, I only interact with the devs in the .NET (and related technologies) departments.

Specifically WHY they use Apple hardware is something I can only speculate on. Presumably it's easier to run Windows on a Mac than the other way around, and they would likely need to do that, as .NET and its related technologies have been cross-platform since 2016. But that's a complete guess on my part.

Am *NOT* a Microsoft employee, just an MVP for Developer Technologies.


Probably because "Windows Modern Standby" makes laptops unusable by turning them on in your backpack and cooking them.

https://youtu.be/OHKKcd3sx2c


I still don't understand how Microsoft lets standby remain broken. I can never leave the PC in my bedroom in standby because it will randomly wake up and blast the coolers.

Probably because the quality of PC BIOS/firmware is generally abysmal and getting vendors to follow spec is like herding cats.

Sadly even if Microsoft had a few lineups of laptops that they'd use internally and recommend, companies would still get the shitty ones, if it saves them $10 per device.

Haa, amazing. This happened to me with TWO Dell XPSes before I finally switched over to Mac.

To be fair, this was also my experience with Macbooks. This "smart sleep" from modern OS manufacturers is the dumbest shit ever, please just give me a hibernate option.

I used to have trouble with sleep on M-series macs on occasion, but after turning off wake on LAN they’ve all slept exactly as expected for the past several years.

You're an MVP? Minimum viable product? Most valuable player?

These days it could also be Most vaunted prompt

One of my friends is a program manager at MS. I think he requested a MacBook but was denied and given a Surface instead.

He didn't dislike it, but got himself a MacBook nonetheless at his own cost.


100% true story - until a couple of months ago, the best place to talk directly to Microsoft senior devs was the macadmins Slack. Loads of them there. They would regularly post updates, talk to people about issues, discuss solutions, even happily engage in DMs. All posting under their real names.

The accounts have now all gone quiet, guess they got told to quit it.


What are we discussing here?

The tools or the models? It's getting absurdly confusing.

"Claude Code" is an interface to Claude, Cursor is an IDE (I think?! VS Code fork?), GitHub Copilot is a CLI or VS Code plugin to use with ... Claude, or GPT models, or ...

If they are using "Claude Code" that means they are using Anthropic's models - which is interesting given their huge investment in OpenAI.

But this is getting silly. People think "Copilot" is "Microsoft's AI", which it isn't. They have OpenAI on Azure. Does Microsoft even have a fine-tuned GPT model, or are they just prompting an OpenAI model for their Windows built-ins?

When you say you use Copilot with Claude Opus, people get confused. But this is what I do every day at work.

shrug


That isn't going well for Satya.

Indeed it's not: https://www.windowslatest.com/2026/01/09/is-microsoft-losing... And: https://www.perspectives.plus/p/microsoft-365-copilot-commer...

TL;DR: Copilot has 1% market share among web chatbots, and 1.85% of paid M365 users bought a subscription to it.

As much as I think AI is overrated already, Copilot is pretty much the worst performing one out there from the big tech companies. Despite all the Copilot buttons in office, windows, on keyboards and even on the physical front of computers now.

We have to use it at work, but it just feels like, if they spent half the effort they spend on marketing on actually trying to make it do its job, people might actually want to use it.

Half the time it's not even doing anything. "Please try again later" or the standard error message Microsoft uses for every possible error now: "Something went wrong". Another pet peeve of mine, those useless error messages.


Yeah, my problem with the way it has been pushed is that it doesn't make sense at all.

Improve the workflows that would benefit from "AI" algorithms: image recognition, voice control, handwriting, code completion, and so on.

No need to put buttons to chat windows all over the place.


A 2-week-old post that feels like part of the other weirdly promotional "Claude is everywhere right now" pieces that were going around. Someone called it an advertising carpet-bombing run.

A.I. Tool Is Going Viral. Five Ways People Are Using It

https://www.nytimes.com/2026/01/23/technology/claude-code.ht...

Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away

https://www.wsj.com/tech/ai/anthropic-claude-code-ai-7a46460...


We can certainly see it: every Windows update requires flipping a coin now.

“Microsoft told me last year that 91 percent of its engineering teams use GitHub Copilot”

Well, that might explain why all their products are unusable lately.


Microsoft have a goal that states they want to get to "1 engineer, 1 month, 1 million lines of code." You can't do that if you write the code yourself. That means they'll always be chasing the best model. Right now, that's Opus 4.5.

> "Microsoft have a goal that states they want to get to "1 engineer, 1 month, 1 million lines of code.""

No. One researcher at Microsoft made a personal LinkedIn post saying his team was using that as their 'North Star' for porting and transpiling existing C and C++ code, not writing new code. When the internet hallucinated that he meant Windows and new code, and started copypasting this as "Microsoft's goal", the post was edited and Microsoft said it isn't the company's goal.


That's still writing new code. Also, it's kind of an extremely bad idea to do that, because how are you going to test it? If you have to rewrite anything (hint: you probably don't), it's best to do it incrementally over time because of the QA and stakeholder-alignment overhead. You cannot push things into production unless they work as users expect and do exactly what stakeholders expect as well.

If it is Windows, then you and I are going to test it :)

No no, you're talking common sense and logic. You can't think like that. You have to think "How do I rush out as much code as possible?" After all, this is MS we're talking about, and Windows 11 is totally the shining example of amazing and completely stable code. /s

Porting legacy code is definitely one of its strengths. It can even... do wilder things if you're creative enough.

It is kind of funny that throughout my career, there has always been pretty much a consensus that lines of code are a bad metric, but now with all the AI hype, suddenly everybody is again like “Look at all the lines of code it writes!!”

I use LLMs all day every day, but measuring someone or something by the number of lines of code produced is still incredibly stupid, in my opinion.


I believe the "look at all the lines of code" argument for LLMs is not a way to showcase intelligence, but more a way to showcase time saved. Under the assumption that the output is a correct solution, it's a way to say "look at all the code I would have had to write; it saved so much time".

The line of code that saves the most time is the one you don't write.

Reason went out of fashion like 50 years ago, and it was never really in vogue.

Microsoft never got that memo. They still measure LoC because it’s all MBAs.

Fuck, is there a way to have that degree and not be clueless and toxic to your colleagues and users?

It all comes from "if you can't measure it you can't improve it". The job of management is to improve things, and that means they need to measure it and in turn look for measures. When working on an assembly line there are lots of things to measure and improve, and improving many of those things have shown great value.

They want to expand that value into engineering and so are looking for something they can measure. I haven't seen anyone answer what can be measured to make a useful improvement though. I have a good "feeling" that some people I work with are better than others, but most are not so bad that we should fire them - but I don't know how to put that into something objective.


Yes, the problem of accurately measuring software "productivity" has stymied the entire industry for decades, but people keep trying. It's conceivable that you might be able to get some sort of more-usable metric out of some systematized AI analysis of code changes, which would be pretty ironic.

All evidence continues to point towards NO.

They seem better at working in finance and managing money.

Most models of productivity look like factories with inputs, outputs, and processes. This is just not how engineering or craftsmanship happen.


No man, it's in the title, master bullshit artist

If so, it hasn't always been that way. Steve Ballmer on IBM and KLoC's: https://www.youtube.com/watch?v=kHI7RTKhlz0

(I think it is from "Triumph of the Nerds" (1996), but I can't find the time code)


> measuring someone or something by the number of lines of code produced is still incredibly stupid, in my opinion.

Totally agree. I see LOC as a liability metric. It amazes me that so many other people see it as an asset metric.


I think the charitable way to read the quote is that 1M LOC are to be converted, not written.

Yeah. I honestly feel 1m LOC is enough to recreate a fully featured complete modern computing environment if one goes about it sensibly.

it's still a bad metric and OP is also just being loose by repeating some marketing / LinkedIn post by a person who uses bad metrics about an overhyped subject

Ironically, AI may help get past that. In order to measure "value chunks" or some other metric where LoC is flexibly multiplied by some factor of feature accomplishment, quality, and/or architectural importance, an opinion of the section in question is needed, and an overseer AI could maybe do that.

https://devblogs.microsoft.com/engineering-at-microsoft/welc...

"Microsoft has over 100,000 software engineers working on software projects of all sizes."

So that would mean 100 000 000 000 (100 billion) lines of code per month. Frightening.
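
For what it's worth, the multiplication checks out (engineer count and LOC target taken from the comments above):

```python
# Back-of-the-envelope check of the "frightening" figure.
engineers = 100_000                  # "over 100,000 software engineers" (devblogs post)
loc_per_engineer_per_month = 1_000_000  # the "1 engineer, 1 month, 1M LOC" North Star

total_loc_per_month = engineers * loc_per_engineer_per_month
print(f"{total_loc_per_month:,} lines of code per month")  # 100,000,000,000
```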


With those kinds of numbers you don’t need logic anymore, just a lookup table with all possible states of the system.

Absurd. The Linux kernel is about 30 million lines, Postgres is about 2 million, and Windows is assumed to be about 50 million.

No, no. 100 trillion lines of code per day is great! The only thing better would be 200 trillion ;)

CEO: I want big numbers of things. Big numbers = success.

Maybe it means "LOCs changed"?

Mutate things so fast that cancer looks stable by comparison.

Copilot, add a space to every line of code in this repository and commit, please.
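
For anyone tempted, the gag is roughly a two-liner, assuming GNU sed and git on PATH (the demo repo and file contents are made up for illustration; please don't actually do this):

```shell
# Set up a throwaway demo repo so the gag is reproducible.
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q .
printf 'a\nb\n' > f.txt
git add . && git -c user.email=hn@example.com -c user.name=hn commit -qm init

# The gag: append a trailing space to every line of every tracked file,
# then commit. Diff stats now report 100% of lines "changed".
git ls-files -z | xargs -0 sed -i 's/$/ /'
git -c user.email=hn@example.com -c user.name=hn commit -qam "add a space to every line"
```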

One of the many reasons why it's such a bad practice (overly verbose solutions is another one, of course).


So the recent surge in demand for storage is because we have to store all that code somewhere?

More likely those 100k engineers would shrink to 10k.

That's still 10 billion lines of code per month, if that insane metric were a real goal (it's not).

That’s 200 Windows’ worth of code every month.


Totally agreed. The numbers are silly. My only point is that you don't need 100k engineers if you're letting Claude dump all that code into production.

Guess Windows 12 is gonna be a bit on the bloated side, Huh?

Maybe they can use 5-10 LoC to move the classic window shell button so it's not on top of the widgets button.

I used to work at a place that had the famous Antoine de Saint-Exupéry quote painted near the elevators where everyone would see it when they arrived for work:

  Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

I miss those days.

Original French: "Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher".

"Il semble" sure gives the quote a different tone to me.


Cool - I was thinking it would be good for them to implode as a company due to all the extra harmful stuff they are doing with Windows recently.

Generating billions of lines of code that are unmaintainable and buggy should easily achieve that. ;-)


Wow, such bad practice. Using lines of code as a performance metric was shown to be a really bad idea decades ago. For a software company to do this now...

Looks like the guy who posted that updated his post to say he was just talking about a research project he is working on.

Which is a bald-faced lie written in response to a PR disaster. The original claims were not ambiguous:

> My goal is to eliminate every line of C and C++ from Microsoft by 2030. Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases. Our North Star is “1 engineer, 1 month, 1 million lines of code”.

Obviously, "every line of C and C++ from Microsoft" is not contained within a single research project, nor are "Microsoft's largest codebases".


The original claims were not ambiguous: it's "My" goal, not "Microsoft's" goal.

The fact that it's a "PR disaster" for a researcher to have an ambitious project at one of the biggest tech companies on the planet, or to talk up their team on LinkedIn, is unbelievably ridiculous.


One supposes, when a highly senior employee publicly talks about project goals in recruitment material, that they are not fancifully daydreaming about something that can never happen but are in fact actually talking about the work they're doing that justifies their ~$1,000,000/yr compensation in the eyes of their employer.

Talking about rewriting Windows at a rate of 1 million lines of code per engineer per month with LLMs is absolutely going to garner negative publicity, no matter how much you spin it with words like "ambitious" (do you work in PR? it sounds like it's your calling).


You suppose that there are no highly-paid researchers on the planet working on AGI? Because there are, and that's less proven than "porting one codebase to another language" is. What about Quantum Computers, what about power-producing nuclear fusion? Both less proven than porting code. What about all other blue-sky research labs?

Why would you continue supposing such a thing when both the employee, and the employer, have said that your suppositions are wrong?


Sure, there are plenty of researchers working on fanciful daydreams. They pursue those goals at the behest of their employer. You attempted to make a distinction between the employer's and the employee's goals, as though a Distinguished Engineer at Microsoft were just playing around on a whim, doing hobby projects for fun. If Microsoft is paying him $1m annually to work on this, plus giving him a team to pursue the goal of rewriting Windows, it is not inaccurate to state that Microsoft's goal is to completely rewrite Windows with LLMs, and they will earn negative publicity for making that fact public. The project will likely fail given how ridiculous it is, but it is still a goal they are funding.

The authentic quote “1 engineer, 1 month, 1 million lines of code” as some kind of goal that makes sense, even just for porting/rewriting, is embarrassing enough from an OS vendor.

As @mrbungie says on this thread: "They took the stupidest metric ever and made a moronic target out of it"


I mean 100% that was his goal. But that was one guy without the power to set company wide goals talking on LinkedIn.

The fact that there are distinguished engineers at MS who think that is a reasonable goal is frightening though.


Because as we all know, lines of code == quality of code.

I mean, if 1% out of 8 billion is "top" and that applies to lines of code too, then ... more code contains more quality ... by their logic, I guess ...

What if the % declines proportionally (or worse) to the growth in code.

Do you have a source for that?

This has to be the dumbest thing I’ve heard from microslop this morning. It’s like they’ve forgotten how to be a real software company.

I've not heard that goal before. If true, it makes me sad to hear that once again, people confuse "More LOC == More Customer Value == More Profit". Sigh.

I've written a C recompiler in an attempt to build homomorphic encryption. It doesn't work (it's not correct), but it can translate 5 lines of working code into 100,000 lines of almost-working code.

Any MBAs want to buy? For the right price I could even fix it ...


Is 1 million bugs stated implicitly or explicitly?

We’re back to measuring productivity by lines of code are we? Because that always goes well.

Yay another stupid metric to game!

This will lead to so much enshittification.


Microsoft went from somewhat good in Windows 7 to absolute dog shit in approximately 10 years.

So with this level of productivity Windows could completely degrade itself and collapse in one week instead of 15 years.


They took the stupidest metric ever and made a moronic target out of it.

That’s what MBAs do

Wasn’t this one single researcher?

> “My goal is to eliminate every line of C and C++ from Microsoft by 2030,” Microsoft Distinguished Engineer Galen Hunt writes in a post on LinkedIn. “Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases.

they're fucked


Eliminate C/C++ in favor of what? Perhaps the plan is to use AI to write plain assembler? Why stop there, maybe let's do prompt in - machine-code out?


If I remember correctly, Rust.

Yeah. It's using AI agents to rewrite C/C++ to Rust. https://x.com/gounares/status/2003543050698809544

Why are rust people always insane?

I try GitHub Copilot every once in a while, and just last month it still managed to produce diffs with unbalanced curly braces, or tried to insert (what should be) a top-level function into the middle of another function and screw up everything. This wasn’t on a free model like GPT 4.1 or 5-mini, IIRC it was 5.2 Codex. What the actual fuck? Only explanation I can come up with is that their pay-per-request model made GHC really stingy with using tokens for context, even when you explicitly ask it to read certain files it ends up grepping and adding a couple lines.

You're not using the good models and then blaming the tool? Just use the Claude models.

Copilot's main problem seems to be that people don't know how to use it. They need to delete all their plugins except the VS Code and CLI ones, and disable all models except the Anthropic ones.

The Claude Code reputation diff is greatly exaggerated beyond that.


What, 5.2 Codex isn’t a good model? Claude 4.5 and Gemini 3 Pro with Copilot aren’t any better. I don’t have a big enough sample of Opus 4.5 usage with Copilot to say with confidence how it fares, since they charge 3x for Opus 4.5 compared to everything else.

If Copilot were uniquely stupid with 5.2 Codex, then they should disable that model instead of blaming the user (I know they aren’t; you are). But that’s not the case: it’s noticeably worse with everything, compared to both Cursor and Claude Code.


so what's the point of the multi-billion-dollar investment in ChatGPT lmao nadella

32 comments and no mention of codex or windsurf or cursor.

Have people tried Antigravity?

Explains why Windows updates have been more broken than usual lately.

But I guess having my computer randomly stop working, because a billion-dollar corporation needs to save money by using a shitty text-generation algorithm to write code instead of hiring competent programmers, is just the new normal now.


Do you have "Get the latest updates as soon as they're available" enabled? This automatically installs preview releases, so you may unwittingly be doing QA for Microsoft.

I switched to Ubuntu last week for my desktop. First time in my 25+ year career I’ve felt like Microsoft was wasting my time more than administering a Linux desktop would take. The slop effect is real.

You won't regret it. I have been using Debian for the last 25 years on and off, and for the last 8 years non-stop. I have no complaints.

Unfortunately it'll take time for certain companies to release their applications on Linux distros. So right now I manage with WSL2 + Win 11.

You might want to change to Debian or some other, more radical distro.

https://ubuntu.com/ai


I am not getting what that linked URL is supposed to mean. It is a very decent business page where Ubuntu is selling consulting for "your" projects and explaining why Ubuntu is great for developing AI systems.

And eventually on Ubuntu itself, who knows.

The Linux kernel will eventually be permeated with AI-generated code as well. It will just take longer to see and feel the effects.

I'm sure there are a bunch of "Rust is better" people spending all their tokens on rewriting the Linux kernel as we speak.

Your argument is in bad faith because you are using false equivalence bias.

I wasn't making an argument. It was a prediction that all major software (including the major Linux distros) will eventually be majority (>50%) AI-generated. Software that is 100% human-generated will be like getting a hand-knitted sweater at a farmers market: available, but expensive and only produced at very small scale.

On what reasoning do you make this prediction? Just because corporations are mandating their employees to use AI right now does not mean it will continue.

Any new software developers entering the field from this point on will have to know how to use AI code-gen tools, and will be expected to use them, to get employment. Moving forward, eventually all developers will use these tools routinely. There will come a point where no one left working has ever coded anything complex from scratch without AI tools. Therefore, all* code will involve AI code-gen, as all* developers will be using it.

* all means 'nearly all', as of course there will be exceptions.


I have found that Claude Code is better in every way I've used it. I like to use LLMs just as an advanced refactoring tool, especially where plain string search isn't enough. Anyway, my first experience with Copilot was it plainly lying that it had deleted files I asked it to, then insisting the file no longer existed (it did).

The difference between the two is stark.


"my turds now contain 15% candyfloss!"


