
Nvidia is interested in commoditizing their complements. It's a business strategy to decrease the power of OpenAI (for instance).

Nvidia dreams of a world where there are lots of "open" alternatives to OpenAI, just as there are lots of open game engines and lots of open software in general. All of them buying closed Nvidia chips.





But AI depends on a small number of tensor operators, primitives which can be relatively easily implemented by competitors, so compute is very close to being a commodity when it comes to AI.
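
As a rough illustration of how few primitives are involved, here's a minimal sketch of a transformer-style block in plain NumPy (the shapes and weights are made up); nearly everything reduces to matmuls, a softmax, and a couple of elementwise ops:

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax over the given axis.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(x, Wq, Wk, Wv):
        # Three matmuls for the projections, one for the scores, one for the output.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        return scores @ v

    def mlp(x, W1, W2):
        # Matmul, elementwise ReLU, matmul.
        return np.maximum(x @ W1, 0) @ W2

Almost all of the runtime goes into those matrix multiplications, which is exactly the part every vendor's hardware is built to accelerate.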

A company like Cerebras (founded in 2015) proves that this is true.

The moat is not in computer architecture. I'd say the real moat is in semiconductor fabrication.


which can be relatively easily implemented by competitors

Oh my.

Please people, try to think back to your engineering classes. Remember the project where you worked with a group to design a processor? I do. Worst semester of my life. (Screw whoever even came up with that damn real analysis math class.) And here's the kicker, I know I'll be dating myself here, but all I had to do for my part was tape it out. Still sucked.

Not sure I'd call the necessary processor design work here "relatively easy"? Even for highly experienced, extremely bright people, this is not "relatively easy".

Far easier to make the software a commodity. Believe me.


To be totally honest, the only thing I can distill from this is that perhaps you should have picked an education in CS instead of EE.

I mean, this is like saying that a class on building compilers sucked. Still, companies make compilers, and they aren't all >$1B companies. In fact, hobbyists make compilers.


I did study CS as well.

That you are comparing designing and writing a compiler with designing and manufacturing a neural processor is only testimony to the futility of my attempt to impress on everyone the difference. So I'll take my leave.

You have a good day sir or ma'am.


But I'm actually saying that manufacturing is the hard part ...

Have you ever tried to run a model from huggingface on an AMD GPU?

Semiconductor fabrication is a high risk business.

Nvidia invested heavily in CUDA and out-competed AMD (and Intel). They are working hard to keep their edge in developer mindshare, while chasing hardware profits at the same time.


>> Have you ever tried to run a model from huggingface on an AMD GPU?

Yes. I'd never touched any of that stuff and then one day decided to give it a shot. Some how-to told me how to run something on Linux which had a choice of a few different LLMs. I picked one of the small ones (under 10B) and had it running on my AMD APU inside of 15 minutes. The weights were IIRC downloaded from huggingface. The wrapper was not. Anyway, what's the problem?
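
For the curious, one way to do it straight from huggingface is roughly this, a minimal sketch assuming a ROCm-enabled PyTorch build (ROCm exposes the AMD GPU through the torch.cuda API); the model id is just a placeholder:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    assert torch.cuda.is_available()  # on ROCm builds this reports the AMD GPU

    model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder; pick any small model
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")

    prompt = tok("Why is the sky blue?", return_tensors="pt").to("cuda")
    out = model.generate(**prompt, max_new_tokens=100)
    print(tok.decode(out[0], skip_special_tokens=True))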

BTW that convinced me that small LLMs are basically worthless. IMA need to go bigger next time. BTW my "old" 5700G has 64GB of RAM, next build I'll go at least double that.


> Have you ever tried to run a model from huggingface on an AMD GPU?

No, but seeing how easily they run on Apple hardware, I don't understand your point, to be honest.


> The moat is not in computer architecture. I'd say the real moat is in semiconductor fabrication.

In the longer run, anything that is very capital intensive, affects entire industries, and can be improved with large amounts of simulation will not be a moat for long. That's because you can increasingly use AI to explore the design space.
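
As a toy illustration of what "exploring the design space" means, here's a minimal sketch of random search against a stand-in simulator; the design parameters and cost model are entirely made up:

    import random

    def simulate(design):
        # Stand-in for an expensive simulator; toy performance-per-cost model.
        cores, cache_mb, clock_ghz = design
        perf = cores * clock_ghz * (1 + 0.05 * cache_mb)
        cost = cores ** 1.3 + 0.8 * cache_mb + clock_ghz ** 2
        return perf / cost

    def random_design():
        return (random.randint(4, 256),              # core count
                random.choice([8, 16, 32, 64]),      # MB of cache
                round(random.uniform(1.0, 4.0), 2))  # clock in GHz

    best = max((random_design() for _ in range(10_000)), key=simulate)
    print("best design found:", best)

Swap the random search for a learned surrogate or an RL policy and you get the kind of automated exploration I'm pointing at.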

Compute is not a commodity yet, but it may be in a few years. Semiconductor fab will take longer, but I wouldn't be surprised to see parts of the fabrication process democratized in a few years.

Physical commodities like copper or oil can't be improved with simulation so they don't fall under this idea.


It's not like you can just stamp out a giant grid of FLOPS and go brrr. Getting good utilization is difficult, and the closer you hew to Nvidia's tradeoffs, the worse you come out against a giant working with 10,000x your volume and decades of experience. Nvidia's proprietary software is deeply embedded in everyone's stacks. The models co-evolve with the hardware, so they are designed with its capabilities in mind.

It's like trying to take on UPS with a new, not-quite-drop-in logistics network. Theoretically it's just a bunch of empty tubs shuffling around, but it's not so easy in practice. You have to be multiples better than the incumbent to be in contention. Keep in mind that, among the startups, we don't really know who is setting money on fire running models in unprofitable configurations for the sake of revenue.


I thought they assumed AI hardware would become commoditized sooner rather than later, and their play was to sell complete vertically integrated AI solution stacks, mainly a software and services play?

Why is OpenAI a threat to Nvidia? They are still highly dependent on those GPUs

Two concepts

- Monopsony is the inverse of Monopoly -- one buyer. Walmart is often a monopsony for suppliers (exclusive or near exclusive).

- Desire for vertical integration and value extraction, related to #1 but with some additional nuances


Who is the one buyer in the Nvidia scenario? How would that benefit Nvidia?

It would hurt Nvidia, not benefit it; that's why Nvidia spends a lot of effort preventing that from happening, and it's not the case currently.

They really need to avoid the situation in the console market, where having only three customers means almost no margin on console chips.


Prior to the A.I. boom, Nvidia had a much, much more diverse customer base in terms of revenue mix. According to their 2015 annual report[1], their revenues were spread across the following revenue segments: gaming, automotive, enterprise, HPC and cloud, and PC and mobile OEMs. Gaming was the largest segment and contributed less than 50% of revenues. At this time, with a diverse customer base, their gross margins were 55.5%. (This is a fantastic gross margin in any industry outside software).

In 2025 (fiscal year), Nvidia only reported two revenue segments: compute and networking ($116B revenue) and graphics ($14.3B revenue). Within the compute and networking segment, three customers represented 34% of all revenue. Nvidia's gross margins for fiscal 2025 were 75% [2].

In other words, this hypothesis doesn't fit at all. In this case, having revenue concentrated among extremely deep-pocketed customers competing over a constrained supply of product has caused margins to skyrocket. Moreover, GP's claim of monopsony doesn't make any sense. Nvidia is not at any risk of having a single buyer, and with the recent news that sales to China will be allowed, the customer base is going to become more diverse, creating even more demand for their products.

[1] https://s201.q4cdn.com/141608511/files/doc_financials/annual...

[2] https://s201.q4cdn.com/141608511/files/doc_financials/2025/a...


I'm not sure your analysis is apples to apples.

Prior to the AI boom, the quality of GPUs slightly favored NVidia but AMD was a viable alternative. Also, there are scale differences between 2025 and before the AI boom -- simply put, there was more competition in the market for a smaller bucket and favorable winds on supplier production costs. Further, they just have better software tooling through CUDA.

Since 2022 and the rise of multi-billion-parameter models, NVidia's CUDA has had a lock on the business side, but NVidia faces rising costs from terrible US trade policy, the significant post-COVID rebound and geopolitical realignments, wage inflation in the workforce, and rushed/buggy power supplies as the default supply option, all of which make their position quite untenable -- mostly, CUDA is their saving grace. If AMD got their wits about them and focused, they'd potentially unseat NVidia. But until ROCm is at least _easy_, nothing will happen there.


I was merely commenting on the concentration of customers and how it has not hurt Nvidia's margins at all. In fact, they have expanded quite dramatically. All of your other points are moot.

> "rising costs"

Nvidia's margin expansion would suggest otherwise. Or at least, the costs are not scaling with volume/pricing. Again, all we need to do is look at the margins.

> "their position quite untenable ... But until ROCm is at least _easy_ nothing will happen there"

Seems like you're contradicting yourself. Not sure what point you're trying to make. Bottom line is, there is absolutely no concern about monopsony as suggested by the GP. Revenue is booming and margins are expanding. Will it last? Who knows. Fat margins tend to invite competition.


Nobody said this was the case...

The only example I used was the console market, which has been ruined because of this issue. Nvidia largely left that market because it was that bad.


The console market is low margin because there always seems to be someone ready to accept low margins (e.g. AMD). Nvidia was in the console market before but left it for that reason. The only console chip Nvidia sells now is an old, low-development-effort part for Nintendo, probably at a good margin: the chips in the Switch 2 use a node from 2020, are super cheap to manufacture, and took little effort for Nvidia to develop.

AMD, however, has to design new custom APUs for the Xbox and PlayStation. Why do they do that? They could simply step away from the tender, but they won't, because they seem desperate for any business. Jensen was like that 20 years ago, but he has learned that some business you simply walk away from.


This whole subthread is about the claim that Nvidia is at risk of a monopsony situation. I pointed out that while revenue has concentrated on a few customers post-AI boom, margins have improved, suggesting Nvidia is nowhere near and not veering toward that risk. Revenue is exploding, as are margins.

Google shows that Nvidia is not necessary. How long until more follow?

Tbf, Google started a long time ago with their TPUs, and they've had some bumps along the way. It's not as easy as one might think. There are certainly efforts to produce alternatives, but it's not an easy task. Even ASIC-style providers like Cerebras and Groq are having problems with large models. They seemed very promising with SLMs, but once MoEs became a thing they started to struggle.

I don't think we can say that until we hear how Genie3 and Veo3 were trained. My hunch is that the next-gen multi-modal models that combine world, video, text, and image models can only be trained on the best chips.

I agree in principle, but you can't just yolo-fab TPUs and leapfrog Google.

If OpenAI becomes the only buyer, they can push Nvidia around and invest in alternatives to blunt its power. If OpenAI is one of many customers, then they're not in a strong bargaining position and Nvidia gets to set the terms.

Maybe if they grew too big they'd develop their own chips. Also, if one company wins, as in they wipe out the competition, they'd have much less incentive to train more and more advanced models.

One large customer has more bargaining power than many big ones. And the risk is that OpenAI would try to make their own chips if they captured the whole market.

> commoditizing their complements

Feels like a modern euphemism for “subjugate their neighbors”.


No, it’s encouraging competition and cost-cutting in a part of the market they don’t control. This can be a reason for companies to support open source, for example.

Meanwhile, the companies running data centers will look for ways to support alternatives to Nvidia. That’s how they keep costs down.

It’s a good way to play companies off each other, when it works.


Business has always been a civilized version of war, and one which will always capture us in similar ways, so I guess wartime analogies are appropriate?

Still, it feels awfully black and white to phrase it that way when this is a clear net good and a better alignment of incentives than before.



