Not sure how that will go. GPU density and cost point to the winners in training being the large cloud companies: Google, Azure, AWS, and maybe even Nvidia. Enterprise adoption may come via using AI as a service (Microsoft).
I am told that a third of the increase in power from one generation to the next comes from better software (i.e., AI), a third from throwing hardware at it (stepping up from hundreds of thousands of GPUs to millions, plus cooling and wiring optimization, etc.), and a third from throwing more data at it in imaginative ways, i.e., video data instead of just text.
Of course commercialization is outside the ambit of the platform companies, and that growth is far more broad-based: product people imagining new use cases, marketers, UI designers, and so on.
Nvidia is overhyped at this point, and the thought is that the platform companies may make their own custom chips.
One of the ways many models are improving is by increasing the number of parameters used by the model. This requires more time and hardware (many more GPUs) to train the models. Versions of ChatGPT have gone from billions of parameters to 100 trillion (if I recall correctly).
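To give a feel for why more parameters means many more GPUs: a widely cited rule of thumb (from the scaling-law literature) is that training compute is roughly 6 × parameters × training tokens, in floating-point operations. The sketch below is just that back-of-envelope arithmetic, not anyone's actual training budget, and the example numbers are made up for illustration:

```python
def approx_training_flops(params: int, tokens: int) -> int:
    # Rough rule of thumb: total training compute ~ 6 * N * D FLOPs,
    # where N = parameter count and D = number of training tokens.
    return 6 * params * tokens

# Hypothetical example: a 1-billion-parameter model trained on 1 trillion tokens.
flops = approx_training_flops(10**9, 10**12)
print(f"{flops:.1e} FLOPs")  # 6.0e+21 FLOPs

# Scaling parameters up 100x (tokens held fixed) scales compute up 100x too,
# which is why bigger models need far more GPU-hours.
print(f"{approx_training_flops(10**11, 10**12):.1e} FLOPs")  # 6.0e+23 FLOPs
```

The point of the arithmetic: compute grows linearly in both parameter count and data, so multiplying both at once compounds quickly into the million-GPU territory mentioned above.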
We’re all pretty open with each other. We know what our kids make; they know what we make; I have a pretty good idea of their investments; and they know a lot about ours.