8/17/2023

Free ai enhance image

Advanced Micro Devices Inc. is taking aim at Nvidia Corp.'s dominance in artificial intelligence with the launch of a new AI accelerator chip, announced at its Data Center & AI Technology Premiere event today.

During a keynote speech, AMD Chair and Chief Executive Lisa Su announced the company's new Instinct MI300X accelerator, which is targeted specifically at generative AI workloads. Generative AI is the technology that underpins OpenAI's famed chatbot ChatGPT, which responds to questions and prompts in a humanlike, conversational way.

Su had previously described the MI300A as the "world's first data-center integrated CPU + GPU" during a keynote at the 2023 Consumer Electronics Show in January, explaining that it integrates both central processing units and graphics processing units into a single processor. Today, she said customers will be able to use the MI300X to run generative AI models with up to 80 billion parameters. Customers will be able to sample the chip in the third quarter, with production set to ramp up before the end of the year.

Like Nvidia, AMD has said it sees a massive opportunity in an AI space that has been swept up by the hype around ChatGPT, because the technology requires massive amounts of computing power from the data centers that host it. Su reiterated that today, saying that AI is the company's "most strategic long-term growth opportunity."

"We think about the data center AI accelerator growing from something like $30 billion this year, at over 50% compound annual growth rate, to over $150 billion in 2027," Su said.

Alongside the MI300X, Su introduced AMD's Infinity Architecture Platform, a self-contained platform for running generative AI inference and training workloads. It's powered by eight MI300X chips, she said, and it will rival Nvidia's DGX supercomputer platform for AI applications.

Analysts have had big expectations of AMD ever since it acquired a chipmaker called Xilinx Inc. During the keynote, AMD President Victor Peng, who previously led Xilinx as its CEO, introduced the company's ROCm software ecosystem for data center accelerators. ROCm is intended to support the MI300X accelerator and "bring together an open AI software ecosystem," Peng said. Once again, AMD is going up against Nvidia's AI strategy, which relies not only on its chip hardware but also on a software ecosystem that helps companies get the most out of its AI chips.

No doubt AMD's hardware and software are compelling, but the company's stock was down more than 3% today due to what analysts said was a lack of any major customer announcements regarding the MI300X or a smaller version called the MI300A. Traditionally, AMD has showcased big companies that will be early adopters of its new hardware, but it failed to do so today. The company also neglected to provide details on how much the MI300X will cost.

"I think the lack of a (large customer) saying they will use the MI300 A or X may have disappointed the Street," Tirias Research analyst Kevin Krewell told Reuters. "They wanted AMD to say they have replaced Nvidia in some design."

At present, Nvidia totally dominates the AI computing industry, with a market share of between 80% and 95%, analysts say. The excitement around AI has driven massive gains in Nvidia's stock, and at the end of last month it became the first chipmaker to reach a $1 trillion market capitalization. It has very few serious competitors in the space. Startups such as SambaNova Systems Inc. offer competing hardware but have failed to gain much traction. Nvidia's biggest competitors are probably Alphabet Inc.'s Google Cloud and Amazon Web Services Inc., which have developed internal AI chips and made them available to rent through their respective cloud infrastructure platforms.

Not everyone was so critical of AMD, though.
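As a quick sanity check on the market forecast Su quoted ($30 billion this year, growing at just over 50% a year, to over $150 billion in 2027), the compounding works out: assuming the growth compounds annually over the four years from 2023 to 2027, a minimal calculation looks like this.

```python
# Sanity check of the quoted forecast: $30B base compounding at 50% per
# year for the 4 years between 2023 and 2027.
base = 30.0   # 2023 market size, in $ billions (AMD's figure)
cagr = 0.50   # 50% compound annual growth rate
years = 4     # 2023 -> 2027

projected = base * (1 + cagr) ** years
print(f"Projected 2027 market: ${projected:.1f}B")  # Projected 2027 market: $151.9B
```

A flat 50% rate already lands just above the $150 billion Su cited, consistent with her "over 50%" phrasing.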