Can tech providers become trusted advisers on an advanced, complicated technology such as artificial intelligence?
Yes, and here’s why: If your customers are using the right processors, AI is just another workload. And your customers already know how to add a new workload.
More specifically, if your customers are running servers based on the latest Intel Xeon Scalable processors, they’re essentially ready to roll with AI.
That’s because Intel has enhanced those CPUs with a built-in engine that accelerates the most popular AI frameworks and libraries. For your users, that means no exotic hardware or software: AI runs where their other workloads already run.
Plus, your customers already know that Intel Xeon processors work well when running multiple workloads. In fact, most AI workloads (especially inferencing) are already broadly deployed on Intel Xeon processors.
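On Linux servers, the acceleration features behind this claim (Intel Deep Learning Boost on 2nd Gen and later Xeon Scalable CPUs, and Advanced Matrix Extensions on newer generations) show up as CPU feature flags. As a quick illustration of a pre-sales readiness check, here is a minimal sketch that looks for those flags in /proc/cpuinfo; the flag names are the standard Linux ones, and on systems without /proc/cpuinfo the script simply reports that nothing was detected:

```python
# Sketch: check Linux CPU flags for Intel's built-in AI acceleration.
# avx512_vnni = Intel Deep Learning Boost (VNNI)
# amx_tile    = Advanced Matrix Extensions (AMX)
from pathlib import Path

AI_FLAGS = {
    "avx512_vnni": "Intel DL Boost (VNNI)",
    "amx_tile": "Advanced Matrix Extensions (AMX)",
}

def detect_ai_acceleration(cpuinfo_path="/proc/cpuinfo"):
    """Return a dict mapping each feature name to True/False."""
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        # Not Linux, or /proc unavailable: report nothing detected.
        return {name: False for name in AI_FLAGS.values()}
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {name: flag in flags for flag, name in AI_FLAGS.items()}

if __name__ == "__main__":
    for feature, present in detect_ai_acceleration().items():
        print(f"{feature}: {'yes' if present else 'no'}")
```

A check like this is only a starting point for a conversation, but it lets you tell a customer concretely whether the servers they already own have the acceleration built in.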
A big, fast-growing market
Why care? Because the AI market is big and growing fast. It’s a rising tide that could truly lift your boat.
How big? This year, worldwide revenue for AI hardware, software and services should approach $433 billion, predicts market watcher IDC.
Looking ahead to 2023, IDC foresees AI sales topping $500 billion worldwide. That would mark annual growth of nearly 20%.
That leads IDC researcher Ritu Jyoti to call AI technology “the next major wave of innovation.”
Is that a wave you’d like to catch? If so, Intel wants to help.
What about GPUs?
Among the first questions you may hear from a customer getting into AI are: “Don’t we need GPUs or special accelerators? And aren’t these costly?”
Here’s your answer: Yes, GPUs can be costly, but no, you don’t always need them. In fact, if your customers are now using Intel Xeon Scalable processors, they can already run many AI workloads right out of the box. For certain AI workloads, these Intel CPUs can actually outperform a GPU.
That includes cases where AI is built into a larger workload, such as an enterprise application that does some data pre-processing.
Your customers can also use Intel Xeon CPUs when AI is a standalone workload that resides on the same infrastructure as many other workloads. For example, this could include off-peak batch inference or distributed training.
To be sure, there are times when GPUs and accelerators truly are needed. That might be the case for large-scale deep learning, dedicated training or latency-critical inferencing.
But that applies to relatively few small and midsize businesses (SMBs), and for that matter, to few larger enterprises. These types of AI workloads are more common at super-large companies such as Facebook and Google.
Intel isn’t opposed to GPUs or AI accelerators in general. After all, Intel is getting ready to introduce AI accelerators of its own, including Ponte Vecchio, this year. But Intel also knows that while many SMBs want to run AI, they can’t afford to make big investments in AI-specific hardware.
AI on a CPU?
At this point, you might wonder whether general-purpose server CPUs can really do AI.
Intel says they certainly can. As the company points out, across a variety of real-world end-to-end AI workload scenarios, using popular machine- and deep-learning libraries and frameworks, Intel Xeon Scalable processors can match or even outperform the competition.
Also, Intel Xeon processors can be more cost-effective than GPUs. They’re more widely available, and your customers probably have many of them installed today. And they can run other workloads when not being used for AI. For these reasons and more, doing AI work on Intel Xeon-powered servers can be a major cost-saver.
If your customers already have Intel Xeon CPUs powering their servers, tell them they’re also ready to start running AI. And if they don’t yet have Intel Xeon processors? That’s your cue to do some tech advising.
Become a trusted adviser on AI:
> Get training with these 2 Intel Partner University Competencies:
Not yet a member of Intel Partner Alliance? Join now.