In an unfortunate turn of events, and one I was hoping to avoid, it seems Nvidia’s “unified” product-line strategy will not come into play until at least 2015, and possibly 2016, knowing Nvidia and its delays. Nvidia promised a few years ago that Project Denver would arrive by the end of 2013. Then it was pushed to the end of 2014, and now it seems we’ll be lucky to see it by the end of 2015.
What is Project Denver? Project Denver is Nvidia’s attempt at making high-end chips, based on the ARM architecture, for high-end PCs, servers and supercomputers. Denver is supposed to be a custom CPU design, much like Qualcomm’s Krait or Apple’s Swift, but based on the 64-bit ARMv8 architecture. It was also supposed to arrive alongside Nvidia’s next-gen Maxwell GPU architecture.
Why is Denver important for Tegra chips? Nvidia has repeated its mantra many times: “from superphones to supercomputers.” Denver could help them put that idea into practice much more effectively. They could create one CPU design and one GPU design, and use them everywhere from phones to supercomputers simply by scaling them up.
They could use a quad-core Denver CPU at 2 GHz along with 200 Maxwell GPU cores, and use this setup in Tegra 6, for smartphones and tablets. Remember, this is for a 2015-2016 time-frame, and it could actually be a lot more cores, or fewer, depending on how powerful a Maxwell-based GPU core actually turns out to be. We should be expecting at least a 2x improvement in performance every year from where Tegra 4 is right now.
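To make that expectation concrete, here is a minimal sketch of how the “at least 2x per year” projection compounds, with Tegra 4 (2013) normalized to 1.0. The yearly figure and baseline year are taken from the text; everything else is purely illustrative arithmetic, not a real benchmark.

```python
# Compound the "at least 2x per year" expectation from the text.
# Baseline: Tegra 4 in 2013, normalized to a relative performance of 1.0.

def projected_performance(baseline_year, target_year, yearly_gain=2.0):
    """Relative performance after compounding yearly_gain once per year."""
    return yearly_gain ** (target_year - baseline_year)

for year in (2014, 2015, 2016):
    print(year, projected_performance(2013, year))
# 2014 -> 2.0x, 2015 -> 4.0x, 2016 -> 8.0x relative to Tegra 4
```

The point is simply that if Denver/Maxwell slips to 2015 or 2016, the chip has to clear a bar that is already 4x to 8x above Tegra 4 just to stay on the curve.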
Tegra 6 would be the “low-end” of the Denver/Maxwell-based chips; Nvidia could then increase the clock speed of both the CPU and GPU, and add a lot more cores to both, for use in servers and supercomputers, which these days are essentially made of millions of stacked CPUs and GPUs anyway. This would make Nvidia’s product line-up very cost-effective.
Now, the bad news. As mentioned above, this chip will most likely not arrive until 2015, or even 2016. The roadmap image suggests this both through the position of the names on the timeline and through its mention of FinFET. FinFET will not arrive for ARM chips until the 14/16nm process nodes. That’s 2015 at the very earliest for some foundries, and probably not even for mass production.
Knowing Nvidia and how they don’t like using a new process from day one, because they think it’s too expensive (hello, 40nm Tegra 3, while the S4 and Exynos 4 Quad were made at 28/32nm!), I could easily see them not adopting the 14nm FinFET process until mid-2016. At that point we’re not even talking about Tegra 6 anymore; we’re talking about Tegra 7. And that’s very disappointing, considering I was expecting Denver/Maxwell to arrive in Tegra 5 next year. They would’ve needed something like that to make up for how unattractive Tegra 4 seems to be, for many reasons (high power consumption; no OpenGL ES 3.0, OpenCL, or unified shader architecture; etc.).
But perhaps Tegra 5 won’t be so bad either. Although it won’t have the latest Maxwell architecture for its GPU, it will still use a very advanced (perhaps the most advanced) GPU architecture for an ARM SoC. Kepler is the same (very efficient) architecture Nvidia is using now in its latest and best PC GPUs, with hundreds or thousands of cores and very high clock speeds.
Nvidia will just scale down the number of cores and the clock speed, and fit it into Tegra 5 for mobile devices. This will give them a unified architecture, and immediate support for not just OpenGL ES 3.0 but the full, latest OpenGL 4.3 (which is fully backwards-compatible with OpenGL ES 3.0, if developers want to use just that) and CUDA (though still no OpenCL support?!). All of that, plus the potential for a very high-performance GPU.
How Nvidia can get it right with Tegra 5
Now the question is how far Nvidia is willing to go with this chip. I’m getting really tired of seeing them use an old process, which makes their chips inefficient, or too small a die, which leaves their GPUs with far less performance than Apple’s chips (unless they ramp up the GPU clock speed, which again leads to inefficiency).
It would be awful if they built Tegra 5 on a 28nm process in mid-2014 instead of switching to 20nm for Kepler and Cortex-A57, which Qualcomm will certainly do. Qualcomm did exactly this in 2012, switching to 28nm as early as possible, and won big because of it: their chips were significantly more efficient, and everyone appreciated that.
It would also be awful if they keep the die size at the same 80mm2 (even worse if they build the chip on 28nm), when Apple will be using the 20nm process and a 120+ mm2 die. Whatever benefits they get from Kepler and Cortex-A57 could be almost completely wiped out by not moving to a larger die on 20nm. And even if they manage to stay competitive without those advantages, just imagine how much better it would be with them.
They could get roughly 2x CPU performance at the same power draw by increasing the core count on the new process, and roughly 2x the GPU core count by using a 120mm2 die instead of an 80mm2 one. Apple manages to stay ahead of the competition in GPUs because, even on the same process as its rivals, it can fit a roughly 2x bigger GPU into that 120mm2 chip. It’s time for Nvidia to do that, too.
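A quick sketch of the die-area arithmetic behind this argument: 120mm2 is only 1.5x the total area of 80mm2, so getting a ~2x bigger GPU also requires devoting a larger share of the bigger die to the GPU. The area splits below are hypothetical illustrations, not real die measurements.

```python
# Back-of-envelope sketch of the die-size argument in the text.
# All area-share numbers are hypothetical, chosen only to illustrate
# how a 1.5x larger die can still yield a ~2x larger GPU.

def gpu_area_scaling(old_die_mm2, new_die_mm2, old_gpu_share, new_gpu_share):
    """Ratio of GPU silicon area (and, on the same process node,
    roughly the GPU core count) between two die configurations."""
    old_gpu_area = old_die_mm2 * old_gpu_share
    new_gpu_area = new_die_mm2 * new_gpu_share
    return new_gpu_area / old_gpu_area

# Same GPU share of the die: only the 1.5x raw area gain.
print(gpu_area_scaling(80, 120, 0.40, 0.40))   # 1.5

# Bigger die AND a bigger GPU share (40% -> ~53%): roughly 2x GPU area.
print(round(gpu_area_scaling(80, 120, 0.40, 0.53), 2))   # 1.99
```

In other words, the ~2x GPU claim holds only if the larger die also spends proportionally more of its area on the GPU, which is plausibly what Apple does with its 120+ mm2 chips.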
Those kinds of advantages would make everyone want a Tegra chip in their devices again, so I really hope Nvidia is thinking about the bigger picture here, instead of short-term profitability on each chip. They’ve already lost the huge next-gen Nexus 7 contract with that kind of thinking. People want exciting chips that are the best on the market, not chips that are merely inexpensive. If people want Tegra chips, Nvidia’s customers will be willing to pay a higher price for them, too, as long as they know those chips will make their devices popular.