DigiTimes has managed to get hold of Intel’s leaked mobile roadmap for 2014 and 2015. Apparently, Intel is planning to release the 14nm Atom tablet chips codenamed “Cherry Trail” and “Willow Trail” in the 3rd and 4th quarters of 2014, respectively.
It also plans to introduce the 22nm Atom “Merrifield” chip at the end of 2013, even though initial reports said it would come in the first quarter of 2014. Merrifield may only arrive in limited quantities in 2013, perhaps in a smartphone or two at most. But the new leak also mentions a chip called “Moorefield” (also 22nm) that will be released in the first half of 2014.
This makes me think that Intel may be renaming the previously planned Merrifield to Moorefield, and renaming something else as Merrifield – possibly a slightly more optimized, lower-clocked Bay Trail chip, which is a “tablet chip” and will also come out at the end of this year.
Then the leak mentions a “Morganfield” chip in the first quarter of 2015, built on 14nm. Intel seems to be about a whole year behind in process node with its smartphone chips compared to its notebook chips (14nm Broadwell in 2014), and about 9 months behind compared to the Atom “tablet chips”. This gives the ARM competition a huge opportunity to close the node gap, because 20nm ARM chips are coming to smartphones in 2014, followed by 14nm and 16nm ones in 2015.
Even though Intel will start adopting its notebook chips’ GPUs in Atom – first the Sandy Bridge GPU, then the Ivy Bridge (IVB) GPU, and so on – it will adopt them not just years after they first appeared in notebooks; it will also use far fewer GPU “cores”, which means much lower GPU performance than the notebook counterparts, and could put Atom behind even current ARM GPUs. By my calculations, the Bay Trail GPU coming out this year will have about half the performance of Tegra 4’s GPU (which in turn only matches the GPU in last year’s iPad 4), unless Intel gets very aggressive with the clock speed of those GPU cores.
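To show what I mean by “by my calculations”, here is a back-of-envelope peak-throughput comparison. The core counts, clock speeds, and FLOPs-per-cycle figures below are my own assumptions for illustration – none of them come from the leak:

```python
# Back-of-envelope GPU comparison: peak GFLOPS = cores x clock x FLOPs/cycle.
# All figures are illustrative assumptions, NOT confirmed specs from the leak.

def peak_gflops(cores, clock_mhz, flops_per_cycle):
    """Theoretical peak GFLOPS for a hypothetical GPU configuration."""
    return cores * clock_mhz * flops_per_cycle / 1000.0

# Assumed: a cut-down 4-EU Bay Trail-class GPU (each EU doing 16 FLOPs/cycle)
# vs a 72-"core" Tegra 4-class GPU (each core doing a multiply-add per cycle).
bay_trail = peak_gflops(cores=4, clock_mhz=667, flops_per_cycle=16)
tegra_4 = peak_gflops(cores=72, clock_mhz=672, flops_per_cycle=2)

print(f"Bay Trail-class estimate: {bay_trail:.1f} GFLOPS")  # ~42.7
print(f"Tegra 4-class estimate:   {tegra_4:.1f} GFLOPS")    # ~96.8
print(f"Ratio: {bay_trail / tegra_4:.2f}")                  # ~0.44, i.e. about half
```

Under those assumptions the ratio lands around 0.44, which is where my “about half” estimate comes from; a much higher GPU clock would obviously change the picture.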
However, I worry that, just like with the Atom CPUs, Intel will start winning GPU benchmarks – or at least doing much better than it should – if it implements some kind of turbo-boost technology for Atom’s GPU, too. I think Turbo Boost (for both CPUs and GPUs) is, for the most part, a “scam”: it lets them advertise high maximum clock speeds and achieve high benchmark scores, because benchmarks push the chip to its maximum processing power for the duration of the run.
But in real-world scenarios that kind of speed won’t be sustained for very long, because it isn’t meant to be. If it were, they wouldn’t need something called “Turbo Boost” – they would just make that speed the default. They don’t, because the chip can’t handle that much power for long: it would overheat quickly, and the battery would drain faster, too.
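The gap between benchmark scores and sustained performance can be made concrete with a toy model. This is my own simplification, not Intel’s actual power-management algorithm, and the clock speeds and thermal budget are made-up numbers:

```python
# Toy model of turbo-boost throttling: the chip runs at a high "turbo" clock
# until a fixed thermal budget is spent, then drops to its sustainable base
# clock. A simplification with made-up numbers, not Intel's real logic.

def average_clock_mhz(duration_s, turbo_mhz=2400, base_mhz=1600, turbo_budget_s=30):
    """Average clock speed over a workload of the given length (seconds)."""
    turbo_time = min(duration_s, turbo_budget_s)
    base_time = duration_s - turbo_time
    return (turbo_time * turbo_mhz + base_time * base_mhz) / duration_s

print(average_clock_mhz(20))   # short benchmark run: 2400.0 (full turbo clock)
print(average_clock_mhz(600))  # 10-minute game session: 1640.0 (mostly base clock)
```

A 20-second benchmark sees the full turbo clock, while a 10-minute workload averages barely above the base clock – which is exactly why the benchmark numbers look so much better than real-world behavior.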
So, again, Turbo Boost is largely a marketing trick that allows Intel (and others who use it, including Nvidia with their PC GPUs) to claim very good battery life and thermal efficiency under normal conditions, while also getting maximum performance in benchmarks. I hope Intel doesn’t become even more misleading with its chips than it has been so far if its GPUs receive turbo-boosting too, and I hope other companies won’t take the same route in the future. Qualcomm has been the most “fair” so far with its claimed performance and battery efficiency, which is probably why it wants to help build a new independent benchmark, and I’d rather see more companies follow its lead.