Ram
The example in this blog is, in fact, something Google tried in TensorFlow with both CPU-only and GPU-based logistic regression. Because this data set, like many ad-tech and mar-tech data sets, is highly sparse, GPUs are not effective at accelerating logistic regression on Intel-based servers. This is where IBM's Power9 high-speed connection to the NVIDIA Volta V100 GPUs via NVLink adds value: it enables fast data transfer, which gives us GPU acceleration where the Google team could not achieve it with their x86-based GPU servers.
Here is the Google blog that uses the same data set:
The Google blog says:
“(Note that we used CPUs to train this model; for the sparse data in this problem, GPUs do not result in a large improvement.)”
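For anyone who wants to reproduce the CPU-vs-GPU comparison described above, here is a minimal sketch (not taken from either blog) of logistic regression on sparse one-hot features in TensorFlow, with explicit device placement. The feature dimension, batch size, and synthetic data are illustrative assumptions, not the actual ad-tech data set.

```python
# Minimal sketch: logistic regression on sparse features in TensorFlow.
# Swap device to "/GPU:0" to observe the transfer-bound GPU case discussed above.
# All sizes below are illustrative assumptions.
import numpy as np
import tensorflow as tf

num_features = 100_000  # assumption: wide, sparse ad-tech-style feature space
batch_size = 512
active_per_row = 10     # assumption: only a handful of features fire per example

# Build a synthetic sparse batch with unique, sorted feature indices per row.
rng = np.random.default_rng(0)
indices = [[row, int(col)]
           for row in range(batch_size)
           for col in sorted(rng.choice(num_features, size=active_per_row,
                                        replace=False))]
x = tf.sparse.SparseTensor(indices=indices,
                           values=tf.ones(len(indices)),
                           dense_shape=[batch_size, num_features])
y = tf.cast(tf.random.uniform([batch_size, 1]) > 0.5, tf.float32)

device = "/CPU:0"  # change to "/GPU:0" to compare
with tf.device(device):
    w = tf.Variable(tf.zeros([num_features, 1]))
    b = tf.Variable(tf.zeros([1]))
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)
    for step in range(100):
        with tf.GradientTape() as tape:
            # Sparse-dense matmul keeps the compute proportional to
            # the number of active features, not num_features.
            logits = tf.sparse.sparse_dense_matmul(x, w) + b
            loss = tf.reduce_mean(
                tf.nn.sigmoid_cross_entropy_with_logits(labels=y,
                                                        logits=logits))
        grads = tape.gradient(loss, [w, b])
        opt.apply_gradients(zip(grads, [w, b]))
```

With sparse inputs like this, each training step does little arithmetic relative to the data it moves, which is why the host-to-GPU link speed, rather than raw GPU compute, tends to dominate the runtime.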