Mining Theory: GPU Computation vs Co-processor Computation
I am very new to Bitcoin and I am trying to understand a bit about mining theory. I do not dispute the convention of using GPU-based computation for Bitcoin mining; rather, I'm curious about why GPUs outperform co-processors at it. Maybe a nuanced discussion of a few of the trade-offs would be the easiest and clearest way to understand things. Let me break it down like this:
Parallelism: Co-processors are mostly used for task-parallel (non-vectorized) code, whereas GPUs are adept at data-parallel (vectorized) code. Two questions come to mind (a sketch of how I picture the mining loop follows this list):
- Is all mining code written in vectorized languages?
- Can there ever be a benefit to mining with task-parallel code? It seems like Bitcoin mining involves only one task: crunching numbers. Am I oversimplifying it, though? Perhaps with task parallelism one could run other algorithms to assist with the number crunching (such as compression, flush to zero, etc.).
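To make the parallelism point concrete, here is a minimal Python sketch of how I picture the mining loop: every attempt runs the identical double SHA-256 over the same 80-byte header, varying only the 4-byte nonce. The all-zero 76-byte header prefix, the nonce range, and the easy target below are made-up placeholders for illustration, not real chain data.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def try_nonces(header76: bytes, nonces: range, target: int):
    """Every iteration is the same code path on different data --
    which is why the search looks data-parallel (SIMD-friendly) to me."""
    for nonce in nonces:                               # on a GPU, one lane per nonce
        header = header76 + struct.pack("<I", nonce)   # 76 + 4 = 80-byte header
        h = int.from_bytes(double_sha256(header), "little")
        if h <= target:                                # hash below target => valid share
            return nonce
    return None

# placeholder inputs: an all-zero header prefix and a deliberately easy target
header76 = bytes(76)
print(try_nonces(header76, range(100_000), target=1 << 244))
```

As I understand it, a GPU would run that inner loop across thousands of lanes at once, each lane holding a different nonce, whereas task parallelism would only help if there were genuinely different jobs to run side by side.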
Calculations: Speaking of flush to zero, GPUs tend to perform better than co-processors at floating-point calculations. However, co-processors perform better at logical and arithmetic calculations.
- Is floating-point calculation more important than logical/arithmetic calculation in Bitcoin mining? (See the sketch below.)
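For reference, here is a sketch of the primitive operations the SHA-256 round function is built from (names as in FIPS 180-4). As far as I can tell they are all 32-bit rotates, XORs, and additions modulo 2^32; this fragment is for illustration only and is nowhere near a full SHA-256 implementation.

```python
MASK32 = 0xFFFFFFFF  # keep everything in 32-bit unsigned range

def rotr(x: int, n: int) -> int:
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def ch(x: int, y: int, z: int) -> int:
    """'Choose': each bit of x selects between y and z."""
    return ((x & y) ^ (~x & z)) & MASK32

def maj(x: int, y: int, z: int) -> int:
    """'Majority' of the three input bits at each position."""
    return (x & y) ^ (x & z) ^ (y & z)

def big_sigma0(x: int) -> int:
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

def big_sigma1(x: int) -> int:
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

# one illustrative round-style update using the real SHA-256 initial values:
# everything here is integer/logical work, with no floating point anywhere
a, b, c = 0x6a09e667, 0xbb67ae85, 0x3c6ef372
e, f, g = 0x510e527f, 0x9b05688c, 0x1f83d9ab
t1_part = (big_sigma1(e) + ch(e, f, g)) & MASK32
t2 = (big_sigma0(a) + maj(a, b, c)) & MASK32
print(hex(t1_part), hex(t2))
```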
Latency:
Since GPUs run in lockstep, the whole dataset has to be transferred at the beginning and end of each task, whereas co-processors can move data between their own RAM and host RAM in a fraction of a second.
- Would the issue of latency ever warrant the use of co-processors for mining, or is latency not a big issue? (A back-of-envelope sketch follows below.)
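Here is a back-of-envelope sketch of why I suspect transfer latency may not matter much for mining: per unit of work the host only hands over an 80-byte header (plus a target), and the device can grind through the entire 32-bit nonce space before it needs anything new. The hash rate below is a made-up placeholder, not a benchmark.

```python
HEADER_BYTES = 80          # one Bitcoin block header
NONCE_SPACE  = 2 ** 32     # work available before the host must send new data
HASHES_PER_S = 1e9         # hypothetical device hash rate, purely illustrative

bytes_per_hash     = HEADER_BYTES / NONCE_SPACE
seconds_per_refill = NONCE_SPACE / HASHES_PER_S

print(f"~{bytes_per_hash:.1e} bytes transferred per hash attempted")
print(f"fresh work needed from the host only every ~{seconds_per_refill:.1f} s")
```

If those numbers are even roughly right, the bus sits idle almost all of the time, which seems like the opposite of a latency-bound workload.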
Main Question: Could there ever be a pure co-processor mining machine? What about a hybrid system using an array of co-processors and GPUs for the best of both worlds? Or would GPUs beat co-processors every time? Why or why not?
You may approach this question from any angle suggested by the content above. The answer doesn't have to address every question; I just included a few interrelated questions for robustness/comparison purposes.
I want to stipulate some assumptions so that the answer stays mostly theoretical and keeps the logistics of it all largely out of scope.
Assumptions
- Power consumption is out of scope
- Unit price of GPUs and co-processors is out of scope (obviously co-processors are ungodly expensive)
- Cost of electricity is out of scope