When should I use iterative algorithms?
The computing power of the GPU
Nvidia is the most famous graphics board manufacturer. Ten years ago, it had the excellent idea of opening up the architecture of its graphics boards to more than just visualization. This formidable parallel computing system very quickly aroused our curiosity. From prototype to prototype, and after laying siege to Nvidia's offices, we finally obtained one of the first multi-GPU units.
At that time, it was not just a board as it is today, but a massive piece of hardware almost the same size as the PC to which it was connected. We were very soon able to confirm our hypotheses, with incredible acceleration factors of 10 to 100.
Wow! It was a revolution and we could have stopped there…
But that is not the way we do things at Digisens. We very soon understood that, beyond this incredible speed increase over the conventional FDK reconstruction method, the computing power of the GPU could bring back into the spotlight sophisticated reconstruction techniques that research had reserved for cases where time was unimportant, because in those cases we are talking about calculations extending over several weeks. This was the case for all the iterative reconstruction methods: totally incompatible with routine use, but put back on the map by the newly available computing power.
Fields of application of iterative algorithms
The advantages of iterative reconstruction algorithms are well known: they are less sensitive to missing views and can handle noisier images. Some disadvantages are worth noting: in addition to the excessively long reconstruction times (before the GPUs), the image quality differs slightly from FDK images and looks more synthetic.
They are also more complicated to adjust: whereas an FDK reconstruction can easily be automated, iterative reconstruction requires more finely tuned parameters.
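To make this trade-off concrete, here is a minimal sketch of one classic iterative scheme (SIRT) on a toy linear system. The matrix `A` is a stand-in for a real tomographic projector, not Digisens' implementation, and the sizes are hypothetical, chosen so that there are fewer (noisy) projections than unknowns.

```python
import numpy as np

# Toy forward model: A maps an "image" x to projections b.
# Using fewer projections than unknowns simulates missing views,
# the regime where iterative methods are at their best.
rng = np.random.default_rng(0)
n, m = 8, 6                      # 8 unknowns, only 6 projections
A = rng.random((m, n))           # toy stand-in for a projector
x_true = rng.random(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy data

# SIRT update: x <- x + C * A^T * R * (b - A x),
# where R and C are the inverse row and column sums of A.
R = 1.0 / A.sum(axis=1)
C = 1.0 / A.sum(axis=0)
x = np.zeros(n)
for _ in range(500):
    x = x + C * (A.T @ (R * (b - A @ x)))

residual = np.linalg.norm(A @ x - b)
```

Where FDK needs a complete, regular set of views to filter and backproject, an update like this only ever consumes the residual b − Ax, which is why missing views and noise degrade it more gracefully.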
The first field of application was electron microscopy, where the conditions are the following: a very limited angular range (+60°/−60°) and very small images.
It is not surprising that the second field of application was medicine, because the ability to reconstruct noisy images translates into a significant reduction of the dose. In the medical field, this type of iterative reconstruction is an incontestable success. The relatively small size of the volumes gives reasonable reconstruction times, but major efforts and adaptations were needed to get the image quality accepted by practitioners. Many studies have shown that iterative detectability and FDK detectability are equivalent, and, above all, medical systems do not use a 100% iterative technique but a mixed reconstruction that combines iterative and FDK passes in order to achieve both the image-quality and reconstruction-speed objectives.
Iterative algorithms adapt to the problem
The first conclusion is that iterative techniques are not universal but require major adaptation to the problem at hand. This is both a strength and a weakness.
In the medical field, the oldest one in terms of tomography, the applications are well defined and the cost of adaptation could quickly be recouped. In addition, as previously mentioned, the volumes are of a reasonable size, which simplifies things: medical imaging is constrained by the dose and by the need for contrast rather than maximum resolution.
For industrial applications, it is still early days. On the one hand we have a veritable Swiss Army knife for reconstruction in FDK, which is fast and gives quality reconstructions. On the other hand, iterative reconstruction is slow (at a minimum seven times slower) and would also be difficult to adjust if we did not have the auto-convergence function.
Is there an advantage in using iterative reconstructions?
In a conventional configuration, is there an advantage in using iterative reconstructions? Frankly, our experience leads us to say that there is not. We must therefore look at other applications, where the use of iterative reconstruction algorithms is unavoidable.
The main one is the fast-growing field of tomography with nonlinear trajectories. All robotic or fixed-base linear tomography will be performed using algorithms that can interpret non-circular trajectories, and therefore iterative approaches.
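This geometry-independence can be sketched as follows: the acquisition trajectory, circular or not, lives entirely in the system matrix, so the same iterative loop runs unchanged. The border-to-border "robotic" ray positions below are invented for illustration, and the ray model is a crude point-sampling approximation, not a production projector.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                   # 2-D image is N x N pixels
img = np.zeros((N, N))
img[5:11, 5:11] = 1.0                    # a square phantom
x_true = img.ravel()

def ray_row(p0, p1, n=N, samples=200):
    """Crude line-integral weights for a ray from p0 to p1."""
    row = np.zeros(n * n)
    for t in np.linspace(0.0, 1.0, samples):
        px, py = p0 + t * (p1 - p0)
        i, j = int(py), int(px)
        if 0 <= i < n and 0 <= j < n:
            row[i * n + j] += 1.0 / samples
    return row

# Non-circular trajectory: random source/detector pairs, one on a
# near border and one on a far border, as a robot arm might place them.
rays = []
for _ in range(300):
    p0 = rng.uniform(0, N, 2); p0[rng.integers(2)] = 0.0
    p1 = rng.uniform(0, N, 2); p1[rng.integers(2)] = N - 1e-6
    rays.append(ray_row(p0, p1))
A = np.array(rays)
b = A @ x_true

# The same SIRT-style loop as for any other geometry.
R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
x = np.zeros(N * N)
for _ in range(200):
    x += C * (A.T @ (R * (b - A @ x)))
```

Nothing in the solver refers to angles or a rotation axis; only the construction of `A` changes when the trajectory does, which is exactly why these approaches suit robotic acquisition.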
It must also be noted that the more specific the applications become, the more the reconstruction can be tailor-made. Here again iterative algorithms can show their superiority: by injecting preconceived notions, in other words prior knowledge, into this type of algorithm, better reconstructions can be produced in difficult conditions. Likewise, in machine learning approaches, we are pushing to integrate this prior knowledge well upstream. These approaches are, in our view, a winning strategy for the automatic interpretation of tomographic data.
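As a minimal sketch of "injecting knowledge", assuming a simple smoothness prior (not Digisens' actual regularisation): the prior enters the iteration as an extra gradient term, which stabilises the reconstruction when measurements are few and noisy. All sizes and the prior weight `mu` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 20                       # far fewer measurements than unknowns
A = rng.random((m, n))              # toy forward model
x_true = np.sin(np.linspace(0, np.pi, n))    # smooth ground truth
b = A @ x_true + 0.05 * rng.standard_normal(m)

# Prior knowledge "the object is smooth", encoded as a penalty ||D x||^2
# with D a finite-difference operator.
D = np.diff(np.eye(n), axis=0)
mu = 5.0                            # prior strength (hypothetical value)

# Gradient descent on  ||A x - b||^2 + mu * ||D x||^2 :
# the data term and the prior term each contribute to the update.
step = 1.0 / np.linalg.norm(A.T @ A + mu * D.T @ D, 2)
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b) + mu * (D.T @ (D @ x))
    x -= step * grad
```

Without the `mu` term the problem is badly underdetermined; the prior selects, among all candidates that fit the data, the one consistent with what we already know about the object.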