
L. Santos, E. Magli, R. Vitulli, J. F. López, and R. Sarmiento, "Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 670-681, 2013. ISSN 1939-1404. DOI: 10.1109/JSTARS.2013.2247975

Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression

Magli, Enrico
2013

Abstract

Graphics processing units (GPUs) are becoming a widespread tool for general-purpose scientific computing, and are attracting interest for future onboard satellite image processing payloads thanks to their ability to perform massively parallel computations. This paper describes the GPU implementation of an algorithm for onboard lossy hyperspectral image compression, and proposes an architecture that accelerates the compression task by parallelizing it on the GPU. The selected algorithm is amenable to parallel computation owing to its block-based operation, and has been optimized here to facilitate GPU implementation while incurring negligible overhead with respect to the original single-threaded version. In particular, a parallelization strategy has been designed for both the compressor and the corresponding decompressor, which are implemented on a GPU using Nvidia's CUDA parallel architecture. Experimental results on several hyperspectral images with different spatial and spectral dimensions are presented, showing significant speed-ups with respect to a single-threaded CPU implementation. These results highlight the benefits of GPUs for onboard image processing, and particularly image compression, demonstrating the potential of GPUs as a future hardware platform for very high data rate instruments.
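The key property the abstract relies on is that a block-based compressor processes each image block independently, so blocks can be dispatched to parallel workers (on a GPU, to thread blocks). The sketch below illustrates this idea only conceptually; the block size, the toy per-block quantization step, and all function names are hypothetical and are not taken from the paper's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4  # hypothetical block side length; the real size is algorithm-specific


def split_into_blocks(image, block=BLOCK):
    """Split a 2-D list of pixels into non-overlapping block x block tiles."""
    rows = len(image)
    return [
        [row[c:c + block] for row in image[r:r + block]]
        for r in range(0, rows, block)
        for c in range(0, len(image[0]), block)
    ]


def compress_block(tile):
    """Toy per-block 'compression' stage: quantize each sample.

    Stands in for the real per-block transform/quantization, whose only
    relevant property here is that it touches no data outside its tile.
    """
    q = 8  # hypothetical quantization step
    return [[p // q for p in row] for row in tile]


def compress_image(image, workers=4):
    """Compress all tiles in parallel; each map task is independent,
    which is the same property a GPU kernel exploits per thread block."""
    tiles = split_into_blocks(image)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_block, tiles))
```

Because no tile depends on another, the parallel result is identical to compressing the tiles one at a time, which is what makes the speed-up essentially free of algorithmic changes.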
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2513744
Warning: the displayed data have not been validated by the university.