Date of Award

Spring 5-1-2013

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computing

School

Computing Sciences and Computer Engineering

Committee Chair

Dia Ali

Committee Chair Department

Computing

Committee Member 2

Chaoyang Zhang

Committee Member 2 Department

Computing

Committee Member 3

Beddhu Murali

Committee Member 3 Department

Computing

Committee Member 4

Bikramjit Banerjee

Committee Member 4 Department

Computing

Committee Member 5

Ras Pandey

Committee Member 5 Department

Physics and Astronomy

Abstract

Hyperspectral imagery (HSI) is often processed to identify targets of interest. Many of the quantitative analysis techniques developed for this purpose mathematically manipulate the data, using local spectral covariance matrices, to derive information about the target of interest. The calculation of a local spectral covariance matrix for every pixel in a given hyperspectral data scene is so computationally intensive that real-time processing with these algorithms is not feasible with today's general-purpose processing solutions. Specialized solutions are cost-prohibitive, inflexible, inaccessible, or not feasible for on-board applications.
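For context, the quantity driving this cost is the standard local sample covariance estimate; the notation below is generic and is not taken from the dissertation itself:

$$
\boldsymbol{\mu} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i,
\qquad
\hat{\boldsymbol{\Sigma}} = \frac{1}{N-1}\sum_{i=1}^{N}\left(\mathbf{x}_i - \boldsymbol{\mu}\right)\left(\mathbf{x}_i - \boldsymbol{\mu}\right)^{\mathsf{T}},
$$

where $\mathbf{x}_1,\dots,\mathbf{x}_N \in \mathbb{R}^{B}$ are the $N$ neighboring pixel spectra in a window around the pixel under test and $B$ is the number of spectral bands. Forming (and, in covariance-based detectors, inverting) this $B \times B$ matrix once per pixel is what makes real-time processing so demanding.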

Advances in graphics processing unit (GPU) capabilities and programmability offer an opportunity for general-purpose computing with access to hundreds of processing cores in a system that is affordable and accessible. The GPU also offers a flexibility, accessibility, and feasibility that other specialized solutions do not. The architecture of the NVIDIA GPU used in this research differs significantly from that of other parallel computing solutions. With such a substantial change in architecture, it follows that the paradigm for programming graphics hardware differs significantly from traditional serial and parallel software development paradigms.

In this research, a methodology for mapping an HSI target detection algorithm to NVIDIA GPU hardware and the Compute Unified Device Architecture (CUDA) Application Programming Interface (API) is developed. The RX algorithm is chosen as a representative stochastic HSI algorithm that requires the calculation of a spectral covariance matrix. The developed methodology is designed to calculate a local covariance matrix for every pixel in the input HSI data scene.
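The sketch below illustrates the kind of per-pixel local covariance computation described above as a CUDA kernel. It is a minimal illustration under assumed conventions (kernel name, pixel-major data layout, a fixed square window, one thread per pixel), not the implementation developed in the dissertation; in an RX-style detector the resulting covariance would subsequently be inverted and used in a Mahalanobis-distance test statistic for each pixel.

```cuda
// Minimal CUDA sketch: per-pixel local covariance for an HSI cube.
// Layout, names, and the window handling are illustrative assumptions.
// The cube is assumed pixel-major (band-interleaved-by-pixel):
//   cube[(row * width + col) * bands + b]
// Each thread handles one pixel and writes one bands x bands covariance matrix.

#include <cuda_runtime.h>

#define MAX_BANDS 64   // illustrative cap so the mean spectrum fits in thread-local storage

__global__ void localCovarianceKernel(const float *cube,  // H x W x B input scene
                                      float *cov,         // H x W x (B*B) output
                                      int height, int width, int bands,
                                      int radius)         // window half-width
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col >= width || row >= height || bands > MAX_BANDS) return;

    // Pass 1: mean spectrum over the (2*radius+1)^2 neighborhood (clamped at edges).
    float mean[MAX_BANDS];
    for (int b = 0; b < bands; ++b) mean[b] = 0.0f;
    int count = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int r = min(max(row + dy, 0), height - 1);
            int c = min(max(col + dx, 0), width - 1);
            const float *px = cube + (size_t)(r * width + c) * bands;
            for (int b = 0; b < bands; ++b) mean[b] += px[b];
            ++count;
        }
    }
    for (int b = 0; b < bands; ++b) mean[b] /= count;

    // Pass 2: accumulate outer products of mean-removed spectra.
    float *out = cov + (size_t)(row * width + col) * bands * bands;
    for (int i = 0; i < bands * bands; ++i) out[i] = 0.0f;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int r = min(max(row + dy, 0), height - 1);
            int c = min(max(col + dx, 0), width - 1);
            const float *px = cube + (size_t)(r * width + c) * bands;
            for (int i = 0; i < bands; ++i) {
                float di = px[i] - mean[i];
                for (int j = 0; j < bands; ++j)
                    out[i * bands + j] += di * (px[j] - mean[j]);
            }
        }
    }
    float denom = (count > 1) ? (float)(count - 1) : 1.0f;
    for (int i = 0; i < bands * bands; ++i) out[i] /= denom;
}
```

A launch would typically use a two-dimensional grid of two-dimensional thread blocks (for example, 16 x 16 threads per block) covering the spatial dimensions of the scene, so that each pixel's covariance is computed independently and in parallel.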

A characterization of the limitations imposed by the chosen GPU is given, and a path forward toward optimization of a GPU-based method for real-time HSI data processing is defined.
