
Pixel-based fusion using IKONOS imagery

Uttam Kumar, Chiranjit Mukhopadhyay and T. V. Ramachandra*
Indian Institute of Science, Bangalore, India.
Email: uttam@ces.iisc.ernet.in, cm@mgmt.iisc.ernet.in, cestvr@ces.iisc.ernet.in

Introduction


Most Earth observation satellites cannot provide high spatial and high spectral resolution data simultaneously because of design or observational constraints. To overcome this limitation, image fusion (also called pansharpening) techniques are used to integrate the high resolution (HR) panchromatic (PAN) band and the low resolution (LR) multispectral (MS) bands, which carry complementary spatial and spectral detail, for image analysis and automated tasks such as feature extraction, segmentation and classification. The standard merging methods are based on the following steps:

(i) Registration and resampling of the LR MS image to the pixel size of the HR PAN image so that the two can be superimposed. The images are usually co-registered to within 0.25 pixel, with the MS bands resampled using control points and a nearest-neighbour or bi-cubic polynomial fit.

(ii) Transformation of the LR MS bands into another space using a standard technique (such as PCA, correspondence analysis, RGB to IHS, or a wavelet transform).

(iii) Replacement of the first component of the transformed LR MS image by the HR PAN image.

(iv) Inverse transformation to recover the MS bands with both high spatial and high spectral resolution.

In many fusion techniques (such as high-pass filtering and high-pass modulation), steps (ii), (iii) and (iv) are replaced by convolving the HR PAN image with a user-designed filter and adding the resulting detail image to the LR MS image to obtain the HR MS image. In this communication, we implement six image fusion techniques: the À Trous algorithm based wavelet transform, Multiresolution Analysis based Intensity Modulation, Gram-Schmidt fusion, CN spectral sharpening, luminance-chrominance fusion, and high-pass fusion.
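
By way of illustration only, the sketch below shows one possible realisation of the generic component-substitution workflow (steps i to iv) and of the high-pass alternative described above, written in Python with NumPy and SciPy. It is not the implementation used in this communication: the function names (pca_pansharpen, highpass_fuse), the choice of PCA as the transform, the box-filter detail extraction and the kernel size are assumptions for the example, and the inputs are assumed to be already co-registered and resampled to the PAN grid (step i).

import numpy as np
from scipy.ndimage import uniform_filter

def pca_pansharpen(ms, pan):
    # Component-substitution fusion: replace the first principal component of
    # the MS bands with the statistics-matched PAN band and invert the
    # transform (steps ii-iv). `ms` is an (H, W, B) array resampled to the
    # PAN grid; `pan` is the (H, W) panchromatic band. Illustrative only.
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)

    # (ii) Forward PCA on the LR MS bands
    mean = x.mean(axis=0)
    xc = x - mean
    cov = np.cov(xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # largest variance first
    eigvecs = eigvecs[:, order]
    pcs = xc @ eigvecs                       # principal components

    # (iii) Match PAN to the mean/variance of PC1, then substitute it for PC1
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p

    # (iv) Inverse transform back to the original band space
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(h, w, b)

def highpass_fuse(ms, pan, kernel_size=5):
    # High-pass fusion: extract the high-frequency detail of PAN (PAN minus
    # its local mean, here a simple box filter) and add it to each resampled
    # MS band. The kernel size is an assumed example value.
    detail = pan.astype(np.float64) - uniform_filter(pan.astype(np.float64),
                                                     size=kernel_size)
    return ms.astype(np.float64) + detail[..., np.newaxis]

Both functions return floating-point arrays on the PAN grid; in practice the fused bands would be rescaled or clipped back to the radiometric range of the input MS data before further analysis.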