Monday, November 21, 2016

LAB 6: Geometric Correction

INTRODUCTION

The following lab demonstrates the ability to perform geometric correction, an important preprocessing step applied to satellite imagery before data are interpreted and extracted. Both a first-order and a third-order polynomial transformation were performed.

METHODOLOGY

For the first-order polynomial transformation, a Landsat TM image of the Chicago metropolitan area was corrected using a USGS digital raster graphic of the same area. Using ERDAS Imagine, both images were displayed side by side in 2D viewers. The process began by selecting a polynomial geometric model from control points. The input image was the Landsat TM scene and the reference image was the USGS digital raster graphic. Four ground control points (GCPs) were then placed at matching locations in both images, spread well apart so the correction would be accurate across the whole scene. Spatial accuracy was checked by keeping the root mean square (RMS) error of each matched GCP below 0.05 and the total control point error below 0.5.
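As a quick illustration of how the RMS error reported above is computed, the sketch below fits a first-order (affine) transform to four hypothetical GCP pairs with numpy and measures the residual error. The coordinates are invented for demonstration, not taken from the lab data, and this is a simplified stand-in for what ERDAS does internally.

```python
import numpy as np

# Hypothetical GCP coordinates: (x, y) in the input Landsat image and the
# matching (x', y') in the USGS reference image; real values come from the
# ERDAS GCP table.
src = np.array([[120.0, 80.0], [900.0, 95.0], [880.0, 760.0], [140.0, 740.0]])
ref = np.array([[118.5, 81.2], [898.8, 96.1], [881.3, 758.7], [141.2, 741.0]])

# A first-order polynomial is an affine transform: x' = a0 + a1*x + a2*y
# (and likewise for y'), so three GCPs determine it and a fourth adds a check.
A = np.column_stack([np.ones(len(src)), src])   # design matrix [1, x, y]
coef, *_ = np.linalg.lstsq(A, ref, rcond=None)  # least-squares fit

pred = A @ coef                                  # GCPs pushed through the fit
residuals = np.linalg.norm(pred - ref, axis=1)   # per-GCP error
rms = float(np.sqrt(np.mean(residuals ** 2)))    # overall RMS error
```

With four points and only three coefficient pairs, the extra point is what makes the RMS error a meaningful accuracy check rather than an exact fit.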

For the third-order polynomial transformation, a corrected Landsat TM image of the eastern Sierra Leone region was used to correct satellite imagery of the same region. The same process was executed, except that a third-order polynomial transformation was performed. A total of 12 GCPs were matched, and the RMS error and total control point error were kept below 1.
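The number of required GCPs grows with the polynomial order, which is why 12 GCPs (two more than the minimum) were collected here. A one-line check of the standard minimum-GCP formula:

```python
def min_gcps(order: int) -> int:
    """Minimum GCPs needed to solve a polynomial transform of order t:
    the polynomial has (t + 1)(t + 2) / 2 coefficients per coordinate."""
    return (order + 1) * (order + 2) // 2

print([min_gcps(t) for t in (1, 2, 3)])  # → [3, 6, 10]
```

Collecting more than the minimum, as in this lab, leaves redundancy for the RMS error check.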

RESULTS

Figure 1 shows the four matching ground control points placed at nearly the same location in both the input and reference images. Because this is only a first-order polynomial transformation, only three matching GCPs are required.
Figure 1
Figure 2 shows the same view as Figure 1, except for the eastern Sierra Leone region. Since a third-order polynomial transformation is performed, a total of 10 GCPs is required.
Figure 2
Figure 3 shows the geometrically corrected image over the pre-existing Landsat TM imagery. The grayer image is the output image; the correction can be seen in the curving of the left side of the image and the abrupt change at the right edge, where the cutoff between images occurs.
Figure 3


SOURCES

Satellite images were obtained from the Earth Resources Observation and Science Center, United States Geological Survey.
The digital raster graphic was obtained from the Illinois Geospatial Data Clearinghouse.

Thursday, November 10, 2016

LAB 5

INTRODUCTION

The following lab demonstrates knowledge of basic LiDAR data structure and processing using ERDAS Imagine and ArcGIS. Specifically, it demonstrates the ability to process and retrieve digital surface models (DSMs) and digital terrain models (DTMs), and to process and create imagery from the point cloud. The scenario is to act as a GIS manager checking LiDAR point cloud data and verifying its classification in Eau Claire County, WI. After verifying and correcting the point cloud data, a DSM, a DTM, and an intensity image were produced.

METHODOLOGY

Creating an LAS Dataset

Using ArcMap, a new LAS dataset was made and all LAS files, showing points in tiles, were added. Statistics were then calculated for the dataset to support quality assurance and quality control of the individual files and of the dataset as a whole; comparing each tile's Min Z and Max Z against the known elevation range of the AOI reveals whether the data contain erroneous points.
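The Min Z / Max Z check described above can be sketched in a few lines of Python. The tile statistics and elevation range below are hypothetical placeholders, not the actual Eau Claire values:

```python
def tile_ok(zmin, zmax, known_min, known_max):
    """Flag a tile whose Min Z / Max Z statistics fall outside the known
    elevation range of the AOI (a sign of noise or misclassified points)."""
    return known_min <= zmin and zmax <= known_max

# Hypothetical per-tile statistics in metres, as read from the LAS
# dataset's calculated statistics.
tiles = {"tile_01": (231.4, 402.7), "tile_02": (-812.0, 395.1)}
for name, (zmin, zmax) in tiles.items():
    print(name, "OK" if tile_ok(zmin, zmax, 200.0, 450.0) else "suspect")
```

A tile like the second one, with a wildly negative Min Z, would be the kind of file flagged for closer inspection.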
Next, to assign the correct coordinate system to the LAS dataset, the metadata from the original point cloud data (obtained from Eau Claire County, 2013) was examined and the appropriate XY and Z projected coordinate systems were assigned. The dataset was then added to ArcMap with a shapefile of Eau Claire County (obtained from Price, 2016) to verify spatial accuracy. A reclassification from 9 to 8 classes allowed for better representation of elevation.
The aspect, slope, and contour of the AOI can then be examined and used as supporting data to improve classification within an Expert System classifier.
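For reference, slope and aspect can be derived from a gridded elevation model with simple finite differences. The numpy sketch below uses a made-up toy grid and one common aspect convention; it is not ArcGIS's exact implementation.

```python
import numpy as np

def slope_aspect(z, cell):
    """Slope (degrees) and aspect (degrees, using one common convention)
    from a gridded elevation model via finite differences."""
    dz_dy, dz_dx = np.gradient(z, cell)          # elevation change per metre
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = (np.degrees(np.arctan2(dz_dy, -dz_dx)) + 360) % 360
    return slope, aspect

# Toy 2 m elevation grid in metres; a real run would use the LAS-derived DEM.
z = np.array([[10., 10., 11., 12.],
              [10., 11., 12., 13.],
              [11., 12., 13., 15.]])
slope, aspect = slope_aspect(z, 2.0)
```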

Visualization as Point Cloud

To further analyze any given feature of the AOI, a profile can be derived using the LAS Dataset Profile View tool. Instead of showing an aerial image of the first return echoes of LiDAR, this allows for a 3D profile image containing multiple return echoes to determine a feature's actual shape.

Generating DSMs and DTMs from Point Clouds

For better visualization of the AOI, a DSM and a DTM, each enhanced by hillshading, were generated. To determine the spatial resolution of these derivative products, the average nominal pulse spacing (NPS) was obtained from the dataset properties. The dataset was then set to points coded by elevation and filtered to first returns.
Using the Arc Toolbox, the LAS dataset was converted to a raster to generate the DSM. A binning interpolation type with maximum cell assignment and the natural-neighbor void-fill method was used, with the sampling size set to approximately 2 meters. To enhance this DSM, a hillshade was applied, narrowing the grayscale of the imagery to just the shading of objects. This was done using the hillshade function in the 3D Analyst tools.
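The binning step with maximum cell assignment amounts to keeping the highest point falling in each cell. A minimal numpy sketch with hypothetical points (void filling omitted):

```python
import numpy as np

# Hypothetical first-return points: x, y (metres) and z (elevation).
pts = np.array([[0.5, 0.5, 290.0], [1.2, 0.4, 295.5], [3.1, 1.0, 301.2],
                [0.8, 3.6, 288.7], [3.4, 3.2, 299.9]])
cell = 2.0                                   # ~2 m, from the average NPS

cols = (pts[:, 0] // cell).astype(int)
rows = (pts[:, 1] // cell).astype(int)
dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)

# "Maximum" cell assignment: each cell keeps the highest point inside it,
# which is what makes the result a surface model rather than a terrain model.
for r, c, z in zip(rows, cols, pts[:, 2]):
    if np.isnan(dsm[r, c]) or z > dsm[r, c]:
        dsm[r, c] = z
```

Swapping the comparison to keep the lowest point per cell gives the minimum cell assignment used for the DTM below.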
To convert the LAS dataset to a DTM, the dataset was left on points coded by elevation but filtered to ground returns instead. The same conversion was performed with the same parameters except for a minimum cell assignment. A hillshade of the DTM was produced with exactly the same methodology as for the DSM.
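The hillshading applied to both models follows the standard Lambertian formulation: light the surface from a chosen sun azimuth and altitude. The sketch below is one common version of it, not necessarily the exact function ArcGIS uses.

```python
import numpy as np

def hillshade(z, cell=2.0, azimuth=315.0, altitude=45.0):
    """Lambertian hillshade of an elevation grid: returns 0-255 shades for
    a sun at the given azimuth/altitude (degrees)."""
    az = np.radians(360.0 - azimuth + 90.0)        # compass -> math angle
    alt = np.radians(altitude)
    dz_dy, dz_dx = np.gradient(z, cell)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(255 * shade, 0, 255)
```

On a perfectly flat grid this produces a uniform gray of 255·sin(45°) ≈ 180; variation only appears where slope and aspect interact with the light direction, which is why the hillshaded DSM/DTM reads as shaded objects and terrain.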
For the final derivative, a LiDAR intensity image was produced by setting the LAS dataset to points with a first-return filter. For the LAS-dataset-to-raster conversion, the same parameters were used as for the DTM, except that the value field was switched to intensity and the cell assignment type to average. Brightness was automatically corrected when the image was moved from ArcMap to ERDAS Imagine.

RESULTS

Creating an LAS Dataset

Figure 1 shows Eau Claire point cloud data according to aspect. The different shades delineate specific parts and features of the aerial photography.
Figure 1: Aspect view
Figure 2 shows Eau Claire point cloud data according to slope. Red indicates the most dramatic change in slope, while green indicates neutral, flat areas.

Figure 2: Slope view
Figure 3 shows Eau Claire point cloud data according to contour. The deep red color is used to indicate the outlines of specific objects and changes in elevation.

Figure 3: Contour view

Visualization as Point Cloud

Figure 4 shows an Eau Claire walking bridge as orange points in a linear fashion crossing the white Eau Claire River as detailed by LiDAR. This is a 2D view of first-return echoes captured by the LiDAR sensor.
Figure 4: 2D bridge view
Using a profile view, the shape of the actual walking bridge is obtained by showing multiple return echoes captured by LiDAR, thus giving a better interpretation of the actual shape of a feature.

Figure 5: 3D bridge view

Generating DSMs and DTMs from Point Clouds

The average NPS for Eau Claire was 1.55 meters, which gave the 2 m spatial resolution used for Figures 6-10. The digital surface model in Figure 6 is a raster showing elevation collected from the first returns of the LiDAR pulses. This imagery can be used as supporting data for classifying forest, buildings, and other structures above the ground surface.
Figure 6: DSM of Eau Claire County
The hillshade function, as shown by Figure 7, narrows the gradient so the grayscale includes only the shading of objects in Eau Claire.
Figure 7: Enhanced DSM using hillshade technique
The DTM of Eau Claire shown in Figure 8 represents the bare-earth terrain: first-return echoes are excluded in favor of the ground-return echoes collected by LiDAR.

Figure 8: DTM of Eau Claire County
Just as Figure 7 did for the DSM, the hillshade again narrows the gradient to show only shades of the elevation in the terrain, providing much more useful imagery for the DTM as shown by Figure 9.

Figure 9: Enhanced DTM using hillshade technique

For Figure 10, a LiDAR intensity map was derived from original point cloud data and transferred from ArcMap to ERDAS to adjust brightness for better aerial interpretation.

Figure 10: LiDAR Intensity of Eau Claire County 


SOURCES

LiDAR point cloud and tile index were obtained from Eau Claire County, 2013.

Eau Claire County shapefile was from Price, M. H. (2012). Mastering ArcGIS. Dubuque, IA: McGraw-Hill. 




Wednesday, November 2, 2016

LAB 4

INTRODUCTION

The following lab demonstrates the ability to subset areas of interest from large satellite imagery and to synchronize satellite imagery with Google Earth. It also showcases optimization of spatial resolution and radiometric enhancement for better interpretation. Lastly, the lab demonstrates resampling, image mosaicking, and binary change detection techniques. All techniques and skills were demonstrated using satellite imagery over Eau Claire, WI. The lab is divided into seven parts: Image Subsetting, Image Fusion, Radiometric Enhancement, Synchronizing Google Earth, Resampling, Image Mosaicking, and Binary Change Detection.

METHODOLOGY

IMAGE SUBSETTING

To create a smaller study area (Eau Claire/Chippewa Falls) from a large satellite image (Eau Claire and surrounding counties), an inquire box was drawn and a subset & chip function was performed. Next, to subset a non-rectangular image, an area of interest (AOI) was delineated. A shapefile of the AOI was loaded over the original satellite image, delineated using the paste from selected object function, and saved as an AOI layer. Again, a subset & chip function was performed.

IMAGE FUSION

To optimize spatial resolution for visual interpretation, a coarse-resolution image was combined with a panchromatic image of higher spatial resolution. This image fusion retained the reflective properties of the original image while improving its spatial resolution to that of the panchromatic image. This was done using the resolution merge function in the Pan Sharpen icon under the Raster tab, with the multiplicative method and the nearest neighbor resampling technique.
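A minimal sketch of the multiplicative merge idea, using toy brightness values rather than the actual Landsat bands (this simplifies what ERDAS's resolution merge does):

```python
import numpy as np

# Toy data: one 30 m multispectral band block-replicated onto the 15 m
# panchromatic grid (all values are hypothetical brightness values).
ms_band = np.repeat(np.repeat(np.array([[60., 80.], [100., 120.]]), 2, 0), 2, 1)
pan = np.array([[110., 130., 150., 170.],
                [120., 140., 160., 180.],
                [100., 120., 140., 160.],
                [ 90., 110., 130., 150.]])

# Multiplicative merge: multiply each band by the pan image, then rescale
# back into the band's original BV range so values stay comparable.
fused = ms_band * pan
fused *= ms_band.max() / fused.max()
```

The product picks up the pan image's fine spatial detail, while the rescaling keeps the fused band in the same BV range as the input band.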

RADIOMETRIC ENHANCEMENT

To enhance spectral and radiometric quality for image analysis, a simple haze reduction function was performed. This was done by going to the Raster tab, and finding the haze reduction function under the Radiometric icon. Default parameters were used.

SYNCHRONIZING GOOGLE EARTH

To provide additional information and a current satellite image to aid in image classification, Google Earth was linked to the image viewer in ERDAS. This was done by navigating to the Google Earth tab and connecting and syncing to view. 

RESAMPLING

To increase the spatial resolution of an image for enhanced clarity and accuracy, resampling up is performed, reducing pixel size and increasing file size. Two resampling runs were made: one using the nearest neighbor technique and one using bilinear interpolation. Both used the resample pixel size function in the Spatial icon under the Raster tab, and in each the pixel size was resampled up from 30 x 30 m to 15 x 15 m. The only difference between the runs was the choice of nearest neighbor or bilinear interpolation.
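The difference between the two techniques can be seen in a small numpy sketch that doubles the resolution of a toy 2x2 band both ways; it is a simplified stand-in for ERDAS's resampler.

```python
import numpy as np

img = np.array([[10., 20.], [30., 40.]])   # toy 30 m band

# Nearest neighbour: each new 15 m pixel simply copies the closest
# original BV (fast, but stair-steps along boundaries).
nn = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def bilinear(img, ny, nx):
    """Each output BV is a distance-weighted average of the four
    surrounding input pixels (smoother boundaries, more computation)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, ny)
    xs = np.linspace(0, w - 1, nx)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1); fy = ys - y0
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1); fx = xs - x0
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy)[:, None] + bot * fy[:, None]

bl = bilinear(img, 3, 3)   # centre pixel becomes the mean of all four BVs
```

Nearest neighbor only ever reuses original BVs, while bilinear interpolation creates intermediate values, which is exactly the blurrier-versus-smoother contrast discussed in the results.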

IMAGE MOSAICKING

When the AOI is greater than the spatial extent of a single satellite image scene, image mosaicking is performed to combine multiple satellite images so that they fully cover the AOI. Image mosaicking was performed twice, first using Mosaic Express and then Mosaic Pro. Both are found in the Mosaic icon under the Raster tab. For Mosaic Express, the multiple images in virtual mosaic option was selected under the Multiple tab, and background transparent and fit to frame were selected under the Raster Options tab. After the two images were loaded, all default options were used. For Mosaic Pro, after loading both images, the use histogram matching option was selected in the Color Corrections icon, and overlap areas was used as the matching method in the Set icon.

BINARY CHANGE DETECTION

To estimate and map the brightness values (BVs) of pixels that changed in Eau Claire County and neighboring counties, binary change detection was performed. Two Image Functions was selected from the Functions icon under the Raster tab. For simplicity, only layer 4 of each image was analyzed for change. Under the output options section, the operator was switched to negative. After this function was run, the resulting image's histogram was analyzed to predict the threshold where areas of change would occur.
To map the resulting image, a model was built in Model Maker under the Toolbox tab. Two raster inputs were created from the two images, feeding a function and an output raster. The function subtracted raster 2 from raster 1 and added a constant of 127 to avoid negative BVs.
This constant shifted the histogram positively and moved the change threshold to the upper tail of the histogram. Another calculation, the mean plus 3 standard deviations, gave a new threshold. Using the differenced image as the input raster, a second model applied a conditional function assigning a value of 0 to all pixels below this threshold and 1 to all pixels above it.
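The whole differencing-and-thresholding chain can be sketched in numpy on synthetic data; the images, noise level, and change patch below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
date1 = rng.integers(0, 128, (50, 50)).astype(float)  # stand-in layer 4, date 1
date2 = date1 + rng.normal(0, 3, (50, 50))            # date 2, mostly unchanged
date2[10:15, 10:15] -= 80                             # a 5x5 patch of real change

diff = date1 - date2 + 127            # the +127 constant keeps BVs positive
threshold = diff.mean() + 3 * diff.std()
change = np.where(diff > threshold, 1, 0)   # conditional: 1 = change, 0 = not
```

Unchanged pixels cluster near 127 while the changed patch lands in the upper tail, so the mean-plus-3-standard-deviations threshold isolates exactly the changed pixels.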
The resulting image was then transferred to ArcMap and overlaid by another image of the surrounding area. All areas shown from the ERDAS image produced from the model maker represent BVs that changed from our original images.

RESULTS

IMAGE SUBSETTING

Figure 1 is a subsetted image of Chippewa Falls/Eau Claire area from the original Eau Claire and surrounding county satellite image. This subsetted image is limited to a square area.
Figure 1
Figure 2 is also a subsetted image of the Chippewa Falls/Eau Claire area from the original Eau Claire and surrounding county satellite image. However, the AOI (area of interest) was delineated by a previously-made shapefile--this method allows for non-rectangular AOIs to be examined.
Figure 2
Both the inquire box and AOI methods can be used to subset a larger satellite image, and each has its advantages. The inquire box method is faster and adequate for less intensive work, while the AOI method allows precise, non-rectangular delineation at the cost of the extra effort of creating a shapefile.

IMAGE FUSION

The subsetted image from Figure 2 was used. However, the 30x30m spatial resolution needed to be upgraded to a 15x15m resolution. The reflective image was fused with a 15x15m panchromatic image to retain the reflective properties of Figure 2's image but increase the spatial resolution to that of the panchromatic image. Results can be seen by Figure 3 and compared to that of Figure 2.
Figure 3

RADIOMETRIC ENHANCEMENT

Figure 4 shows satellite imagery of the Eau Claire County area after a haze reduction function has been performed. In the original satellite imagery, a white haze congregated around the lower right portion of the image. After the basic haze reduction, only a slight shadow remains where the originally corrupted data were recorded.
Figure 4

SYNCHRONIZING GOOGLE EARTH

After connecting, linking, and synchronizing Google Earth to ERDAS, the screen capture in Figure 5 shows the supplementary information that can be gathered, such as labels attached to cities, rivers, and features, and a plethora of pictures taken in the area.
Figure 5

RESAMPLING

After resampling up the Chippewa Falls/Eau Claire area from 30x30m to 15x15m, the results are shown when zoomed in with the left window representing the nearest neighbor method and the right representing the bi-linear interpolation method. 
Figure 6

When resampling up, the new pixels created by reducing the pixel size need new brightness values (BVs). With the nearest neighbor method, each new pixel is assigned the BV of the nearest original pixel. This is advantageous because it requires little processing power and is fairly quick. However, when observed closely, a stair-stepping effect appears along boundary lines; here the Eau Claire River shows a blurrier, less defined boundary with the land. In the right window, which used bilinear interpolation, the resulting image has a smoother, better defined boundary between the river and the land. This is because bilinear interpolation assigns each new pixel a distance-weighted average of the four nearest input pixels, providing a more seamless transition across boundaries where neighboring pixels differ greatly in BV.

IMAGE MOSAICKING

For Figure 7, the mosaic performed using Mosaic Express is shown as the top image and the Mosaic Pro result is shown on the bottom. With Mosaic Express, the pixels in the overlap between images simply take the BVs of the top image. This retains data from the original histogram but produces a very abrupt transition between the top and bottom images, since values are not matched to the bottom image's histogram.
Figure 7

As shown by the bottom image, the mosaic made with Mosaic Pro offers a much smoother transition of BVs between the two images. This takes more processing and therefore more time, but the results are much better: the histograms of the two images are averaged over the overlap area, and new BVs are assigned to the overlapping pixels using this information.

BINARY CHANGE DETECTION

After running the image differencing function, the resulting image's histogram is shown in Figure 8. A calculation of the mean of the BVs plus and minus 1.5 standard deviations gave predicted positive (70.08) and negative (-24.26) thresholds for pixel change on the histogram.
Figure 8
Figure 8 shows the ERDAS image after the first function, which subtracted the second raster from the first, and the second function, which classified each pixel as change or no change relative to the threshold, were run.

This image was overlaid upon another image of the same area in ArcMap where all red values represent change from one raster to the other.

SOURCES

Data was obtained from the drive at the University of Wisconsin-Eau Claire at Q/StudentCoursework/Wilson/GEOG338-001/Lab4
Satellite imagery was obtained from the Earth Resources Observation and Science Center, United States Geological Survey.
The shapefile used for image subsetting was obtained from Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill, 2014.