Abstract
Obtaining an accurate thematic map of land use and land cover (LU/LC) from a single optical remotely sensed dataset is a serious challenge. Fusing multispectral and panchromatic images plays an important role in producing an accurate LU/LC map, because combining data from different portions of the spectrum with different spatial and spectral characteristics improves image classification. For this study, Landsat Operational Land Imager multispectral and panchromatic images were adopted. The study investigated the effectiveness of using the high spatial resolution panchromatic band to refine the LU/LC mapping methodology for Baghdad city, Iraq, by comparing classifications produced by different algorithms on multispectral and fused images. Two classification algorithms were employed: minimum distance (MD) and the maximum likelihood classifier (MLC). Based on the outcomes, a suitable classification method was proposed for LU/LC mapping. Results were evaluated with a confusion matrix. The fused image produced by the principal component-based spectral sharpening algorithm and classified with MLC yielded the most accurate results, with an overall accuracy of 98.90% and a kappa coefficient of 0.98. The results show that the best LU/LC mapping methodology for the study area is fusion of the multispectral and panchromatic images via the principal component-based spectral sharpening algorithm, followed by MLC classification.
1 Introduction
Reliable and accurate land use and land cover (LU/LC) classification is important for a wide range of applications (Zhang 2010). Classification approaches have been employed in various disciplines and applications, including global change monitoring, land use detection, geographical information data updating, natural hazard modeling, and urban expansion prediction (Cihlar and Jansen 2001; Lu et al. 2011; Sang et al. 2014; Hayder 2015; Otukei et al. 2015). Remote sensing (RS) is useful for LU/LC estimation given suitable datasets and classification techniques. Image classification is the most common method for mapping LU/LC; it is a complicated process involving several inputs and factors. The main steps are determining a suitable classification approach (Chasmer et al. 2014), selecting training sites, pre-processing and processing the imagery, segmenting and extracting features, selecting a suitable classifier, performing post-classification processing, and assessing accuracy (Lu and Weng 2007). Selecting a suitable classifier is critical for obtaining an accurate LU/LC thematic map. Techniques, methodologies, and algorithms adopted for LU/LC estimation include the MLC (Gevana et al. 2015), advanced approaches such as artificial neural networks (ANN) (Elatawneh et al. 2014; Ghosh et al. 2014; Hayder et al. 2018; Cavur et al. 2019), decision trees (DT) (Chasmer et al. 2014; Löw et al. 2015), object-based algorithms and the support vector machine (SVM) (Iounousse et al. 2015), dataset fusion techniques (Temesgen et al. 2001), and sensor integration techniques (Lucas et al. 2014; Hayder et al. 2017; Kamrul et al. 2018; Cavur et al. 2019). Researchers and analysts still face difficulties and challenges in selecting which classification algorithm to use (Srivastava et al. 2012; Chasmer et al. 2014; Anjan and Arun 2019).
Image fusion provides superior spatial detail and information (Garzelli and Filippo 2005). Several studies deal with fusion of panchromatic (PAN) and multispectral (MS) images; it works by combining the high-frequency spatial information of the PAN image with the low-frequency spectral information of the MS image (Jing and Cheng 2009). Replacing the low-frequency features of the MS image with the high-frequency features of the PAN image enhances the spatial resolution at the cost of some spectral information (Guo et al. 2010). Image fusion is used to improve spatial resolution, advance geometric accuracy, improve topographic presentation, and refine classification precision (Pohl and Genderen 1998). This study aims to investigate the effectiveness of integrating the PAN and MS images of Landsat-8 OLI and to examine pixel-based classification algorithms on the result, in order to propose a refined classification methodology for LU/LC mapping. A refined classification method is a powerful tool for obtaining accurate LU/LC maps, and the proposed procedure can be extended by analysts and researchers to different datasets and regions.
2 Materials and Methods
Approaches to LU/LC mapping have been tested and examined in previous works, and selecting the best algorithm is a difficult task because it depends on the location of the study area and the kinds of features it contains; for example, a polar area will have different features and feature reflectances than a tropical area (Jia et al. 2014). For this study, Landsat-8 satellite imagery from 2018 was adopted for the pre-processing and processing steps. Supervised pixel-based classification algorithms were applied to the following processed datasets: (1) the original MS Landsat-8 image with a spatial resolution of 30 m together with the panchromatic Landsat-8 image with a spatial resolution of 15 m, and (2) two fused images generated from the 30 m MS image and the 15 m PAN image by spectral sharpening algorithms. Each fusion level was tested as a method that could potentially be applied. Accuracy assessment was carried out using ground truth data collected from the 2018 Google Earth Pro map, and the best technique was then identified. The overall flowchart of the methodology is shown in Fig. 1.
2.1 Study Area Description
The investigated area is Baghdad city, Iraq, as illustrated in Fig. 2. Baghdad is the Iraqi capital and, with a population of about 7,216,040, the biggest city in Iraq. It lies between latitudes 33.332 and 33.329 and longitudes 44.551 and 44.239 and occupies an area of 380 km2. It is also the biggest economic and administrative center of Iraq. The Tigris river divides Baghdad into two parts, Al-Karkh (the western part) and Al-Resafa (the eastern part); both sides were considered in this research, as shown in Fig. 2. The elevation of Baghdad ranges between 31 and 39 m above mean sea level (MSL), and the city is almost entirely covered by urbanized areas.
2.2 The Used Dataset and Reference Data
The satellite dataset used is a Landsat-8 image obtained in 2018. The Landsat-8 satellite, launched on 11 February 2013, carries two sensors: (1) the Operational Land Imager (OLI) and (2) the Thermal Infrared Sensor (TIRS). Landsat-8 data comprise eleven bands: bands 1–7 and 9 have a spatial resolution of 30 m, the panchromatic band (band 8) has a spatial resolution of 15 m, and the thermal bands 10 and 11 have a spatial resolution of 100 m. Tables 1 and 2 describe the characteristics of OLI and TIRS. The imagery for this study was downloaded free of charge from the US Geological Survey (USGS), with path 168 and row 37, at http://earthexplorer.usgs.gov. It was acquired on 12 April 2018 at processing level L1T (standard terrain correction) with WGS-84 as the datum. The adopted image is cloud-free. Figure 3a, b shows the MS and PAN images of the study area.
Ground truth data are required for supervised classification, training site selection, and classification accuracy evaluation. In this study, Google Earth Pro mapping was adopted to collect training and testing sites of interest because the Google Earth Pro image has a high spatial resolution. Following interpretation, five LU/LC classes were defined for the study area: urbanization area, water bodies, roads, soil area, and vegetation. The adopted Google Earth image was from 2018. Random sampling was applied to collect samples for every class, and the sampling sites were distributed throughout the whole study area to ensure accurate classification results. More than 200 pixels were selected for every class. Figure 4a, b shows the collected training and testing samples for image classification.
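The per-class random sampling described above can be sketched as follows; the class mask and pixel counts below are illustrative toy data, not the authors' actual reference samples:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_class_pixels(class_mask, n_samples, rng=rng):
    """Randomly pick (row, col) pixel coordinates inside a boolean class
    mask, without replacement, for use as training/testing samples."""
    rows, cols = np.nonzero(class_mask)
    idx = rng.choice(rows.size, size=n_samples, replace=False)
    return np.column_stack([rows[idx], cols[idx]])

# Toy 100x100 scene where the left half belongs to one class (e.g. water).
water = np.zeros((100, 100), dtype=bool)
water[:, :50] = True
pts = sample_class_pixels(water, n_samples=200)  # >= 200 pixels per class
```

Repeating this for each of the five classes, with masks digitized from the reference map, yields spatially distributed training and testing sets.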
2.3 Pre-processing
Accurate geometric correction of satellite images is required for accurate image fusion (Otukei et al. 2015). Suitable locations for ground control points (GCPs) were selected, and a first-order polynomial transformation with nearest-neighbor resampling was applied; the resulting root mean square error (RMSE) was 1.35 pixels. The geometric correction of the Landsat MS and PAN images (both acquired by the OLI sensor) was based on GCPs collected from the Google Earth map. Ten GCPs distributed across the study area were used to correct the MS image, as indicated in Table 3 and Fig. 5. Radiometric correction, shown by previous studies to be an essential pre-processing step for removing sun illumination effects (Pohl and Van 1998; Zhang 2010; Idi and Nejad 2013), was also applied: both the MS and PAN images were corrected to convert their digital numbers (DNs) into meaningful radiance, and the dark object subtraction (DOS) method was employed to correct both images for atmospheric conditions (Zhang et al. 2014). Figure 6 shows the images after geometric and radiometric correction. Most of the collected GCPs were road intersections, because the study area has suffered from many wars and buildings may have been destroyed, whereas road intersections persist even if the buildings around them are destroyed.
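A first-order polynomial (affine) transform fitted to GCP pairs, with the residual reported as an RMSE in pixels, can be sketched as below; the GCP coordinates are synthetic, not the ten points of Table 3:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of a first-order polynomial (affine) transform
    mapping image GCPs to reference GCPs: [x', y'] = [x, y, 1] @ A."""
    G = np.column_stack([np.asarray(src_pts, float),
                         np.ones(len(src_pts))])          # design matrix
    A, *_ = np.linalg.lstsq(G, np.asarray(dst_pts, float), rcond=None)
    return A                                              # shape (3, 2)

def gcp_rmse(src_pts, dst_pts, A):
    """Root mean square residual of the fitted transform, in pixels."""
    G = np.column_stack([np.asarray(src_pts, float), np.ones(len(src_pts))])
    resid = G @ A - np.asarray(dst_pts, float)
    return float(np.sqrt(np.mean(np.sum(resid ** 2, axis=1))))

# Synthetic GCP pairs following x' = 2x + 1, y' = 3y - 2 exactly.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(1, -2), (21, -2), (1, 28), (21, 28)]
A = fit_affine(src, dst)
err = gcp_rmse(src, dst, A)   # ~0 for an exactly affine relation
```

With real GCPs the residual is nonzero; the 1.35-pixel RMSE reported above is this quantity computed over the ten collected points.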
2.4 Image Fusion
The MS and PAN image data were integrated after image sub-setting and layer stacking. Two fusion approaches were applied to the MS and PAN datasets (Jing and Cheng 2009; Li et al. 2012; Idi and Nejad 2013; Löw et al. 2015). The first fusion was conducted using the principal component-based spectral sharpening method (Ban et al. 2010; Amarsaikhan et al. 2012; Idi and Nejad 2013), whereas the second applied the color-normalized spectral sharpening method. Figure 7a, b shows the fused images produced by the principal component (PC)-based and color-normalized (CN) spectral sharpening algorithms. PC pan-sharpening is a well-known pan-sharpening method belonging to the projection-and-substitution family of algorithms; it transforms the correlated bands into a set of uncorrelated variables known as principal components (Paidamwoyo et al. 2020). CN spectral sharpening, where CN stands for the color-normalized approach, is often applied to pan-sharpen MS images and can sharpen any number of bands simultaneously (Ghassemian 2016): each multispectral band is multiplied by the panchromatic band and the result is normalized, with all MS bands included in the fused image.
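A minimal numpy sketch of the PC-based substitution described above, assuming the MS stack has already been resampled to the PAN grid; the variance-matching step before substitution is one common convention, not necessarily the exact implementation used by the authors' software:

```python
import numpy as np

def pc_sharpen(ms_up, pan):
    """PC-based spectral sharpening: run PCA on the (upsampled) MS bands,
    substitute the first component with the mean/variance-matched PAN
    band, and invert the transform.
    ms_up: (H, W, B) MS stack resampled to the PAN grid; pan: (H, W)."""
    H, W, B = ms_up.shape
    X = ms_up.reshape(-1, B).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the band covariance matrix
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]   # order by descending variance
    pcs = Xc @ vecs                          # principal components
    # Match PAN to the first PC's mean and std before substitution
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean              # inverse PCA
    return fused.reshape(H, W, B)

# Synthetic stand-ins for the resampled MS stack and the PAN band.
rng = np.random.default_rng(0)
ms = rng.random((20, 20, 4))
pan = ms.mean(axis=2) + 0.01 * rng.random((20, 20))
fused = pc_sharpen(ms, pan)
```

Because only the first component is replaced and its mean is preserved, the per-band means of the fused image match the input MS means, which is why the method largely retains the MS spectral character while injecting PAN spatial detail.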
2.5 Image Classification and Accuracy Assessment
The confusion matrix approach is often used to evaluate LU/LC classification outcomes; it compares the classified results against the reference data (Li et al. 2012). Pixel-based image classification approaches were applied (Qi et al. 2010). Two classification algorithms, MD and MLC, were selected and evaluated in this research to map the LU/LC of Baghdad city (Jing and Cheng 2009; Ban et al. 2010; Bhaskaran et al. 2010; Qi et al. 2010; Otukei and Blaschke 2010; Taubenböck et al. 2012; Anjan and Arun 2019). The overall accuracy (OA) and kappa coefficient are commonly used to assess the quality of classification results (Nishii and Tanaka 1999; Li et al. 2012). Both measures were used in this study; in their standard form they are computed from the confusion matrix as

OA = (1/n) Σ_i n_ii

κ = (n Σ_i n_ii − Σ_i n_i+ n_+i) / (n² − Σ_i n_i+ n_+i)

where n is the total number of classified pixels, n_ij is the number of pixels of reference label i that have been classified into label j, and n_i+ and n_+i are the row and column totals of the confusion matrix for class i.
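Both measures can be computed directly from the confusion matrix; a minimal sketch with toy labels:

```python
import numpy as np

def confusion_matrix(truth, pred, n_classes):
    """cm[i, j] = number of reference pixels of class i classified as j."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(truth, pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """OA: fraction of pixels on the diagonal (correctly classified)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Kappa: agreement corrected for chance, from row/column marginals."""
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # chance
    return (po - pe) / (1 - pe)

# Toy example: six pixels, three classes, one misclassification.
truth = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 0, 2, 2])
cm = confusion_matrix(truth, pred, 3)
```

For this toy matrix OA is 5/6 and kappa is 0.75, illustrating how kappa penalizes agreement attributable to chance.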
3 Results
3.1 First Experiment (Classification of Landsat MS Image)
In the first experiment, the MLC and MD classifiers were applied to the 30 m Landsat MS image to map the LU/LC of Baghdad city, producing two LU/LC thematic maps of the study area. The confusion matrix was employed to evaluate the classification results. Figure 8 shows the LU/LC classifications of the Landsat MS image, performed with ENVI 5.3 software. The results reveal that MLC produced better accuracy than the other tested method, with an overall accuracy of about 94.09%; the difference between the MLC and MD classifiers was more than 0.20%. Table 4 lists the overall accuracy and kappa coefficient of the MD and MLC classifiers. In the classified images, red and green represent the urban and vegetation areas, respectively, while water bodies, soil areas, and roads are represented by blue, yellow, and black, respectively.
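The two classifiers compared in this experiment can be sketched as follows; equal class priors are assumed for MLC, and the training pixels below are synthetic two-band data, not the study's actual spectra:

```python
import numpy as np

def fit_stats(X, y, n_classes):
    """Per-class mean vector and covariance matrix from training pixels
    X of shape (N, bands) with integer labels y."""
    means, covs = [], []
    for c in range(n_classes):
        Xc = X[y == c]
        means.append(Xc.mean(axis=0))
        covs.append(np.cov(Xc, rowvar=False))
    return np.array(means), np.array(covs)

def minimum_distance(X, means):
    """MD: assign each pixel to the nearest class mean (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

def maximum_likelihood(X, means, covs):
    """MLC: assign each pixel to the class maximizing the Gaussian
    log-likelihood (equal priors assumed)."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = X - mu
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Two well-separated synthetic classes in a 2-band feature space.
rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 1.0, size=(60, 2))
X1 = rng.normal([10, 10], 1.0, size=(60, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)
means, covs = fit_stats(X, y, 2)
test_px = np.array([[0.0, 0.0], [10.0, 10.0]])
```

Unlike MD, MLC weights distances by each class's covariance, which is why it typically outperforms MD when class spectra have unequal spreads.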
3.2 Second Experiment: Fused Image of MS and PAN Image Classification
In the second experiment, fused images were generated by integrating the low spatial resolution Landsat MS image with the high spatial resolution PAN image, in order to enhance the estimation and mapping of LU/LC and obtain the highest accuracy. Two spectral sharpening algorithms were applied to perform the fusion: the first was principal component-based spectral sharpening and the second was the color-normalized spectral algorithm, yielding two fused images (Garzelli and Filippo 2005). Both MLC and MD were then used to map LU/LC and to compare the classifiers' results, as shown in Fig. 9. The confusion matrix was again used to evaluate the MLC and MD results. Statistically, MLC shows the higher OA (97.78% and 98.90%) on the images fused by the color-normalized and principal component-based spectral sharpening methods, respectively (Pohl and Van 1998; Lu and Weng 2007). These results were compared with the Landsat MS image classification from the first experiment in order to evaluate the effectiveness of using the PAN image for LU/LC mapping. Figures 9 and 10 and Tables 5 and 6 reveal that integrating the PAN image with the MS image improves the accuracy of image classification and produces LU/LC maps with the highest accuracy.
4 Discussion
To compare the two experiments, a comparison table was compiled. Table 7 presents the overall accuracy and kappa coefficients of all six LU/LC thematic maps derived from the Landsat-8 multispectral and fused images employed in this study.
Table 7 lists the highest OA obtained for each adopted classifier on the MS and fused images. The MLC approach worked best when applied to the Landsat image fused by the principal component-based spectral sharpening approach, and MD also performed well on that fused image. The highest classification accuracy was obtained by applying the MLC classifier to the image fused from MS and PAN: the overall accuracy was about 98.90% with a kappa coefficient of about 0.98. By contrast, MD classification of the MS data yielded an overall accuracy of 86.03% and a kappa coefficient of 0.81. Across all stages of this study, the most accurate procedure for producing the LU/LC map is therefore classification of the image obtained by fusing the PAN and MS images of Landsat-8 with the principal component-based spectral algorithm, as shown in Fig. 11. Figure 11 shows the resulting LU/LC map of the study area, with an OA of 98.90% and five classes (urbanization area, vegetation area, water bodies, soil area, and roads), generated by applying MLC to the Landsat MS and PAN data fused with the principal component-based spectral sharpening algorithm. The refined classification procedure is illustrated in Fig. 12.
5 Conclusion
This study analyzed the use of Landsat-8 MS and PAN images for estimating LU/LC in Baghdad city, Iraq. The aim was to find an improved methodology for LU/LC mapping of Baghdad by comparing classifications produced with two fusion approaches (PC-based and CN spectral sharpening) on the MS and fused images. Two classification methods, the MD and MLC classifiers, were adopted, and an improved classification method was proposed based on the results, which were validated with a confusion matrix. The fused image produced by the principal component-based spectral sharpening algorithm and classified with MLC shows the most accurate result, with an overall accuracy of 98.90% and a kappa coefficient of 0.98. By comparison, classification of the MS data achieved an overall accuracy of 94.09% and a kappa coefficient of 0.93, and classification of the color-normalized fused image achieved an overall accuracy of 97.78% and a kappa coefficient of 0.93. All these results confirm that image fusion with the principal component-based spectral algorithm combined with the MLC classifier is the best technique for producing the thematic LU/LC map in this study.
References
Amarsaikhan, D., Saandar, M., Ganzorig, M., Blotevogel, H. H., Egshiglen, E., Gantuyal, R., Nergui, B., & Enkhjargal, D. (2012). Comparison of multisource image fusion methods and land cover classification. International Journal of Remote Sensing, 33(8), 2532–2550. https://doi.org/10.1080/01431161.2011.616552.
Anjan, R., & Arun, B. I. (2019). Multi-temporal land use land cover (LULC) change analysis of a dry semi-arid river basin in western India following a robust multi-sensor satellite image calibration strategy. Heliyon, 5(4), 2405–8440. https://doi.org/10.1016/j.heliyon.2019.e01478.
Ban, Y., Hu, H., & Rangel, I. M. (2010). Fusion of Quickbird MS and RADARSAT SAR data for urban land-cover mapping: Object-based and knowledge-based approach. International Journal of Remote Sensing, 31(6), 1391–1410. https://doi.org/10.1080/01431160903475415.
Bhaskaran, S., Paramananda, S., & Ramnarayan, M. (2010). Per-pixel and object-oriented classification methods for mapping urban features using Ikonos satellite data. Applied Geography, 30(4), 650–665. https://doi.org/10.1016/j.apgeog.2010.01.009.
Cavur, M., Duzgun, H. S., Kemec, S., & Demirkan, D. C. (2019). Land use and land cover classification of Sentinel 2-A: St Petersburg case study. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, XLII-1/W2, pp. 16–17
Chasmer, L., Hopkinson, C., Veness, T., Quinton, W., & Baltzer, J. (2014). A decision-tree classification for low-lying complex land cover types within the zone of discontinuous permafrost. Remote Sensing of Environment, 143, 73–84. https://doi.org/10.1016/j.rse.2013.12.016.
Cihlar, J., & Jansen, L. J. M. (2001). From land cover to land use: A methodology for efficient land use mapping over large areas. The Professional Geographer, 53(2), 275–289. https://doi.org/10.1080/00330124.2001.9628460.
Elatawneh, A., Kalaitzidis, C., Petropoulos, G. P., & Schneider, T. (2014). Evaluation of diverse classification approaches for land use/cover mapping in a Mediterranean region utilizing Hyperion data. International Journal of Digital Earth, 7(3), 194–216. https://doi.org/10.1080/17538947.2012.671378.
Garzelli, A., & Filippo, N. (2005). Interband structure modeling for pan-sharpening of very high-resolution multispectral images. Information Fusion, 6(3), 213–224.
Gevana, D., Camacho, L., Carandang, A., Camacho, S., & Im, S. (2015). Land use characterization and change detection of a small mangrove area in Banacon Island, Bohol, Philippines using a maximum likelihood classification method. Forest Science and Technology, 11(4), 97–205.
Ghassemian, H. (2016). A review of remote sensing image fusion methods. Information Fusion, 32, 75–89.
Ghosh, A., Sharma, R., & Joshi, P. K. (2014). Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion. Applied Geography, 48, 31–41. https://doi.org/10.1016/j.apgeog.2014.01.003.
Guo, Q., Chen, S., Leung, H., & Liu, S. (2010). Covariance intersection based image fusion technique with application to pan-sharpening in remote sensing. Information Sciences, 180(18), 3434–3443.
Hayder, D., Shattri, M., Noordin, A., & Biswajeet, P. (2015). Band-to-band registration model for near-equatorial earth observation satellite images with the use of automatic control point extraction. International Journal of Remote Sensing, 36(8), 2184–2200.
Hayder, D., Mohammed, O. I., Goma, B. A. A. (2017). Hierarchical classification approach for mapping rubber tree growth using per-pixel and object-oriented classifiers with SPOT-5 imagery. The Egyptian Journal of Remote Sensing and Space Sciences, Production and hosting by Elsevier, B.V., https://doi.org/10.1016/j.ejrs.2017.01.004.
Hayder, D., Suhad, A. H., & Hasan, S. A. K. (2018). Extraction detailed buildings 3D model with using high resolution satellite imagery by remote sensing and GIS analysis; AL-Qqasim Green University a case study. International Journal of Civil Engineering & Technology (IJCIET), 9(7), 1097–1108.
Idi, B. Y., & Nejad, P. G. (2013). Fusion of RADARSAT-2 and IKONOS images for land cover mapping: Performance analysis. Applied Remote Sensing Journal, 3(1), 18.
Iounousse, J., Er-Raki, S., El Motassadeq, A., & Chehouani, H. (2015). Using an unsupervised approach of probabilistic neural network (PNN) for land use classification from multitemporal satellite images. Applied Soft Computing, 30, 1–13. https://doi.org/10.1016/j.asoc.2015.01.037.
Jia, M., Wang, Z., Li, L., Song, K., Ren, C., Liu, B., & Mao, D. (2014). Mapping China’s mangroves based on an object-oriented classification of Landsat imagery. Wetlands, 34(2), 277–283. https://doi.org/10.1007/s13157-013-0449-2.
Jing, L., & Cheng, Q. (2009). Two improvement schemes of PAN modulation fusion methods for spectral distortion minimization. International Journal of Remote Sensing, 30(8), 2119–2131.
Kamrul, I., Mohammed, J., Biswajit, N., & Tapan, K. N. (2018). Land use classification and change detection by using multi-temporal remotely sensed imagery: The case of Chunati wildlife sanctuary, Bangladesh. The Egyptian Journal of Remote Sensing and Space Sciences, 21, 37–47.
Li, G., Lu, D., Moran, E., Dutra, L., & Batistella, M. (2012). A comparative analysis of ALOS PALSAR L-band and RADARSAT-2 C-band data for land-cover classification in a tropical moist region. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 26–38. https://doi.org/10.1016/j.isprsjprs.2012.03.010.
Löw, F., Conrad, C., & Michel, U. (2015). Decision fusion and non-parametric classifiers for land use mapping using multi-temporal Rapideye data. ISPRS Journal of Photogrammetry and Remote Sensing, 108, 191–204. https://doi.org/10.1016/j.isprsjprs.2015.07.001.
Lu, D., & Weng, Q. (2007). A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5), 823–870. https://doi.org/10.1080/01431160600746456.
Lu, D., Li, G., Moran, E., Dutra, L., & Batistella, M. (2011). A comparison of multisensor integration methods for land cover classification in the Brazilian Amazon. GIScience & Remote Sensing, 48(3), 345–370. https://doi.org/10.2747/1548-1603.48.3.345.
Lucas, R. M., Clewley, D., Accad, A., Butler, D., Armston, J., Bowen, M., Bunting, P., Carreiras, J., Dwyer, J., Eyre, T., Kelly, A., McAlpine, C., Pollock, S., & Seabrook, L. (2014). Mapping forest growth and degradation stage in the Brigalow Belt bioregion of Australia through integration of ALOS PALSAR and Landsat-derived foliage projective cover data. Remote Sensing of Environment, 155, 42–57. https://doi.org/10.1016/j.rse.2013.11.025.
Nishii, R., & Tanaka, S. (1999). Accuracy and inaccuracy assessments in land-cover classification. IEEE Transactions on Geoscience and Remote Sensing, 37(1), 491–498. https://doi.org/10.1109/36.739098.
Otukei, J. R., & Blaschke, T. (2010). Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms. International Journal of Applied Earth Observation and Geoinformation, 12, S27–S31. https://doi.org/10.1016/j.jag.2009.11.002.
Otukei, J. R., Blaschke, T., & Collins, M. (2015). Fusion of Terrasar-X and Landsat ETM+ data for protected area mapping in Uganda. International Journal of Applied Earth Observation and Geoinformation, 38, 99–104. https://doi.org/10.1016/j.jag.2014.12.012.
Paidamwoyo, M., Willard, M., & Naledzani, M. (2020). Comparison of image fusion techniques using satellite pour l’Observation de la Terre (SPOT) 6 satellite imagery. Applied Sciences, 10, 1881. https://doi.org/10.3390/app10051881.
Pohl, C., & Genderen, J. L. V. (1998). Multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing, 19(5), 823–854.
Pohl, C., & Van, G. J. L. (1998). Review article multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing, 19(5), 823–854. https://doi.org/10.1080/014311698215748.
Qi, Z., Yeh, A. G., Li, X., & Lin, Z (2010). Land use and land cover classification using RADARSAT-2 Polarimetric SAR Image. In: Proceedings of the ISPRS Technical Commission VII Symposium: 100 Years ISPRS Advancing Remote Sensing Science, 38, 198–203. Vienna: ISPRS.
Sang, H., Zhang, J., Zhai, L., Qiu, C., & Sun, X. (2014). Analysis of rapid eye imagery for agricultural land cover and land use mapping. In: 2014 3rd International Workshop on Earth Observation and Remote Sensing Applications (EORSA), 366–369. Changsha: IEEE.
Srivastava, P. K., Han, D., Rico-Ramirez, M. A., Bray, M., & Islam, T. (2012). Selection of classification techniques for land use/land cover change investigation. Advances in Space Research, 50(9), 1250–1265. https://doi.org/10.1016/j.asr.2012.06.032.
Taubenböck, H., Felbier, A., Esch, T., Roth, A., & Dech, S. (2012). Pixel-based classification algorithm for mapping urban footprints from radar data: A case study for RADARSAT-2. Canadian Journal of Remote Sensing, 38(3), 211–222. https://doi.org/10.5589/m11-061.
Temesgen, B., Mohammed, M. U., & Korme, T. (2001). Natural hazard assessment using GIS and remote sensing methods, with reference to the landslide in the Wondogenet area, Ethiopia. Physics and Chemistry of the Earth, 26(9), 665–675.
Zhang, J. (2010). Multi-source remote sensing data fusion: Status and trends. International Journal of Image and Data Fusion, 1(1), 5–24. https://doi.org/10.1080/19479830903561035.
Zhang, J., Dong, W., Wang, J. X., & Liu, X. N. (2014). A method to enhance the fog image based on dark object subtraction. Applied Mechanics and Materials, 543, 2484–2487.
Funding
Open access funding provided by Lulea University of Technology.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dibs, H., Hasab, H.A., Al-Rifaie, J.K. et al. An Optimal Approach for Land-Use / Land-Cover Mapping by Integration and Fusion of Multispectral Landsat OLI Images: Case Study in Baghdad, Iraq. Water Air Soil Pollut 231, 488 (2020). https://doi.org/10.1007/s11270-020-04846-x