Forestry and Environmental Science, Shahjalal University of Science and Technology, Sylhet

Remote Sensing in Forestry


Remote sensing

[Figure: Synthetic aperture radar image of Death Valley colored using polarimetry]
In the broadest sense, remote sensing is the measurement or acquisition of information about an object or phenomenon by a recording device that is not in physical or intimate contact with the object. In practice, remote sensing is the use at a distance (as from aircraft, spacecraft, satellite, or ship) of any device for gathering information about the environment. Thus an aircraft taking photographs, Earth observation and weather satellites, monitoring of a pregnancy via ultrasound, and space probes are all examples of remote sensing. In modern usage, the term generally refers to techniques involving the use of instruments aboard aircraft and spacecraft, and is distinct from other imaging-related fields such as medical imaging or photogrammetry.
While all astronomy could be considered remote sensing (indeed, extremely remote sensing), the term is normally applied only to terrestrial and weather observations.
Data acquisition techniques
Data may be acquired through a variety of devices depending upon the object or phenomenon being observed. Most remote sensing techniques make use of electromagnetic radiation emitted or reflected by the object of interest in a certain frequency domain (infrared, visible light, microwaves). This is possible because the examined objects (plants, houses, water surfaces, air masses, and so on) reflect or emit radiation at different wavelengths and intensities according to their current condition. Some remote sensing systems use sound waves in a similar way, while others measure variations in gravitational or magnetic fields.

Radiometric
Radar may be used for ranging and velocity measurements of either hard targets (e.g., an aircraft) or distributed targets (such as a cloud of water vapor in meteorology, or plasmas in the ionosphere). Synthetic aperture radar can produce precise digital elevation models of terrain (see RADARSAT and Magellan).
Laser and radar altimeters on satellites have provided a wide range of data. By measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so. By measuring the height and wavelength of ocean waves, the altimeters measure wind speed and direction, and surface ocean current speed and direction.
LIDAR (LIght Detection And Ranging): using laser pulses, ground-based LIDAR can detect and measure the concentration of various chemicals in the atmosphere, while airborne LIDAR can measure the heights of objects and features on the ground more accurately than radar can.
Radiometers and photometers are the most common instruments in use, collecting reflected and emitted radiation over a wide range of frequencies. The most common are visible and infrared sensors, followed by microwave and, rarely, ultraviolet. They may also be used to detect the emission spectra of various chemical species, thus providing information on chemical concentrations in the atmosphere.
Stereographic pairs of aerial photographs have often been used to make topographic maps; satellite imagery has also been used.
Thematic mappers take images in multiple wavelengths of electromagnetic radiation (multi-spectral) and are usually found on Earth observation satellites, including (for example) the Landsat program or the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, measure land usage, and examine the health of plants, including entire farming regions or forests, as illustrated in the sketch below.
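One common way to examine plant health from multi-spectral imagery is a band-ratio index such as the Normalized Difference Vegetation Index (NDVI), which contrasts red and near-infrared reflectance. The sketch below is a minimal illustration; the array values and band assignments are hypothetical, not taken from any particular scene.

```python
import numpy as np

# Hypothetical reflectance arrays (values 0-1) for the red and near-infrared
# bands; real data might come from, e.g., Landsat MSS band 5 (red) and
# band 7 (near-IR).
red = np.array([[0.10, 0.45], [0.08, 0.50]])
nir = np.array([[0.60, 0.48], [0.55, 0.52]])

# NDVI = (NIR - red) / (NIR + red); healthy vegetation reflects strongly
# in the near-IR and absorbs red light, so NDVI approaches +1.
ndvi = (nir - red) / (nir + red)
print(ndvi)  # high values -> vigorous vegetation, low values -> bare soil or water
```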
Geodetic
Satellite measurements of minute perturbations in the Earth's gravitational field (geodesy) may be used to determine changes in the mass distribution of the Earth, which in turn may be used for geological or hydrological studies.
Acoustic
Sonar may be utilized for ranging and measurements of underwater objects and terrain.
Seismograms taken at different locations can locate and measure earthquakes (after the fact) by comparing the relative intensity and precise timing.
In order to coordinate a series of observations, most sensing systems need to know where they are, what time it is, and the rotation and orientation of the instrument. High-end instruments now often use positional information from satellite navigation systems. Rotation and orientation are often provided to within a degree or two by electronic compasses. Compasses can measure not just azimuth (i.e., degrees to magnetic north) but also altitude (degrees above the horizon), since the magnetic field dips into the Earth at different angles at different latitudes. More exact orientations require gyroscopic pointing information, periodically realigned in some fashion, perhaps against a star or the limb of the Earth.
Resolution determines how many pixels are available in a measurement; higher resolutions are generally more informative, giving more data about more points. However, higher resolution occasionally yields less useful data. For example, in thematic mapping to study plant health, imaging the individual leaves of plants is actually counterproductive. Large amounts of high-resolution data can also clog a storage or transmission system with useless detail when a few low-resolution images might be a better use of the system.
Data processing
See also: Inverse problem
Generally speaking, remote sensing works on the principle of the inverse problem: while the object or phenomenon of interest (the state) may not be directly measured, there exists some other variable that can be measured (the observation), which may be related to the object of interest through some (usually mathematical) model. A common analogy is determining the type of animal from its footprints. For example, while it is impossible to measure temperatures in the upper atmosphere directly, it is possible to measure the spectral emissions of a known chemical species (such as carbon dioxide) in that region. The frequency of the emission may then be related to the temperature in that region via various thermodynamic relations.
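As a minimal sketch of such an inversion, the code below recovers a "brightness temperature" from a measured spectral radiance by inverting Planck's blackbody law. The wavelength and radiance values are illustrative assumptions, not readings from any particular instrument.

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength):
    """Invert Planck's law: given spectral radiance (W m^-2 sr^-1 m^-1)
    at a wavelength (m), return the temperature (K) of an ideal blackbody
    that would emit that radiance."""
    return (h * c / (wavelength * k)) / math.log(
        1.0 + 2.0 * h * c**2 / (wavelength**5 * radiance))

# Illustrative example: a thermal-infrared measurement at 11 micrometres.
wavelength = 11e-6  # m
radiance = 8.0e6    # W m^-2 sr^-1 m^-1 (hypothetical measured value)
print(round(brightness_temperature(radiance, wavelength), 1), "K")  # ~288 K
```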
The quality of remote sensing data is characterized by its spatial, spectral, radiometric and temporal resolutions. Spatial resolution refers to the size of a pixel recorded in a raster image; typically pixels correspond to square areas with side lengths from 1 to 1,000 metres. Spectral resolution refers to the number of different frequency bands recorded; usually this is equivalent to the number of sensors carried by the satellite or plane. Landsat images have seven bands, including several in the infrared spectrum. The MODIS instruments are among the highest resolving, at 36 bands. Radiometric resolution refers to the number of different intensities of radiation the sensor is able to distinguish; typically this ranges from 8 to 14 bits, corresponding to 256 to 16,384 intensities or "shades" of colour in each band. Temporal resolution is simply the frequency of flyovers by the satellite or plane, and is only relevant in time-series studies or those requiring an averaged or mosaic image (which may be necessary to avoid cloud cover). Finally, some people also speak of an "economic resolution": how much data you can acquire or process per unit of money.
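To see how these resolutions interact in practice, the back-of-the-envelope sketch below estimates the raw data volume of a single scene; all parameter values are illustrative assumptions.

```python
# Rough raw data volume of one scene, as a function of the four resolutions.
# All parameter values here are illustrative assumptions.
scene_km       = 185    # side length of a square scene, km
spatial_res_m  = 30     # spatial resolution (pixel size), m
n_bands        = 7      # spectral resolution (number of bands)
bits_per_pixel = 8      # radiometric resolution

pixels_per_side = scene_km * 1000 // spatial_res_m
n_pixels  = pixels_per_side ** 2
megabytes = n_pixels * n_bands * bits_per_pixel / 8 / 1e6
print(f"{n_pixels:,} pixels/band, ~{megabytes:.0f} MB per scene")
# ~38 million pixels per band, ~266 MB raw for 7 bands at 8 bits
```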
In order to generate maps, most remote sensing systems must relate a point in an image to a distance on the ground. The accuracy of this almost always depends on the precision of the instrument; for example, distortion in an aerial photographic lens, or in the platen against which the film is pressed, can cause severe errors when photographs are used to measure ground distances. The step in which this problem is resolved is called georeferencing: points in the image (typically 30 or more per image) are matched to points on a precise map, or in a previously georeferenced image, and the image is then "warped" to reduce any distortions. Since the early 1990s, most satellite images have been sold fully georeferenced.
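A minimal sketch of the georeferencing step, assuming an affine (shift-rotate-scale) distortion model fitted to ground control points by least squares; the coordinates below are invented for illustration, and a production workflow would use many more points plus the resampling ("warping") step.

```python
import numpy as np

# Ground control points: image (column, row) vs. map (easting, northing).
# These coordinates are made up for illustration.
img = np.array([[10, 20], [500, 40], [480, 600], [30, 580]], dtype=float)
map_ = np.array([[500300, 4100900], [515000, 4100200],
                 [514400, 4083400], [500900, 4084100]], dtype=float)

# Solve [col, row, 1] @ A = [easting, northing] in the least-squares sense.
G = np.hstack([img, np.ones((len(img), 1))])
A, *_ = np.linalg.lstsq(G, map_, rcond=None)

# Predicted map coordinates and residuals at the control points:
pred = G @ A
print("RMS residual (m):", np.sqrt(((pred - map_) ** 2).mean()))
```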
In addition, images may need to be radiometrically and atmospherically corrected. Radiometric correction gives a scale to the pixel values; e.g., pixel values on a scale of 0 to 255 are converted to actual radiance values. Atmospheric correction eliminates atmospheric haze by rescaling each frequency band so that its minimum value (usually realised in water bodies) corresponds to a pixel value of 0.
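The sketch below illustrates both corrections under simple assumptions: a linear DN-to-radiance rescaling with made-up calibration constants, and atmospheric correction by the common "dark object subtraction" approach the paragraph describes.

```python
import numpy as np

# Sketch of the two corrections described above, with made-up constants.
dn = np.array([[12, 60], [35, 255]], dtype=float)  # raw 8-bit pixel values

# Radiometric correction: linear rescaling of DN (0-255) to radiance,
# using hypothetical calibration gain/offset from the sensor's metadata.
gain, offset = 0.8, 3.0            # assumed calibration constants
radiance = gain * dn + offset      # W m^-2 sr^-1 um^-1 (illustrative units)

# Atmospheric correction by dark object subtraction: shift the band so
# that its darkest pixel (ideally deep water) maps to zero, removing the
# additive haze component.
corrected = radiance - radiance.min()
print(corrected)
```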
Interpretation is the critical process of making sense of the data. Traditionally it was done by a human analyst, perhaps with a few measurement tools and a light table, but analyses are becoming increasingly sophisticated and automated.
Old data from remote sensing is often valuable because it may provide the only long-term record for a large extent of geography. At the same time, the data is often complex to interpret and bulky to store. Modern systems tend to store the data digitally, often with lossless compression. The difficulty with this approach is that the data is fragile, the format may become archaic, and the data may be easy to falsify. One of the best systems for archiving data series is as computer-generated machine-readable ultrafiche, usually in typefonts such as OCR-B, or as digitized half-tone images. Ultrafiches survive well in standard libraries, with lifetimes of several centuries. They can be created, copied, filed and retrieved by automated systems. They are about as compact as archival magnetic media, and yet can be read by human beings with minimal, standardized equipment.
History
The 2001 Mars Odyssey spacecraft used spectrometers and imagers to hunt for evidence of past or present water and volcanic activity on Mars.
Beyond the primitive methods of remote sensing used by our earliest ancestors (e.g., standing on a high cliff or tree to view the landscape), the modern discipline arose with the development of flight. The balloonist G. Tournachon (alias Nadar), who made photographs of Paris from his balloon in 1858, is considered the first aerial photographer. Messenger pigeons, kites, rockets and unmanned balloons were also used for early images. These first, individual images were not particularly useful for map making or for scientific purposes.
Systematic aerial photography was developed for military purposes beginning in World War I, reaching a climax during the Cold War with the development of reconnaissance aircraft such as the U-2.
The development of artificial satellites in the latter half of the 20th century allowed remote sensing to progress to a global scale. Instrumentation aboard various Earth observing and weather satellites such as Landsat, the Nimbus series and more recent missions such as RADARSAT and UARS provided global measurements of various data for civil, scientific, and military purposes. Space probes to other planets have also provided the opportunity to conduct remote sensing studies in extraterrestrial environments: synthetic aperture radar aboard the Magellan spacecraft provided detailed topographic maps of Venus, while instruments aboard SOHO allowed studies of the Sun and the solar wind, to name just two examples.
Further strides were made in the 1960s and 1970s with the development of image processing of satellite imagery, beginning in the United States. Several research groups in Silicon Valley, including NASA Ames Research Center, GTE and ESL Inc., developed Fourier-transform techniques leading to the first notable image enhancement of aerial photographs.
See also
Aerial photography
Geographic information system (GIS)
Geomatics
Land cover
Pictometry
Radar
Weather radar
Radiometry
Satellite
List of Earth observation satellites
Space probe
Vector Map
Hyperspectral
Geography

Satellites (missions, platforms and sensors)
AVHRR
OrbView
EO-1
ASTER
ENVISAT-1
ERS
GOES
IKONOS
IRS
Landsat
MODIS
QuickBird
RADARSAT
SeaWiFS
SPOT Image



The Multispectral Scanner (MSS) has been the "workhorse" instrument on the first 5 Landsats. Its design, parameters, and operational mode are described here in some detail. This includes a first look at how the signals it creates by scanning the Earth's surface can be played back to produce a monitor display or photo image.


History of Remote Sensing: Landsat's Multi-Spectral Scanner (MSS)

The MSS instrument has operated on the first five Landsat spacecraft. Although the basics of scanning spectroradiometric sensors were reviewed earlier in this Section, because of MSS's important role in these missions, which extended over 31 years, some of this information is repeated and expanded on this page. [Figure: drawing of the MSS instrument, built by the Hughes Aircraft Corp. of Santa Barbara, CA.] The MSS gathers light through a ground-pointing telescope (not shown). The scan mirror oscillates (1 cycle every 33 milliseconds) over an angular displacement of ±2.89 degrees perpendicular to the orbital track. In the sideward (lateral) scan, the mirror covers an angle of 11.56 degrees (Angular Field of View, or AFOV), which from an orbital altitude of 917 km (about 570 miles) encompasses a swath width across the orbital track of 185 km (115 miles). During the lateral (across-track) sweep, which takes about 16 milliseconds, the spacecraft is also advancing along its orbit (the direction of flight), so the sweep covers a ground strip about 474 m (1,554 ft) deep from one side of the track to the other. Said another way, in the time it takes the mirror to oscillate laterally, the spacecraft advances 474 m relative to its ground track.
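The figures quoted above are geometrically consistent, as the following sketch verifies from the stated altitude and angular field of view (using the small-angle, flat-Earth approximation, which is adequate at this scale).

```python
import math

# Check the MSS scan geometry quoted above.
altitude_km     = 917      # orbital altitude
afov_deg        = 11.56    # angular field of view across track
line_width_m    = 79       # ground width of one detector line
lines_per_sweep = 6        # detectors per band

swath_km  = 2 * altitude_km * math.tan(math.radians(afov_deg / 2))
advance_m = line_width_m * lines_per_sweep
print(f"swath ~ {swath_km:.0f} km, advance per sweep = {advance_m} m")
# swath ~ 186 km (the quoted 185 km), advance = 474 m
```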

Light reflected from the surface (and atmosphere) gathered by this scan passes through an optical train, where the beam is split so as to pass through four bandpass filters that produce images in the spectral bands MSS 4 = 0.5 - 0.6 µm (green), MSS 5 = 0.6 - 0.7 µm (red), MSS 6 = 0.7 - 0.8 µm (photo-IR), and MSS 7 = 0.8 - 1.1 µm (near-IR). (The band numbering begins with 4 because bands 1-3 were assigned to the RBV sensor.) The radiation is carried by fiber optics to six electronic detectors for each band: bands 4 through 6 used photomultiplier tubes as detectors, and band 7 used silicon photodiodes. Light through each filter reaches its set of six detectors (24 in all, for the 4 bands), which subdivide the across-track scan into 6 parallel lines, each equivalent to a ground width of 79 m (259 ft; taken together the width covered is 6 x 259 = 1,554 ft, the number stated above). The mirror movement rate (nominally, its instantaneous scan moves across the ground being imaged at 6.8 m/µs along a scan line) is such that, at the orbital speed of 26,611 kph (16,525 mph), after the return oscillation during which no photons are collected, the next lateral swing produces a new across-track path of 6 lines (6 x 79 = 474 m) just adjoining the previous group of 6 lines. [Figure: diagram of the six-line scan pattern.] Note that the individual scan lines in this diagram are slanted relative to lines drawn perpendicular to the track boundaries. These are valid traces with respect to the ground: as the mirror moves sideways the spacecraft is moving ever forward, so each successive moment in the scan finds its ground target (represented by the pixel) slightly forward of the previous moment's view.

Perhaps the reader wonders, in examining the previous illustration, about the fact that no data are acquired during the return swing (other than a look at a light source within the sensor whose known radiometric output helps to calibrate the external reflectances). Textbooks describing the operation of the MSS tend to ignore this conundrum. The writer will attempt a simple explanation: during the forward swing (for Landsat, with its southward advancing path, from west to east), the six lines are created; the reverse oscillation yields no data. But look at the diagram. Line 1 moves left, as do the other lines. When the next forward swing occurs, line 1 is now just below line 6. The forward motion of the spacecraft is such that, for the scan rate involved, the next acquisition of 6 lines starts exactly where the previous line 6 left off, with the new line 1 directly below it.

I-20: Individual scan lines are commonly visible (stand out) in a printed or displayed image of a Landsat scene. Can you think of a technical reason why these may be seen?

This question suggests that anomalous scan lines are found in individual scenes. These are usually shown in black (meaning no data received); they are frequent in Landsat-5 imagery affected by a scanner timing failure. At each detector, the incoming light (photons) from the target frees electrons in numbers proportional to the number of photons striking the detector. These electrons move as a continuous current that passes through a counting system, which measures the quantity of electrons released (thus indicating radiation intensity) during each nine-microsecond detection interval. Over that minute time interval (called the dwell time) the advancing mirror picks up light coming from a lateral ground distance of 79 m (259 ft). The detector thus images a two-dimensional instantaneous field of view (IFOV, usually expressed in steradians, denoting the solid angle that subtends a spherical surface; in scanning it connotes the tiny area, within the total area being scanned, viewed at any instant), which amounts to 0.087 mrad (about 0.005°). At Landsat's orbital altitude of 917 km, the effective resolving power of the instrument is therefore based on the 79 x 79 m ground-equivalent (pixel) dimensions described above. Each detector is then cleared of its charge so as to produce the next batch of electrons generated from the next IFOV's photon inputs during the mirror's lateral sweep. As the scanning continues through the full lateral sweep, the set of all IFOV pixels in the line is read rapidly in succession. The onboard electronics convert this succession of analog signals (voltages) into digital values, which the onboard communication system telemeters (sends) to Earth by radio.
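The 79 m pixel follows directly from the quoted IFOV and altitude, as this one-line check shows (small-angle approximation).

```python
# The ground-equivalent size of the IFOV follows from the viewing angle
# and the orbital altitude (small-angle approximation).
ifov_mrad   = 0.087   # instantaneous field of view
altitude_km = 917

ground_m = ifov_mrad * 1e-3 * altitude_km * 1e3
print(f"IFOV footprint ~ {ground_m:.0f} m")  # ~80 m, matching the quoted 79 m pixel
```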

For each band detector, the electronic signal from this IFOV results in a single digital value (called its DN, or digital number, which for the MSS can range from 0 to 255, i.e., 2^8 levels). The value relates to the proportionally averaged reflectances from all materials within each IFOV. Since the mix of objects on the ground constantly changes, the DN values vary from one IFOV to the next. Each IFOV is represented in a b & w image as a tiny point of uniform gray-level tone, the pixel described earlier in this Section, whose brightness is determined by its DN value. In a Landsat MSS band image, owing to a sampling-rate (every nine microseconds) effect in which there is some overlap between successive spatial intervals on the ground, a pixel has an effective ground-equivalent dimension of 79 x 57 m (259 x 187 ft) but contains the reflectances of the full 79 x 79 m area actually viewed. This "peculiarity" needs further explanation: the wider rectangle (a square for the MSS), which can be designated the Ground Resolution Cell (GRC), is established by the IFOV of the scanner. But because the sampling interval Δt is finite, i.e., cannot be zero, the previous and next cells contribute parts of their represented ground scene that overlap (by 11.5 m) into each individual GRC. This requires removal (by resampling) of the overlap effects, leading to a new resolution cell that represents the actual Ground Sampled Distance (GSD). Thus, for the Landsat MSS the GRC of 79 x 79 m becomes a GSD of 79 x 57 m. Each GSD contains all the radiation sent from the GRC in each band's spectral interval, integrated into a single value expressed by the DN.

The average number of pixels within a full scan line (representing 185 km) across the orbital track is 3,240 (185 km / 0.057 km). To image an equi-dimensional square scene, which requires 185 km of down-track coverage, the average total number of lines is set at 2,340 (185 km / 0.079 km). Each band image therefore consists of approximately (again, variable) 7,581,600 (3,240 x 2,340) pixels - a lot to handle during computer processing, and over 30 million pixels when all 4 bands are considered. The number of pixels actually does change somewhat owing to satellite attitude (shifts in orientation - pitch, roll, and yaw) and instrument performance, which lead to slight variations in the pixel total.
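The same bookkeeping can be reproduced directly from the pixel dimensions; the rounding below differs slightly from the nominal 3,240 x 2,340 figures quoted above.

```python
# Reproduce the pixel bookkeeping for one nominal 185 x 185 km MSS scene.
swath_km, scene_km = 185, 185
gsd_across_m, gsd_along_m = 57, 79   # effective pixel size (see above)

pixels_per_line = round(swath_km * 1000 / gsd_across_m)   # ~3,246
lines_per_scene = round(scene_km * 1000 / gsd_along_m)    # ~2,342
per_band = pixels_per_line * lines_per_scene
print(f"{per_band:,} pixels per band, {4 * per_band:,} for all four bands")
# ~7.6 million per band, ~30 million for the 4 bands
```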

Image producers can use the continuous stream of pixel values to drive an electronic device that generates an uninterrupted light beam of varying intensity, which sweeps systematically over film to produce a b & w photo image. The resulting tone variations on the image are proportional to the DNs in the array. In a different process, we can display the pixels generated from these sampling intervals as an image of each band by storing their DN values sequentially in an electronic signal array. We can then project this array line by line onto a TV monitor and get an image made of light-sensitive spots (also called pixels) of varying brightnesses. Or these DNs can be handled numerically, not to produce images, but as inputs to data analysis programs (such as the scene classifications described in Section 1).
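A minimal sketch of the second process described above: sequential DN values are reshaped into a line-by-line array and treated as gray levels. The dimensions and DN values here are toy stand-ins for real telemetry.

```python
import numpy as np

# Turn a stream of DN values back into a displayable band image: reshape
# the sequential samples into (lines, pixels-per-line) and treat each DN
# as a gray level. Dimensions here are toy values.
lines, pixels_per_line = 4, 6
dn_stream = np.random.randint(0, 256, lines * pixels_per_line)  # fake telemetry

image = dn_stream.reshape(lines, pixels_per_line).astype(np.uint8)
# 'image' can now be written to file or shown on a monitor; each element
# is one pixel whose brightness is proportional to its DN.
print(image)
```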

Little has been said on this page about the appearance of a basic Landsat image. This is deferred until the next page, although Landsat images have already been shown in the Overview. We comment here on several general characteristics of a Landsat image. [Figure: mosaic of four image strips over Kyrgyzstan.] In this scene covering part of the Central Asian country of Kyrgyzstan, one sees four strips of imagery joined to produce a mosaic (considered in Section 7). Each strip is a swath from one orbital pass. Notice that the imagery has a slanted appearance (assuming the vertical is true north). Why this slant? Because as the MSS sensor looks down at the Earth while moving along its orbit, the Earth's surface underneath is moving from west to east owing to the planet's rotation. At the 99° inclination (relative to the equator) of Landsat's orbit, the orbit plane precesses about the Earth at the same angular rate that the Earth moves about the Sun. In the image, each successive line slips slightly westward. The accumulation of these progressive offsets results in a figure that, for any individual scene, would be a parallelogram with inclined sides. This also accounts in part for the choice of orbital inclination, which compensates somewhat for that rotation.
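A rough calculation shows the size of this westward slip. The ground-track speed and equatorial rotation speed used below are round-number assumptions, not values from the text.

```python
# Approximate westward offset accumulated while one 185 km scene is scanned.
# Assumptions: equatorial surface rotation speed ~463 m/s, spacecraft
# ground-track speed ~6.5 km/s (both round figures).
scene_length_m   = 185_000
ground_speed_mps = 6_500
earth_rot_mps    = 463

scan_time_s = scene_length_m / ground_speed_mps     # ~28 s per scene
offset_km   = earth_rot_mps * scan_time_s / 1000    # ~13 km westward slip
print(f"scan time ~ {scan_time_s:.0f} s, offset ~ {offset_km:.0f} km")
```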

We said above that a Landsat scene is produced by arbitrarily cutting the continuous swath at 185 km from top to bottom. But that trimming need not happen: one can instead produce a more continuous swath scene that is elongated in the direction of orbit. In producing an individual parallelogram scene, it is customary to have about 10% of the top consist of the bottom 10% of the previous (more northern) scene, with a similar 10% continuation at the bottom. There is also overlap on the left-right margins, called sidelap, whose amount varies with latitude. At the equator, the sidelap ranges from 7% (MSS) to 14% (TM); the amount increases toward the poles, so that at high latitudes the sidelap between adjacent scenes can exceed 80%. This north-south and east-west overlap allows a crude form of stereo viewing within these margins; in practice, this stereo capability is seldom utilized.
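The growth of sidelap with latitude follows from the convergence of adjacent ground tracks toward the poles while the swath width stays fixed; a hedged sketch, assuming the equatorial sidelap figures quoted above:

```python
import math

def sidelap(lat_deg, equator_sidelap):
    """Fraction of overlap between adjacent swaths at a given latitude.
    Adjacent ground tracks converge as cos(latitude) while the swath width
    stays fixed, so: sidelap = 1 - (1 - sidelap_at_equator) * cos(lat)."""
    return 1 - (1 - equator_sidelap) * math.cos(math.radians(lat_deg))

for lat in (0, 40, 60, 80):
    print(lat, f"{sidelap(lat, 0.14):.0%}")   # TM's 14% equatorial sidelap
# 0 -> 14%, 40 -> 34%, 60 -> 57%, 80 -> 85% (the "80+%" quoted above)
```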