EP0622641B1 - Sonar systems - Google Patents

Sonar systems

Info

Publication number
EP0622641B1
EP0622641B1 (application EP94303002A)
Authority
EP
European Patent Office
Prior art keywords
range
pixels
pixel
cross
intensities
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP94303002A
Other languages
German (de)
French (fr)
Other versions
EP0622641A3 (en)
EP0622641A2 (en)
Inventor
Paul P. Audi
Michael A. Deaett
Stephen G. Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raytheon Co
Original Assignee
Raytheon Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raytheon Co filed Critical Raytheon Co
Publication of EP0622641A2
Publication of EP0622641A3
Application granted
Publication of EP0622641B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S15/8902 Side-looking sonar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/539 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Description

  • This invention relates to a sonar system for mapping the floor of a body of water to identify a submerged foreign object, such mapping being formed from a sequence of echo returns, each one of such echo returns being produced as a range scan in response to a transmitted pulse directed toward the floor in a predetermined direction, a sequence of the sonar returns being received in cross-range directions perpendicular to the predetermined direction, such sonar system storing in a two dimensional array digital signals as pixels each representing digitally the intensity of the echo return at a predetermined range position from the system in the predetermined direction and a predetermined cross-range position from a reference position of the system.
  • As is known in the art, it is sometimes desirable to identify submerged foreign objects such as mines, cables or oil pipe lines using sonar. A common sonar type used for examining the sea bottom is the Side Looking Sonar (SLS).
  • An SLS is either towed or mounted on an underwater vehicle and is moved through the water in a forward direction at an approximately constant speed and depth.
  • The sonar transmits a short (typically 0.10 to 0.20 ms), high frequency (typically 500 kHz) pulse into the water and has a very narrow horizontal beam-width (typically 1 degree or less) in a direction perpendicular to the forward direction.
  • The pulse propagates through the water and reflects off the sea bottom, and the echo returns to the sonar. After transmission, the sonar begins receiving the echoes. Echoes that arrive later in time come from further away on the sea bottom.
  • The received signal maps to a long, thin strip of the sea bottom and is called a range scan.
  • After a fixed elapsed time, and after the vehicle has moved a short distance in the forward (cross-range) direction, the sonar stops receiving and begins a new transmission.
  • The length of the fixed elapsed receive time determines the maximum range of the sonar along the sea bottom. The range may also be limited by the sonar power.
  • Because of spreading and absorption loss, the received intensity decreases with range (time elapsed from transmission). This is compensated for in the sonar by a time-varying, range-dependent gain (TVG).
  • The beamwidth and pulse length determine the sonar's azimuth and range resolutions, respectively.
  • As the sonar moves in the forward (i.e. cross-range) direction, the range scans correspond to successive parallel strips along the sea bottom, thereby producing a two dimensional "map" of the sea bottom: sonar received intensity (the z axis) vs. range (the x axis) and cross-range (the y axis). An SLS sometimes transmits and receives on both the port and starboard sides and produces two images.
  • Because the sonar travels at a certain altitude above the sea bottom, the first echoes are very faint and are a product of volume scattering in the water between the sonar and the sea bottom. This is called the water column and its length (in time) depends on the sonar's altitude. These faint echoes give no information about the sea bottom and are therefore removed from the scan data.
  • What remains is the two-dimensional "map" of the sea bottom: sonar received intensity vs. range and cross-range; this is called the raw image data. Because the grazing angle decreases with range and a shallow grazing angle produces less backscatter, the image intensity decreases with range. This is apparent in the raw image data: the near range data is much more intense than the far range data.
  • The raw image data is normalized to eliminate the effect of grazing angle to form the normalized image data, as sketched below.
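  • A minimal sketch of such a normalization, assuming a divide-by-local-mean moving average over range (the patent says only that a moving window normaliser is passed over the data; the window length and exact scheme here are assumptions):

```python
import numpy as np

def normalize_scan(raw, win=64, eps=1e-6):
    """Divide each sample of a range scan by the local mean intensity,
    flattening the slow falloff of intensity with range. `win` and the
    divide-by-local-mean scheme are assumptions, not parameters given
    in the patent."""
    raw = np.asarray(raw, dtype=float)
    kernel = np.ones(win) / win
    background = np.convolve(raw, kernel, mode="same")  # local mean vs. range
    return raw / (background + eps)                     # unit-mean background

# Example: a synthetic scan whose intensity decays with range
rng = np.random.default_rng(0)
scan = rng.rayleigh(1.0, 1024) * np.linspace(2.0, 0.2, 1024)
flat = normalize_scan(scan)
print(flat.mean())  # close to 1 after normalization
```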
  • The normalized image data can be thought of as a map of the sea bottom, but this can be misleading. Although elevations will often produce stronger echoes (and therefore higher image intensity) and depressions will often produce weaker echoes (lower image intensity), echo intensity is also affected by the reflectivity of the sea bottom, the texture of the sea bottom and the local grazing angle at the sea bottom.
  • A mine on the sea bottom may produce a region of high intensity in the image (highlight) by reflecting directly back to the sonar. It may also produce a region of low intensity in the image (shadow) by blocking the sea bottom beyond itself from ensonification; these shadows are sometimes very long. If a mine is partially buried, it may not reflect any energy back to the sonar but instead reflect it away; this produces a shadow without a highlight. A method is required to analyse these patterns of shadow, background and highlight regions of SLS imagery to recognise the existence of candidate mine objects. Subsequent use of a neutralization system to remove candidate objects which pose obstructions to safe navigation follows the mine recognition processing effort.
  • Today, mine recognition in sonar imagery is generally performed manually. Human operators are trained to evaluate the high-resolution imagery and look for clues to a mine, which has a set of typical characteristics. Human interpretation is the current state of the art for SLS imaging sonars. The operator must spend considerable time analysing the data to determine which returns are from mines.
  • GB 2 251 310A (US-A-5 181 254) describes a method for automatically identifying targets in sonar images, by detecting and classifying features in a sonar image frame comprised of rows and columns of pixels, each pixel representing a greyness level expressed digitally.
  • The image frame is normalised by taking the log transform of the greyness level of each pixel, and matched filters are used to identify highlights and shadows.
  • The highlights and shadows identified are used with other data, based on statistical analysis and neural network recognition, to classify regions in the sonar image frame as targets, anomalies or background.
  • According to the present invention, a sonar system of the kind defined hereinbefore at the beginning is characterised by:
  • means for further quantizing the intensity of each one of the pixels to one of three different values, one of the values being indicative of a shadow and one of the values being indicative of a highlight; and
  • means for comparing a distribution of the intensities of pixels over a range scan at a cross-range position with the distribution of the intensities of pixels over a range scan at a different cross-range position to identify the existence of an underwater object consistent with said pixel values indicative of shadow or highlight.
  • In the operation of a sonar system embodying the present invention, for mapping the bottom of the body of water to identify a submerged foreign object, a sequence of sonar pulses is transmitted and directed toward the bottom.
  • The mapping is formed from a sequence of echo returns.
  • Each one of the echo returns is produced as a range scan in a range direction in response to a corresponding one of the transmitted sonar pulses.
  • The sonar system stores signals representative of the intensity of the echo returns in a two dimensional array of pixels.
  • Each one of the pixels represents the intensity of the echo return at a predetermined range position from the system in the range direction and a predetermined cross-range position from a reference position of the system in the cross-range direction.
  • A preferred embodiment of the sonar system provides an automated mine recognition system using computer evaluation of imagery to make a mine/non-mine classification decision.
  • The automated system may run unattended or serve as a decision aid to an operator, prioritizing mine-like objects. This relieves the operator from tiresome evaluation of areas which contain no candidate mines. The operator can thus spend more quality time with the mine-like objects.
  • The automatic system provides important backup for inexperienced or distracted operators.
  • A preferred sonar system produces an intensity range data image derived from backscatter of acoustic energy.
  • Generally, in an area where there are no mines, the locally imaged sea bottom produces a characteristic background in the sonar imagery made up of highlights and shadows.
  • A mine object in typical backgrounds, ranging from smooth sand and mud to rocky areas with dense vegetation, disturbs the statistical stationarity of the background, resulting in a statistical anomaly.
  • The area of the anomaly is defined by the mine size, imaging resolution and geometry, beampattern and other sonar characteristics.
  • The sonar system exploits the statistical anomaly produced by the mine in the sonar imagery.
  • The side looking sonar utilizes non-linear combinations of statistically advanced "goodness-of-fit" tests and distributional analysis calculations to identify the mines.
  • A preferred embodiment of the invention in the form of an SLS system produces intensity range scan data derived from backscatter of transmitted pulse acoustic energy. These sequential intensity range scans are then organised in memory as sequential rasters of a two dimensional (range and track) image, which is then normalized, according to common practice, by use of a moving window normaliser to produce a normalized SLS image which is processed by the mine recogniser.
  • The mine recogniser first divides the image into contiguous frames of 512 sequential rasters each. Each of these frames is then separately processed.
  • In a first step, the image is amplitude segmented by multipass median filtering of the data, then by amplitude segmenting the pixels into highlight, shadow and background categories according to data dependent thresholds.
  • A subsequent median filter is employed on the three part data to eliminate spurious clusters of noise pixels.
  • The segmented pixel data then contains regions of contiguous highlight or shadow pixels which are associated and ascribed a symbolic label (numerical index).
  • The labelled regions are then sorted by area, and regions falling below an area threshold or above a second area threshold are eliminated from the region list.
  • The labelled regions are then made available to a split window processor.
  • According to a double split window aspect of the processing, a window or data mask of fixed dimension in range and track is formed and is sequentially displaced at various regularly spaced locations across the image.
  • At each placement, the pixel intensities in each of the three parallel and associated subwindows are analysed according to several statistical features to determine if the statistics of the pixel intensity distributions in the associated subwindows reliably correspond to differing underlying statistical distributions.
  • In addition, the placement of the established highlight and shadow regions across the subwindows at each location, as well as the region sizes, are used to qualify the statistically based decisions for each window placement.
  • The qualified decisions are produced according to a set of threshold manipulated rules for operating on the output of the windowed statistics generator.
  • Each window location which generates qualified statistics exceeding threshold limits is identified as a likely location for a bottom marine mine. Generated marking boxes enclosing the imagery of such identified locations are then passed on to an operator for subsequent evaluation.
  • Alternatively, the marked regions may be passed on to the mission controller/navigator of an autonomous marine vehicle for proper evasive action.
  • A side looking sonar system 10 maps the bottom 12 to identify a submerged foreign object.
  • The mapping is formed from a sequence of echo returns. Each one of the echo returns is produced as a range scan in the range direction 16 in response to a corresponding one of the transmitted sonar pulses.
  • The sonar system 10 includes a digital computer 17 and stores in a memory 18 thereof signals representative of the intensity of the echo returns in a two dimensional array (FIG. 3) of cells or pixels 20.
  • Each one of the pixels 20 represents the intensity I of the echo return at a predetermined range position from the system in the range direction 16 and a predetermined cross-range position from a reference position of the system in the cross-range direction 14.
  • The system quantizes the intensity of each one of the pixels into one of a plurality of levels, and compares the distribution of the levels of pixels over a range scan at a cross-range position with the distribution of levels of pixels over a range scan at a different cross-range position to identify the existence of an underwater object.
  • A submerged, towed body 22 (FIG. 1) is tethered at the end of a cable 24, here a 25 to 150 meter tow cable.
  • The tow cable 24 supplies power and control signals, and returns detected sonar signals to a surveillance vehicle, here a ship 26.
  • The surveillance vehicle may alternatively be a helicopter or a remotely controlled underwater vessel.
  • Port and starboard transmit/receive transducer arrays (only the starboard array 28s being shown) are mounted on the sides of the towed body 22 to provide, here, approximately 50 degrees of vertical directivity and 1.5 degrees of horizontal beamwidth at 100 kHz and 0.5 degrees at 400 kHz.
  • The towed body 22 normally operates at a height (h) 23 above the sea bottom 12 which, here, is 10 percent of the maximum range.
  • The high resolution 400 kHz selection produces a resolution of 15 centimeters in both range and azimuth, with the range resolution being provided by a 0.2 msec CW sonar pulse.
  • The high resolution mode typically yields minimum and maximum ranges of 5 meters and 50 meters, respectively.
  • The speed of the towed body is here typically 2 to 5 knots.
  • The sonar system 10 includes a conventional sonar transmitter 30.
  • A sequence of pulses produced by the transmitter 30 is fed to a projector 32 through a matching network 34 in a conventional manner to produce transmitted sonar pulses, or pings.
  • The pings are transmitted periodically at a rate which covers the selected range without ambiguity. Thus, for a 50 meter selection, a repetition rate of about 15 transmissions per second suffices (at a sound speed of roughly 1500 m/s, the two-way travel time for 50 meters is 2 x 50 / 1500, or about 67 ms, so each echo clears the water before the next ping is transmitted).
  • The echo return produced in response to each transmitted sonar pulse is received by an array of sonar transducers 36 and passes through a transmit/receive switch (not shown) to preamplifiers 38, in a conventional manner, as shown.
  • A subsequent amplifier 40 applies a time varying gain (TVG) selected in a conventional manner to match the expected decay curve of the return energy.
  • The resulting signals are combined in a conventional beamforming network 42.
  • The output of the beamforming network 42 is passed through a filter 44 which is matched to the waveform of the transmitted sonar pulse; the filter 44 output represents the energy detected.
  • The detected energy signal is synchronized with the waveform of the transmitted pulse via synchronizer and format network 46 to produce a raster-formatted floor-backscatter raw image. Since this image still contains wide area fluctuations due to bottom type and bottom slope variations, a conventional moving average normalizer 48 is passed over the data.
  • The resulting image is then displayed by chart 50 in conventional waterfall, strip chart format, as shown in FIG. 4.
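  • For illustration, a textbook TVG curve of the kind amplifier 40 might apply; the spreading-plus-absorption gain law and its coefficients below are assumptions, not values from the patent:

```python
import numpy as np

def tvg_gain_db(rng_m, spreading_db=30.0, alpha_db_per_m=0.1):
    """Hypothetical time varying gain (TVG): compensate spreading plus
    absorption with a 30*log10(r) + 2*alpha*r law. Both the law and
    alpha are common textbook choices, not figures from the patent."""
    rng_m = np.asarray(rng_m, dtype=float)
    return spreading_db * np.log10(np.maximum(rng_m, 1.0)) + 2.0 * alpha_db_per_m * rng_m

ranges = np.linspace(5.0, 50.0, 6)          # the 5..50 m high resolution span
print(dict(zip(ranges.round(1), tvg_gain_db(ranges).round(1))))
```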
  • The image content shown in FIG. 4 includes a first area of minimal volume reverberation 50 followed by a strong bottom return 52.
  • The mid range typically displays a high contrast, producing strong backscatter or highlights from proud targets 54 (i.e. mines) and associated shadows due to the absence of downrange reflected energy.
  • An idea of the height and shape of the bottom object can sometimes be obtained from the shape of the shadow, although floor texture variations and local undulations cause many distortions.
  • Another common feature is the anchor drag 56, shown by a line-like shadow which is here oriented in the downrange direction 16. This extended shadow has a downrange highlight 58 due to the depression of the anchor drag 56.
  • The side looking sonar system 10 may run unattended or serve as a decision aid to an operator, prioritizing mine-like objects. This relieves the operator from tiresome evaluation of areas which contain no candidate mines. The operator can thus spend more quality time with the mine-like objects.
  • The automatic system provides important backup for inexperienced or distracted operators.
  • The side looking sonar system (SLS) 10 is shown to include a digital computer, here a conventional workstation type computer having the memory 18 and processor 19.
  • The memory 18 is used to store data and a program.
  • The program includes a set of instructions to enable the processor 19 to process data stored in the memory 18 in accordance with the program.
  • FIG. 5 is a flow diagram of the stored program. Steps 1 and 2 correspond to the sonar ranging and normalization which produce frames of raw image data. Each frame is a two dimensional map formed from a sequence of echo returns (here a frame is 512 range scans each of which consists of 1024 positions (or pixels), for example).
  • Each one of the echo returns is produced as a range scan in the range direction 16 in response to a corresponding one of the transmitted sonar pulses.
  • The sonar system 10 stores in memory 18 thereof signals representative of the intensity of the echo returns in a two dimensional array (FIG. 2) of pixels 20.
  • Each one of the pixels 20 represents the intensity I of the echo return at a predetermined range position from the system in the range direction 16 and a predetermined cross-range position in the cross-range direction 14 from a reference position of the system 10.
  • The processor 19 normalizes the data. In Steps 3 through 7 the processor 19 processes the raw image data to quantize the intensity of each one of the pixels into one of three levels (i.e. background, shadow or highlight) and to identify regions of contiguous shadow or highlight pixels.
  • Steps 8 through 13 comprise the window-based statistical feature extraction steps, which are based on probability distributions and on segmentation-derived region window coverage percentages.
  • Steps 14 through 17 are the decision calculations which evaluate the statistical features and window coverage percentages to determine if a mine-like object is present in the imagery. The locations of such objects are then fed out to memory 18 along with sections of imagery data, as indicated in Step 18. Each of these steps is discussed in more detail below.
  • Image quantization (or segmentation) is used to determine region features.
  • The original byte-per-pixel intensity data image in memory 18 is partitioned (i.e. quantized) spatially (i.e. segmented) into areas whose overall intensity is high (highlights), areas whose overall intensity is low (shadows) and areas whose overall intensity is average (background).
  • Quantizing the image to the three levels (high, low, average) in this way produces contiguous image areas called regions whenever pixels in an area have similar low or high values. Such regions may be due to an image highlight or to a shadow resulting from a mine.
  • The normalized image data in memory 18 is examined by the processor 19 on a frame basis; each frame has the same range dimension as the normalized data and contains a fixed number of scans.
  • The rest of the processing is done on the image frame, which is typically 1024 pixels in range by 512 pixels in cross-range (scans).
  • The normalized pixel or position data are typically one byte integers with minimum intensity at 0 and maximum intensity at 255.
  • The steps subsequent to normalization are:
  • The five-point median filter determines a new value for each pixel or position in the image frame (except the edges) from the five pixels, or positions, that form a cross centered on the pixel of interest, as shown in FIG. 6.
  • The median value of the five pixels is determined; this becomes the new value of the pixel, or position, of interest (i,j), where i is the range position and j is the cross-range position. This is done for every pixel or position in the image frame except the edges.
  • The new pixel values are stored in a section of memory 18 and do not replace the old pixel values stored in a different section of the memory 18 until all of the pixels are done. (This filtering pass is repeated three times.) A sketch follows.
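  • A minimal sketch of this Step 3 filtering, with the three iterations described above:

```python
import numpy as np

def cross_median_filter(img, passes=3):
    """Five-point median filter of Step 3: each interior pixel (i, j)
    is replaced by the median of itself and its four cross neighbours
    (FIG. 6); edge pixels are left unchanged, and the whole pass is
    repeated three times, as in the text."""
    img = np.asarray(img, dtype=np.int32)
    for _ in range(passes):
        out = img.copy()
        stack = np.stack([img[1:-1, 1:-1],               # centre
                          img[:-2, 1:-1], img[2:, 1:-1], # up / down
                          img[1:-1, :-2], img[1:-1, 2:]])# left / right
        out[1:-1, 1:-1] = np.median(stack, axis=0)
        img = out
    return img
```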
  • The quantization process performed by processor 19, which follows, replaces the 256-level image data stored in a section of the memory 18 by one of three values (i.e. here 0, 1 or 2) which represent background, shadow or highlight, respectively. This is done by thresholding the pixel data at two levels: the shadow threshold, t_s, and the highlight threshold, t_h.
  • The quantization process is performed by the processor 19 on a sub-frame basis, where the sub-frames are 64 by 512 pixel sections of the image frame, as shown in FIG. 7.
  • Different shadow and highlight thresholds t_s, t_h are determined for each of the 16 sub-frames and are used to segment the pixels in that sub-frame.
  • The thresholds t_s, t_h are selected so that a fixed percentage of the pixels are shadows and a fixed percentage of the pixels are highlights.
  • This is done by the processor 19 forming the cumulative distribution function (CDF) of the pixel data in the sub-frame.
  • The processor 19 then examines the CDF to determine the shadow and highlight thresholds, t_s, t_h. Let p_s (0 < p_s < 1) be the desired percentage of shadow pixels and p_h be the desired percentage of highlight pixels.
  • The CDF is examined to determine the levels at which CDF(t_s) = p_s and CDF(t_h) = 1 - p_h are (approximately) satisfied.
  • The thresholds t_s, t_h are then used by the processor 19 to convert the pixel image data to a quantized image of three pixel values (i.e. here 0, 1 or 2). This is done by the processor 19 for each subframe.
  • The shadow threshold t_s and the highlight threshold t_h are used to partition the pixels into the three segmentation values (or classes). These thresholds t_s, t_h are determined, as described above, from the statistical character of the image data to provide robustness to bottom-type variation and an enhanced ability to detect in low contrast situations.
  • The threshold selection is done by processor 19 on subframes of the frame, as shown in FIG. 7.
  • The image is statistically non-stationary in the range direction 16 due to sub-optimally matched sonar TVG curves, imperfect normalization and reduced energy in the far field.
  • The discrete probability distribution is therefore found in 16 subframes, resulting in 32 thresholds and a much improved, more faithful segmentation, as sketched below.
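  • A sketch of the per-sub-frame threshold selection and three-level quantization of Step 4, assuming byte-valued pixels and illustrative percentages p_s = p_h = 0.05 (the patent fixes the percentages but does not quote values):

```python
import numpy as np

def segment_subframe(sub, p_s=0.05, p_h=0.05):
    """Pick t_s so that CDF(t_s) ~ p_s and t_h so that CDF(t_h) ~ 1 - p_h,
    then map pixels to 0 (background), 1 (shadow) or 2 (highlight)."""
    counts = np.bincount(sub.ravel(), minlength=256)
    cdf = np.cumsum(counts) / sub.size
    t_s = int(np.searchsorted(cdf, p_s))         # shadow threshold
    t_h = int(np.searchsorted(cdf, 1.0 - p_h))   # highlight threshold
    seg = np.zeros_like(sub, dtype=np.uint8)
    seg[sub <= t_s] = 1
    seg[sub >= t_h] = 2
    return seg, t_s, t_h

def segment_frame(frame):
    """Apply the thresholds independently to each of the 16 sub-frames
    (a 1024 x 512 frame split into 64 x 512 range slices), giving the
    32 thresholds mentioned in the text. Rows are range positions."""
    seg = np.empty_like(frame, dtype=np.uint8)
    for r in range(0, frame.shape[0], 64):
        seg[r:r + 64], _, _ = segment_subframe(frame[r:r + 64])
    return seg
```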
  • The second set of median filtering performed by processor 19 is essentially the same as the first set, except that a three by three square of pixels centered on the pixel of interest is used, as shown in FIG. 8.
  • The second set of median filters has the function of smoothing the quantized image data.
  • The three level image resulting from Step 5 is thus filtered to remove small, isolated regions. Median filtering is used to preserve region edges; here a 3 x 3 kernel is iterated 3 times.
  • A connected components algorithm is performed by the processor 19 on the quantized data levels stored in memory 18 (for example, the quantized data levels shown in FIG. 9A) to identify regions of contiguous shadow or highlight pixels.
  • A shadow region is a group of contiguous shadow pixels (i.e. pixels having a quantized data level of 1) and a highlight region is a group of contiguous highlight pixels (i.e. pixels having a quantized data level of 2).
  • The connected components algorithm labels each pixel with a region number so that all contiguous pixels belonging to the same contiguous region have the same "region number". For example, in FIG. 9A there are four labelled region numbers 1-4: region numbers 1 and 3 are shadow regions, and region numbers 2 and 4 are highlight regions. The remaining regions are background regions.
  • The region identification process is performed by the processor 19 making three "passes" through the data stored in memory 18.
  • In the first "pass", highlight regions are separated from shadow regions by at least one pixel wherever a highlight region is bounded by a shadow region. This is performed by the processor 19 investigating each pixel sequentially.
  • Where a shadow pixel is found bordering a highlight pixel, the quantized data level of the pixel under investigation is changed to background (i.e., the quantized data level of the central pixel is changed from 1 to 0).
  • The completion of this first "pass" results in a separated segmentation array of quantized data levels, for example the array shown in FIG. 9A, which is now stored by processor 19 in memory 18.
  • In the second "pass", the processor 19 assigns "integer indices" to highlight and shadow pixels as codes for eventually producing, in the third "pass", the "region numbers".
  • The "integer indices" are assigned by passing a 2x2 pixel kernel, shown in FIG. 9B, sequentially over each pixel P_n(i,j) in the segmented array. As indicated in FIG. 9B, the pixel P_n(i,j) is located in the upper right hand corner of the kernel.
  • The pixel to the left of pixel P_n(i,j) is a position labelled "a";
  • the pixel diagonally down to the left of pixel P_n(i,j) is a position labelled "b";
  • the pixel below pixel P_n(i,j) is a position labelled "c".
  • For each pixel P_n(i,j) under investigation during the second "pass", the kernel is placed over four pixels, with the pixel P_n(i,j) under investigation being in the upper right hand corner, as described above in connection with FIG. 9B.
  • The kernel moves from the left of the bottom row of the segmented array (i.e. pixel P(2,2)), FIG. 9A, to the right until the pixel in the last column of the second row (i.e. pixel P(2,8)) is investigated.
  • The kernel then moves to the second left hand column of the next higher row (pixel P(3,2)) and the process continues until the kernel reaches the right column of the top row (pixel P(8,8)).
  • The segmented array of FIG. 9A stored in memory 18 is modified by processor 19 into an "integer index" array, as shown in FIG. 9K, as a result of the investigation made during the second "pass".
  • In the "integer index" array (FIG. 9K), an "integer index" is assigned to the pixel position under investigation in accordance with a set of rules (illustrated in FIGs. 9C to 9I).
  • An "integer index table", shown in FIG. 9J, is maintained by processor 19 to record, sequentially from the top of the table, each "integer index" (r) used in the array modification process and the "value" of such "integer index", as will be described in detail hereinafter.
  • The set of rules is as follows: at each kernel position, if the quantized data level of the pixel P_n(i,j) (FIG. 9B) under investigation is 0, then the "integer index" assigned to such pixel P_n(i,j) in the modified array of FIG. 9K is 0 (Rule 1, FIG. 9C); else, if the "integer index" of the pixel in the "b" position of the kernel is non-zero (i.e., nz), then the "integer index" in the "b" position is assigned to the pixel P_n(i,j) in the modified array (FIG. 9K) (Rule 2, FIG. 9D). The remaining rules (Rules 3 through 7, FIGs. 9E to 9I) cover the other combinations of the "a", "b" and "c" indices, and are illustrated by the worked examples below.
  • The first pixel under investigation by the processor 19 during the second "pass" is pixel P(2,2).
  • In the modified "integer index" array (FIG. 9K), this pixel is assigned an "integer index" of 0.
  • The use of the "integer index" 0 is recorded at the top of the "integer index table" (FIG. 9J) and such recorded "integer index" is assigned a "value" of 0, as shown.
  • The next pixel under investigation, P(2,3), is assigned an "integer index" of 0 in the "integer index" array, FIG. 9K.
  • P(2,4) is assigned an "integer index" equal to the next sequentially available "integer index"; that is, the last "integer index" used and recorded in the "integer index table" (FIG. 9J), here 0, incremented by 1. Thus pixel P(2,4) is assigned an "integer index" of 1, as shown in FIG. 9K.
  • Pixel P(2,8) has, as shown in FIG. 9A, a quantized data level of 1. Further, the "integer index" in the position "b" of the kernel (i.e., pixel P(1,7)) is 0 (and the "integer index" thereof has not been previously modified, as shown in FIG. 9K). Still further, the "integer indices" of pixels P(2,7) and P(1,8) are both zero. Thus, from Rule 3, pixel P(2,8) is assigned the next sequential "integer index", 2 (i.e., the last "integer index" used, 1, incremented by 1), as shown in FIG. 9K. The use of the new "integer index", 2, is recorded in the "integer index table" (FIG. 9J).
  • Pixel P(3,3) is assigned an "integer index" of 0 in accordance with Rule 1 because it has a quantized data level of 0.
  • Pixel P(3,4) is assigned an "integer index" of 1 in accordance with Rule 5 because it does not have a quantized data level of 0, and because the "integer index" of pixel P(2,3) (i.e., the pixel in position "b" of the kernel) is not non-zero (FIG. 9K).
  • Pixel P(3,5) is assigned an "integer index" of 1 in accordance with Rule 2.
  • Pixel P(3,6) is assigned an "integer index" of 0 in accordance with Rule 1.
  • Pixel P(3,7) is assigned a new "integer index" of 4 in accordance with Rule 3, and such is recorded in the "integer index table" of FIG. 9J.
  • Pixel P(3,8) is assigned an "integer index" of 4 in accordance with Rule 7 because it does not have a quantized data level of 0, because the "integer index" of pixel P(2,7) (i.e., the pixel in position "b" of the kernel) is zero, and because the "integer indices" of the pixels in positions "a" and "c" of the kernel (i.e., pixels P(3,7) and P(2,8)) are both non-zero and are unequal.
  • The processor 19 therefore assigns as an "integer index" to pixel P(3,8) the "integer index" of the pixel in position "a" of the kernel (i.e., the "integer index" of pixel P(3,7)), here an "integer index" of 4.
  • The processor 19 then reads each of the "integer indices" in the "integer index table" (FIG. 9J) and, each time the "value" of the read index equals the "value" of the "integer index" in position "c" of the kernel, the "value" for the currently read "integer index" is reset to a "value" equal to the "integer index" of position "a" of the kernel.
  • Here, the processor 19 resets the "value" of the "integer index" found (i.e. the "value" 2 of the found "integer index" 2) to the "value" of the "integer index" in the "a" position of the kernel (i.e., the processor 19 resets the "value" 2 of the "integer index" 2 in position "c" to the "value" 4, the "value" of the "integer index" in the "a" position).
  • The resetting of the "value" 2 to the "value" 4 is recorded in the second "value" column of the table (FIG. 9J) for the "integer index" 2.
  • Pixel P(4,2) is assigned an "integer index" of 3 in accordance with Rule 5 (i.e., the "integer index" of position "c" of the kernel, here pixel P(3,2)).
  • Pixels P(4,3), P(4,4), P(4,5), and P(4,6) are assigned "integer indices" of 0, 0, 1 and 1, respectively, in accordance with Rules 1, 1, 2, and 2, respectively.
  • Pixel P(4,7) is then investigated. It is first noted that Rule 7 applies. Pixel P(4,7) is assigned, as an "integer index", the "integer index" of position "a" in the kernel (i.e., the "integer index" of pixel P(4,6), here 1).
  • The processor 19 again reads each of the "integer indices" in the "integer index table" (FIG. 9J) and, each time the "value" of the read index equals the "value" of the "integer index" in position "c" of the kernel, the "value" for the currently read "integer index" is reset to a "value" equal to the "integer index" of position "a" of the kernel.
  • Here, pixel P(3,7) has an assigned "integer index" of 4 and such "integer index" has a "value" of 4.
  • The processor 19 therefore resets the "value" of the "integer index" associated with the position "c" (i.e., the "value" 4 of "integer index" 4) to the "value" of the "integer index" in the "a" position (i.e., to the "value" 1).
  • The process continues, with the results produced being shown in FIG. 9K. It is noted that pixel P(5,6) is assigned an "integer index" of 1 from Rule 2. It is also noted that pixel P(6,3) is assigned an "integer index" of 5 in accordance with Rule 3 and that such is recorded in the "integer index table" of FIG. 9J. It is also noted that pixels P(7,2), P(7,3), P(7,5), P(7,6) are assigned "integer indices" of 6, 6, 7, 7 in accordance with Rules 3, 7, 3, and 4, respectively. Also note that the "integer index table" (FIG. 9J) was modified when Rule 7 was applied to pixel P(7,3), by resetting the "value" of "integer index" 5 to a "value" of 6, as shown in FIG. 9J.
  • each "value” recorded is associated with a "region number”.
  • there are four “values” i.e. "values” 1, 3, 6 and 7) which is consistent from FIG. 9A, where four regions are also indicated.
  • “value” 1 is associated with “integer indices” 1, 2, and 4;
  • “value” 3 is associated with “integer index” 3;
  • “value” 6 is associated with “integer indices” 5 and 6; and
  • value” 7 is associated with “integer index” 7.
  • the processor 19 labels the different "values" with sequential "region numbers".
  • pixels with “integer indices” having “values" 1, 3, 6 and 7 are assigned “region numbers” 1, 2, 3 and 4, respectively, as shown in FIG. 9A.
  • The area of each one of the non-background regions (i.e., the shadow and highlight regions) is then calculated by the processor 19 counting, via the label map, the number of pixels belonging to each one of the non-background regions.
  • The non-background regions are sorted by area, and those regions which are too small or too large to be conceivably related to mine-like objects are removed from the region table.
  • The region label map (FIG. 9A) is altered to replace the eliminated region pixels with background pixels. A sketch of the labelling and culling follows.
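  • A compact sketch of the labelling and culling of Steps 5 through 7. It uses 4-connectivity and a union-find style equivalence table for brevity, whereas the patent's 2x2 kernel also examines the diagonal "b" neighbour; the area limits are placeholders:

```python
import numpy as np

def label_regions(seg, min_area=5, max_area=2000):
    """Two-pass connected components labelling of the quantized image
    (0 = background, 1 = shadow, 2 = highlight), with an equivalence
    table playing the role of the "integer index table", followed by
    culling of regions whose area falls outside [min_area, max_area].
    Shadow and highlight pixels are never merged into one region."""
    h, w = seg.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                       # parent[r] == r marks a root index

    def find(r):                       # resolve an index to its root "value"
        while parent[r] != r:
            r = parent[r]
        return r

    nxt = 1
    for i in range(h):
        for j in range(w):
            if seg[i, j] == 0:
                continue
            left = labels[i, j - 1] if j > 0 and seg[i, j - 1] == seg[i, j] else 0
            up = labels[i - 1, j] if i > 0 and seg[i - 1, j] == seg[i, j] else 0
            if left and up:
                labels[i, j] = find(left)
                parent[find(up)] = find(left)   # record the equivalence
            elif left or up:
                labels[i, j] = find(left or up)
            else:
                parent.append(nxt)              # open a new "integer index"
                labels[i, j] = nxt
                nxt += 1
    for i in range(h):                 # second pass: resolve equivalences
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    areas = np.bincount(labels.ravel(), minlength=nxt)
    cull = (areas < min_area) | (areas > max_area)
    cull[0] = False                    # never touch the background label
    labels[cull[labels]] = 0           # eliminated regions become background
    return labels
```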
  • In Step 8, the processor 19 forms a double split window 60.
  • The window 60 is used by the processor 19 to examine both the normalized image data (Steps 1 and 2) and the region label map (Steps 3-7).
  • The window 60 is here 128 pixels long in the range direction 16 and 39 pixels wide in the cross-range direction 14.
  • The window 60 is elongated in the range direction 16 and is divided into three main sections: a pair of outer window sections 62, 64 and a middle window section 66.
  • The middle window section 66 is positioned between an upper one of the outer window sections 62 and a lower one of the outer window sections 64.
  • The upper window section 62 is positioned over pixels forward, in the cross-range direction 14, of the middle window section 66.
  • The lower window section 64 is positioned over pixels rearward, in the cross-range direction 14, of the middle window section 66.
  • Each of the three window sections 62, 64, 66 is 128 pixels long in the range direction 16.
  • The middle section 66 is 11 pixels wide in the cross-range direction 14 and is centered within the window 60.
  • Each of the outer sections 62, 64 is 12 pixels wide in the cross-range direction 14.
  • There is a 2 pixel wide gap 68, 70 in the cross-range direction 14 between the middle section 66 and each of the outer sections 62, 64. Note that 12 + 2 + 11 + 2 + 12 = 39.
  • The window 60 is placed at various locations throughout the image and is used to compare the pixels in the middle section 66 to pixels in each of the outer sections 62, 64 at each window location. The pixels contained in the gaps 68, 70 are not examined.
  • The window 60 is placed at regular intervals throughout the image frame in order to examine the whole image frame.
  • Here, the window 60 is placed at intervals of 16 pixels in the range direction 16 and at intervals of 2 pixels in the cross-range direction 14, as sketched below.
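  • A sketch of the window 60 geometry and its placements; rows are range, columns are cross-range scans, and the helper names are illustrative:

```python
RANGE_LEN, OUTER_W, GAP_W, MID_W = 128, 12, 2, 11   # 12+2+11+2+12 = 39 columns
WIN_W = OUTER_W + GAP_W + MID_W + GAP_W + OUTER_W

def window_sections(frame, i0, j0):
    """Slice one double split window 60 with its corner at (range i0,
    cross-range j0): upper section 62, middle section 66 and lower
    section 64, separated by the 2 pixel gaps 68, 70 that are never
    examined."""
    block = frame[i0:i0 + RANGE_LEN, j0:j0 + WIN_W]
    upper = block[:, :OUTER_W]                               # section 62
    middle = block[:, OUTER_W + GAP_W:OUTER_W + GAP_W + MID_W]  # section 66
    lower = block[:, -OUTER_W:]                              # section 64
    return upper, middle, lower

def window_origins(shape, step_range=16, step_cross=2):
    """All regularly spaced placements over a frame (e.g. 1024 x 512),
    at the 16 pixel range and 2 pixel cross-range intervals quoted in
    the text."""
    h, w = shape
    for i0 in range(0, h - RANGE_LEN + 1, step_range):
        for j0 in range(0, w - WIN_W + 1, step_cross):
            yield i0, j0
```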
  • When the window 60 is placed on the normalized data (Steps 8, 9, 11 and 12), statistics are determined for each section 62, 64, 66 and the middle section 66 is compared to each of the outer sections 62, 64 using the advanced statistics. Statistical tests determine if there is an anomaly in the middle section 66 that may be caused by a mine.
  • When the split window 60 is placed on the region label map (Steps 8, 10 and 13), the percentage of region pixels in each window section (percent fill) and the total area of all the regions that are contained (in full or in part) in the middle section 66 (area sum) are determined.
  • The percent fill is used to ensure that any regions of interest are in the middle section 66 as opposed to the outer sections 62, 64.
  • The area sum is used to eliminate very large regions that extend outside of the window 60.
  • In Step 9, histograms are determined in each of the window sections (i.e. the upper window section 62, the middle window section 66, and the lower window section 64). That is, the processor 19 considers the pixel intensity (grey-level) values in the three window sections 62, 64, 66 involved in the double split window 60 element, as previously described.
  • The pixels which define the image contained in the center section 66 have a spatial organization which defines the image.
  • An example of a simple summary statistic is the mean (average) grey level intensity.
  • The variance of the grey level intensity gives more information (in a general sense) about the distribution of pixel grey levels and gives the dispersion about the mean of the grey level values.
  • The mean and variance summarize properties of the pixel grey level empirical probability density function (histogram or pdf), a description which contains all first-order statistical information in the pixel grey levels.
  • The SLS 10 statistical features compare grey level histograms and functions of these, and thus use the complete first-order statistical information rather than the incomplete summary statistics.
  • For each of the three sections of the window (i.e. upper 62, lower 64, middle 66), the processor compares a distribution of the levels of cells over range scans at a first set of adjacent cross-range positions (i.e., the cells in a first one of the three window sections) with the distribution of levels of cells over a range scan at a second, different set of adjacent cross-range positions (i.e., the cells in a second one of the three window sections) and the distribution of the levels of cells over range scans at a third set of adjacent cross-range positions (i.e., the cells in a third one of the three window sections) to identify the existence of an underwater object.
  • The first step (Step 9) in the statistical feature calculations performed by processor 19 is to determine histograms in each of the three window sections (i.e. the upper window section 62, the middle window section 66 and the lower window section 64).
  • The next step (Step 11) is to calculate the empirical probability density functions (pdf) for each of the three window sections 62, 64, 66: u_i, the pdf for the upper window section 62; s_i, the pdf for the center window section 66; and d_i, the pdf for the lower window section 64 (where i is the ith bin of the histogram).
  • Step 11 also calculates the cumulative distribution functions (CDF) for each of the three window sections: U_i, the CDF for the upper window section 62; S_i, the CDF for the center window section 66; and D_i, the CDF for the lower window section 64 (where i is the ith bin of the histogram). A sketch follows.
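  • A minimal sketch of Steps 9 and 11 for one window section, assuming byte-valued pixel data:

```python
import numpy as np

def section_pdf_cdf(section, bins=256):
    """Grey level histogram, empirical pdf (u_i, s_i or d_i) and CDF
    (U_i, S_i or D_i) for one window section. The bin count of 256
    matches the byte-per-pixel data; a coarser binning would work the
    same way."""
    hist = np.bincount(section.ravel(), minlength=bins).astype(float)
    pdf = hist / hist.sum()
    cdf = np.cumsum(pdf)
    return pdf, cdf
```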
  • With the pdfs and CDFs in hand, the processor 19 calculates the advanced statistics (Step 12). These advanced statistics are: the Modified Pearson's Detection Feature; the Kolmogorov Statistic Feature; the Grey Level Entropy Detection Feature; and the Multinomial Statistic Feature.
  • Modified Pearson's Detection Feature - A modification of Pearson's chi-squared test for the difference between two populations is used by processor 19 to develop a measure of the statistical difference between the center window section 66 and the upper and lower adjacent window sections 62, 64, respectively.
  • Pearson's chi-squared test statistic is used to indicate whether or not a test population can statistically be regarded as having the same distribution as a standard population.
  • The test statistic is expressed as χ² = Σ_i (N_i - n·q_i)² / (n·q_i), where the N_i are the observed frequencies and the n·q_i are the expected frequencies.
  • χ² is thus a measure of the departure of the observed N_i from their expectation n·q_i.
  • Here, the outer windows 62, 64 are used to define the expected population.
  • The statistic is modified to remove the expected population bias introduced by the denominator term, and a test statistic is computed by the processor 19 using the empirical probability density functions (pdf) as follows. Let s_i, u_i, d_i be the value of the ith bin of the pdf from the center (s), upper (u), and lower (d) window sections 66, 62, 64, respectively. The "modified Pearson's" test statistic is then computed against the upper and lower window sections 62, 64, respectively. A sketch follows.
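  • Since the exact modified formula is not reproduced in this text, the sketch below computes a symmetrized chi-square over the two empirical pdfs, one plausible reading of removing the denominator bias; it is an assumption, not the patented formula:

```python
import numpy as np

def modified_pearson(s_pdf, ref_pdf, eps=1e-9):
    """Hedged sketch of the "modified Pearson's" feature p: a
    symmetrized chi-square between the centre pdf (s_i) and an outer
    section pdf (u_i or d_i). The symmetric denominator is an
    assumption standing in for the unstated modification."""
    s = np.asarray(s_pdf, dtype=float)
    r = np.asarray(ref_pdf, dtype=float)
    return float(np.sum((s - r) ** 2 / (s + r + eps)))

# p could then be taken as the larger of the centre-vs-upper and
# centre-vs-lower values at a given window placement.
```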
  • Kolmogorov Statistic Feature - This statistical analysis provides a "goodness-of-fit" test. The Kolmogorov statistic tests the hypothesis that two populations are statistically the same by use of the cumulative distribution function: it is the maximum absolute difference between the two CDFs. Instead of using the test in the context of hypothesis analysis, the test statistic is used directly as a feature.
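  • A sketch of the Kolmogorov feature k, computed from the section CDFs of Step 11:

```python
import numpy as np

def kolmogorov_feature(s_cdf, ref_cdf):
    """Kolmogorov statistic between the centre section CDF (S_i) and an
    outer section CDF (U_i or D_i): the maximum absolute difference
    between the two cumulative distributions, used as a feature value
    rather than as a formal hypothesis test."""
    return float(np.max(np.abs(np.asarray(s_cdf) - np.asarray(ref_cdf))))
```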
  • Grey Level Entropy Detection Feature - For this feature the processor 19 takes a signed difference of entropies, unlike the Kolmogorov feature, because the sign of the result is significant. That is, the entropy in a single one of the three window sections 62, 64, 66 alone has an intrinsic significance which indicates the presence of unusual grey level distributions. Thus, the entropy feature has an absolute significance, in contrast to the previous two features, which are relative comparisons.
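  • A sketch of the entropy feature, assuming the base-2 Shannon entropy of each section pdf and a signed centre-minus-outer difference (the exact pairing of terms is an assumption):

```python
import numpy as np

def entropy_bits(pdf, eps=1e-12):
    """Shannon entropy of a grey level pdf in bits; the base-2
    logarithm matches the text's remark that the decision logic works
    in units of information in bits."""
    p = np.asarray(pdf, dtype=float)
    p = p[p > eps]
    return float(-np.sum(p * np.log2(p)))

def entropy_feature(s_pdf, ref_pdf):
    """Signed entropy difference e between the centre section and an
    outer section; the sign is kept because it is significant."""
    return entropy_bits(s_pdf) - entropy_bits(ref_pdf)
```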
  • Multinomial Statistic Feature - This feature essentially measures the likelihood of the empirical pdf in the center window section 66 relative to each of the outer window sections 62, 64, which are assumed to represent universal populations. It treats the empirical pdf of the center window section 66 as the outcome of a series of multinomial trials. The probability of the so-formed pdf is given by the multinomial density P = [n! / (k_1! k_2! ... k_b!)] · Π_i π_i^(k_i), where:
  • the processor 19 assumes the population pdf is given by either outer window section 62, 64;
  • n = Σ_i k_i is the number of trials, or pixels, in the one of the outer window sections 62, 64 under consideration;
  • b is the number of bins; and
  • π_i^(k_i) is the probability of the ith pixel level occurring k_i times in the n pixel population of the window section.
  • The processor 19 implicitly calculates component probabilities for the upper and lower window sections 62, 64 by deriving π from u or d (the pdfs associated with the upper and lower window sections 62, 64).
  • The k_i are determined from the center window 66 pdf.
  • The processor 19 actually calculates the information associated with the two probabilities; the factorial and log functions are table look-ups. A sketch follows.
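  • A sketch of the multinomial feature m under the reconstruction above; lgamma stands in for the factorial table look-ups, and the sign/normalization convention is an assumption:

```python
import numpy as np
from math import lgamma, log

def multinomial_feature(k, pi, eps=1e-12):
    """Information (in bits) of the centre window histogram counts k_i
    under a multinomial density whose cell probabilities pi_i are taken
    from an outer window pdf (u or d):
        P = n! / (k_1! ... k_b!) * prod(pi_i ** k_i),  n = sum(k_i).
    Returning -log2(P) expresses the likelihood as information in
    bits, matching the text's remark about units."""
    k = np.asarray(k, dtype=float)
    pi = np.clip(np.asarray(pi, dtype=float), eps, None)
    n = k.sum()
    log_p = lgamma(n + 1.0) - sum(lgamma(ki + 1.0) for ki in k) \
            + float(np.sum(k * np.log(pi)))
    return -log_p / log(2.0)           # information in bits
```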
  • Constraints on acceptable region properties are used by processor 19 to improve statistical detector performance.
  • The presence of a predominance of pixels in the center window section 66 belonging to contiguous regions suggests the influence of a mine-like object. Typical noise backgrounds will not produce significant contiguous regions as a result of the quantization and region selection (i.e., culling) operations.
  • The percent fill constraint is used to ensure that the regions of interest are in the middle window section 66 as opposed to the outer window sections 62, 64, and is evaluated for each window location.
  • The area sum constraint is used to eliminate very large regions that extend outside of the window 60, and is likewise evaluated for each window location.
  • FIG. 11 shows, in cross hatching, the acceptance region related to the percent fill constraint.
  • "sfill", "ufill" and "dfill" are the proportions of region pixels for the center, upper and lower window sections 66, 62, 64, respectively.
  • The constraint ensures that the center window section 66 is largely composed of region pixels at the same time that region pixels are limited in the worst of the outer window sections 62, 64.
  • Parameters t7 and t8 have been determined empirically, and a quadratic function is used to partition the space. The particular t7 and t8 and the partition function are not essential, but the use of percent fill and the comparison of "sfill", "ufill" and "dfill" are significant.
  • A final region constraint requires that the total number of pixels of regions intersected by the center window section 66 be less than a multiple of the number of pixels in the center window section 66. This multiple is varied between 2.0 and 3.0 as a linear function of "sfill".
  • The area constraint feature guards against the spurious detection of large contiguous regions, while the percent fill constraint principally guards against the spurious detection of finely fragmented regions. A sketch follows.
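  • A sketch of these region constraints; the t7/t8 values and the particular quadratic boundary are placeholders, as the patent notes the exact partition function is not essential:

```python
def region_constraints_ok(sfill, ufill, dfill, area_sum, mid_pixels,
                          t7=0.3, t8=0.6):
    """Percent fill and area sum constraints of the kind described in
    the text. t7, t8 and the quadratic boundary are illustrative; the
    area sum multiple varies linearly from 2.0 to 3.0 with sfill, as
    stated in the text."""
    worst_outer = max(ufill, dfill)
    # percent fill: centre largely region pixels, worst outer limited;
    # a quadratic boundary partitions the (sfill, outer fill) space
    fill_ok = sfill > t8 and worst_outer < t7 * (1.0 - (sfill - t8) ** 2)
    # area sum: regions touching the centre section must not be too large
    area_ok = area_sum < (2.0 + sfill) * mid_pixels   # multiple in [2.0, 3.0]
    return fill_ok and area_ok
```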
  • The SLS 10 processor 19 relies on the combination of the statistical features to evaluate the evidence for mines.
  • The logical decision space for the statistical features is best represented in logical equation form as (p > t3 & k > t2) or (p > t4 & k > t1) or (m > t5 & e > 0.0) or (m > t6), where p is the modified Pearson's feature, k is the Kolmogorov feature, e is the grey level entropy feature and m is the multinomial feature.
  • Optimum t_i are determined empirically by training on data with identified mine targets. Note that the first two clauses above work with units that are probability increments, while the last two clauses work with units of information in bits (because base 2 is used when logarithms are taken for the calculation).
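  • The decision logic transcribed directly from the logical equation above; all t_i values are placeholders to be trained as described:

```python
def mine_like(p, k, e, m, t1=0.2, t2=0.3, t3=0.15, t4=0.1, t5=40.0, t6=60.0):
    """Step 14/15 decision rule from the text. The default thresholds
    are placeholders; the patent determines the optimum t_i empirically
    by training on data with identified mine targets."""
    return ((p > t3 and k > t2) or
            (p > t4 and k > t1) or
            (m > t5 and e > 0.0) or
            (m > t6))
```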
  • Constraints on acceptable region properties are used in Steps 16 and 17 to improve statistical detector performance.
  • In Step 16, the presence of a predominance of pixels in the center window section belonging to contiguous regions suggests the influence of a mine-like object. (Typical noise backgrounds will not produce significant contiguous regions through the segmentation and region selection (i.e., culling) operations.)
  • The percent fill constraint is used to ensure that the regions of interest are in the middle window section 66 as opposed to the outer window sections 62, 64.
  • In Step 16, the area sum constraint is likewise used to eliminate very large regions that extend outside of the window 60.
  • The general principles of the SLS are applied to find evidence of mine-like objects in SLS images by means of specific parameter selections.
  • The center and adjacent window section lengths, widths, spacings and increment overlaps are broadly matched to the expected spatial dimensions of mine-like evidence.
  • Parameterization of the statistical feature thresholds has as much to do with background and sonar characteristics as it does with mine-like object characteristics. It is, therefore, conceivable that the SLS window sections could be adjusted to suit detection of other kinds of objects with known SLS image properties.
  • The SLS is able to work with very marginal image evidence (at or near zero dB SNR) to produce the mine-like object decision.
  • If a region appears to contain a mine-like object, such region is added to a list of mine-like objects (Step 17).
  • The SLS does not falsely detect rocks, vegetation or transition areas, and correctly finds the mine-like objects, the evidence for which in the image is marginal. The process continues until data from the last window has been processed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Image Processing (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Radar Systems Or Details Thereof (AREA)

Description

  • This invention relates to a sonar system for mapping the floor of a body of water to identify a submerged foreign object, such mapping being formed from a sequence of echo returns, each one of such echo returns being produced as a range scan in response to a transmitted pulse directed toward the floor in a predetermined direction, a sequence of the sonar returns being received in cross-range directions perpendicular to the predetermined direction, such sonar system storing in a two dimensional array digital signals as pixels each representing digitally the intensity of the echo return at a predetermined range position from the system in the predetermined direction and a predetermined cross-range position from a reference position of the system.
  • As is known in the art, it is sometimes desirable to identify submerged foreign objects such as mines, cables or oil pipe lines using sonar. A common sonar type used for examining the sea bottom is a Side Looking Sonar (SLS). An SLS is either towed or mounted on an underwater vehicle and is moved through the water in a forward direction at an approximately constant speed and depth. The sonar transmits a short (typically 0.10 to 0.20 ms), high frequency (typically 500 kHz) pulse into the water and has a very narrow horizontal beam-width (typically 1 degree or less) in a direction perpendicular to the forward direction. The pulse propagates through the water and reflects off the sea bottom, and the echo returns to the sonar. After transmission, the sonar begins receiving the echoes. Echoes that arrive later in time come from further away on the sea bottom. The received signal maps to a long, thin strip of the sea bottom and is called a range scan. After a fixed elapsed time, and after the vehicle has moved a short distance in the forward (cross-range) direction, the sonar stops receiving and begins a new transmission. The length of the fixed elapsed receive time determines the maximum range of the sonar along the sea bottom. The range may be also limited by the sonar power. Because of spreading and absorption loss, the received intensity decreases with range (time elapsed from transmission). This is compensated for in the sonar by a Time Varying range-variable Gain (TVG). The beamwidth and pulse length determine the sonar's azimuth and range resolutions, respectively. As the sonar moves in the forward (i.e. cross-range) direction, the range scans correspond to subsequent parallel strips along the sea bottom thereby producing a two dimensional "map" of the sea bottom: sonar received intensity (i.e. the z axis) vs. range (i.e. the x axis) and cross-range (i.e. the y axis). An SLS sometimes transmits and receives on both the port and starboard sides and produces two images.
  • Because the sonar travels at a certain altitude above the sea bottom, the first echoes are very faint and are a product of volume scattering in the water between the sonar and the sea bottom. This is called the water column and its length (in time) depends on the sonar's altitude. These faint echoes give no information about the sea bottom and are therefore removed from the scan data. What remains is the two-dimensional "map" of the sea bottom: sonar received intensity vs. range and cross-range; this is called the raw image data. Because the grazing angle decreases with range and a shallow grazing angle produces less backscatter, the image intensity decreases with range. This is apparent in the raw image data: the near range data is much more intense that the far range data. The raw image data is normalized to eliminate the effect of grazing angle to form the normalized image data.
  • The normalized image data can be thought of as a map of the sea bottom but this can be misleading. Although elevations will often produce stronger echoes (and therefore higher image intensity) and depressions will often produce weaker echoes (lower image intensity), echo intensity is also affected by the reflectivity of the sea bottom, the texture of the sea bottom and the local grazing angle at the sea bottom.
  • A mine on the sea bottom may produce a region of high intensity in the image (highlight) by reflecting directly back to the sonar. It may also produce a region of low intensity in the image (shadow) by blocking the sea bottom beyond itself from ensonification; these shadows are sometimes very long. If a mine is partially buried, it may not reflect any energy back to the sonar but instead reflect it away, this produces a shadow without a highlight. A method is required to analyse these patterns of shadow, background and highlight regions of SLS imagery to recognise the existence of candidate mine objects. Subsequent use of a neutralization system to remove candidate objects which pose obstructions to safe navigation follows the mine recognition processing effort.
  • Today, mine recognition in sonar imagery is generally performed manually. Human operators are trained to evaluate the high-resolution imagery and look for clues to a mine, which as a set of typical characteristics. Human interpretation is the current state of the art for SLS imaging sonars. The operator must spend considerable time analysing the data to determine which returns are from mines.
  • GB 2 251 310A (US-A-5 181 254) describes a method for automatically identifying targets in sonar images, by detecting and classifying features in a sonar image frame comprised of rows and columns of pixels, each pixel representing a greyness level expressed digitally. The image frame is normalised by taking log transform of the greyness level of each pixel, and matched filters are used to identify highlights and shadows. The highlights and shadows identified are used with other data based on statistical analysis and neural network recognition to classify regions in the sonar image frame as targets, anomalies or background.
  • According to the present invention, a sonar system of the kind defined hereinbefore at the beginning is
    characterised by:
  • means for further quantizing the intensity of each one of the pixels to one of three different values, one of the values being indicative of a shadow and one of the values being indicative of a highlight; and
  • means for comparing a distribution of the intensities of pixels over a range scan at a cross-range position with the distribution of the intensities of pixels over a range scan at a different cross-range position to identify the existence of an underwater object consistent with said pixel values indicative of shadow or highlight.
  • In the operation of a sonar system, embodying the present invention, for mapping the bottom of the body of water to identify a submerged foreign object, a sequence of the sonar pulse is transmitted and directed toward the bottom. The mapping is formed from a sequence of echo returns. Each one of the echo returns is produced as a range scan in a range direction in response to a corresponding one of the transmitted sonar pulses. The sonar system stores signals representative of the intensity of the echo returns in a two dimensional array of pixels. Each one of the pixels represents the intensity of the echo return at a predetermined range position from the system in the range direction and a predetermined cross-range position from a reference position of the system in the cross-range direction.
  • A preferred embodiment of the sonar system provides an automated mine recognition system using computer evaluation of imagery to make a mine/non-mine classification decision. The automated system may run unattended or serve as a decision aid to an operator, prioritizing mine-like objects. This relieves the operator from tiresome evaluation of areas which contain no candidate mines. The operator can thus spend more quality time with the mine-like objects. The automatic system provides important backup for inexperienced or distracted operators.
  • A preferred sonar system produces an intensity range data image derived from backscatter of acoustic energy. Generally, in an area where there are no mines, the locally imaged sea bottom produces a characteristic background in the sonar imagery made up of highlights and shadows. A mine object in typical backgrounds ranging from smooth sand and mud to rocky areas with dense vegetation disturbs the statistical stationarity of the background resulting in a statistical anomaly. The area of the anomaly is defined by the mine size, imaging resolution and geometry, beampattern and other sonar characteristics. The sonar system exploits the statistical anomaly produced by the mine in the sonar imagery. The side looking sonar utilizes non-linear combinations of statistically advanced "goodness-of-fit" tests and distributional analysis calculations to identify the mines.
  • A preferred embodiment of the invention in the form of an SLS system produces intensity range scan data derived from backscatter of transmitted pulse acoustic energy. These sequential intensity range scans are then organised in memory as sequential rasters of a two dimensional (range and track) image, which is then normalized according to common practice by use of a moving window normaliser to produce a normalized SLS image which is processed by the mine recogniser.
  • The mine recogniser first divides the image into contiguous frames of 512 sequential rasters each. Each of these frames is then separately processed.
  • In a first step, the image is amplitude segmented: the data is first smoothed by multipass median filtering, and the pixels are then segmented into highlight, shadow and background categories according to data dependent thresholds. A subsequent median filter is employed on the three part data to eliminate spurious collections of noise pixels. The segmented pixel data then contains regions of contiguous highlight or shadow pixels which are associated and ascribed a symbolic label (numerical index). The labelled regions are then sorted by area, and regions falling below a first area threshold or above a second area threshold are eliminated from the region list. The labelled regions are then made available to a split window processor.
  • According to a double split window aspect of the processing, a window or data mask of fixed dimension in range and track is formed and is sequentially displaced to various regularly spaced locations across the image. At each placement, the pixel intensities in each of the three parallel and associated subwindows are analysed according to several statistical features to determine if the statistics of the pixel intensity distributions in the associated subwindows reliably correspond to differing underlying statistical distributions. In addition, the placement of the established highlight and shadow regions across the subwindows at each location, as well as the region sizes, is used to qualify the statistically based decisions for each window placement. The qualified decisions are produced according to a set of threshold manipulated rules for operating on the output of the windowed statistics generator. Each window location which generates qualified statistics exceeding threshold limits is identified as a likely location for a bottom marine mine. Generated marking boxes enclosing the imagery of such identified locations are then passed on to an operator for subsequent evaluation.
  • The marked regions may be passed on to the mission controller/navigator of an autonomous marine vehicle for proper evasive action.
  • Brief Description of the Drawings
  • For a more complete understanding of the concepts of the invention, reference is now made to the following drawings, in which:
  • FIG. 1 is a diagram of a side looking sonar system adapted to map the bottom of a body of water and to identify submerged foreign objects according to the invention;
  • FIG. 2 is a block diagram of the sonar system of FIG. 1;
  • FIG. 3 is a diagram used to describe the range direction and cross range direction used by the sonar system of FIG. 1;
  • FIG. 4 is a sketch illustrating submerged foreign objects typically found on the bottom of water being mapped by the sonar system of FIG. 1;
  • FIG. 5 is a flow diagram of the processing used by the sonar system of FIG. 1;
  • FIGs. 6, 7 and 8 are diagrams useful in understanding the processing described in accordance with the flow diagram of FIG. 5;
  • FIGs. 9A - 9J are diagrams useful in understanding the region labelling process used in the processing described in connection with the flow diagram of FIG. 5;
  • FIG. 10 shows three windows of cells over range scans, each window having data at different cross range positions, the data in such windows being used by the sonar system of FIG. 1 to determine the cross range length of submerged foreign objects; and
  • FIG. 11 is a diagram useful in understanding the operation of the sonar system of FIG. 1 in determining mine like submerged objects.
  • Description of the Preferred Embodiments
  • Referring now to FIGs. 1 and 2, a side looking sonar system 10 is shown. The side looking sonar system 10 maps the bottom 12 to identify a submerged foreign object. As the sonar system 10 moves in a forward, or cross-range direction 14, a sequence of the sonar pulses is transmitted and directed toward the bottom in a range direction 16 perpendicular to the cross-range direction 14. The mapping is formed from a sequence of echo returns. Each one of the echo returns is produced as a range scan in the range direction 16 in response to a corresponding one of the transmitted sonar pulses. The sonar system 10 includes a digital computer 17 and stores in a memory 18 thereof signals representative of the intensity of the echo returns in a two dimensional array (FIG. 3) of cells or pixels 20. Each one of the pixels 20 represents the intensity I of the echo return at a predetermined range position from the system in the range direction 16 and a predetermined cross-range position from a reference position of the system in the cross-range direction 14. The system quantizes the intensity of each one of the pixels into one of a plurality of levels, and compares the distribution of the levels of pixels over a range scan at a cross-range position with the distribution of levels of pixels over a range scan at a different cross-range position to identify the existence of an underwater object.
  • More particularly, a submerged, towed body 22 (FIG. 1) is tethered at the end of a cable 24, here a 25 to 150 meter tow cable. The tow cable 24 supplies power, control signals, and returns detected sonar signals to a surveillance vehicle, here a ship 26. (Alternatively, the surveillance vehicle may be a helicopter or a remotely controlled underwater vessel.) Port and starboard transmit/receive transducer arrays (only starboard array 28s being shown) are mounted on the sides of the towed body 22 to provide, here, approximately 50 degrees of vertical directivity and 1.5 degrees of horizontal beamwidth at 100 kHz and 0.5 degrees at 400 kHz. The towed body 22 normally operates at a height (h) 23 above the sea bottom 12 which, here, is 10 percent of the maximum range. The high resolution 400 kHz selection produces a resolution of 15 centimeters in both range and azimuth, with the range resolution being provided by a 0.2 msec CW sonar pulse. The high resolution mode typically yields minimum and maximum ranges of 5 meters and 50 meters, respectively. The speed of the towed body is here typically 2 to 5 knots.
  • The sonar system 10 includes a conventional sonar transmitter 30. A sequence of pulses produced by the transmitter 30 is fed to a projector 32 through a matching network 34 in a conventional manner to produce transmitted sonar pulses, or pings. The pings are transmitted periodically at a rate which covers the selected range without ambiguity. Thus, for a 50 meter selection, a repetition rate of about 15 transmissions per second suffices. The echo return produced in response to each transmitted sonar pulse is received by an array of sonar transducers 36 and passes through a transmit/receive switch (not shown) to preamplifiers 38, in a conventional manner, as shown. A subsequent amplifier 40 applies a time varying gain (TVG) selected in a conventional manner to match the expected decay curve of the return energy. The resulting signals are combined in a conventional beamforming network 42. The output of the beamforming network 42 is passed through a filter 44 which is matched to the waveform of the transmitted sonar pulse, and the filter 44 output represents the energy detected. The detected energy signal is synchronized with the waveform of the transmitted pulse via synchronizer and format network 46 to produce a raster-formatted floor-backscatter raw image. Since this image still contains wide area fluctuations due to bottom type and bottom slope variations, a conventional moving average normalizer 48 is passed over the data. The resulting image is then displayed by chart 50 in conventional waterfall, strip chart format, as shown in FIG. 4.
  • The image content, shown in FIG. 4, includes a first area of minimal volume reverberation 50 followed by a strong bottom return 52. The mid range typically displays a high contrast, producing strong backscatter or highlights from proud targets 54 (i.e. mines) and associated shadows due to the absence of downrange reflected energy. An idea of the height and shape of the bottom object can sometimes be obtained from the shape of the shadow, although floor texture variations and local undulations cause many distortions. Another common feature is the anchor drag 56 shown by a line-like shadow which is here shown oriented in a downrange direction 16. This extended shadow has a downrange highlight 58 due to the depression of the anchor drag 56. In addition, there are many other side-scan image characteristics too numerous to discuss here. It is noted, however, that the side looking sonar system 10 may run unattended or serve as a decision aid to an operator, prioritizing mine-like objects. This relieves the operator from tiresome evaluation of areas which contain no candidate mines. The operator can thus spend more quality time with the mine-like objects. The automatic system provides important backup for inexperienced or distracted operators.
  • Referring again to FIG. 2, the side looking sonar system (SLS) 10 is shown to include a digital computer, here a conventional workstation type computer having the memory 18 and processor 19. The memory 18 is used to store data and a program. The program includes a set of instructions to enable the processor 19 to process data stored in the memory 18 in accordance with the program. FIG. 5 is a flow diagram of the stored program. Steps 1 and 2 correspond to the sonar ranging and normalization which produce frames of raw image data. Each frame is a two dimensional map formed from a sequence of echo returns (here a frame is 512 range scans, each of which consists of 1024 positions (or pixels), for example). Each one of the echo returns is produced as a range scan in the range direction 16 in response to a corresponding one of the transmitted sonar pulses. The sonar system 10 stores in memory 18 thereof signals representative of the intensity of the echo returns in a two dimensional array (FIG. 3) of pixels 20. Each one of the pixels 20 represents the intensity I of the echo return at a predetermined range position from the system in the range direction 16 and a predetermined cross-range position in the cross-range direction 14 from a reference position of the system 10. The processor 19 normalizes the data. In Steps 3 through 7 the processor 19 processes the raw image data to quantize the intensity of each one of the pixels into one of three levels (i.e. background, highlight, and shadow) and to implement the region formation and selection (i.e., culling) process based on the three-level amplitude segmentation. Steps 8 through 13 comprise the window-based statistical feature extraction steps, which are based on probability distributions and on segmentation-derived region window coverage percentages. Steps 14 through 17 are the decision calculations which evaluate the statistical features and window coverage percentages to determine if a mine-like object is present in the imagery. The locations of such objects are then fed out to memory 18 along with sections of imagery data, as is indicated in Step 18. Each of these steps is discussed in more detail below.
  • Image Quantization and Region Extraction (Steps 3 - 7)
  • Image quantization (or segmentation) is used to determine region features. The original byte-per-pixel intensity data image in memory 18 is partitioned (i.e. quantized) spatially (i.e. segmented) into areas whose overall intensity is high (highlights), areas whose overall intensity is low (shadows) and areas whose overall intensity is average (background). Quantizing the image to the three levels (high, low, average) in this way produces contiguous image areas called regions whenever pixels in an area have similar low or high values. Such regions may be due to an image highlight or to a shadow resulting from a mine.
  • More particularly, after normalization, the normalized image data in memory 18 is examined by the processor 19 on a frame basis; each frame has the same range dimension as the normalized data and contains a fixed number of scans. The rest of the processing is done on the image frame, which is typically 1024 pixels in range by 512 pixels in cross-range (scans). The normalized pixel or position data are typically one byte integers with minimum intensity at 0 and maximum intensity at 255. The steps subsequent to normalization are:
  • 1. Three five-point median filters (Step 3)
  • 2. Quantization (Step 4)
  • 3. Three nine-point median filters (Step 5)
  • 4. Connected-components (Step 6)
  • 5. Region formation (Step 7).
  • Three five-point median filters (Step 3)
  • The first set of median filtering, performed by processor 19, smooths the normalized data to remove speckle noise. The five-point median filter determines a new value for each pixel or position in the image frame (except the edges) from the five pixels, or positions, that form a cross centered on the pixel of interest, as shown in FIG. 6. The median value of the five pixels is determined; this becomes the new value of the pixel, or position, of interest (i,j), where i is the range position and j is the cross-range position. This is done for every pixel or position in the image frame except the edges. The new pixel values are stored in a section of memory 18 and do not replace the old pixel values stored in a different section of the memory 18 until all of the pixels are done. (This process is repeated three times.)
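  • By way of illustration only (this code does not appear in the patent), the five-point median filtering of Step 3 might be sketched as follows; Python with NumPy, the function name and the choice to leave frame edges unchanged are assumptions of this sketch.

```python
import numpy as np

def five_point_median(img, passes=3):
    """Five-point cross median filter, applied `passes` times.

    Each pass writes new values to a separate array so that only old
    pixel values are used within a pass; frame edges are left as-is.
    """
    out = img.copy()
    for _ in range(passes):
        src = out.copy()
        stack = np.stack([
            src[1:-1, 1:-1],   # centre pixel (i, j)
            src[:-2, 1:-1],    # neighbour (i-1, j)
            src[2:, 1:-1],     # neighbour (i+1, j)
            src[1:-1, :-2],    # neighbour (i, j-1)
            src[1:-1, 2:],     # neighbour (i, j+1)
        ])
        out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out
```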
  • Quantization (Step 4)
  • The quantization process performed by processor 19, which follows, replaces the 256-level image data stored in a section of the memory 18 by one of three values (i.e. here 0, 1 or 2) which represent background, shadow and highlight, respectively. This is done by thresholding the pixel data at two levels: the shadow threshold, ts, and the highlight threshold, th.
    • Pixels having intensities below the shadow threshold, ts, are mapped to shadows (i.e. here value 1)
    • Pixels having intensity levels above the highlight threshold, th, are mapped to highlights (i.e. here value 2)
    • Pixels having intensity levels between the two thresholds, ts, th are mapped to background (i.e. here value 0).
  • The quantization process is performed by the processor 19 on a sub-frame basis, where the sub-frames are 64 by 512 pixel sections of the image frame, as shown in FIG. 7. Different shadow and highlight thresholds ts, th, respectively, are determined for each of the 16 sub-frames and are used to segment the pixels in that sub-frame. The thresholds ts, th are selected so that a fixed percentage of the pixels are shadows and a fixed percentage of the pixels are highlights. This is done by the processor 19 forming the cumulative distribution function (CDF) of the pixel data in the sub-frame. To form the CDF, the histogram of the pixel data in the sub-frame is determined first. This is done by counting the number of occurrences of each of the 256 levels between 0 and 255. Then, the probability density function (pdf) is determined by dividing the histogram by the number of pixels in the sub-frame (64 by 512 = 32,768).
  • The CDF is determined by the processor 19 from the pdf as follows: C(0) = P(0); C(k) = C(k-1) + P(k) for k = 1, 2, ..., 255, where P is the pdf, C is the CDF and k is the intensity level (note that C(255) = 1). Once the CDF is calculated, the processor 19 examines it to determine the shadow and highlight thresholds, ts, th. Let ps (0 < ps < 1) be the desired percentage of shadow pixels and ph be the desired percentage of highlight pixels. Then, the shadow threshold (ts) and the highlight threshold (th) are determined by: C(ts) = ps and C(th) = 1 - ph. The CDF must be examined to determine at what levels the above equations are satisfied. The thresholds ts, th are then used by the processor 19 to convert the pixel image data to a quantized image of three pixel values (i.e. here 0, 1 or 2). This is done by the processor 19 for each subframe.
  • The shadow threshold ts and a highlight threshold th are used to partition the pixels into the three segmentation values (or classes). These thresholds ts, th are determined, as described above, based on the image data statistical character to provide robustness to bottom-type variation and an enhanced ability to detect in low contrast situations. The thresholds ts, th are selected as predetermined percentage points of the discrete probability distribution of the pixel values in the input image, in this case, the output of the speckle filter. Thus, for example, here pixels in the top 10% of pixel values are defined as highlight pixels (i.e. C(th) = 0.9) and pixels in the bottom 10% as shadow pixels (i.e. C(ts) = 0.1), and the others as background.
  • The threshold selection is done by processor 19 on subframes of the frame as shown in FIG. 7. The image is statistically non-stationary in the range direction 16 due to sub-optimally matched sonar TVG curves, imperfect normalization and reduced energy in the far field. Thus, in this case, the discrete probability distribution is found in 16 subframes resulting in 32 thresholds for a much improved and more faithful segmentation.
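  • As an illustration only (not from the patent), the per-subframe threshold selection and three-level quantization might be sketched as follows; the searchsorted-based threshold pick and the 10% default percentages (taken from the example above) are assumptions of this sketch.

```python
import numpy as np

def quantize_subframe(sub, p_shadow=0.10, p_highlight=0.10):
    """Map a subframe of byte intensities to {0, 1, 2} for
    {background, shadow, highlight} using CDF-derived thresholds."""
    hist = np.bincount(sub.ravel(), minlength=256)
    cdf = np.cumsum(hist) / sub.size              # C(k), k = 0..255
    ts = np.searchsorted(cdf, p_shadow)           # first level with C(ts) >= ps
    th = np.searchsorted(cdf, 1.0 - p_highlight)  # first level with C(th) >= 1 - ph
    out = np.zeros_like(sub)
    out[sub < ts] = 1                             # shadow
    out[sub > th] = 2                             # highlight
    return out

def quantize_frame(frame, sub_rows=64):
    """Apply the quantization to the 16 sub-frames of a 1024 x 512
    frame, here assumed stored as (range, scan) = (1024, 512)."""
    return np.concatenate(
        [quantize_subframe(frame[r:r + sub_rows])
         for r in range(0, frame.shape[0], sub_rows)], axis=0)
```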
  • Three nine-point median filters (Step 5)
  • The second set of median filtering performed by processor 19 is essentially the same as the first set except that a three by three square of pixels, centered on the pixel of interest, is used, as shown in FIG. 8. The second set of median filters has the function of smoothing the quantized image data.
  • The three level image resulting from Step 4 is then filtered in Step 5 to remove small, isolated regions. Median filtering is used to preserve region edges. Here, a 3 x 3 kernel is iterated 3 times.
  • Connected-components (Step 6)
  • A connected components algorithm is performed by the processor 19 on the quantized data levels stored in memory 18, for example, the quantized data levels shown in FIG. 9A, to identify regions of contiguous shadow or highlight pixels. A shadow region is a group of contiguous shadow pixels (i.e. pixels having a quantized data level of 1) and a highlight region is a group of contiguous highlight pixels (i.e. pixels having a quantized data level of 2). As will be described, the connected components algorithm labels each pixel with a region number so that all contiguous pixels belonging to the same contiguous region have the same "region number". For example, in FIG. 9A, there are four labelled region numbers 1-4: Region numbers 1 and 3 are shadow regions; and, region numbers 2 and 4 are highlight regions. The remaining regions are background regions.
  • More particularly, the region identification process is performed by the processor 19 making three "passes" through data stored in memory 18. In the first "pass", highlight regions are separated from shadow regions by at least one pixel whenever a highlight region is bounded by a shadow region. This is performed by the processor 19 investigating each pixel, sequentially. If the pixel under investigation is a shadow (i.e., has a quantized data level of 1) and if there is a highlight pixel in any one of the eight pixels contiguous to it, (i.e., if any one of the eight pixels surrounding the central pixel under investigation has a quantized data level of 2) then the quantized data level of the pixel under investigation is changed to a background pixel (i.e., the quantized data level of the central pixel is changed from 1 to 0). The completion of this first "pass" results in a separated segmentation array of quantized data levels, as for example, the array shown in FIG. 9A which is now stored by processor 19 in memory 18.
  • During the second "pass", the processor 19 assigns "integer indices" to highlight and shadow pixels as codes for eventually producing, in the third "pass", the "region numbers". The "integer indices" are assigned by passing a 2x2 pixel kernel, shown in FIG. 9B, sequentially over each pixel Pn(i,j) in the segmented array. As indicated in FIG. 9B, the pixel Pn(i,j) is located in the upper right hand corner of the kernel. The pixel to the left of pixel Pn(i,j) is a position labelled "a", the pixel diagonally down to the left of pixel Pn(i,j) is a position labelled "b", and the pixel below pixel Pn(i,j) is a position labelled "c".
  • For each pixel Pn(i,j) under investigation during the second "pass", the kernel is placed over four pixels, with the pixel Pn(i,j) under investigation being in the upper right hand corner, as described above in connection with FIG. 9B. The kernel moves from the left of the bottom row of the segmented array (i.e. pixel P(2,2)), FIG. 9A, to the right until the pixel in the last column of the second row (i.e. pixel P(2,8)) is investigated. The kernel then moves to the second left hand column of the next higher row (pixel P(3,2)) and the process continues until the kernel reaches the right column of the top row (pixel P(8,8)). The quantized data level array in FIG. 9A stored in memory 18 is modified by processor 19 into an "integer index" array, as shown in FIG. 9K, as a result of an investigation made during the second "pass". In the "integer index" array (FIG. 9K) an "integer index" is assigned to the pixel position under investigation in accordance with a set of rules (illustrated in FIGs. 9C to 9I). Before discussing the rules, it is first noted that an "integer index table", shown in FIG. 9J, is maintained by processor 19 to record, sequentially from the top of the "integer index table", each "integer index" (r) used in the array modification process and the "value" of such "integer index", as will be described in detail hereinafter. Suffice it to say here, however, that the "value" initially assigned to the "integer index" in the "integer index table" is the same as the "integer index". This is based on an assumption that each time a new "integer index" is determined, a new region number has been generated, and thus the "value" in the "integer index table" represents the "region number". However, such "value" may be changed if the processor 19 recognizes that, after a new, subsequent "integer index" has been generated, the previously established "integer index" really belongs to the new, subsequent "region number". Thus, the old "integer index" must be assigned the same "value" as the new "integer index".
  • The set of rules is as follows; at each kernel position:
    • Rule 1: if the quantized data level of the pixel Pn(i,j) (FIG. 9B) under investigation is 0, then the "integer index" assigned to such pixel Pn(i,j) in the modified array of FIG. 9K is 0 (FIG. 9C);
    • Rule 2: else, if the "integer index" of the pixel in the "b" position of the kernel is non-zero (i.e., nz), then the "integer index" in the "b" position is assigned to the pixel Pn(i,j) in the modified array (FIG. 9K) (FIG. 9D);
    • Rule 3: else, if the "integer indices" of the pixels in the positions "a" and "c" of the kernel are 0, then the "integer index" assigned to the pixel Pn(i,j) in the modified array (FIG. 9K) is equal to the last "integer index" used, as recorded in the "integer index table" (FIG. 9J), incremented by 1 (i.e., the next, incremented integer, rnew, where rnew = 1 + r, and where r is the last recorded "integer index"), and the assigned "integer index" rnew is entered in the "integer index table" (FIG. 9J) (FIG. 9E);
    • Rule 4: else, if the "integer index" of the pixel in the position "c" of the kernel is 0, the pixel Pn(i,j) is assigned, in the modified "integer index" array (FIG. 9K), the "integer index" in the "a" position of the kernel (FIG. 9F);
    • Rule 5: else, if the "integer index" of the pixel in the position "a" of the kernel is 0, the pixel Pn(i,j) is assigned, in the modified "integer index" array (FIG. 9K), the "integer index" in the "c" position of the kernel (FIG. 9G);
    • Rule 6: else, if the "integer indices" in both the "a" and "c" positions of the kernel are non-zero (i.e., nz) and equal to each other, then the pixel Pn(i,j) is assigned the same non-zero "integer index" as the "integer index" in position "a" (or position "c") (FIG. 9H);
    • Rule 7: else, the pixel Pn(i,j) is assigned the "integer index" in position "a" of the kernel, and the processor 19 reads each of the "integer indices" in the "integer index table" (FIG. 9J); each time the "value" of a read index equals the "value" of the "integer index" in position "c" of the kernel, the "value" of that read "integer index" is reset to a "value" equal to the "value" of the "integer index" in position "a" of the kernel (FIG. 9I).
  • Referring again also to FIG. 9A, the first pixel under investigation by the processor 19 during the second "pass" is pixel P(2,2). Thus, according to Rule 1, because the pixel under investigation, P(2,2), has a 0 quantized data level, the modified, "integer index" array (FIG. 9K) is assigned an "integer index" of 0. The use of the "integer index" 0 is recorded at the top of the "integer index table" (FIG. 9J) and such recorded "integer index" is assigned a "value" of 0, as shown. Likewise, the next pixel under investigation, P(2,3), is assigned an "integer index" of 0 in the "integer index" array, FIG. 9K. The next pixel under investigation is pixel P(2,4) and because the quantized data level of such pixel is not zero, and because the "integer index" in position "b" of the kernel is zero, and because the "integer index" of positions "a" and "c" of the kernel are 0, then from Rule 3, such pixel, P(2,4) is assigned an "integer index" equal to the next sequentially available "integer index" (that is, the last "integer index" used (and as recorded in the "integer index table" (FIG. 9J) is 0), incremented by 1); that is, pixel P(2,4) is assigned an "integer index" of 1, as shown in FIG. 9K. The use of a new "integer index" (i.e., 1) is recorded in the "integer index table" (FIG. 9J) and is assigned a "value" 1, as shown in the first "value" column of the "integer index table". Pixels P(2,5), P(2,6), and P(2,7) are assigned an "integer index" of 0 in accordance with Rule 1 because the quantized data levels thereof are 0 as shown in the quantized data array of FIG. 9A.
  • Continuing, pixel P(2,8) has, as shown in FIG. 9A, a quantized data level of 1. Further, the "integer index" in the position "b" of the kernel (i.e., pixel P(1,7)) is 0 (and the "integer index" thereof has not been previously modified, as shown in FIG. 9K). Still further, the "integer indices" of pixels P(2,7) and P(1,8) are both zero. Thus, from Rule 3, pixel P(2,8) is assigned the next sequential "integer index", 2 (i.e., the last "integer index" used (i.e. 1) incremented by 1), as shown in FIG. 9K. The use of a new "integer index", 2, is recorded in the "integer index table" (FIG. 9J) and it is assigned a "value" of 2 in the first "value" column, as shown. Next, pixel P(3,2), which has a quantized data level of 2 (FIG. 9A), is assigned an "integer index" of 3 (FIG. 9K) in accordance with Rule 3 because the quantized data level thereof is not 0, and the "integer index" of pixel P(2,1) (i.e., the pixel in position "b" of the kernel) is not non-zero (i.e., it is 0, FIG. 9K), and because the "integer indices" of pixels P(3,1) and P(2,2) are 0. The use of a new "integer index", 3, is recorded in the "integer index table" of FIG. 9J, and is assigned a "value" of 3 in the first "value" column, as shown. Pixel P(3,3) is assigned an "integer index" of 0 in accordance with Rule 1 because it has a quantized data level of 0. Pixel P(3,4) is assigned an "integer index" of 1 in accordance with Rule 5 because it does not have a quantized data level of 0, because the "integer index" of pixel P(2,3) (i.e., the pixel in position "b" of the kernel) is not non-zero (FIG. 9K), because the "integer indices" of the pixels in positions "a" and "c" of the kernel (i.e., pixels P(3,3) and P(2,4)) are not both zero, because the "integer index" of pixel P(2,4) in position "c" of the kernel is not zero, and because the "integer index" of pixel P(3,3) in position "a" is zero. Pixel P(3,5) is assigned an "integer index" of 1 in accordance with Rule 2. Pixel P(3,6) is assigned an "integer index" of 0 in accordance with Rule 1. Pixel P(3,7) is assigned a new "integer index" of 4 in accordance with Rule 3 and such is recorded in the "integer index table" of FIG. 9J along with its "value" of 4, which is recorded in the first "value" column, as shown. Pixel P(3,8) is assigned an "integer index" of 4 in accordance with Rule 7 because it does not have a quantized data level of 0, because the "integer index" of pixel P(2,7) (i.e., the pixel in position "b" of the kernel) is not non-zero, and because the "integer indices" of the pixels in positions "a" and "c" of the kernel (i.e., pixels P(3,7) and P(2,8)) are both not zero and are unequal. Thus, in accordance with Rule 7, the processor 19 assigns as an "integer index" to pixel P(3,8) the "integer index" of the pixel in position "a" of the kernel (i.e., the "integer index" of pixel P(3,7)), here an "integer index" of 4. The processor 19 then reads each of the "integer indices" in the "integer index table" (FIG. 9J) and each time, if the "value" of the read index equals the "value" of the "integer index" in position "c" of the kernel, then the "value" of the currently read "integer index" is reset to a "value" equal to the "value" of the "integer index" in position "a" of the kernel. Here, the "integer index" of pixel P(2,8) in position "c" is 2 and its "value" is 2; the processor 19 therefore resets the "value" 2 of the "integer index" 2 to the "value" 4, the "value" of the "integer index" in the "a" position. The resetting of the "value" 2 to the "value" 4 is recorded in the second "value" column of the table (FIG. 9J) for the "integer index" 2.
  • Continuing, pixel P(4,2) is assigned an "integer index" of 3 in accordance with Rule 5 (i.e., the "integer index" of position "c" of the kernel, here pixel P(3,2)). Pixels P(4,3), P(4,4), P(4,5), and P(4,6) are assigned "integer indices" of 0, 0, 1, 1, respectively, in accordance with Rules 1, 1, 2, and 2, respectively. Next, pixel P(4,7) is investigated. It is first noted that Rule 7 applies. Pixel P(4,7) is assigned, as an "integer index", the "integer index" of position "a" in the kernel (i.e., the "integer index" of pixel P(4,6), here 1). Next, the processor 19 reads each of the "integer indices" in the "integer index table" (FIG. 9J) and each time, if the "value" of the read index equals the "value" of the "integer index" in position "c" of the kernel, then the "value" for the currently read "integer index" is reset to a "value" equal to the "value" of the "integer index" in position "a" of the kernel. Here pixel P(3,7) has an assigned "integer index" of 4 and such "integer index" has a "value" of 4. Thus, the processor 19 resets the "value" of the "integer index" associated with the position "c" (i.e. the "integer index" 4) to the "value" associated with the "integer index" in the position "a" of the kernel. Here two "integer indices" now have a "value" of 4: "integer index" 2 and "integer index" 4. Thus, the "value" of each is reset to a "value" of 1, as shown in the second "value" column of the "integer index table" (FIG. 9J) for "integer index" 4 and in the third "value" column for "integer index" 2.
  • The process continues with the results produced being shown in FIG. 9K. It is noted that pixel P(5,6) is assigned an "integer index" of 1 from Rule 2. It is also noted that pixel P(6,3) is assigned an "integer index" of 5 in accordance with Rule 3 and that such is recorded in the "integer index table" of FIG. 9J. It is also noted that pixels P(7,2), P(7,3), P(7,5), P(7,6) are assigned "integer indices" of 6, 6, 7, 7 in accordance with Rules 3, 7, 3, and 4 respectively. Also note that the "integer index table" (FIG. 9J) was modified when Rule 7 was applied to pixel P(7,3) by resetting the "value" of "integer index" 5 to a "value" of 6, as shown in FIG. 9J.
  • From FIGs. 9K and 9J, each "value" recorded is associated with a "region number". Thus, from FIG. 9J there are four "values" (i.e. "values" 1, 3, 6 and 7), which is consistent with FIG. 9A, where four regions are also indicated. Further, "value" 1 is associated with "integer indices" 1, 2, and 4; "value" 3 is associated with "integer index" 3; "value" 6 is associated with "integer indices" 5 and 6; and "value" 7 is associated with "integer index" 7.
  • During the third "pass", the processor 19 labels the different "values" with sequential "region numbers". Thus, pixels with "integer indices" having "values" 1, 3, 6 and 7 are assigned "region numbers" 1, 2, 3 and 4, respectively, as shown in FIG. 9A.
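  • As a loose illustration of the three-pass labelling described above (not patent code; the top-down scan order and the list-based "value" table are simplifications of this sketch):

```python
import numpy as np

def label_regions(seg):
    """Label contiguous highlight/shadow regions in a {0,1,2} array."""
    seg = seg.copy()
    h, w = seg.shape
    # Pass 1: shadow pixels touching a highlight pixel become
    # background, keeping highlight and shadow regions separated.
    for i, j in np.argwhere(seg == 1):
        if (seg[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] == 2).any():
            seg[i, j] = 0
    # Pass 2: assign integer indices; value[r] is the equivalence
    # table entry ("value") for integer index r.
    idx = np.zeros((h, w), dtype=int)
    value = [0]
    for i in range(h):
        for j in range(w):
            if seg[i, j] == 0:                              # Rule 1
                continue
            a = idx[i, j - 1] if j > 0 else 0               # "a": left
            b = idx[i - 1, j - 1] if i and j else 0         # "b": diagonal
            c = idx[i - 1, j] if i > 0 else 0               # "c": prior row
            if b:                                           # Rule 2
                idx[i, j] = b
            elif a == 0 and c == 0:                         # Rule 3: new index
                value.append(len(value))
                idx[i, j] = len(value) - 1
            elif c == 0:                                    # Rule 4
                idx[i, j] = a
            elif a == 0:                                    # Rule 5
                idx[i, j] = c
            elif a == c:                                    # Rule 6
                idx[i, j] = a
            else:                                           # Rule 7: merge
                idx[i, j] = a
                old = value[c]
                for r in range(len(value)):
                    if value[r] == old:
                        value[r] = value[a]
    # Pass 3: relabel distinct surviving values as region numbers 1..N.
    vals = np.asarray(value)[idx]
    uniq = np.unique(vals[vals > 0])
    lut = np.zeros(int(vals.max()) + 1, dtype=int)
    lut[uniq] = np.arange(1, len(uniq) + 1)
    return lut[vals]
```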
  • Region selection (i.e., culling) (Step 7)
  • The area of each one of the non-background regions (i.e. regions 1 through 4 in the example of FIG. 9A) is calculated by the processor 19 counting the number of pixels belonging to each one of the non-background regions via the label map. The non-background regions are sorted by area and those regions which are too small or too large to be conceivably related to mine-like objects are removed from the region table. The region label map (FIG. 9A) is altered to replace the eliminated region pixels with background pixels.
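  • A small illustrative sketch of this culling step (not from the patent; the area bounds are parameters whose values the text does not give):

```python
import numpy as np

def cull_regions(region_map, min_area, max_area):
    """Remove labelled regions whose pixel count is implausibly small
    or large for a mine-like object; culled pixels become background."""
    labels, areas = np.unique(region_map[region_map > 0],
                              return_counts=True)
    for lbl, area in zip(labels, areas):
        if area < min_area or area > max_area:
            region_map[region_map == lbl] = 0
    return region_map
```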
  • Double Split Window Feature Extraction (Steps 8 - 13)
  • The processor 19 forms a double split window 60. The window 60 is used by the processor 19 to examine both the normalized image data (Steps 1 and 2) and the region label map (Steps 3 - 7). As shown in FIG. 10, the window 60 is here 128 pixels long in the range direction 16 and 39 pixels wide in the cross-range direction 14. The window 60 is elongated in the range direction 16 and is divided into three main sections: a pair of outer window sections 62, 64; and, a middle window section 66. The middle window section 66 is positioned between an upper one of the outer window sections 62 and a lower one of the outer window sections 64. The upper window section 62 is positioned over pixels forward, in the cross-range direction 14, of the middle window section 66. The lower window section 64 is positioned over pixels rearward, in the cross-range direction 14, of the middle window section 66. Each of the three window sections 62, 64, 66 is 128 pixels long in the range direction 16. The middle section 66 is 11 pixels wide in the cross-range direction 14 and is centered within the window 60. Each of the outer sections 62, 64 is 12 pixels wide in the cross-range direction 14. There is a 2 pixel wide gap 68, 70 in the cross-range direction 14 between the middle section 66 and each of the outer sections 62, 64. Note that 12+2+11+2+12=39. The window 60 is placed at various locations throughout the image and is used to compare the pixels in the middle section 66 to pixels in each of the outer sections 62, 64 at each window location. The pixels contained in the gaps 68, 70 are not examined.
  • The window 60 is placed at regular intervals throughout the image frame in order to examine the whole image frame. The window 60 is placed at intervals of 16 pixels in the range direction 16 and at intervals of 2 pixels in the cross-range direction 14.
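  • For illustration (not patent text), the window geometry and placement grid might be expressed as follows; the (range, scan) array layout and which outer block is treated as "upper" are assumptions of this sketch.

```python
# Geometry from the text: 128 pixels in range; outer sections 12 scans
# wide, gaps 2 scans, middle section 11 scans (12+2+11+2+12 = 39).
RANGE_LEN, OUTER_W, GAP_W, MID_W = 128, 12, 2, 11
WIN_W = 2 * OUTER_W + 2 * GAP_W + MID_W   # 39 scans total

def window_sections(frame, r0, c0):
    """Slice the (upper, middle, lower) blocks of a window whose
    corner is at range index r0, scan index c0; gaps are skipped."""
    upper = frame[r0:r0 + RANGE_LEN, c0:c0 + OUTER_W]
    m0 = c0 + OUTER_W + GAP_W
    middle = frame[r0:r0 + RANGE_LEN, m0:m0 + MID_W]
    l0 = m0 + MID_W + GAP_W
    lower = frame[r0:r0 + RANGE_LEN, l0:l0 + OUTER_W]
    return upper, middle, lower

def window_origins(n_range=1024, n_scans=512):
    """Placement grid: every 16 pixels in range, every 2 scans."""
    for r0 in range(0, n_range - RANGE_LEN + 1, 16):
        for c0 in range(0, n_scans - WIN_W + 1, 2):
            yield r0, c0
```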
  • When the window 60 is placed on the normalized data (Steps 8, 9, 11 and 12), statistics are determined for each section 62, 64, 66 and the middle section 66 is compared to each of the outer sections 62, 64 using the advanced statistics. Statistical tests determine if there is an anomaly in the middle section 66 that may be caused by a mine. When the split window 60 is placed on the region label map (Steps 8, 10 and 13), the percentage of region pixels in each window section (percent fill) and the total area of all the regions that are contained (in full or in part) in the middle section 66 (area sum) are determined. The percent fill is used to ensure that any regions of interest are in the middle section 66 as opposed to the outer sections 62, 64. The area sum is used to eliminate very large regions that extend outside of the window 60.
  • Statistical Feature Calculations (Steps 9, 11 and 12)
  • In the statistical feature calculations (Steps 9, 11 and 12) performed by processor 19, histograms are determined in each of the window sections (i.e. the upper window section 62, the middle window section 66, and the lower window section 64). That is, the processor 19 considers pixel intensity (grey-level) values in the three window sections 62, 64, 66 involved in the double split window 60 element as previously described. Consider, for example, the center window section 66. The pixels which define the image contained in the center section 66 have a spatial organization which defines the image. One could compute summary statistics involving the pixel grey level values without regard to the spatial organization. An example of a simple summary statistic is the mean (average) grey level intensity. The variance of the grey level intensity gives more information (in a general sense) about the distribution of pixel grey levels and gives the dispersion about the mean of the grey level values. The mean and variance summarize properties of the pixel grey level empirical probability density function (histogram or pdf), a description which contains all of the first-order statistical information in the pixel grey levels. Here, however, the SLS 10 statistical features compare grey level histograms and functions of these, and thus use the complete first-order statistical information rather than incomplete summary statistics.
  • Each of the three sections of the window (i.e. upper 62, lower 64, middle 66) is examined by processor 19 separately; statistics are determined for each section 62, 64, 66 and the statistics of the outer sections 62, 64 are compared to the statistics of the middle section 66 to determine if an elongated object (elongated in the range direction 16) exists within the middle section 66. That is, the processor compares a distribution of the levels of cells over range scans at a first set of adjacent cross-range positions (i.e., the cells in a first one of the three window sections) with the distribution of levels of cells over a range scan at a second, different set of adjacent cross-range positions (i.e., the cells in a second one of the three window sections) and the distribution of the levels of cells over range scans at a third set of adjacent cross-range positions (i.e., the cells in a third one of the three window sections) to identify the existence of an underwater object.
  • Several different methods of comparing the window sections were developed, but the underlying principle in each is that if both of the outer sections 62, 64 are significantly different from the middle section 66 and certain other region conditions are met (Step 13), then a mine-like object exists in the middle section 66 (Steps 14 - 18). The region conditions that must be met (to be discussed in connection with Step 13) have to do with the percentages of region pixels in each of the window sections 62, 64, 66 (to make sure the regions are in the middle window section 66) and with the total area of the detected regions (to avoid detecting very large regions that extend outside the window 60).
  • The first step (Step 9) in the statistical feature calculations performed by processor 19 (Steps 9, 11 and 12) is to determine histograms in each of the three window sections (i.e. the upper window section 62, the middle window section 66 and the lower window section 64). The next step (Step 11) is to calculate the probability density functions (pdf) for each of the three window sections: ui, the pdf for the upper window section 62; si, the pdf for the center window section 66; and di, the pdf for the lower window section 64 (where i is the ith bin of the histogram). Step 11 also calculates the cumulative distribution functions (CDF) for each of the three window sections: Ui, the CDF for the upper window section 62; Si, the CDF for the center window section 66; and Di, the CDF for the lower window section 64.
  • From the calculated probability density functions (pdf): ui; si; and di, and from the calculated cumulative distribution functions (CDF): Ui; Si; and Di (i.e. Step 11), the processor 19 calculates advanced statistics (Step 12). These advanced statistics are: Modified Pearson's Detection Feature; Kolmogorov Statistic Feature; Grey Level Entropy Detection Feature; and, Multinomial Statistic Feature. Modified Pearson's Detection Feature - A modification of Pearson's chi-squared test for the difference between two populations is used by processor 19 to develop a measure of the statistical difference between the center window section 66 and the upper and lower adjacent window sections 62, 64, respectively.
  • Pearson's chi-squared test statistic is used to indicate whether or not a test population can statistically be regarded as having the same distribution as a standard population. The test statistic is expressed as
    χ² = Σi (Ni − nqi)² / nqi
    where Ni are the observed frequencies and nqi are the expected frequencies. χ2 is a measure of the departure of the observed Ni from their expectation nqi.
  • For the side looking sonar system 10 the outer windows 62, 64 are used to define the expected population. A modified Pearson's test statistic is used to remove the expected population bias introduced by the denominator term, and the test statistic is computed by the processor 19 using the empirical probability density functions (pdf) as follows. Let si, ui, di be the value of the ith bin of the pdf from the center (s), upper (u), and lower (d) window sections 66, 62, 64, respectively. The "modified Pearson's" test statistic is thus,
    pu = Σi (si − ui)² / (si + ui)
    pd = Σi (si − di)² / (si + di)
    for the upper and lower window sections 62, 64, respectively. The processor 19 combines pu and pd to produce a test statistic for the window set defined as follows: Pu,d = min (pu, pd)
  • This logical ANDing requires both component test statistics (pu, pd) to simultaneously indicate the presence of a statistical anomaly. This protects against spurious detections due to background artifacts in only one window. Kolmogorov Statistic Feature - This statistical analysis provides a "goodness-of-fit" test. The Kolmogorov statistic tests the hypothesis that two populations are statistically the same by use of the cumulative distribution function. Instead of using the test in the context of analysis, the test statistic is used as a feature.
  • For this SLS 10 feature the cumulative distribution functions (CDF) of the grey levels in the three window sections 62, 64, 66 are computed. Let Si, Ui, Di represent the value in the ith bin of the CDF from the center, upper, and lower window sections 66, 62, 64, respectively. Then,
    ku = maxi |Si − Ui|,   kd = maxi |Si − Di|
    are the component statistics. As before, the processor 19 combines these to form the detection statistic; thus, ku,d = min { ku, kd }. Grey Level Entropy Detection Feature - The entropy of a statistical distribution is defined as,
    H(p) = − Σi pi log2(pi)
  • It is interpreted as the expected value of information (in the sense of Shannon) in p, where p in this expression is the pdf. Given the pdfs (s, u, d) in the three window sections 66, 62, 64, respectively, the processor 19 calculates Hs, Hu, Hd, the entropy for each of the window sections 66, 62, 64, respectively. The difference in entropy between the center window section 66 and the outer window sections 62, 64 is then used as a detection feature. Thus, Eu,d = min { Hu − Hs, Hd − Hs }
  • Unlike the Kolmogorov feature, the processor 19 takes a signed difference here because the sign of the result is significant. That is, the entropy in a single one of the three window sections 62, 64, 66 alone has an intrinsic significance which indicates the presence of unusual grey level distributions. Thus, the entropy feature has an absolute significance in contrast to the previous two features, which are relative comparisons.
  • Multinomial Statistic Feature - This feature essentially measures the likelihood of the empirical pdf in the center window section 66 relative to each of the outer window sections 62, 64 which are assumed to represent universal populations. It treats the empirical pdf of the center window section 66 as the outcome of a series of multinomial trials. The probability of the so-formed pdf is given by the multinomial density. Here, the processor 19 assumes the population pdf is given by either outer window section 62, 64. The multinomial density is given by p(k, θ) = p(k1,...,kb, θ) = [n! / (k1! ... kb!)] θ1^k1 ... θb^kb, where n = Σki is the number of trials or pixels in the one of the outer window sections 62, 64 under consideration, θ is a probability vector, i.e., θi > 0.0 ∀i and Σθi = 1.0, b is the number of bins and θi^ki is the probability of the ith pixel level occurring ki times in the n pixel population of the window section.
  • The processor 19 implicitly calculates component probabilities for the upper and lower window sections 62, 64 by deriving θ from u or d (the pdfs associated with the upper and lower window sections 62, 64). The ki are determined from the center window 66 pdf. For computational reasons the processor 19 actually calculates the information associated with the two probabilities. The information is a function of the pdf; thus, I(θ) = − log2(p(k, θ)). This has the benefit of making the multinomial density computationally tractable for the parameters involved. Thus,
    I(θ) = −[T(n) − Σi T(ki) + Σi ki log2(θi)], where T(m) = log2(m!),
    becomes the information measure of the occurrence of a particular histogram in the center window section 66 given an outer window section population. As here implemented, the T(·) and log functions are table look-ups. The processor 19 now forms mu,d = min { I(u), I(d) } for the aggregate multinomial detection feature.
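  • By way of illustration only (not part of the patent), the four window statistics might be computed as follows for one window placement; the 256-bin histograms, the eps guard against empty bins, and the symmetric (si + ui) denominator of the reconstructed modified Pearson's statistic above are assumptions of this sketch.

```python
import numpy as np
from math import lgamma

LOG2E = 1.0 / np.log(2.0)          # converts natural logs to base 2

def pdf_cdf(section, bins=256):
    """Empirical pdf and CDF of the grey levels in one window section."""
    hist = np.bincount(section.ravel(), minlength=bins).astype(float)
    p = hist / hist.sum()
    return p, np.cumsum(p)

def log2_factorial(m):
    """T(m) = log2(m!) via the log-gamma function (a table look-up in
    the implementation described by the text)."""
    return lgamma(m + 1) * LOG2E

def window_features(upper, middle, lower, eps=1e-12):
    u, U = pdf_cdf(upper)
    s, S = pdf_cdf(middle)
    d, D = pdf_cdf(lower)

    # Modified Pearson's: centre pdf against each outer pdf, then AND.
    pu = np.sum((s - u) ** 2 / (s + u + eps))
    pd = np.sum((s - d) ** 2 / (s + d + eps))
    pearson = min(pu, pd)

    # Kolmogorov: largest CDF difference, centre against each outer.
    kolmogorov = min(np.max(np.abs(S - U)), np.max(np.abs(S - D)))

    # Grey level entropy: signed differences Hu - Hs and Hd - Hs.
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    entropy = min(H(u) - H(s), H(d) - H(s))

    # Multinomial information of the centre histogram under an outer pdf.
    k = np.bincount(middle.ravel(), minlength=256)
    def info(theta):
        return -(log2_factorial(int(k.sum()))
                 - sum(log2_factorial(int(ki)) for ki in k)
                 + np.sum(k * np.log2(np.maximum(theta, eps))))
    multinomial = min(info(u), info(d))

    return pearson, kolmogorov, entropy, multinomial
```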
  • Percentage Fill and Total Area of Region Features (Steps 8, 10 and 13)
  • Constraints on acceptable region properties are used by processor 19 to improve statistical detector performance. The presence of a predominance of pixels in the center window section 66 belonging to contiguous regions suggests the influence of a mine-like object. Typical noise backgrounds will not produce significant contiguous regions as a result of the quantization and region selection (i.e., culling) operations.
  • The percent fill constraint is used to ensure that the regions of interest are in the middle window section 66 as opposed to the outer window sections 62, 64. For each window location:
  • 1. The total number of highlight or shadow region pixels in each of the lower, middle and upper window sections 64, 66, 62 is counted.
  • 2. The number of region pixels in each of the window sections 64, 66, 62 is divided by the total number of pixels in that window section. (The middle window section 66 has 11 x 128 = 1,408 pixels and each of the outer window sections 62, 64 has 12 x 128 = 1,536 pixels.) This gives the "percent fill" for each of the three window sections.
  • 3. The maximum of the lower and upper window sections 62, 64 percent fill is determined: fo = max(f1,fu) where f1 is the lower window section 64 percent fill, fu is the upper window section 62 percent fill and fo is the maximum of the lower and upper window sections percent fill.
  • 4. The percent fill threshold is a function of the middle window section 66 percent fill. The percent fill threshold is determined as follows: foth = 0.0 if fm ≤ 0.1; foth = 0.4 √[(fm − 0.1)/(1.0 − 0.1)] if 0.1 ≤ fm ≤ 0.4515625; foth = 0.25 if fm ≥ 0.4515625, where fm is the middle window section 66 percent fill and foth is the percent fill threshold. (The middle expression equals 0.0 at fm = 0.1 and 0.25 at fm = 0.4515625, so the threshold is continuous.)
  • 5. If the maximum of the outer window sections 62, 64 percent fills is less than the percent fill threshold, then a detection may be made. FIG. 11 shows the relationship between the middle window section 66 percent fill and the maximum of the outer window sections 62, 64 percent fills that must hold in order for a detection to be made.
  • The area sum constraint is used to eliminate very large regions that extend outside of the window 60. For each window location:
  • 1. The area sum threshold is determined; it is a function of the middle window section 66 percent fill: asth = nm x 2.0 if fm ≤ 0.4; asth = nm x [2.0 + 6.0(fm − 0.4)] if 0.4 ≤ fm ≤ 0.65; asth = nm x 3.5 if fm ≥ 0.65, where fm is the middle window section 66 percent fill, nm is the number of pixels in the middle window section 66 (11 x 128 = 1,408) and asth is the area sum threshold. (Both threshold functions are sketched in code after this list.)
  • 2. The regions that are contained (in full or in part) in the middle window section 66 are determined by processor 19. This is done by examining every pixel in the middle window section 66 when the window 60 is placed on the region label map (FIG. 9A). Any region number that appears is added to a list and duplicates are removed. The end result is a list in memory 18 of all regions in the middle window section 66.
  • 3. The areas of the regions in the list are summed to determine the area sum.
  • 4. If the area sum is less than the area sum threshold then a detection may be made.
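  • As an illustration only (not from the patent), the two constraint thresholds and the combined test might be sketched as follows; the square-root form of the middle percent fill branch is a reconstruction chosen to match the stated endpoint values (0.0 at fm = 0.1 and 0.25 at fm = 0.4515625).

```python
def percent_fill_threshold(fm):
    """Outer-section fill threshold foth as a function of the
    middle-section percent fill fm."""
    if fm <= 0.1:
        return 0.0
    if fm >= 0.4515625:
        return 0.25
    return 0.4 * ((fm - 0.1) / (1.0 - 0.1)) ** 0.5

def area_sum_threshold(fm, nm=11 * 128):
    """Area sum threshold asth as a function of fm; nm is the pixel
    count of the middle window section (1,408)."""
    if fm <= 0.4:
        return nm * 2.0
    if fm >= 0.65:
        return nm * 3.5
    return nm * (2.0 + 6.0 * (fm - 0.4))

def region_constraints_met(fm, fu, fl, area_sum):
    """True when a window location passes both region constraints."""
    return (max(fu, fl) < percent_fill_threshold(fm)
            and area_sum < area_sum_threshold(fm))
```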
  • FIG. 11 shows the acceptance region related to the percent fill constraint in cross hatching. "sfill", "ufill" and "dfill" are the proportions of region pixels for the center, upper and lower window sections 66, 62, 64, respectively. Generally, the constraint ensures that the center window section 66 is largely composed of region pixels at the same time that region pixels are limited in the worst of the outer window sections 62, 64. Parameters t7 and t8 have been determined empirically and a quadratic function is used to partition the space. The particular t7 and t8 and the partition function are not essential, but the use of percent fill and the comparison of "sfill", "ufill" and "dfill" are significant.
  • A final region constraint requires that the total number of pixels of regions intersected by the center window section 66 be less than a multiple of the number of pixels in the center window section 66. This multiple is varied between 2.0 and 3.0 as a linear function of "sfill". The area constraint feature guards against the spurious detection of large contiguous regions while the percent fill constraint principally guards against spurious detection of finely fragmented regions.
  • Decision Space Calculation (Steps 14 - 17)
  • There are seven features which are combined by processor 19 in Steps 14 - 17 to produce the mine-present decision; the four advanced statistical features (modified Pearson's, Kolmogorov, grey level entropy, multinomial described in connection with Step 12) together with three region features (percent fill, percent fill threshold and area sum threshold described in connection with Step 13) which are used as constraints. The four statistical features (modified Pearson's, Kolmogorov, grey level entropy, multinomial) are first examined by the processor 19 in a logical comparison of threshold exceedances. The three region-derived features (percent fill, percent fill threshold and area sum threshold) are then used to accept or reject a statistically affirmative decision.
  • Generally, large values of any of the statistical features might be taken to indicate a significant statistical difference between the center and adjacent windows. However, this proves not to be sufficient for the mine detection problem due to fluctuation in the naturally occurring anisotropic background. The SLS 10 processor 19 relies on the combination of the statistical features to evaluate evidence for mines.
  • The logical decision space for the statistical features is best represented in logical equation form as (p > t3 & k > t2) or (p > t4 & k > t1) or (m > t5 & e > 0.0) or (m > t6), where p is the modified Pearson's feature, k is the Kolmogorov feature, e is the grey level entropy feature and m is the multinomial feature. Optimum ti are determined empirically by training on data with identified mine targets. Note that the clauses involving p and k work with units that are probability increments, while the clauses involving m and e work with units of information in bits (because base 2 is used when logarithms are taken for the calculation).
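  • A minimal sketch of this decision logic (illustrative only; the trained threshold values t1 through t6 are not given in the patent):

```python
def mine_like(p, k, e, m, t1, t2, t3, t4, t5, t6):
    """Combine the four statistical features into the preliminary
    mine-present decision; region constraints are applied afterwards."""
    return ((p > t3 and k > t2) or
            (p > t4 and k > t1) or
            (m > t5 and e > 0.0) or
            (m > t6))
```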
  • Constraints on acceptable region properties are used in Steps 16 and 17 to improve statistical detector performance. In Step 16 the presence of a predominance of pixels in the center window section belonging to contiguous regions suggests the influence of a mine-like object. (Typical noise backgrounds will not produce significant contiguous regions through the segmentation and region selection (i.e., culling) operations). The percent fill constraint is used to ensure that the regions of interest are in the middle window section 66 as opposed to the outer window sections 62, 64. In Step 16 the area sum constraint is used to eliminate very large regions that extend outside of the window 60.
  • More particularly, the general principles of the SLS are applied to find evidence of mine-like objects in SLS images by means of specific parameter selections. Most notably, the center and adjacent window section lengths, widths, spacings and increment overlaps are broadly matched to expected mine-like evidence spatial dimensions. Parameterization of the statistical feature thresholds has as much to do with background and sonar characteristics as it does with mine-like object characteristics. It is, therefore, conceivable that the SLS window sections could be adjusted to suit detection of other kinds of objects with known SLS image properties.
  • The SLS is able to work with very marginal image evidence (at or near zero dB SNR) to produce the mine-like object decision.
  • If the region appears to contain a mine-like object, such region is added to a list of mine-like objects (Step 17). The SLS does not falsely detect rocks, vegetation or transition areas and correctly finds the mine-like objects, the evidence for which in the image is marginal. The process continues until data from the last window has been processed.

Claims (7)

  1. A sonar system for mapping the floor of a body of water to identify a submerged foreign object, such mapping being formed from a sequence of echo returns, each one of such echo returns being produced as a range scan (16) in response to a transmitted pulse directed toward the floor in a predetermined direction, a sequence of the sonar returns being received in cross-range directions (14) perpendicular to the predetermined direction, such sonar system (10) storing in a two dimensional array digital signals as pixels (20) each representing digitally the intensity of the echo return at a predetermined range position from the system (10) in the predetermined direction and a predetermined cross-range position from a reference position of the system (10), characterised by
    means (17) for further quantizing the intensity of each one of the pixels to one of three different values, one of the values being indicative of a shadow and one of the values being indicative of a highlight; and
    means (17) for comparing a distribution of the intensities of pixels over a range scan at a cross-range position with the distribution of the intensities of pixels over a range scan at a different cross-range position to identify the existence of an underwater object consistent with said pixel values indicative of shadow or highlight.
  2. A sonar system for mapping the floor of a body of water to identify a submerged foreign object, such mapping being formed from a sequence of echo returns, each one of the echo returns being produced as a range scan (16) in response to a transmitted pulse directed toward the floor in a predetermined direction, a sequence of the sonar returns being received in cross-range directions (14) perpendicular to the predetermined direction, such sonar system (10) storing in a two dimensional array signals as pixels (20) each representative of the intensity of the echo return at a predetermined range position from the system (10) in the predetermined direction and a predetermined cross-range position from a reference position of the system (10), and comprising:
       means (17) for quantizing the intensity of each one of the pixels into one of a plurality of levels, one of the levels being indicative of a shadow and one of the levels being indicative of a highlight, said quantizing means comprising:
    means for smoothing the normalized data to remove speckle noise comprising a five-point median filter to determine a new intensity for each pixel by determining a median intensity value of the five pixels about the pixel of interest and storing that median intensity value in a corresponding pixel bin location;
    means for setting a shadow threshold level and a highlight threshold level and for tagging pixels having intensities below the shadow threshold level as shadows, pixels having intensities above the highlight threshold level as highlights, and pixels having intensities between the two thresholds as background and for filtering pixel by pixel the resulting three value image into regions;
    means for separating highlight regions from shadow regions by at least one pixel, assigning an integer indicator to each pixel and assigning region numbers to each resulting region in a region table;
    means for calculating an area for each of the regions and removing from the region table regions having areas too small or too large to be underwater objects of a predetermined type; and the system further comprising means (17) for comparing a distribution of the intensities of pixels over a range scan at a cross- range position with the distribution of the intensities of pixels over a range scan at a different cross-range position to identify the existence of an underwater object of the predetermined type.
  3. A sonar system according to claim 1 or 2, characterised in that the comparing means (17) is adapted to compare a distribution of the intensities of pixels over range scans at a first set of adjacent cross-range positions with the distribution of the intensities of pixels over range scans at a second, different set of adjacent cross-range positions to identify the existence of an underwater object of the predetermined type.
  4. A system according to claim 3, characterised in that the comparing means (17) includes means for comparing a distribution of the intensities of pixels over range scans at a third set of adjacent cross-range positions with the distribution of the intensities of pixels over range scans at the first set of adjacent cross-range positions to identify the existence of an underwater object of the predetermined type.
  5. A system according to claim 4, characterised in that the first set of adjacent cross-range positions is disposed at cross-range positions between the cross-range positions of the second and third sets of cross-range positions.
  6. A sonar system according to claim 5, characterised in that the comparing means is such as to compare a probability distribution of pixel intensities over range scans at the first set of positions with probability distributions of pixel intensities over range scans at the second and third sets of positions to identify the existence of an underwater object of the predetermined type.
  7. A sonar system according to any preceding claim, characterised in that the comparing means is such that the said distributions are compared on the basis of statistical features including at least one of the following: a Modified Pearson's Detection Feature; a Kolmogorov Statistic Feature; a Grey Level Entropy Detection Feature; and a Multinomial Statistic Feature.
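As an illustration of the quantizing means recited in claim 2, the following is a minimal sketch of the five-point median filter and three-level thresholding applied to one range scan. The -1/0/+1 integer encoding and the clamping of the filter window at the scan edges are assumptions made for this sketch, not details taken from the claims.

```python
def quantize_to_three_levels(scan, shadow_thr, highlight_thr):
    """Median-filter a normalized range scan and tag each pixel.

    scan          -- list of normalized pixel intensities along one range scan
    shadow_thr    -- intensities below this are tagged as shadow (-1)
    highlight_thr -- intensities above this are tagged as highlight (+1)
    Intensities between the two thresholds are tagged as background (0).
    """
    n = len(scan)
    smoothed = []
    for i in range(n):
        # five-point median about the pixel of interest (window clamped at edges)
        lo, hi = max(0, i - 2), min(n, i + 3)
        window = sorted(scan[lo:hi])
        smoothed.append(window[len(window) // 2])
    return [(-1 if v < shadow_thr else 1 if v > highlight_thr else 0)
            for v in smoothed]
```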
EP94303002A 1993-04-27 1994-04-26 Sonar systems Expired - Lifetime EP0622641B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US54771 1993-04-27
US08/054,771 US5321667A (en) 1993-04-27 1993-04-27 Sonar systems

Publications (3)

Publication Number Publication Date
EP0622641A2 (en) 1994-11-02
EP0622641A3 (en) 1996-05-08
EP0622641B1 (en) 2001-11-21

Family

ID=21993429

Family Applications (1)

Application Number Title Priority Date Filing Date
EP94303002A Expired - Lifetime EP0622641B1 (en) 1993-04-27 1994-04-26 Sonar systems

Country Status (4)

Country Link
US (2) US5321667A (en)
EP (1) EP0622641B1 (en)
JP (1) JP3573783B2 (en)
DE (1) DE69429128T2 (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5442358A (en) * 1991-08-16 1995-08-15 Kaman Aerospace Corporation Imaging lidar transmitter downlink for command guidance of underwater vehicle
US5321667A (en) * 1993-04-27 1994-06-14 Raytheon Company Sonar systems
JP3490490B2 (en) * 1994-01-28 2004-01-26 株式会社東芝 Pattern image processing apparatus and image processing method
US6002644A (en) * 1998-04-20 1999-12-14 Wilk; Peter J. Imaging system and associated method for surveying underwater objects
US6483778B1 (en) * 1999-04-02 2002-11-19 Raytheon Company Systems and methods for passively compensating transducers
US6678403B1 (en) 2000-09-13 2004-01-13 Peter J. Wilk Method and apparatus for investigating integrity of structural member
AUPR560001A0 (en) * 2001-06-08 2001-07-12 Hostetler, Paul Blair Mapping method and apparatus
US7242819B2 (en) * 2002-12-13 2007-07-10 Trident Microsystems, Inc. Method and system for advanced edge-adaptive interpolation for interlace-to-progressive conversion
US7417666B2 (en) * 2003-04-01 2008-08-26 University Of South Florida 3-D imaging system
US7796809B1 (en) 2003-12-15 2010-09-14 University Of South Florida 3-D imaging system with pre-test module
US7221621B2 (en) * 2004-04-06 2007-05-22 College Of William & Mary System and method for identification and quantification of sonar targets in a liquid medium
US7315485B1 (en) * 2005-12-20 2008-01-01 United States Of America As Represented By The Secretary Of The Navy System and method for target classification and clutter rejection in low-resolution imagery
JP5369412B2 (en) * 2007-09-13 2013-12-18 日本電気株式会社 Active sonar device and dereverberation method using active sonar device
DE102009024342B9 (en) * 2009-06-09 2012-01-05 Atlas Elektronik Gmbh Method for detecting anomalies on an underwater object
US8300499B2 (en) 2009-07-14 2012-10-30 Navico, Inc. Linear and circular downscan imaging sonar
US8305840B2 (en) * 2009-07-14 2012-11-06 Navico, Inc. Downscan imaging sonar
FR2951830B1 (en) * 2009-10-23 2011-12-23 Thales Sa METHOD OF SIMULTANEOUS LOCATION AND MAPPING BY ELASTICAL NONLINEAR FILTRATION
JP5593204B2 (en) * 2010-11-01 2014-09-17 株式会社日立製作所 Underwater acoustic imaging device
US8743657B1 (en) * 2011-04-22 2014-06-03 The United States Of America As Represented By The Secretary Of The Navy Resolution analysis using vector components of a scattered acoustic intensity field
US9142206B2 (en) 2011-07-14 2015-09-22 Navico Holding As System for interchangeable mounting options for a sonar transducer
US9182486B2 (en) 2011-12-07 2015-11-10 Navico Holding As Sonar rendering systems and associated methods
US9268020B2 (en) 2012-02-10 2016-02-23 Navico Holding As Sonar assembly for reduced interference
US9354312B2 (en) 2012-07-06 2016-05-31 Navico Holding As Sonar system using frequency bursts
GB2505966A (en) 2012-09-18 2014-03-19 Seebyte Ltd Target recognition in sonar imaging using test objects
AU2013248207A1 (en) * 2012-11-15 2014-05-29 Thomson Licensing Method for superpixel life cycle management
US9223310B2 (en) * 2013-02-08 2015-12-29 The Boeing Company Ship course obstruction warning transport
US9405959B2 (en) * 2013-03-11 2016-08-02 The United States Of America, As Represented By The Secretary Of The Navy System and method for classification of objects from 3D reconstruction
JP2014041135A (en) * 2013-09-06 2014-03-06 Nec Corp Active sonar device
RU2602759C1 (en) * 2015-09-07 2016-11-20 Акционерное Общество "Концерн "Океанприбор" Method of object in aqueous medium automatic detection and classification
US10151829B2 (en) 2016-02-23 2018-12-11 Navico Holding As Systems and associated methods for producing sonar image overlay
US11367425B2 (en) 2017-09-21 2022-06-21 Navico Holding As Sonar transducer with multiple mounting options
US11353566B2 (en) * 2018-04-26 2022-06-07 Navico Holding As Sonar transducer having a gyroscope
US11221403B2 (en) 2018-05-21 2022-01-11 Navico Holding As Impact detection devices and methods
US11357161B2 (en) 2018-12-07 2022-06-14 Harvest International, Inc. Gauge wheel arm with split end and threaded bore
JP7269784B2 (en) * 2019-04-18 2023-05-09 古野電気株式会社 Underwater detection device, underwater detection method and program
CN110954904B (en) * 2019-12-04 2022-08-30 宁波羽声海洋科技有限公司 Single-shot orthogonal time-sharing transmitting synthetic aperture sonar and imaging method and equipment
KR102604114B1 (en) * 2022-01-17 2023-11-22 전남대학교산학협력단 Method for Detecting Fish in Sonar Video using Deep Learning

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3943482A (en) * 1967-10-03 1976-03-09 The United States Of America As Represented By The Secretary Of The Navy Marine mine detector
FR2241078B1 (en) * 1973-08-16 1977-08-12 France Etat
US3967234A (en) * 1974-03-06 1976-06-29 Westinghouse Electric Corporation Depth-of-field arc-transducer and sonar system
US4422166A (en) * 1981-08-17 1983-12-20 Klein Associates, Inc. Undersea sonar scanner correlated with auxiliary sensor trace
FR2631455B1 (en) * 1988-05-10 1991-01-25 Thomson Csf PROCESS FOR THE SONAR CLASSIFICATION OF UNDERWATER OBJECTS, PARTICULARLY FOR MINES IN ORIN
US5018214A (en) * 1988-06-23 1991-05-21 Lsi Logic Corporation Method and apparatus for identifying discrete objects in a visual field
US5025423A (en) * 1989-12-21 1991-06-18 At&T Bell Laboratories Enhanced bottom sonar system
US5181254A (en) * 1990-12-14 1993-01-19 Westinghouse Electric Corp. Method for automatically identifying targets in sonar images
US5155706A (en) * 1991-10-10 1992-10-13 Westinghouse Electric Corp. Automatic feature detection and side scan sonar overlap navigation via sonar image matching
US5321667A (en) * 1993-04-27 1994-06-14 Raytheon Company Sonar systems

Also Published As

Publication number Publication date
US5321667A (en) 1994-06-14
EP0622641A3 (en) 1996-05-08
JP3573783B2 (en) 2004-10-06
DE69429128T2 (en) 2002-07-25
JPH06347547A (en) 1994-12-22
EP0622641A2 (en) 1994-11-02
US5438552A (en) 1995-08-01
DE69429128D1 (en) 2002-01-03

Similar Documents

Publication Publication Date Title
EP0622641B1 (en) Sonar systems
Crisp The state-of-the-art in ship detection in synthetic aperture radar imagery
US7916933B2 (en) Automatic target recognition system for detection and classification of objects in water
US6868041B2 (en) Compensation of sonar image data primarily for seabed classification
Snellen et al. Performance of multibeam echosounder backscatter-based classification for monitoring sediment distributions using multitemporal large-scale ocean data sets
US9235896B2 (en) Sonar imaging
LeFeuvre et al. Acoustic species identification in the Northwest Atlantic using digital image processing
Chapple Automated detection and classification in high-resolution sonar imagery for autonomous underwater vehicle operations
Atallah et al. Wavelet analysis of bathymetric sidescan sonar data for the classification of seafloor sediments in Hopvågen Bay-Norway
CN116562472B (en) Method and system for identifying and predicting target species of middle-upper marine organisms
CN116243289A (en) Unmanned ship underwater target intelligent identification method based on imaging sonar
CN115223044A (en) End-to-end three-dimensional ground penetrating radar target identification method and system based on deep learning
WO2003021288A2 (en) Surface texture determination method and apparatus
US6763303B2 (en) System for classifying seafloor roughness
Atallah et al. Automatic seabed classification by the analysis of sidescan sonar and bathymetric imagery
Maussang et al. Fusion of local statistical parameters for buried underwater mine detection in sonar imaging
Dura et al. Image processing techniques for the detection and classification of man made objects in side-scan sonar images
Müller et al. Seabed classification of the South Tasman Rise from SIMRAD EM12 backscatter data using artificial neural networks
WO2003093868A1 (en) Compensation of sonar image data primarily for seabed classification
Hammond A Bayesian interpretation of target strength data from the Grand Banks
Rangole et al. Sediment Classification using Spectral Features of Side-Scan Sonar Images
Siccardi et al. Seabed vegetation analysis by a 2 MHz sonar
Coiras et al. Reliable seabed characterization for MCM operations
Tian Automatic target detection and analyses in side-scan sonar imagery
CN116403100A (en) Sonar image small target detection method based on matrix decomposition

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) EPC to a published international application that has entered the European phase. Free format text: ORIGINAL CODE: 0009012
AK Designated contracting states. Kind code of ref document: A2. Designated state(s): DE FR GB
PUAL Search report despatched. Free format text: ORIGINAL CODE: 0009013
AK Designated contracting states. Kind code of ref document: A3. Designated state(s): DE FR GB
17P Request for examination filed. Effective date: 19960927
17Q First examination report despatched. Effective date: 19990323
GRAG Despatch of communication of intention to grant. Free format text: ORIGINAL CODE: EPIDOS AGRA
GRAG Despatch of communication of intention to grant. Free format text: ORIGINAL CODE: EPIDOS AGRA
GRAH Despatch of communication of intention to grant a patent. Free format text: ORIGINAL CODE: EPIDOS IGRA
GRAH Despatch of communication of intention to grant a patent. Free format text: ORIGINAL CODE: EPIDOS IGRA
GRAA (expected) grant. Free format text: ORIGINAL CODE: 0009210
AK Designated contracting states. Kind code of ref document: B1. Designated state(s): DE FR GB
REG Reference to a national code. Ref country code: GB. Ref legal event code: IF02
REF Corresponds to: Ref document number: 69429128. Country of ref document: DE. Date of ref document: 20020103
ET Fr: translation filed
PLBE No opposition filed within time limit. Free format text: ORIGINAL CODE: 0009261
STAA Information on the status of an EP patent application or granted EP patent. Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT
26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to EPO]. Ref country code: DE. Payment date: 20130508. Year of fee payment: 20. Ref country code: GB. Payment date: 20130424. Year of fee payment: 20
PGFP Annual fee paid to national office [announced via postgrant information from national office to EPO]. Ref country code: FR. Payment date: 20130625. Year of fee payment: 20
REG Reference to a national code. Ref country code: DE. Ref legal event code: R071. Ref document number: 69429128. Country of ref document: DE
REG Reference to a national code. Ref country code: GB. Ref legal event code: PE20. Expiry date: 20140425
PG25 Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Ref country code: GB. Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION. Effective date: 20140425
PG25 Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Ref country code: DE. Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION. Effective date: 20140429