KR101696086B1 - Method and apparatus for extracting object region from sonar image - Google Patents

Method and apparatus for extracting object region from sonar image

Info

Publication number
KR101696086B1
Authority
KR
South Korea
Prior art keywords
reference value
value
background
image
sonar
Prior art date
2015-08-11
Application number
KR1020150113497A
Other languages
Korean (ko)
Inventor
유선철
조한길
조현우
구정회
표주현
Original Assignee
포항공과대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2015-08-11
Publication date
2017-01-13
Application filed by 포항공과대학교 산학협력단
Priority to KR1020150113497A
Application granted
Publication of KR101696086B1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/88 Sonar systems specially adapted for specific applications
    • G01S 15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S 15/8906 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
    • G01S 15/8977 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques using special techniques for image reconstruction, e.g. FFT, geometrical transformations, spatial deconvolution, time deconvolution
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S 7/52001 Auxiliary means for detecting or identifying sonar signals or the like, e.g. sonar jamming signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S 7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S 7/52046 Techniques for image enhancement involving transmitter or receiver

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a method and an apparatus for extracting an object part from a sonar image. According to an embodiment of the present invention, the method for extracting an object part from a sonar image comprises the steps of: obtaining a background learning image; calculating an average value and a standard deviation value for each pixel of the obtained background learning image, and calculating two reference values using the average value and the standard deviation value; classifying pixels of an obtained sonar image according to the two reference values; and extracting an object region from the classified sonar image. The method can extract only the object and shadow parts, separated from the background, in the sonar image.

Description

BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to a method and an apparatus for extracting an object part from a sonar image.

Recently, the use of imaging sonar to explore the underwater environment has been increasing. Sonar is a device that detects the direction and distance of an underwater object using ultrasonic waves; it is also called an acoustic detection device or a sound detector.

A sonar image obtained by such imaging sonar is monochrome (gray-scale), and the boundary between the object part and the background part is not clear. Therefore, to analyze a sonar image, the object part must be classified separately. If the object part can be classified separately in the sonar image, this can be applied in various ways, such as object recognition, object search, and navigation technology.

However, the characteristics of sonar images make it difficult to extract the object part. In addition, when thresholding techniques used for conventional optical images are applied to extract objects from a sonar image, not only the object part but also the background (the underwater floor) is detected. Therefore, there is a demand for a method capable of detecting only the object portion while excluding the background portion.

An embodiment of the present invention provides a method and an apparatus for extracting an object part from a sonar image, capable of extracting only the object and shadow parts, separated from the background, in the sonar image.

According to an aspect of the present invention, there is provided a method comprising the steps of: obtaining a background learning image; calculating an average value and a standard deviation value for each pixel of the obtained background learning image, and calculating two reference values using the average value and the standard deviation value; classifying pixels of an obtained sonar image according to the two reference values; and extracting an object region from the classified sonar image.

In this case, the calculating step may calculate a difference between the average value and the standard deviation value as a first reference value, and calculate a sum of the average value and the standard deviation value as a second reference value.

In this case, the classifying step may classify a pixel as a shadow if the corresponding pixel value is smaller than the first reference value, as a background if it is larger than the first reference value and smaller than the second reference value, and as an object if it is larger than the second reference value.

According to another aspect of the present invention, there is provided an object part extraction apparatus including: an image obtaining unit that obtains a background learning image; a background learning processing unit that calculates an average value and a standard deviation value for each pixel of the background learning image and calculates two reference values using the average value and the standard deviation value; an object extracting unit that classifies pixels of an obtained sonar image according to the two reference values and extracts an object region from the classified sonar image; and a storage unit that stores the two calculated reference values.

In this case, the background learning processing unit may calculate the difference between the average value and the standard deviation value as a first reference value, and the sum of the average value and the standard deviation value as a second reference value.

At this time, the object extracting unit may classify a pixel as a shadow if the pixel value is smaller than the first reference value, as a background if it is larger than the first reference value and smaller than the second reference value, and as an object if it is larger than the second reference value.

The method and apparatus for extracting an object part in a sonar image according to an embodiment of the present invention can easily separate an object part from a background using classification of individual pixels after background learning.

FIG. 1 is a flowchart of an object part extraction method in a sonar image according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining a learning step of an object part extraction method in a sonar image according to an embodiment of the present invention.
FIG. 3 is a diagram showing a result of a conventional object part extraction method.
FIGS. 4A and 4B are views for explaining an object part extraction method in a sonar image according to an embodiment of the present invention.
FIG. 5 is a block diagram illustrating a detailed configuration of an object part extraction device in a sonar image according to an embodiment of the present invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present invention pertains can readily practice them. The present invention may be embodied in many different forms and is not limited to the embodiments described herein. To clearly illustrate the present invention, parts not related to the description are omitted, and the same or similar components are denoted by the same reference numerals throughout the specification.

 FIG. 1 is a flow chart of a method of extracting an object part in a sonar image according to an embodiment of the present invention. FIG. 2 is a diagram for explaining a learning step of an object part extraction method in a sonar image according to an embodiment of the present invention, FIG. 3 is a view showing a result of a conventional object part extraction method, and FIGS. 4A and 4B are views for explaining an object part extraction method in a sonar image according to an embodiment of the present invention.

Hereinafter, an object part extraction method in a sonar image according to an embodiment of the present invention will be described in more detail with reference to the drawings.

Referring to FIG. 1, a method 100 for extracting an object part in a sonar image according to an embodiment of the present invention includes calculating reference values through background learning (steps S101 and S102), and classifying pixels according to the calculated reference values and extracting an object part (steps S103 to S105).

In the object part extraction method according to an embodiment of the present invention, two segmentation reference values are determined through background learning and the individual pixels are classified using the determined two reference values, thereby separating the object part from the background.

In general, segmentation refers to classifying image pixels: a threshold value is determined, and pixels are assigned different classification values depending on which side of the threshold they fall on.

However, in the conventional segmentation method, one reference value is applied commonly to all pixels, so the background is extracted together with the object. The present invention therefore applies a pixel-specific reference value to each pixel of the sonar image.

More specifically, in the method 100 for extracting an object part in a sonar image according to an exemplary embodiment of the present invention, a background learning image may first be acquired using an imaging sonar (step S101). Here, the learning image refers to a sonar image of the floor that the imaging sonar is currently photographing; in particular, an image of an empty floor without any object.

At this time, the imaging sonar can maintain a constant altitude above the floor and a constant viewing angle toward the bottom surface during underwater photographing for background learning. If a constant altitude and angle are maintained, the floor image used for learning does not change significantly even as the imaging sonar moves.

That is, as shown in FIG. 2A, the portion corresponding to the bottom appears at almost the same position in the image, in an elliptical shape. Therefore, it is possible to acquire a plurality of learning images in which no specific object exists and only the bottom portion appears, with almost no change between images.

Next, reference values for distinguishing the shadow, background, and object regions can be calculated for each pixel from the plurality of acquired images (step S102). Here, the reference value for each pixel is determined from the floor images; in other words, the floor image serves as the reference image for background learning.

That is, given an image showing an object, only the object part can be obtained by subtracting the learned background image values from the values of the region recognized as the object; for example, by subtracting the image of FIG. 2A from the image of FIG. 2B.

More specifically, suppose there are K learning images, each of which has N×N pixels, and let $I_k(i,j)$ denote the pixel value at the (i, j) position of the k-th learning image. The reference values corresponding to pixel position (i, j) are obtained by the following equations:

$$\mu_{i,j} = \frac{1}{K}\sum_{k=1}^{K} I_k(i,j) \qquad \text{(Equation 1)}$$

$$\sigma_{i,j} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}\bigl(I_k(i,j) - \mu_{i,j}\bigr)^2} \qquad \text{(Equation 2)}$$

$$TH1_{i,j} = \mu_{i,j} - a\,\sigma_{i,j} \qquad \text{(Equation 3)}$$

$$TH2_{i,j} = \mu_{i,j} + a\,\sigma_{i,j} \qquad \text{(Equation 4)}$$

Here, a is a user-specified constant, used as a parameter value, that adjusts the fineness of the segmentation. Equation 1 is the average of the pixel values at an arbitrary pixel position (i, j) over the K learning images, Equation 2 is the standard deviation of those pixel values, and Equations 3 and 4 calculate the two reference values TH1 and TH2 using the average and the standard deviation.

The reference values TH1 and TH2 are determined using the average and standard deviation of the pixel values corresponding to the pixel positions (i, j).

The reason for using the average and the standard deviation is that statistical values over multiple background images, rather than a single background image, are used, which reduces the variation in the segmentation result.

As a result, if classification ranges are determined using the average and standard deviation calculated for each pixel over the plurality of background learning images, the object and background portions can be separated from each other.

In the above equation, TH1 is a reference value for distinguishing a background from a shadow, and TH2 is a reference value for distinguishing a background from an object.

TH1 and TH2 can be calculated for every pixel (i, j) through these equations, and can be stored as matrices of the same size as the target sonar image.
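To make this step concrete, the following is a minimal NumPy sketch of the background-learning computation (Equations 1 to 4). The function name, the array layout, and the default value of the constant a are illustrative assumptions, not part of the patent:

```python
import numpy as np

def learn_background_thresholds(learning_images, a=1.0):
    """Per-pixel background learning (Equations 1 to 4): compute the average
    and standard deviation over K learning images, then the reference-value
    matrices TH1 and TH2."""
    stack = np.stack(learning_images).astype(np.float64)  # shape (K, N, N)
    mean = stack.mean(axis=0)   # Equation 1: per-pixel average
    std = stack.std(axis=0)     # Equation 2: per-pixel standard deviation
    th1 = mean - a * std        # Equation 3: shadow/background boundary
    th2 = mean + a * std        # Equation 4: background/object boundary
    return th1, th2
```

The returned th1 and th2 arrays correspond to the TH1 and TH2 matrices stored at the size of the target sonar image.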

In this way, segmentation using the two reference values separates the signal reflected from the floor from the signal reflected from the object in the sonar image, so that the object part can be extracted and processed separately.

Next, the region of each pixel can be classified according to the calculated reference values (step S103). Specifically, the reference values can be applied to all pixels of the image obtained by the imaging sonar to classify them into a shadow region, a background region, and an object region.

For example, using the reference values TH1 and TH2 at each pixel of the acquired sonar image, a pixel having a value smaller than TH1 can be classified into the shadow region, a pixel having a value larger than TH1 and smaller than TH2 into the background region, and a pixel having a value larger than TH2 into the object region.

As shown in FIG. 4A, a pixel having a value smaller than the reference value TH1 can be output in black, a pixel having a value larger than TH1 and smaller than TH2 in gray, and a pixel having a value larger than TH2 in white.
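This classification rule maps directly onto element-wise comparisons with the stored TH1 and TH2 matrices. The sketch below illustrates step S103; the gray-level output values 0, 128, and 255 are assumptions chosen to match the black/gray/white rendering described above:

```python
import numpy as np

SHADOW, BACKGROUND, OBJECT = 0, 128, 255  # black, gray, white

def classify_pixels(sonar_image, th1, th2):
    """Classify each pixel into shadow, background, or object using the
    per-pixel reference-value matrices TH1 and TH2 (step S103)."""
    img = sonar_image.astype(np.float64)
    out = np.full(img.shape, BACKGROUND, dtype=np.uint8)  # between TH1 and TH2: background
    out[img < th1] = SHADOW   # below TH1: shadow region (black)
    out[img > th2] = OBJECT   # above TH2: object region (white)
    return out
```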

Next, the classified image can be post-processed (step S104). For example, when pixels are classified according to the reference values, speckles or empty spaces may appear in the image as shown in FIG. 4B, so processing to sharpen the shapes in the image can be performed.
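The patent does not name a specific post-processing technique; one common choice for removing speckles and filling small empty spaces is morphological opening followed by closing, sketched below with SciPy. The 3×3 structuring element and the function name are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def clean_object_mask(classified, object_value=255, size=3):
    """Remove speckles and fill small empty spaces in the object region
    with morphological opening followed by closing (step S104)."""
    mask = classified == object_value
    footprint = np.ones((size, size), dtype=bool)
    mask = ndimage.binary_opening(mask, structure=footprint)  # drop isolated speckles
    mask = ndimage.binary_closing(mask, structure=footprint)  # fill small holes
    return mask
```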

Next, the object part can be extracted (step S105). Specifically, an image obtained by classifying the pixels of the acquired sonar image according to the reference values contains only the object and shadow portions, with the background portion removed, as shown in FIG. 4A. The object part can be extracted from this sonar image.

As a result, conventional segmentation detects the background region together with the object region, as shown in FIG. 3, making it difficult to detect the object region accurately; in contrast, the segmentation of the present invention can separately extract the shadow region, the background region, and the object region from the sonar image, as shown in FIG. 4A.

According to this method, the object part extraction method in a sonar image according to an embodiment of the present invention can easily separate the object part from the background using the classification of individual pixels after background learning.

Hereinafter, an apparatus for extracting object parts in a sonar image according to an embodiment of the present invention will be described in detail with reference to FIG.

5 is a block diagram illustrating a detailed configuration of an object portion extraction device in a sonar image according to an embodiment of the present invention.

The object part extracting apparatus 500 may include an image obtaining unit 510, an image processing unit 520, and a storage unit 530.

The image obtaining unit 510 may obtain a sonar image by transmitting an ultrasonic signal and receiving the ultrasonic signal reflected from an object or the floor. At this time, the imaging sonar equipped with the object part extraction apparatus 500 can move while maintaining a constant altitude and angle.

The image processing unit 520 includes a background learning processing unit 521 and an object extraction unit 522.

The background learning processing unit 521 may calculate a reference value for distinguishing shadow, background, and object regions for each pixel from a plurality of background learning images. Here, the reference value can be calculated based on the average value and the standard deviation value for each pixel of the background learning image.

That is, the background learning processing unit 521 can calculate the reference value TH1 for distinguishing the background from the shadow, and the reference value TH2 for distinguishing the background from the object. At this time, for example, the reference value TH1 can be calculated as the difference between the average value and the standard deviation value, and the reference value TH2 as their sum. Optionally, the standard deviation value can be multiplied by a user-specified constant that adjusts the fineness of the segmentation.

The object extracting unit 522 can classify the region of each pixel according to the calculated reference value, and extract the object region from the classified sonar image.

The object extracting unit 522 can classify a pixel as a shadow if the corresponding pixel value is smaller than the first reference value TH1, as a background if the pixel value is larger than TH1 and smaller than the second reference value TH2, and as an object if the pixel value is larger than TH2.

Accordingly, using the reference values TH1 and TH2 at each pixel of the acquired sonar image, the object extracting unit 522 can classify pixels having a value smaller than TH1 into the shadow region (black), pixels having a value larger than TH1 and smaller than TH2 into the background region (gray), and pixels having a value larger than TH2 into the object region (white).

The object extracting unit 522 may also perform post-processing to remove the speckles or empty spaces included in the pixel-classified image, thereby sharpening the shapes in the image.

The storage unit 530 may store information extracted from the learned background. For example, it can store the average and standard deviation values extracted by the background learning processing unit 521. In addition, the storage unit 530 may store the reference value TH1 for distinguishing the shadow region from the background region and the reference value TH2 for distinguishing the background region from the object region, both determined from the average and standard deviation values.
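As a rough structural sketch, the units of FIG. 5 could be tied together as follows, reusing the functions from the earlier sketches. The class name and method names are illustrative assumptions, and image acquisition (unit 510) is omitted because it depends on the sonar hardware:

```python
class ObjectPartExtractor:
    """Illustrative skeleton mirroring FIG. 5: background learning processing
    unit 521, object extracting unit 522, and storage unit 530 (here, plain
    attributes holding the TH1 and TH2 matrices)."""

    def __init__(self, a=1.0):
        self.a = a       # user-specified segmentation-detail constant
        self.th1 = None  # storage unit 530: first reference-value matrix
        self.th2 = None  # storage unit 530: second reference-value matrix

    def learn_background(self, learning_images):
        # Background learning processing unit 521 (steps S101 and S102).
        self.th1, self.th2 = learn_background_thresholds(learning_images, self.a)

    def extract(self, sonar_image):
        # Object extracting unit 522 (steps S103 to S105): classify pixels,
        # then post-process to obtain the object mask.
        classified = classify_pixels(sonar_image, self.th1, self.th2)
        return clean_object_mask(classified)
```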

With this arrangement, the object portion extraction device in the sonar image according to the embodiment of the present invention can easily separate the object portion from the background by using the classification of individual pixels after the background learning.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

500: object extraction device
510: Image obtaining unit 520: Image processing unit
521: background learning processing unit 522: object extracting unit
530: Storage unit

Claims (6)

Acquiring a plurality of background learning images which are images of an empty floor without an object using an imaging sonar;
Calculating an average value and a standard deviation value for each pixel over the plurality of acquired background learning images, and calculating, using the average value and the standard deviation value, a first reference value for distinguishing a background and a shadow from each other and a second reference value for distinguishing a background and an object from each other;
Classifying pixels of an obtained sonar image including an object according to the first reference value and the second reference value; and
Extracting an object region by removing a background region from the classified sonar image.
The method according to claim 1,
Wherein the calculating step calculates the difference between the average value and the standard deviation value as a first reference value and the sum of the average value and the standard deviation value as a second reference value.
3. The method of claim 2,
Wherein the classifying step classifies a pixel as a shadow if the pixel value is smaller than the first reference value, as a background if the pixel value is larger than the first reference value and smaller than the second reference value, and as an object if the pixel value is larger than the second reference value.
An image obtaining unit that obtains a plurality of background learning images which are images of an empty floor without an object using an imaging sonar;
A background learning processing unit for calculating an average value and a standard deviation value for each pixel over the plurality of background learning images, and for calculating, using the average value and the standard deviation value, a first reference value for distinguishing a background and a shadow from each other and a second reference value for distinguishing a background and an object from each other;
An object extracting unit for classifying pixels of a sonar image obtained according to the first reference value and the second reference value and extracting an object region by removing a background region from the classified sonar image; and
A storage unit for storing the first reference value and the second reference value.
5. The apparatus of claim 4,
Wherein the background learning processing unit calculates the difference between the average value and the standard deviation value as the first reference value and the sum of the average value and the standard deviation value as the second reference value.
6. The apparatus of claim 4,
Wherein the object extracting unit classifies a pixel as a shadow if the corresponding pixel value is smaller than the first reference value, as a background if the pixel value is larger than the first reference value and smaller than the second reference value, and as an object if the pixel value is larger than the second reference value.
KR1020150113497A 2015-08-11 2015-08-11 Method and apparatus for extracting object region from sonar image KR101696086B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150113497A KR101696086B1 (en) 2015-08-11 2015-08-11 Method and apparatus for extracting object region from sonar image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150113497A KR101696086B1 (en) 2015-08-11 2015-08-11 Method and apparatus for extracting object region from sonar image

Publications (1)

Publication Number Publication Date
KR101696086B1 true KR101696086B1 (en) 2017-01-13

Family

ID=57835468

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150113497A KR101696086B1 (en) 2015-08-11 2015-08-11 Method and apparatus for extracting object region from sonar image

Country Status (1)

Country Link
KR (1) KR101696086B1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10206525A (en) * 1997-01-24 1998-08-07 Hitachi Ltd Target-detecting device
KR20010014492A (en) * 1999-02-19 2001-02-26 더 존 피. 로바츠 리서치 인스티튜트 Automated segmentation method for 3-dimensional ultrasound
JP2001083236A (en) * 1999-09-10 2001-03-30 Hitachi Ltd Object display method
KR20150089835A (en) * 2014-01-28 2015-08-05 삼성메디슨 주식회사 Method and ultrasound apparatus for displaying a ultrasound image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101952291B1 (en) 2017-09-11 2019-05-09 포항공과대학교 산학협력단 Object appearance detection method using multi-beam sonar camera
KR20190092869A (en) * 2018-01-31 2019-08-08 한양대학교 에리카산학협력단 Dangerous substance detecting system and method and computer program based visual information
KR102308974B1 (en) * 2018-01-31 2021-10-05 한양대학교 에리카산학협력단 Dangerous substance detecting system and method and computer program based visual information
KR102186733B1 (en) 2019-09-27 2020-12-04 포항공과대학교 산학협력단 3D modeling method for undersea topography

Similar Documents

Publication Publication Date Title
US11360571B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
JP6125188B2 (en) Video processing method and apparatus
CN110378945B (en) Depth map processing method and device and electronic equipment
US9704017B2 (en) Image processing device, program, image processing method, computer-readable medium, and image processing system
US10216979B2 (en) Image processing apparatus, image processing method, and storage medium to detect parts of an object
AU2011301774B2 (en) A method for enhancing depth maps
JP6482195B2 (en) Image recognition apparatus, image recognition method, and program
US20150139533A1 (en) Method, electronic device and medium for adjusting depth values
US10748294B2 (en) Method, system, and computer-readable recording medium for image object tracking
US20130004082A1 (en) Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
EP3168810A1 (en) Image generating method and apparatus
JP2008257713A (en) Correcting device and method for perspective transformed document image
CN101383005B (en) Method for separating passenger target image and background by auxiliary regular veins
KR20160037643A (en) Method and Apparatus for Setting Candidate Area of Object for Recognizing Object
KR20160044316A (en) Device and method for tracking people based depth information
KR101696086B1 (en) Method and apparatus for extracting object region from sonar image
JP2007272292A (en) Shadow recognition method and shadow boundary extraction method
JP2015148895A (en) object number distribution estimation method
KR101557271B1 (en) Method for detecting a circle-type object and approximating a substitute circle based on Image processing
KR101696089B1 (en) Method and apparatus of finding object with imaging sonar
WO2017032096A1 (en) Method for predicting stereoscopic depth and apparatus thereof
JP2010113562A (en) Apparatus, method and program for detecting and tracking object
GB2522259A (en) A method of object orientation detection
CN110441315B (en) Electronic component testing apparatus and method
KR102660089B1 (en) Method and apparatus for estimating depth of object, and mobile robot using the same

Legal Events

Date Code Title Description
GRNT Written decision to grant