KR20080079443A - Method and apparatus for extracting object from image - Google Patents

Method and apparatus for extracting object from image

Info

Publication number
KR20080079443A
KR20080079443A
Authority
KR
South Korea
Prior art keywords
array
image
color component
texture information
values
Prior art date
Application number
KR1020070019591A
Other languages
Korean (ko)
Inventor
김종성
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to KR1020070019591A priority Critical patent/KR20080079443A/en
Publication of KR20080079443A publication Critical patent/KR20080079443A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/40 Tree coding, e.g. quadtree, octree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method and an apparatus for detecting an object from an image are provided, enabling a user to select the type of object to detect and to extract an object suited to the user's request. A method for detecting an object from an image comprises the following steps: acquiring an image from a subject (S204); selecting a candidate region of the object to be detected from the acquired image (S206); extracting a texture information feature value, a characteristic unique to the object, in the selected candidate region (S208); and classifying the extracted texture information feature value to extract the object (S210).

Description

Method and apparatus for detecting object from image {Method and apparatus for extracting object from image}

1 is a configuration diagram schematically showing an internal configuration of an object detecting apparatus according to an embodiment of the present invention;

2 is a flowchart illustrating an object detection method from an image according to an embodiment of the present invention;

3 is a view for explaining a classification process through a binary classification tree;

4 is a diagram illustrating an input image obtained by photographing a subject;

5 is a diagram illustrating a screen for selecting a candidate region of an object;

6 is a diagram illustrating blocking of image data for selecting a candidate region of an object;

7 is a diagram illustrating a screen for filtering color information in a block by color information of an object;

8 is a diagram illustrating a screen for setting candidate regions in an image obtained by binarization filtering; and

9 is a diagram illustrating a screen in which brightness information in a candidate area block is changed to Y information in a color space.

<Description of Symbols for Main Parts of Drawings>

100: object detection device 110: image acquisition unit

120: storage unit 130: display unit

140: object extraction unit 142: preprocessing unit

144: SGLD processing unit 146: CART processing unit

150: control unit 160: input unit

170: communication unit

The present invention relates to a method and apparatus for detecting an object from an image.

Techniques for detecting an object from a still image or a moving image acquired by a portable terminal through a camera include techniques for detecting objects of high interest to the user, such as faces, cars, or natural objects.

In the object detection technique, a classifier is used to analyze a unique property of a specific object or to extract texture information and shape information. Neural networks and support vector machines (SVMs) are used as classifiers.

On the other hand, most portable terminals do not employ a classifier such as a neural network or an SVM, and the image obtained through the camera of a portable terminal has low resolution due to the limitations of the camera lens, which reduces sharpness; this makes object detection difficult.

In addition, when extracting texture information or shape information through the above-described classifier, there is a problem in that it cannot be applied to a portable terminal or a portable multimedia device in which the processing speed of the central processing unit (CPU) is limited.

In addition, even if the above-described classifiers were applied to a portable terminal, their learning methods are complicated, and it is difficult to generate a separate classifier for each of the various objects and camera modules.

In order to solve the above problems, an object of the present invention is to provide a method and apparatus for detecting an object from an image by selecting a candidate region of the object from an input image, extracting characteristic texture information, and detecting the desired object through a binary classification tree.

An object detection method from an image according to the present invention for achieving the above object comprises the steps of: acquiring an image from a subject; selecting a candidate region of an object to be detected in the acquired image; extracting a texture information feature value that is a unique characteristic in the selected candidate region; and classifying the extracted texture information feature value to extract the object.

The selecting of the candidate region may include filtering average values existing within a predetermined range with respect to the unique color component of the object, and selecting regions corresponding to the color component of the object from the filtered region.

In the selecting of the candidate region, the acquired image is divided into a predetermined block size, and an average value of color component values is calculated in the divided block.

In the extracting of the texture information feature value, the texture information feature value is extracted using a spatial gray level dependency (SGLD) matrix.

In addition, the SGLD matrix includes an inertial array, an inverse difference array, a correlation array, an energy array, and an entropy array.

Here, the inertia array represents the degree of change of two adjacent pixel values, and the element values of the inverse difference array increase when the local region in the color component is composed of homogeneous pixels and decrease when it is composed of heterogeneous pixels.

In addition, the correlation arrangement indicates a degree of correlation for the entire area of each pixel in the color component.

In the extracting of the object, the object having a minimum standard error is extracted from the extracted texture information feature value through a binary classification and regression tree (CART).

On the other hand, the object detecting apparatus according to the present invention for achieving the above object, the image acquisition unit for obtaining an image from the subject; A preprocessor configured to select candidate regions of an object to be detected from the acquired image; A spatial gray level dependency (SGLD) processing unit for extracting a texture information feature value that is a unique characteristic in the selected candidate region; A classification and regression tree (CART) processing unit for classifying the extracted texture information feature values and extracting the object; And a controller configured to acquire the image through the image acquisition unit and to control the preprocessor, the SGLD processor, and the CART processor to extract the object from the acquired image.

The controller may filter the average values existing within a predetermined range with respect to the intrinsic color component of the object through the preprocessor, and select regions corresponding to the color component of the object from the filtered region.

The controller may divide the image acquired through the image acquisition unit into a predetermined block size through the preprocessing unit, and calculate an average value of color component values in the divided block.

The controller may extract the texture information feature value using the SGLD matrix through the SGLD processor.

In addition, the SGLD matrix includes an inertial array, an inverse difference array, a correlation array, an energy array, and an entropy array.

Here, the inertia array represents the degree of change of two adjacent pixel values, and the element values of the inverse difference array increase when the local region in the color component is composed of homogeneous pixels and decrease when it is composed of heterogeneous pixels.

In addition, the correlation arrangement indicates a degree of correlation for the entire area of each pixel in the color component.

The controller extracts the object having the minimum standard error from the extracted texture information feature value by using a binary classification tree (CART) through the CART processing unit.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

First of all, in assigning reference numerals to the components of each drawing, it should be noted that the same reference numerals are used for the same components wherever possible, even when they appear in different drawings.

In addition, in describing the present invention, when a detailed description of a related well-known configuration or function is judged likely to obscure the gist of the present invention, the detailed description is omitted.

1 is a configuration diagram schematically showing an internal configuration of an object detecting apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the object detecting apparatus 100 according to the present invention may include an image acquisition unit 110, a storage unit 120, a display unit 130, an object extraction unit 140, a control unit 150, an input unit 160, and a communication unit 170.

Here, the object extractor 140 includes a preprocessor 142, a spatial gray level dependency (SGLD) processor 144, and a classification and regression tree (CART) processor 146.

The image acquisition unit 110 includes a small camera and acquires image data by photographing a subject through the camera.

The storage unit 120 stores the image data acquired by the image acquisition unit 110.

The display unit 130 displays the obtained image data or displays the object image extracted by the object extractor 140.

The object extractor 140 selects a candidate region of the object to be extracted from the image data, extracts characteristic texture information, and detects the object selected by the user from the extracted texture information through a binary classification tree (CART).

The preprocessor 142 selects a candidate region of the object to be detected from the image data. That is, the preprocessor 142 divides the image data into a predetermined block size such as 8x8 or 16x16, and calculates an average value of the color component Cb and Cr values in the divided block. The preprocessor 142 filters average values existing within a predetermined range with respect to the unique color component of the object to be detected. In this case, areas determined as color components of the object in the filtered area may be referred to as candidate groups of the object.
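A minimal sketch of this preprocessing, assuming a YCbCr input whose Cb and Cr planes are given as numpy arrays: the 16x16 block size matches the text, but the color ranges (skin-tone-like defaults) and all function and parameter names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def select_candidate_blocks(cb, cr, block=16,
                            cb_range=(77, 127), cr_range=(133, 173)):
    """Divide the Cb/Cr planes into blocks, average each block, and keep
    the blocks whose mean color falls within the object's color range."""
    h, w = cb.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            mean_cb = cb[ys:ys + block, xs:xs + block].mean()
            mean_cr = cr[ys:ys + block, xs:xs + block].mean()
            mask[by, bx] = (cb_range[0] <= mean_cb <= cb_range[1]
                            and cr_range[0] <= mean_cr <= cr_range[1])
    return mask  # boolean map of candidate blocks
```

A boolean map like this corresponds loosely to the binarization-filtered candidate map described later for FIG. 7 and FIG. 8.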

The SGLD processing unit 144 extracts texture information feature values that are unique in the candidate region of the object to be detected. The SGLD processor 144 uses an SGLD matrix to extract such texture information feature values, which will be described later.

The CART processor 146 extracts an object having a minimum standard error through a binary classification tree (CART) from the extracted texture information feature value. Here, the binary classification tree will be described later.

The controller 150 controls the object extractor 140 to extract the object according to an object extraction command received from the user through the input unit 160. In addition, the controller 150 may display the extraction process of the object, or the extracted object, through the display unit 130.

The input unit 160 receives a user's operation command and transmits it to the controller 150. That is, the input unit 160 receives a command for photographing a subject or a command for extracting an object from the acquired image data.

The communicator 170 transmits and receives a voice signal or data related to a telephone call with another terminal through a communication network.

2 is a flowchart illustrating an object detection method from an image according to an exemplary embodiment of the present invention.

First, the SGLD matrix used by the SGLD processing unit 144 of the object detection apparatus 100 according to the present invention will be described.

When l(i, j) denotes the pixel value, with range [0, L-1], at pixel position (i, j), the occurrence frequency P_ab(m, n) of neighboring pixel values for a displacement vector (m, n) (where m = 1, 2, ..., M and n = 1, 2, ..., N) can be obtained from Equation 1 below.

$$P_{ab}(m,n) = \#\{\,(i,j) \mid l(i,j)=a,\ l(i+m,\,j+n)=b,\ 1 \le i \le W,\ 1 \le j \le H\,\} \tag{1}$$

Here, # denotes the frequency of occurrence of the pixel-value pair {a, b}, W denotes the width of the image, and H denotes the height of the image.

By approximating and normalizing P_ab(m, n) obtained by Equation 1, a normalized feature value as shown in Equation 2 can be obtained.

$$p_{ab}(m,n) = \frac{P_{ab}(m,n)}{\sum_{a=0}^{L-1}\sum_{b=0}^{L-1} P_{ab}(m,n)} \tag{2}$$
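Equations 1 and 2 can be sketched in numpy as follows; the vectorized counting, the function name, and the choice of m as the column offset and n as the row offset are assumptions of this sketch, not the patent's notation.

```python
import numpy as np

def sgld(y, m, n, levels=256):
    """Co-occurrence counts P_ab(m, n) of Equation 1, normalized to the
    p_ab(m, n) of Equation 2, for one displacement vector (m, n).
    y: 2-D array of integer gray levels in [0, levels - 1]."""
    h, w = y.shape
    a = y[:h - n, :w - m].ravel()   # l(i, j)
    b = y[n:, m:].ravel()           # l(i + m, j + n)
    p = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(p, (a, b), 1)         # accumulate pair frequencies
    return p / p.sum()              # normalize so the entries sum to 1
```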

Meanwhile, the SGLD matrix uses features related to an inertial array, an inverse difference array, a correlation array, an energy array, and an entropy array.

Here, the inertia array B_I(m, n) can be expressed as Equation 3 below.

$$B_I(m,n) = \sum_{a=0}^{L-1}\sum_{b=0}^{L-1} (a-b)^2\, p_{ab}(m,n) \tag{3}$$

In this case, the inertia array indicates the degree of change of the two adjacent pixel (a, b) values.

In addition, the inverse difference array B_D(m, n) may be represented by Equation 4 below.

$$B_D(m,n) = \sum_{a=0}^{L-1}\sum_{b=0}^{L-1} \frac{p_{ab}(m,n)}{1+(a-b)^2} \tag{4}$$

In this case, the inverse difference array measures the homogeneity of the local region within (m, n). That is, when the local region in (m, n) is composed of homogeneous pixels, the element values of B_D(m, n) increase, and when it is composed of dissimilar pixels, the element values of B_D(m, n) decrease. The inverse difference array has a range of [0, 1], where 0 means that the degree of homogeneity is minimal.

In addition, the correlation array B_C(m, n) can be represented by Equation 5 below.

$$B_C(m,n) = \frac{1}{\sigma^2} \sum_{a=0}^{L-1}\sum_{b=0}^{L-1} (a-\mu)(b-\mu)\, p_{ab}(m,n) \tag{5}$$

In this case, the correlation array refers to the degree of correlation over the entire area of the image. B_C(m, n) is close to +1 when the values a and b within (m, n) are highly positively correlated, close to 0 when they have little correlation, and close to -1 when they are highly negatively correlated. In Equation 5, μ represents the average over the entire image, and σ represents the standard deviation over the entire image.

Meanwhile, the energy array may be expressed as in Equation 6 below.

$$B_E(m,n) = \sum_{a=0}^{L-1}\sum_{b=0}^{L-1} p_{ab}(m,n)^2 \tag{6}$$

In addition, the entropy array may be represented by Equation 7 below.

$$B_P(m,n) = -\sum_{a=0}^{L-1}\sum_{b=0}^{L-1} p_{ab}(m,n)\, \log p_{ab}(m,n) \tag{7}$$
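Given a normalized matrix p from the sketch above, the features of Equations 3 through 7 might be computed as below. One assumption to note: for self-containedness this sketch takes μ and σ from the distribution p itself, whereas the text defines them over the entire image.

```python
import numpy as np

def sgld_features(p):
    """Inertia, inverse difference, correlation, energy and entropy
    (Equations 3-7) from a normalized SGLD matrix p (entries sum to 1)."""
    levels = p.shape[0]
    a, b = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu = (a * p).sum()                    # mean gray level under p (assumption)
    var = (((a - mu) ** 2) * p).sum()     # variance under p (assumption)
    inertia = (((a - b) ** 2) * p).sum()                     # Eq. 3
    inv_diff = (p / (1.0 + (a - b) ** 2)).sum()              # Eq. 4
    correlation = ((a - mu) * (b - mu) * p).sum() / var      # Eq. 5
    energy = (p ** 2).sum()                                  # Eq. 6
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()           # Eq. 7
    return inertia, inv_diff, correlation, energy, entropy
```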

Meanwhile, a binary classification tree (CART) used in the CART processing unit 146 of the object detecting apparatus 100 according to the present invention will be described.

CART is a decision-tree-forming algorithm first proposed by Breiman, Friedman, Olshen, and Stone. It is a tree-based analysis method that searches data for hidden information such as patterns, rules, and relationships, and performs classification and prediction by charting complex decision rules as a tree structure.

When the total number of data is N, let S denote the set of sequences and X the M feature extraction vectors corresponding to each sequence; then the sequence set S and the feature extraction vector X may be represented by Equation 8.

$$S = \{S_1, S_2, \ldots, S_N\}, \qquad X = \{X_1, X_2, \ldots, X_M\} \tag{8}$$

The feature vector obtained from the classification data must satisfy condition 1, which generates the classification rule, and condition 2, which constrains the subsets obtained as its result.

Here, condition 1 states that the classification rule yields a subset A_j whose members belong to one class j among all the classes 1, 2, ..., J as the result of applying the classification function d(X) to the feature vector X.

According to condition 1, each step partitions the video sequences into a subset and the remaining set. Because the remaining set still contains various classes, condition 1 is applied repeatedly, creating further subsets. Each subset then satisfies condition 2.

Condition 2 requires X ∈ A_j for class j when the subsets A_1, A_2, ..., A_J satisfy Equation 9 below.

$$A_j = \{\, X \mid d(X) = j \,\}, \qquad j = 1, 2, \ldots, J \tag{9}$$

The decision tree created by the above conditions has one root node, as shown in FIG. 3, and is composed of one or more nodes and branches. Each node is either an internal node or a final node. (FIG. 3 illustrates the classification process through a binary classification tree.) An internal node has two child nodes, while a final node has no child nodes. In addition, each final node has a class label determined by the attributes of the feature data it contains. That is, every datum must reach a final node, and when the class label of the final node to which the data belong is determined by majority rule, the proportion misclassified is indicated as the misclassification rate.

If a feature value X_m is used as the classification rule of the decision tree, each datum moves to the left child node if the defined rule is satisfied, and to the right child node if it is not. An example of the classification process through the binary classification tree shown in FIG. 3 will be described later.
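A hedged sketch of such a tree and of the traversal rule just described: the dataclass layout, the field names, and the use of a greater-than comparison are illustrative assumptions; the text fixes only the left-if-satisfied, right-if-not convention.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Node:
    feature: int = 0                  # index m of the feature value X_m tested here
    threshold: float = 0.0            # weight used by this node's classification rule
    left: Optional["Node"] = None     # child taken when the rule is satisfied
    right: Optional["Node"] = None    # child taken when the rule is not satisfied
    label: Optional[int] = None       # class label; set only on final nodes

def classify(node: Node, x: Sequence[float]) -> int:
    """Move a datum down the tree until a final node assigns its class."""
    while node.label is None:
        node = node.left if x[node.feature] > node.threshold else node.right
    return node.label
```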

Referring to FIGS. 1 to 9, in the object detecting apparatus 100, the controller 150 receives an image acquisition command from the user through the input unit 160 (S202).

Accordingly, the controller 150 acquires an image as illustrated in FIG. 4 by photographing a subject through the image acquirer 110 (S204). 4 is a diagram illustrating an input image obtained by photographing a subject.

That is, the image acquisition unit 110 acquires an image signal by capturing a subject, converts the image signal into digital image data, and transmits the image data to the controller 150.

The controller 150 stores the image data received from the image acquisition unit 110 in the storage unit 120 and displays the image data through the display unit 130.

Next, the controller 150 selects a candidate region of the object to be detected from the acquired image data (S206).

That is, as shown in FIG. 5, the controller 150 superimposes a selection indicator 510 for selecting a candidate region of the object on the image data being displayed through the display 130. FIG. 5 illustrates a screen for selecting a candidate region of an object.

Accordingly, the user selects the candidate region of the object by moving the selection indicator 510, using the direction buttons provided on the input unit 160, to the place to be selected as the candidate region. In this case, the input unit 160 may include a specific key for selecting a candidate region, and the user may select the candidate region of the object by pressing a selection key (OK button) or the specific key at the position of the selection indicator.

In addition, the controller 150 may select a candidate region of an object to be detected through the preprocessor 142 of the object extractor 140 with respect to the acquired image.

To this end, the controller 150 performs 8 × 8 or 16 × 16 blocking on the image data as shown in FIG. 6 through the preprocessor 142. 6 is a diagram illustrating blocking of image data for selecting a candidate region of an object.

Subsequently, the controller 150 calculates the average values of Cb and Cr, which are the color information in each blocked region, thereby extracting the average color information of each block. The controller 150 then filters the color information in the blocked regions by the color information of the detection object, as shown in FIG. 7; in doing so, the controller 150 binarizes the color information in the blocked regions. FIG. 7 illustrates a screen in which the color information in the blocks is filtered by the color information of the object.

Then, the controller 150 selects the candidate region from the image data obtained by the binarization filtering of the color information in the blocked region, as shown in FIG. 8. 8 is a diagram illustrating a screen for selecting a candidate region from an image obtained by binarization filtering. The controller 150 stores the coordinate values of the selected candidate region in the storage 120, respectively.

Thereafter, the controller 150 extracts the texture information feature value, which becomes a unique characteristic, through the SGLD processor 144 in the selected candidate region (S208).

That is, the controller 150 performs 8x8 or 16x16 blocking on the candidate area through the SGLD processing unit 144 and extracts the average value of the brightness (Y) information in each block. To do so, the controller 150 converts the brightness information in the candidate area blocks to the Y information of the YCbCr color space through the SGLD processing unit 144, as shown in FIG. 9. FIG. 9 illustrates a screen in which the brightness information in the candidate area blocks has been changed to the Y information of the color space.
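A small sketch of this per-block brightness averaging, assuming the Y plane of the candidate region is available as a numpy array; the 16x16 default follows the text, while the function and parameter names are illustrative.

```python
import numpy as np

def block_luma_means(y_plane, block=16):
    """Average brightness (Y) of each block in a candidate region."""
    h, w = y_plane.shape
    h, w = h - h % block, w - w % block          # trim to whole blocks
    tiles = y_plane[:h, :w].astype(np.float64)
    tiles = tiles.reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))               # one mean Y value per block
```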

Subsequently, the controller 150 extracts SGLD values, which are the texture information, from the Y information of the blocked region according to the SGLD matrix. Here, the SGLD matrix may be expressed in two dimensions of M x N as in Equation 10 below.

$$B(m,n) = \bigl[\, B_I(m,n),\ B_D(m,n),\ B_C(m,n) \,\bigr], \qquad m = 1, \ldots, M,\ n = 1, \ldots, N \tag{10}$$

That is, as shown in Equation 10, the SGLD matrix uses the inertia array B_I, the inverse difference array B_D, and the correlation array B_C described above.

The controller 150 obtains SGLD texture information by extracting a one-dimensional feature value from the SGLD matrix through the SGLD processor 144.

The controller 150 classifies the extracted texture information feature value through a binary classification tree used by the CART processor 146 to extract a desired object (S210).

That is, the CART processing unit 146 sets weights for classification learning, applies the weights to the branches of the binary classification tree, modifies the classification conditions in the tree, and extracts the desired object through the binary classification process illustrated in FIG. 3.

When the extracted texture information feature value is applied to the binary classification tree of FIG. 3, if the feature value at the first node is greater than the weight "2", the process proceeds to the third node, and if the feature value at the third node is smaller than the weight "5", the process proceeds to the fourth node.

If the texture information feature value at the fourth node is greater than the weight "7", the process proceeds to the sixth node and the first class is obtained; if it is smaller than the weight "7", the process proceeds to the seventh node and the second class is obtained. The controller 150 stores the classification conditions of the binary classification tree learned through this process in the storage 120.
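Reusing the Node/classify sketch given earlier, the worked example might be encoded as follows. The feature indices tested at each node and the labels of the branches the text does not describe are assumptions; only the weights 2, 5 and 7 and the first/second-class leaves come from the example.

```python
# Nodes are numbered as in the worked example; leaves for the branches the
# text does not describe get the placeholder label -1 (an assumption).
tree = Node(feature=0, threshold=2,                       # node 1: X_0 > 2 ?
            right=Node(label=-1),                         # node 2: not described
            left=Node(feature=1, threshold=5,             # node 3: X_1 > 5 ?
                      left=Node(label=-1),                # node 5: not described
                      right=Node(feature=2, threshold=7,  # node 4: X_2 > 7 ?
                                 left=Node(label=1),      # node 6: first class
                                 right=Node(label=2))))   # node 7: second class

print(classify(tree, [3.0, 4.0, 8.0]))  # 3 > 2, 4 < 5, 8 > 7 -> first class (1)
```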

The controller 150 obtains an image corresponding to the first class or to the second class by the above-described process. The object detecting apparatus 100 may therefore obtain objects of various classes by repeating the above process on still images or moving images.

In the present invention, by designating the object type and object area to be extracted automatically from images acquired by the object detecting apparatus 100, it is possible to improve the performance of a built-in classifier or to generate a classifier that does not yet exist. Such an adaptive classifier can satisfy the user's demand for extracting various objects, and images can be organized into a database through automatic object extraction.

As described above, according to the present invention, a method and apparatus can be realized that detect an object from an image by selecting a candidate region of the object from an input image, extracting characteristic texture information, and detecting the desired object through a binary classification tree.

The above description is merely illustrative of the technical idea of the present invention, and those skilled in the art to which the present invention pertains may make various modifications and changes without departing from the essential characteristics of the present invention.

Therefore, the embodiments disclosed in the present invention are not intended to limit the technical idea of the present invention but to describe the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments.

The protection scope of the present invention should be interpreted by the following claims, and all technical ideas within the equivalent scope should be interpreted as being included in the scope of the present invention.

As described above, according to the present invention, an object meeting the user's request may be extracted by allowing the user to select and specify the type of object to be extracted. In addition, classification performance for object extraction can be greatly improved by using SGLD texture information and CART classification. Furthermore, feature values extracted from an object specified by the user can serve as a classifier for classifying other objects, and a multimedia application device that extracts texture information and shape information can be implemented.

Claims (18)

1. A method of detecting an object from an image, comprising: acquiring an image from a subject; selecting a candidate region of an object to be detected in the acquired image; extracting a texture information feature value that is a unique characteristic in the selected candidate region; and extracting the object by classifying the extracted texture information feature value.

2. The method of claim 1, wherein the selecting of the candidate region comprises filtering average values existing within a predetermined range with respect to a unique color component of the object, and selecting regions corresponding to the color component of the object from the filtered region.

3. The method of claim 1, wherein the selecting of the candidate region comprises dividing the acquired image into blocks of a predetermined size and calculating an average value of color component values in each divided block.

4. The method of claim 1, wherein the extracting of the texture information feature value comprises extracting the texture information feature value using a spatial gray level dependency (SGLD) matrix.

5. The method of claim 4, wherein the SGLD matrix includes an inertia array, an inverse difference array, a correlation array, an energy array, and an entropy array.

6. The method of claim 5, wherein the inertia array indicates the degree of change of two adjacent pixel values.

7. The method of claim 5, wherein the element values of the inverse difference array increase when the local region in the color component is composed of homogeneous pixels and decrease when it is composed of heterogeneous pixels.

8. The method of claim 5, wherein the correlation array represents a degree of correlation over the entire area of each pixel in the color component.

9. The method of claim 1, wherein the extracting of the object comprises extracting the object having a minimum standard error from the extracted texture information feature value through a binary classification and regression tree (CART).

10. An apparatus for detecting an object, comprising: an image acquisition unit which acquires an image from a subject; a preprocessor which selects a candidate region of an object to be detected from the acquired image; a spatial gray level dependency (SGLD) processing unit which extracts a texture information feature value that is a unique characteristic in the selected candidate region; a classification and regression tree (CART) processing unit which classifies the extracted texture information feature value and extracts the object; and a controller which acquires the image through the image acquisition unit and controls the preprocessor, the SGLD processing unit, and the CART processing unit to extract the object from the acquired image.

11. The apparatus of claim 10, wherein the controller filters, through the preprocessor, average values existing within a predetermined range with respect to a unique color component of the object, and selects regions corresponding to the color component of the object from the filtered region.

12. The apparatus of claim 10, wherein the controller divides the image acquired through the image acquisition unit into blocks of a predetermined size through the preprocessor, and calculates an average value of color component values in each divided block.

13. The apparatus of claim 10, wherein the controller extracts the texture information feature value using an SGLD matrix through the SGLD processing unit.

14. The apparatus of claim 13, wherein the SGLD matrix includes an inertia array, an inverse difference array, a correlation array, an energy array, and an entropy array.

15. The apparatus of claim 14, wherein the inertia array indicates the degree of change of two adjacent pixel values.

16. The apparatus of claim 14, wherein the element values of the inverse difference array increase when the local region in the color component is composed of homogeneous pixels and decrease when it is composed of heterogeneous pixels.

17. The apparatus of claim 14, wherein the correlation array represents a degree of correlation over the entire area of each pixel in the color component.

18. The apparatus of claim 10, wherein the controller extracts the object having a minimum standard error from the extracted texture information feature value through a binary classification and regression tree (CART) via the CART processing unit.
KR1020070019591A 2007-02-27 2007-02-27 Method and apparatus for extracting object from image KR20080079443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020070019591A KR20080079443A (en) 2007-02-27 2007-02-27 Method and apparatus for extracting object from image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020070019591A KR20080079443A (en) 2007-02-27 2007-02-27 Method and apparatus for extracting object from image

Publications (1)

Publication Number Publication Date
KR20080079443A (en) 2008-09-01

Family

ID=40020371

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020070019591A KR20080079443A (en) 2007-02-27 2007-02-27 Method and apparatus for extracting object from image

Country Status (1)

Country Link
KR (1) KR20080079443A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110018850A (en) * 2009-08-18 2011-02-24 제너럴 일렉트릭 캄파니 System, method and program product for camera-based object analysis
KR101034117B1 (en) * 2009-11-13 2011-05-13 성균관대학교산학협력단 A method and apparatus for recognizing the object using an interested area designation and outline picture in image
KR101064952B1 (en) * 2009-11-23 2011-09-16 한국전자통신연구원 Method and apparatus for providing human body parts detection
US8620091B2 (en) 2009-11-23 2013-12-31 Electronics And Telecommunications Research Institute Method and apparatus for detecting specific external human body parts from texture energy maps of images by performing convolution
KR101108491B1 (en) * 2010-07-13 2012-01-31 한국과학기술연구원 An apparatus for object segmentation given a region of interest in an image and method thereof
WO2012020927A1 (en) * 2010-08-09 2012-02-16 에스케이텔레콤 주식회사 Integrated image search system and a service method therewith
US9576195B2 (en) 2010-08-09 2017-02-21 Sk Planet Co., Ltd. Integrated image searching system and service method thereof
US10380170B2 (en) 2010-08-09 2019-08-13 Sk Planet Co., Ltd. Integrated image searching system and service method thereof
KR20200120403A (en) * 2019-04-12 2020-10-21 대한민국(산림청 국립산림과학원장) System and method of tree species classification using satellite image
CN115766963A (en) * 2022-11-11 2023-03-07 辽宁师范大学 Encrypted image reversible information hiding method based on self-adaptive predictive coding

Similar Documents

Publication Publication Date Title
EP3333768A1 (en) Method and apparatus for detecting target
JP5010905B2 (en) Face recognition device
US12002259B2 (en) Image processing apparatus, training apparatus, image processing method, training method, and storage medium
KR101410489B1 (en) Face detection and method and apparatus
JP4098021B2 (en) Scene identification method, apparatus, and program
KR20200130440A (en) A method for identifying an object in an image and a mobile device for executing the method (METHOD FOR IDENTIFYING AN OBJECT WITHIN AN IMAGE AND MOBILE DEVICE FOR EXECUTING THE METHOD)
KR20160136391A (en) Information processing apparatus and information processing method
KR20080079443A (en) Method and apparatus for extracting object from image
JP2014041476A (en) Image processing apparatus, image processing method, and program
JP5578816B2 (en) Image processing device
CN106650615A (en) Image processing method and terminal
CN112149533A (en) Target detection method based on improved SSD model
CN113269010A (en) Training method and related device for human face living body detection model
JP2009123234A (en) Object identification method, apparatus and program
KR101515308B1 (en) Apparatus for face pose estimation and method thereof
KR102634186B1 Method for verifying the identity of a user by identifying an object in an image containing the user's biometric characteristics and isolating the part of the image containing the biometric characteristics from other parts of the image (METHOD FOR VERIFYING THE IDENTITY OF A USER BY IDENTIFYING AN OBJECT WITHIN AN IMAGE THAT HAS A BIOMETRIC CHARACTERISTIC OF THE USER AND SEPARATING A PORTION OF THE IMAGE COMPRISING THE BIOMETRIC CHARACTERISTIC FROM OTHER PORTIONS OF THE IMAGE)
JP4285640B2 (en) Object identification method, apparatus and program
KR101681233B1 (en) Method and apparatus for detecting face with low energy or low resolution
CN116721288A (en) Helmet detection method and system based on YOLOv5
CN110795995A (en) Data processing method, device and computer readable storage medium
JP5625196B2 (en) Feature point detection device, feature point detection method, feature point detection program, and recording medium
CN110363192A (en) Object image identification system and object image discrimination method
JP4186541B2 (en) Image processing device
Hernandez et al. Classification of color textures with random field models and neural networks
KR20130067758A (en) Apparatus and method for detecting human by using svm learning

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination