KR20080079443A - Method and apparatus for extracting object from image - Google Patents
- Publication number
- KR20080079443A (Application No. KR1020070019591A)
- Authority
- KR
- South Korea
- Prior art keywords
- array
- image
- color component
- texture information
- values
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Abstract
Description
FIG. 1 is a configuration diagram schematically showing the internal configuration of an object detecting apparatus according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method of detecting an object from an image according to an embodiment of the present invention;
FIG. 3 is a view for explaining a classification process through a binary classification tree;
FIG. 4 is a diagram illustrating an input image obtained by photographing a subject;
FIG. 5 is a diagram illustrating a screen for selecting a candidate region of an object;
FIG. 6 is a diagram illustrating the blocking of image data for selecting a candidate region of an object;
FIG. 7 is a diagram illustrating a screen on which color information in each block is filtered by the color information of the object;
FIG. 8 is a diagram illustrating a screen on which candidate regions are set in an image obtained by binarization filtering; and
FIG. 9 is a diagram illustrating a screen on which brightness information in a candidate area block is converted to Y information in the color space.
<Description of Symbols for Main Parts of Drawings>
100: object detection device 110: image acquisition unit
120: storage unit 130: display unit
140: object extraction unit 142: preprocessing unit
144: SGLD processing unit 146: CART processing unit
150: control unit 160: input unit
170: communication unit
The present invention relates to a method and apparatus for detecting an object from an image.
Techniques for detecting an object from a still image or moving image acquired through the camera of a portable terminal include techniques for detecting objects of high interest to the user, such as faces, cars, or natural objects.
In such object detection techniques, a classifier is used to analyze the unique properties of a specific object or to extract its texture information and shape information. Neural networks and support vector machines (SVMs) are typically used as classifiers.
However, most portable terminals do not employ a classifier such as a neural network or an SVM, and an image obtained through the camera of a portable terminal has low resolution and reduced sharpness due to the limitations of the camera lens, which makes object detection difficult.
In addition, when texture information or shape information is extracted through the above-described classifiers, they cannot be applied to a portable terminal or portable multimedia device in which the processing speed of the central processing unit (CPU) is limited.
Furthermore, even if such a classifier is applied to a portable terminal, its learning method is complicated, and it is difficult to generate a separate classifier for each of the various objects and camera modules.
To solve the above problems, the present invention provides a method and apparatus for detecting an object from an image by selecting a candidate region of the object from an input image, extracting characteristic texture information, and detecting the desired object through a binary classification tree.
A method of detecting an object from an image according to the present invention includes: obtaining an image of a subject; selecting a candidate region of an object to be detected in the acquired image; extracting texture information feature values that are unique features of the selected candidate region; and classifying the extracted texture information feature values to extract the object.
The selecting of the candidate region may include filtering average values that exist within a predetermined range with respect to the unique color component of the object, and selecting regions corresponding to the color component of the object from the filtered regions.
In the selecting of the candidate region, the acquired image is divided into blocks of a predetermined size, and an average value of the color component values is calculated in each divided block.
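As a minimal sketch, the blocking and color filtering described above can be expressed as follows. The function name, the `target_color`/`tolerance` parameters, and the default block size are illustrative assumptions; the patent does not fix these values.

```python
import numpy as np

def select_candidate_blocks(image, target_color, tolerance, block_size=8):
    """Divide an RGB image into blocks of a predetermined size, compute the
    average color of each block, and keep the blocks whose average lies
    within a predetermined range of the object's unique color component
    (a binarization-style filter over blocks)."""
    h, w, _ = image.shape
    mask = np.zeros((h // block_size, w // block_size), dtype=bool)
    for by in range(h // block_size):
        for bx in range(w // block_size):
            block = image[by * block_size:(by + 1) * block_size,
                          bx * block_size:(bx + 1) * block_size]
            mean = block.reshape(-1, 3).mean(axis=0)
            # Keep the block only if every channel is within the range.
            mask[by, bx] = np.all(np.abs(mean - target_color) <= tolerance)
    return mask
```

Blocks marked `True` in the returned mask form the candidate region of the object.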
In the extracting of the texture information feature value, the texture information feature value is extracted using a spatial gray level dependency (SGLD) matrix.
In addition, the SGLD matrix includes an inertial array, an inverse difference array, a correlation array, an energy array, and an entropy array.
Here, the inertia array represents the degree of change between two adjacent pixel values, while the element values of the inverse difference array increase when a local region of the color component consists of homogeneous pixels and decrease when it consists of heterogeneous pixels.
In addition, the correlation array indicates the degree of correlation of each pixel over the entire area of the color component.
In the extracting of the object, the object having the minimum standard error is extracted from the extracted texture information feature values through a binary classification and regression tree (CART).
Meanwhile, the object detecting apparatus according to the present invention includes: an image acquisition unit for obtaining an image of a subject; a preprocessor configured to select a candidate region of an object to be detected from the acquired image; a spatial gray level dependency (SGLD) processing unit for extracting texture information feature values that are unique characteristics of the selected candidate region; a classification and regression tree (CART) processing unit for classifying the extracted texture information feature values and extracting the object; and a controller configured to acquire the image through the image acquisition unit and to control the preprocessor, the SGLD processor, and the CART processor so that the object is extracted from the acquired image.
The controller may filter the average values existing within a predetermined range with respect to the intrinsic color component of the object through the preprocessor, and select regions corresponding to the color component of the object from the filtered region.
The controller may divide the image acquired through the image acquisition unit into a predetermined block size through the preprocessing unit, and calculate an average value of color component values in the divided block.
The controller may extract the texture information feature value using the SGLD matrix through the SGLD processor.
In addition, the SGLD matrix includes an inertial array, an inverse difference array, a correlation array, an energy array, and an entropy array.
Here, the inertia array represents the degree of change between two adjacent pixel values, while the element values of the inverse difference array increase when a local region of the color component consists of homogeneous pixels and decrease when it consists of heterogeneous pixels.
In addition, the correlation array indicates the degree of correlation of each pixel over the entire area of the color component.
The controller extracts the object having the minimum standard error from the extracted texture information feature values through the binary classification and regression tree (CART) of the CART processing unit.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First, in adding reference numerals to the components of each drawing, it should be noted that the same reference numerals are used for the same components even when they appear in different drawings.
In describing the present invention, detailed descriptions of related well-known configurations or functions are omitted when they might obscure the gist of the present invention.
FIG. 1 is a configuration diagram schematically showing the internal configuration of an object detecting apparatus according to an embodiment of the present invention.
Referring to FIG. 1, the object detecting apparatus 100 according to the present invention includes an image acquisition unit 110, a storage unit 120, a display unit 130, an object extraction unit 140, a control unit 150, an input unit 160, and a communication unit 170.
Here, the object extraction unit 140 includes a preprocessing unit 142, an SGLD processing unit 144, and a CART processing unit 146.
The image acquisition unit 110 acquires an image by photographing a subject, and the storage unit 120 stores the acquired image. The display unit 130 displays the acquired image and the extracted object, and the input unit 160 receives user input, for example, for selecting a candidate region of an object.
In the object extraction unit 140, the preprocessing unit 142 selects a candidate region of an object to be detected from the acquired image, the SGLD processing unit 144 extracts texture information feature values that are unique characteristics of the selected candidate region, and the CART processing unit 146 classifies the extracted texture information feature values and extracts the object.
The control unit 150 controls the image acquisition unit 110, the preprocessing unit 142, the SGLD processing unit 144, and the CART processing unit 146 so that the object is extracted from the acquired image.
FIG. 2 is a flowchart illustrating a method of detecting an object from an image according to an exemplary embodiment of the present invention.
First, the SGLD matrix used by the SGLD processing unit 144 will be described.
When the pixel value at position (i, j), which has a range of [0, L-1], is denoted l(i, j), the SGLD matrix gives the occurrence frequency P_ab(m, n) of neighboring pixel values for a displacement vector (m, n) (where m = 1, 2, ..., M and n = 1, 2, ..., N):
P_ab(m, n) = #{ ((i, j), (i+m, j+n)) | l(i, j) = a, l(i+m, j+n) = b }
Here, # denotes the frequency of occurrence of the pixel-value pair {a, b}, W denotes the width of the image, and H denotes the height of the image.
By approximating and normalizing the obtained P_ab(m, n) so that its elements sum to 1, a normalized occurrence probability p_ab(m, n) is obtained.
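The counting and normalization described above can be sketched as follows. The quantization to a small number of gray levels is an assumption (the patent does not specify how the [0, L-1] range is chosen); it keeps the matrix small on a resource-limited device.

```python
import numpy as np

def sgld_matrix(gray, m, n, levels=8):
    """For displacement (m, n), count how often gray value a at (i, j)
    co-occurs with gray value b at (i+m, j+n), then normalize so the
    matrix sums to 1 (the occurrence probability p_ab(m, n))."""
    # Quantize 8-bit values to [0, levels-1] to keep the matrix compact.
    q = (gray.astype(np.int64) * levels) // 256
    H, W = q.shape
    a = q[:H - m, :W - n].ravel()   # reference pixels
    b = q[m:, n:].ravel()           # neighbors displaced by (m, n)
    P = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(P, (a, b), 1)         # accumulate co-occurrence counts
    return P / P.sum()
```

For example, a two-row image whose top row is dark and bottom row bright puts all of its vertical (m=1, n=0) co-occurrence mass in a single off-diagonal cell.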
Meanwhile, the SGLD matrix uses features related to an inertia array, an inverse difference array, a correlation array, an energy array, and an entropy array.
Here, the inertia array B_I(m, n) can be expressed as
B_I(m, n) = Σ_a Σ_b (a − b)² p_ab(m, n)
In this case, the inertia array indicates the degree of change between the two adjacent pixel values a and b.
In addition, the inverse difference array B_D(m, n) may be represented as
B_D(m, n) = Σ_a Σ_b p_ab(m, n) / (1 + (a − b)²)
In this case, the inverse difference array shows the homogeneity of local regions within m and n. That is, when a local area within m and n consists of homogeneous pixels, the element values of B_D(m, n) increase, and when it consists of dissimilar pixels, the element values of B_D(m, n) decrease. The inverse difference array has a range of [0, 1], and 0 means that the degree of homogeneity is minimal.
In addition, the correlation array B_C(m, n) can be represented as
B_C(m, n) = Σ_a Σ_b (a − μ_a)(b − μ_b) p_ab(m, n) / (σ_a σ_b)
In this case, the correlation array refers to the degree of correlation over the entire area of the image: B_C(m, n) approaches +1 when the values a and b within m and n are highly positively correlated, 0 when they are uncorrelated, and −1 when they are highly negatively correlated. Here, μ_a and μ_b denote the means, and σ_a and σ_b the standard deviations, of the marginal distributions of a and b.
Meanwhile, the energy array may be expressed as
B_E(m, n) = Σ_a Σ_b p_ab(m, n)²
In addition, the entropy array may be represented as
B_H(m, n) = −Σ_a Σ_b p_ab(m, n) log p_ab(m, n)
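The feature arrays above can be computed from a normalized SGLD matrix as a sketch; the formulas used are the standard co-occurrence definitions of inertia, inverse difference, energy, and entropy (an assumption, since the patent's equation images are not reproduced here).

```python
import numpy as np

def sgld_features(p):
    """Inertia, inverse difference, energy, and entropy of a normalized
    SGLD matrix p (entries sum to 1)."""
    L = p.shape[0]
    a, b = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    inertia = np.sum((a - b) ** 2 * p)                 # degree of change
    inverse_difference = np.sum(p / (1.0 + (a - b) ** 2))  # homogeneity
    energy = np.sum(p ** 2)                            # uniformity
    nz = p[p > 0]                                      # avoid log(0)
    entropy = -np.sum(nz * np.log(nz))                 # randomness
    return inertia, inverse_difference, energy, entropy
```

A perfectly uniform matrix gives the maximum entropy log(L²) and the minimum energy 1/L², which is a quick sanity check for an implementation.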
Meanwhile, the binary classification and regression tree (CART) used in the CART processing unit 146 will be described.
CART is the decision-tree-forming algorithm first proposed by Breiman, Friedman, Olshen, and Stone. It is a tree-mining technique that searches for information such as hidden patterns, rules, and relationships in data, and performs classification and prediction by representing complex decision rules as a tree structure.
When the total number of data is N, if each sequence set is called S and the M feature extraction vectors corresponding to each sequence are called X, the sequence set S and the feature extraction vector X may be represented as S = {S_1, S_2, ..., S_N} and X = {X_1, X_2, ..., X_M}.
The feature vector obtained from the classification data must satisfy
Here,
According to this
The decision tree created under the above conditions has one root node, as shown in FIG. 3, and is composed of one or more nodes and branches. Each node is either an internal node or a final node. FIG. 3 is a diagram illustrating a classification process through a binary classification tree. An internal node has two child nodes, and a final node cannot have child nodes. The class label of a final node is determined by the attributes of the feature data contained in that node. That is, any data must reach a final node, and the proportion of data misclassified by the majority rule when determining the class label of the final node to which the data belongs is indicated as the misclassification rate.
If any feature value X_m is defined as a classification rule of the decision tree, each data item is moved to the left child node if the defined rule is satisfied, and to the right child node if it is not. An example of the classification process through the binary classification tree shown in FIG. 3 will be described later.
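The left/right traversal rule just described can be sketched in a few lines. The dict layout ("feature", "threshold", "left", "right", "label") is an illustrative assumption, not part of the patent.

```python
def classify(node, x):
    """Traverse a binary classification tree: at each internal node, move
    to the left child if the node's rule is satisfied, otherwise to the
    right child, until a final node's class label is reached."""
    while "label" not in node:          # internal nodes have no label
        rule_met = x[node["feature"]] > node["threshold"]
        node = node["left"] if rule_met else node["right"]
    return node["label"]
```

A one-split tree shows the mechanics: data satisfying the rule lands in the left final node, everything else in the right one.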
Referring to FIGS. 1 to 9, in the object detecting apparatus 100, the image acquisition unit 110 first acquires an input image obtained by photographing a subject, as shown in FIG. 4, and the acquired image is displayed on the display unit 130.
Next, a candidate region of the object to be detected is selected in the acquired image. As shown in FIG. 5, the user may select the candidate region of the object by moving a selection area on the screen through the input unit 160.
In addition, the preprocessing unit 142 may select the candidate region. To this end, the preprocessing unit 142 divides the acquired image into blocks of a predetermined size, as shown in FIG. 6, and calculates the average value of the color component values in each divided block.
Subsequently, the preprocessing unit 142 filters the average values that exist within a predetermined range with respect to the unique color component of the object, as shown in FIG. 7, and sets the regions corresponding to the color component of the object as candidate regions in the image obtained by binarization filtering, as shown in FIG. 8.
Thereafter, the brightness information in each candidate area block is changed to Y information in the color space, as shown in FIG. 9.
Subsequently, the SGLD processing unit 144 extracts the texture information feature values, which are unique characteristics of the selected candidate region, using the SGLD matrix.
That is, the SGLD matrix uses the inertia array B_I, the inverse difference array B_D, and the correlation array B_C as described above.
The CART processing unit 146 then classifies the extracted texture information feature values through the binary classification tree. That is, the CART processing unit 146 extracts the object having the minimum standard error from the extracted texture information feature values.
When the extracted texture information feature values are applied to the binary classification tree of FIG. 3, if the texture information feature value is greater than the weight "2" at the first node, the process proceeds to the third node, and if the texture information feature value at the third node is smaller than the weight "5", the process proceeds to the fourth node.
If the texture information feature value at the fourth node is greater than the weight "7", the process proceeds to the sixth node and a first class is obtained; if it is smaller than the weight "7", the process proceeds to the seventh node and a second class is obtained. In this manner, the object corresponding to the class with the minimum standard error is extracted.
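The FIG. 3 walk-through can be sketched as code. Only the node numbers and the weights "2", "5", and "7" come from the description; treating the input as a mapping from node number to the feature value tested at that node is an assumption (different nodes may test different SGLD features, which is why the thresholds need not be mutually consistent), and branches the description does not spell out return None here.

```python
def classify_fig3(f):
    """Classify by walking the binary classification tree of FIG. 3.
    f maps node number -> the texture feature value tested at that node."""
    if f[1] > 2:            # first node: greater than weight "2"
        if f[3] < 5:        # third node: smaller than weight "5"
            if f[4] > 7:    # fourth node: greater than weight "7"
                return 1    # sixth node: first class
            return 2        # seventh node: second class
    return None             # paths not described in the text
```

For example, feature values 3, 4, and 8 at the first, third, and fourth nodes reach the sixth node and yield the first class.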
In the present invention, by designating the type and area of an object to be automatically extracted from an image acquired by the object detecting apparatus 100, it is possible to improve the performance of a built-in classifier or to generate a classifier that does not yet exist. Such an adaptive classifier can satisfy the user's demand for extracting various objects, and images can be organized into a database through automatic object extraction.
As described above, according to the present invention, a method and apparatus can be realized that detect an object from an image by selecting a candidate region of the object from an input image, extracting characteristic texture information, and detecting the desired object through a binary classification tree.
The above description is merely illustrative of the technical idea of the present invention, and those skilled in the art to which the present invention pertains may make various modifications and changes without departing from the essential characteristics of the present invention.
Therefore, the embodiments disclosed in the present invention are not intended to limit the technical idea of the present invention but to describe the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments.
The protection scope of the present invention should be interpreted by the following claims, and all technical ideas within the equivalent scope should be interpreted as being included in the scope of the present invention.
As described above, according to the present invention, an object meeting a user's request may be extracted by allowing the user to select and specify the type of object to be extracted. In addition, classification performance for object extraction can be greatly improved by using SGLD texture information and CART classification. Furthermore, by extracting feature values from an object specified by the user, the result can be used as a classifier capable of classifying other objects, and a multimedia application device that extracts texture information and shape information can be realized.
Claims (18)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020070019591A KR20080079443A (en) | 2007-02-27 | 2007-02-27 | Method and apparatus for extracting object from image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20080079443A (en) | 2008-09-01 |
Family
ID=40020371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020070019591A KR20080079443A (en) | 2007-02-27 | 2007-02-27 | Method and apparatus for extracting object from image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20080079443A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110018850A (en) * | 2009-08-18 | 2011-02-24 | 제너럴 일렉트릭 캄파니 | System, method and program product for camera-based object analysis |
KR101034117B1 (en) * | 2009-11-13 | 2011-05-13 | 성균관대학교산학협력단 | A method and apparatus for recognizing the object using an interested area designation and outline picture in image |
KR101064952B1 (en) * | 2009-11-23 | 2011-09-16 | 한국전자통신연구원 | Method and apparatus for providing human body parts detection |
US8620091B2 (en) | 2009-11-23 | 2013-12-31 | Electronics And Telecommunications Research Institute | Method and apparatus for detecting specific external human body parts from texture energy maps of images by performing convolution |
KR101108491B1 (en) * | 2010-07-13 | 2012-01-31 | 한국과학기술연구원 | An apparatus for object segmentation given a region of interest in an image and method thereof |
WO2012020927A1 (en) * | 2010-08-09 | 2012-02-16 | 에스케이텔레콤 주식회사 | Integrated image search system and a service method therewith |
US9576195B2 (en) | 2010-08-09 | 2017-02-21 | Sk Planet Co., Ltd. | Integrated image searching system and service method thereof |
US10380170B2 (en) | 2010-08-09 | 2019-08-13 | Sk Planet Co., Ltd. | Integrated image searching system and service method thereof |
KR20200120403A (en) * | 2019-04-12 | 2020-10-21 | 대한민국(산림청 국립산림과학원장) | System and method of tree species classification using satellite image |
CN115766963A (en) * | 2022-11-11 | 2023-03-07 | 辽宁师范大学 | Encrypted image reversible information hiding method based on self-adaptive predictive coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |