KR101749029B1 - Apparatus and Method of Body Part Detection in Image - Google Patents


Info

Publication number
KR101749029B1
Authority
KR
South Korea
Prior art keywords
image
shoulder
point
detecting
detected
Prior art date
Application number
KR1020150173993A
Other languages
Korean (ko)
Other versions
KR20170067383A (en)
Inventor
최윤식
전승우
전기현
Original Assignee
연세대학교 산학협력단 (Yonsei University Industry-Academic Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단 (Yonsei University Industry-Academic Cooperation Foundation)
Priority to KR1020150173993A
Publication of KR20170067383A
Application granted
Publication of KR101749029B1

Classifications

    • G06K9/00369
    • G06K9/00228
    • G06K9/6204
    • G06K9/64
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method for detecting a specific body part in an image.
A method for detecting a shoulder position according to the present invention includes: a reference body part detection step of detecting a reference body part in an image; a shoulder region setting step of setting a shoulder region in the image according to the position of the detected reference body part; an image segmentation step of generating a segmentation image divided into a plurality of regions by applying image segmentation to an image block corresponding to the shoulder region; and a shoulder detection step of extracting an edge from the segmentation image, detecting corner points on the extracted edge, and detecting the shoulder position according to the positions of the detected corner points.

Description

TECHNICAL FIELD [0001] The present invention relates to a method and apparatus for detecting a body part in an image.

The present invention relates to a method for detecting a specific body part in an image.

To detect a body part such as a face or a hand, techniques that locate the target part using features or signal components such as templates or color have been developed and are widely used in the field of image recognition. For example, techniques that detect a face using a SIFT-based feature detector or a classifier such as AdaBoost, or that detect a face using a mask template, have been developed and used.

However, unlike the face or the hand, where the skin is exposed and characteristic patterns such as the eyes, nose, mouth, or fingers are visible, the shoulder and the waist are usually covered by clothing and lack such distinctive patterns, which makes it difficult to recognize the position of these body parts.

In addition, when attempting to detect the position of the shoulder or waist, it is difficult to derive an accurate result due to problems such as the color or pattern of the clothes and the complexity of the background. Moreover, in the case of a two-dimensional image, rather than a three-dimensional image or an image with depth information, detecting the correct shoulder or waist position is made harder still by the limited information available.

(Patent Document 0001) Korean Patent Laid-Open Publication No. 2014-0123399 (Apr. 22, 2014)

Accordingly, the present invention provides an apparatus and method that use image segmentation to distinguish the color difference between the background and the garment so that body parts covered by clothing can be detected, detect corner points at multiple resolutions, and match them back to the original image to locate the shoulder and waist positions.

According to an aspect of the present invention, there is provided a method for detecting a shoulder position in an image, the method comprising: a reference body part detection step of detecting a reference body part in an image; a shoulder region setting step of setting a shoulder region in the image according to the position of the detected reference body part; an image segmentation step of generating a segmentation image divided into a plurality of regions by applying image segmentation to an image block corresponding to the shoulder region; and a shoulder detection step of extracting an edge from the segmentation image, detecting corner points on the extracted edge, and detecting the shoulder position according to the positions of the detected corner points.

In one embodiment, the reference body part detection step may detect a face in the image as the reference body part, and the shoulder region setting step may set as the shoulder region a block, below the detected face, whose size and position are determined by at least one of the width and the height of the detected face.

In one embodiment, the image segmentation step may divide the image block corresponding to the shoulder region into a plurality of regions and generate the segmentation image by setting the image signal values of the pixels so that pixels belonging to the same region have values within a predetermined range.

In one embodiment, the step of detecting the shoulder may include: a multi-scale image generation step of generating at least one reduced image by reducing the resolution of the segmentation image by at least one ratio; An edge extraction step of extracting the edge from the segmentation image and each of the reduced images; And a shoulder position estimation step of detecting the corner point at the edge, matching the detected corner points, and estimating the shoulder position using the matched corner point.

In one embodiment, the shoulder position estimation step may detect corner points on the edges, match the corner points detected in the segmentation image against the corner points detected in each of the reduced images, select the matched corner points, and estimate the shoulder position according to the positions of the selected corner points.

In one embodiment, the shoulder position estimation step may include: a corner point detection step of detecting corner points on the edge extracted from the segmentation image and the edges extracted from the reduced images; a corner point matching step of determining, at a reference resolution, whether the positions of the detected corner points match one another, and selecting the matched corner points when they are determined to match; and a shoulder positioning step of determining the shoulder position according to the positions of the selected corner points.

In one embodiment, the corner point detection step may detect at least one corner point on the edge using a Local Binary Pattern whose pattern values are set in the downward direction and in the left or right direction.

In one embodiment, the corner point matching step may map the corner points detected in the reduced images onto the segmentation image, and determine that corner points match one another when a corner point detected in the segmentation image and a mapped corner point lie within a predetermined distance of each other.

In one embodiment, the shoulder positioning step may calculate distances between the corner points selected in the corner point matching step, select a corner point from among them based on the calculated distances, and determine the shoulder position according to the position of the selected corner point.

According to an aspect of the present invention, there is provided a method of detecting a waist position in an image, the method comprising: a reference body part detection step of detecting a reference body part in an image; a waist region setting step of setting a waist region in the image according to the position of the detected reference body part; an image segmentation step of generating a segmentation image by segmenting an image block corresponding to the waist region into a plurality of regions; and a waist detection step of extracting an edge from the segmentation image, detecting vertical line points on the extracted edge, and detecting a waist position according to the positions of the detected vertical line points.

In one embodiment, the reference body part detection step includes: a face detection step of detecting a face in the image; and a shoulder detection step of detecting a shoulder in the image, and the waist region setting step may set as the waist region a block, below the detected face and with reference to the detected face position, whose size and position are determined by at least one of the width and the height of the detected shoulder.

In one embodiment, the image segmentation step may divide the image block corresponding to the waist region into a plurality of regions and generate the segmentation image by setting the image signal values of the pixels so that pixels belonging to the same region have values within a predetermined range.

In one embodiment, the waist detecting step may include: a multi-scale image generating step of generating at least one reduced image by reducing the resolution of the segmentation image by at least one ratio; An edge extraction step of extracting the edge from the segmentation image and each of the reduced images; And a waist position estimating step of detecting the vertical line point at the edge, matching the detected vertical line points, and estimating the waist position using the matched vertical line point.

In one embodiment, the waist position estimation step may detect vertical line points on the edges, match the vertical line points detected in the segmentation image against the vertical line points detected in each of the reduced images, select the matched vertical line points, and estimate the waist position according to the positions of the selected vertical line points.

In one embodiment, the waist position estimation step may include: a vertical line point detection step of detecting vertical line points on the edge extracted from the segmentation image and the edges extracted from the reduced images; a vertical line point matching step of determining, at a reference resolution, whether the positions of the detected vertical line points match one another, and selecting the matched vertical line points when they are determined to match; and a waist positioning step of determining the waist position according to the positions of the selected vertical line points.

In one embodiment, the vertical line point detection step may detect at least one vertical line point on the edge using a Local Binary Pattern whose pattern values are set in the vertical direction.

In one embodiment, the vertical line point matching step may map the vertical line points detected in the reduced images onto the segmentation image, and determine that vertical line points match one another when a vertical line point detected in the segmentation image and a mapped vertical line point lie within a predetermined distance of each other.

In one embodiment, the method further comprises a reference point setting step of setting a reference point in the waist region, and the waist positioning step may calculate the distance between each vertical line point selected in the vertical line point matching step and the set reference point, select a vertical line point from among them based on the calculated distances, and determine the waist position according to the position of the selected vertical line point.

According to one aspect of the present invention, there is provided an apparatus for detecting a shoulder position in an image, the apparatus comprising: a reference body part detector for detecting a reference body part in an image; a shoulder region setting unit for setting a shoulder region in the image according to the position of the detected reference body part; an image segmentation unit for generating a segmentation image by dividing an image block corresponding to the shoulder region into a plurality of regions; and a shoulder detection unit for extracting an edge from the segmentation image, detecting corner points on the extracted edge, and detecting the shoulder position according to the positions of the detected corner points.

According to an aspect of the present invention, there is provided an apparatus for detecting a waist position in an image, the apparatus comprising: a reference body part detector for detecting a reference body part in an image; a waist region setting unit for setting a waist region in the image according to the position of the detected reference body part; an image segmentation unit for generating a segmentation image by applying image segmentation to an image block corresponding to the waist region; and a waist detector for extracting an edge from the segmentation image, detecting vertical line points on the extracted edge, and detecting the waist position according to the positions of the detected vertical line points.

According to the body detection method of the present invention, the shoulder and waist can be detected quickly and reliably in an image, even when a varied background is present or the person is wearing clothes. By using the image segmentation technique, the method can distinguish the background from the clothing even when their colors are similar, and because it detects corner points at multiple resolutions and matches them back to the original image before locating the shoulder or the waist, these parts can be detected more accurately.

In addition, the body detecting method according to the present invention can be used for an augmented reality service in which a clothing image is more accurately added to a person present in a two-dimensional image based on the detected positions of the shoulders and the waist.

FIG. 1 is a flowchart of a method of detecting a shoulder position in an image according to an embodiment of the present invention.
FIG. 2 is a reference diagram for explaining the operation of the shoulder region setting step.
FIG. 3 is a reference view showing part of the image block corresponding to the shoulder region.
FIG. 4 is a reference diagram showing a segmentation image generated by image segmentation of the image block of FIG. 3.
FIG. 5 is a detailed flowchart of the shoulder detection step.
FIG. 6 is a reference diagram showing edges extracted from the segmentation image and the reduced images.
FIG. 7 is a detailed flowchart of the shoulder position estimation step.
FIG. 8 is a reference view showing the local binary patterns used for detecting corner points corresponding to the shoulders.
FIG. 9 is a reference diagram for explaining the operation of the corner point matching step.
FIG. 10 is a reference diagram for explaining the shoulder positioning step.
FIG. 11 is a flowchart of a waist position detection method according to the present invention.
FIG. 12 is a detailed flowchart of the waist detection step.
FIG. 13 is a detailed flowchart of the waist position estimation step.
FIG. 14 is a reference diagram showing the local binary pattern set for detecting vertical line points.
FIG. 15 is a flowchart of a waist position detection method including a reference point setting step.
FIG. 16 is a block diagram of an apparatus for detecting a shoulder position in an image according to the above embodiment.
FIG. 17 is a block diagram of an apparatus for detecting a waist position in an image according to the above embodiment.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the drawings, the same reference numerals are used to designate the same or similar components throughout the drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear. In addition, the preferred embodiments of the present invention will be described below, but it is needless to say that the technical idea of the present invention is not limited thereto and can be variously modified by those skilled in the art.

FIG. 1 is a flowchart of a method of detecting a shoulder position in an image according to an embodiment of the present invention.

The method of detecting a shoulder position in an image according to an embodiment of the present invention includes a reference body part detection step S100, a shoulder area setting step S200, an image segmentation step S300, and a shoulder detection step S400 .

The reference body part detection step (S100) detects the reference body part from the image.

The shoulder region setting step S200 sets the shoulder region in the image according to the detected position of the reference body part.

In the image segmentation step S300, an image block corresponding to the shoulder region is subjected to image segmentation to generate a segmentation image divided into a plurality of regions.

In the shoulder detection step S400, an edge is extracted from the segmentation image, corner points are detected on the extracted edge, and the shoulder position is detected according to the positions of the detected corner points.

First, the reference body part detection step S100 will be described.

The reference body part detection step (S100) detects the reference body part from the image. The reference body part means a body part used as a reference in setting a shoulder area in which a shoulder may exist in the image in order to detect the position of the shoulder in the method of detecting the shoulder position according to the present invention. The reference body part can be various parts of the human body as needed. For example, it can be a face, a neck, a hand, a foot, and the like.

Here, the reference body part detection step (S100) may preferably detect a face in the image as the reference body part. A person present in the image is typically covered by various types of clothing; it is therefore preferable that the reference body part be the face, which is not covered by clothes. The reference body part detection step (S100) can detect a face in the image using various known face detection techniques, analyzing the image with predetermined feature information. To detect a face, various existing features may be extracted from the image and used: for example, edge features, corner features, LoG (Laplacian of Gaussian), and DoG (Difference of Gaussian). Various existing feature description schemes, including the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Histograms of Oriented Gradients (HOG), can also be used for face detection. Alternatively, a face may be detected by comparing a template image with regions of the target image.

More specifically, for example, the reference body part detection step (S100) may use the face detection and recognition techniques disclosed in Turk, Matthew, and Alex P. Pentland, "Face recognition using eigenfaces," Computer Vision and Pattern Recognition (CVPR '91), IEEE, 1991; Wiskott, Laurenz, et al., "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence 19.7 (1997): 775-779; and Zhao, Wenyi, et al., "Face recognition: A literature survey," ACM Computing Surveys (CSUR) 35.4 (2003): 399-458. It is needless to say that the reference body part detection step (S100) can also detect faces in an image using various other known face detection techniques.

Next, the shoulder region setting step S200 will be described.

The shoulder region setting step (S200) sets the shoulder region in the image according to the position of the detected reference body part. Here, the shoulder region means a region of the image, of a certain extent, that can contain a human shoulder. The shoulder region may be set with a sufficient margin around the area actually containing the shoulder.

Here, the shoulder region setting step (S200) may set as the shoulder region a block, below the detected face, whose size and position are determined by at least one of the width and the height of the detected face, with reference to the detected face position. Since a person's shoulders lie in a certain area under the face, it is preferable to set the shoulder region based on the position of the face. Also, since the size of the shoulder region is related to the size of the human body, and the size of the body is in turn related to the size of the face, it is preferable to set the size of the shoulder region according to the size of the face. Preferably, the shoulder region is set to have a predetermined width and a predetermined height, both determined from the size of the face, starting from the bottom of the detected face region. In one embodiment, the horizontal length of the shoulder region may be set to N times the face width, and its vertical length to M times the face height. Here, N is preferably set to 3 to 7 and M to 1 to 2; most preferably, N is set to 5 and M to 1.5.
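The geometry above (a block N face-widths wide and M face-heights tall, anchored under the face) can be sketched as follows. This is an illustrative helper, not the patent's implementation; the function name and the (x, y, w, h) box convention are assumptions.

```python
def shoulder_region(face_x, face_y, face_w, face_h, n=5.0, m=1.5):
    """Compute a shoulder search region below a detected face box.

    (face_x, face_y) is the top-left corner of the face box.
    n and m are the width/height multipliers described above
    (preferred values N = 5, M = 1.5). Returns (x, y, w, h) of the
    shoulder region, centred horizontally under the face.
    """
    region_w = n * face_w                          # N x face width
    region_h = m * face_h                          # M x face height
    region_x = face_x + face_w / 2 - region_w / 2  # centre under the face
    region_y = face_y + face_h                     # start below the face box
    return (region_x, region_y, region_w, region_h)
```

For a 40 x 60 face at (100, 50), this yields a 200 x 90 region starting at the bottom edge of the face box.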

FIG. 2 is a reference diagram for explaining the operation of the shoulder region setting step (S200).

As shown in FIG. 2, the shoulder region S may be set based on the face region F detected in the image, as described above.

Next, the image segmentation step S300 will be described.

The image segmentation step (S300) generates a segmentation image divided into a plurality of regions by applying a predetermined image segmentation algorithm to the image block corresponding to the shoulder region. Image segmentation divides an image into a plurality of regions, grouping homogeneous areas of the image into the same region. In the image segmentation step (S300), the image block corresponding to the shoulder region is divided into a plurality of regions, and the image signal values of the pixels are set so that the pixels included in the same region have values within a predetermined range, thereby generating the segmentation image.
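The per-region flattening described above (pixels of one region receiving a common signal value) might be sketched as follows, assuming a per-pixel label map produced by any segmentation algorithm. The helper name and the use of the region mean as the common value are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def flatten_segments(block, labels):
    """Given an image block and a per-pixel region label map (e.g. from
    a graph-based segmentation), replace every pixel with the mean
    signal value of its region, so each region becomes homogeneous."""
    out = np.empty_like(block, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = block[mask].mean()  # one constant value per region
    return out
```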

FIG. 3 is a reference view showing part of the image block corresponding to the shoulder region.

FIG. 4 is a reference view showing a segmentation image generated by image segmentation of the image block in FIG. 3. As shown in FIG. 4, the image segmentation step (S300) may divide the shoulder region image block into a plurality of regions and set the image signal values of the pixels in the same region to the same value, generating a segmentation image in which the regions are distinguished from one another.

For this, the image segmentation step (S300) may use a graph-based segmentation technique, and the image block corresponding to the shoulder region may also be segmented using various other known image segmentation algorithms. For example, the image segmentation step (S300) may divide the image into a plurality of regions using the segmentation techniques disclosed in Shi, Jianbo, and Jitendra Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence 22.8 (2000): 888-905; Pal, Nikhil R., and Sankar K. Pal, "A review on image segmentation techniques," Pattern Recognition 26.9 (1993): 1277-1294; and Felzenszwalb, Pedro F., and Daniel P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision 59.2 (2004): 167-181. It should be noted that the image segmentation step (S300) can also divide the image using various other known image segmentation algorithms.

Next, the shoulder detection step (S400) will be described.

In the shoulder detection step (S400), an edge is extracted from the segmentation image, corner points are detected on the extracted edge, and the shoulder position is detected according to the positions of the detected corner points. This is because a point on the extracted edge corresponding to a human shoulder is highly likely to appear as a corner point.

More specifically, the shoulder detection step S400 may include a multi-scale image generation step S410, an edge extraction step S420, and a shoulder position estimation step S430.

FIG. 5 is a detailed flowchart of the shoulder detection step (S400).

In the multi-scale image generation step (S410), the resolution of the segmentation image is reduced by at least one ratio to generate at least one reduced image. For example, the segmentation image can be reduced by ratios of 1/2, 1/4, and 1/8 to generate the reduced images. If the segmentation image has a size of 256 x 256, the reduced images then have sizes of 128 x 128, 64 x 64, and 32 x 32, forming a series of multi-scale images. Such image reduction may be performed through downsampling, and may further include filtering to remove noise or errors that can occur during the reduction process.
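A minimal sketch of the multi-scale pyramid above, using plain decimation (every k-th pixel). The function name and ratio list are assumptions; as the text notes, a practical system might also low-pass filter before downsampling to suppress aliasing.

```python
import numpy as np

def build_pyramid(img, ratios=(2, 4, 8)):
    """Build reduced copies of the segmentation image at 1/2, 1/4, and
    1/8 resolution by keeping every k-th pixel in each dimension."""
    return [img[::k, ::k] for k in ratios]
```

Applied to a 256 x 256 segmentation image, this returns images of 128 x 128, 64 x 64, and 32 x 32, as in the example above.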

The edge extraction step (S420) extracts edges from the segmentation image and from each of the reduced images. Since the segmentation image is segmented so that homogeneous regions share the same pixel value, or values within a predetermined range, edges are extracted along the boundaries of the regions; the same holds for the reduced images generated from the segmentation image. Here, the edge extraction step (S420) may use the Canny edge detection algorithm, and edges may also be extracted using various other known edge detection algorithms, for example those disclosed in Perona, Pietro, and Jitendra Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence 12.7 (1990): 629-639, and Ziou, Djemel, and Salvatore Tabbone, "Edge detection techniques - an overview," Pattern Recognition and Image Analysis 8 (1998): 537-559. Edges can likewise be detected using various known edge detection methods other than the examples above.
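Because segment interiors are constant-valued, region boundaries carry all the edge response, so a much simpler boundary test than Canny can illustrate the idea. The following is an illustrative stand-in, not the patent's Canny-based step: a pixel is marked as an edge pixel when it differs from its right or lower neighbour.

```python
import numpy as np

def region_edges(seg):
    """Mark boundary pixels of a flattened segmentation image: a pixel
    is an edge pixel if it differs from its right or lower neighbour.
    This exploits the fact that segment interiors are constant, so all
    edges lie on region boundaries."""
    edge = np.zeros(seg.shape, dtype=bool)
    edge[:, :-1] |= seg[:, :-1] != seg[:, 1:]  # boundaries between columns
    edge[:-1, :] |= seg[:-1, :] != seg[1:, :]  # boundaries between rows
    return edge
```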

FIG. 6 is a reference diagram showing edges extracted from the segmentation image and the reduced images. FIG. 6(a) shows an edge extracted from the segmentation image, and FIGS. 6(b) and 6(c) show edges extracted from the reduced images, enlarged at a predetermined ratio to the same size as the segmentation image.

The shoulder position estimation step (S430) detects corner points on the edges, matches the detected corner points, and estimates the shoulder position using the matched corner points. Here, the shoulder position estimation step (S430) may detect corner points on the edges, match the corner points detected in the segmentation image against those detected in each of the reduced images, select the matched corner points, and estimate the shoulder position according to the positions of the selected corner points.

More specifically, the shoulder position estimation step (S430) may include a corner point detection step (S431), a corner point matching step (S432), and a shoulder positioning step (S433).

FIG. 7 is a detailed flowchart of the shoulder position estimation step (S430).

The corner point detection step (S431) detects corner points on the edge extracted from the segmentation image and on the edges extracted from the reduced images. Here, a corner point can be defined as a point on the edge where the two edge segments extending from it in either direction meet at an angle within a predetermined range. The predetermined range may be set around 90 degrees, for example between 60 and 120 degrees. In particular, for a corner point corresponding to a shoulder, the line running from the shoulder down the outer side of the arm proceeds roughly vertically, while the line running from the shoulder to the neck proceeds roughly horizontally. Therefore, in the corner point detection step (S431), it is preferable to detect the edge portion corresponding to this corner of the shoulder as a corner point.

For this purpose, the corner point detection step (S431) may detect at least one corner point on the edge using a Local Binary Pattern whose pattern values are set in the downward direction and in the left or right direction. A local binary pattern compares the image signal values of a predetermined number of pixels neighboring a specific pixel with the image signal value of that pixel, assigns a value of 0 or 1 to each neighbor according to the comparison result, and arranges the resulting bits in order. The neighboring pixels may be arranged around the specific pixel so as to cover predetermined directions. For example, if the image signal value of the specific pixel is greater than that of a first neighboring pixel, the bit corresponding to the first neighboring pixel may be set to 0, and if the image signal value of a second neighboring pixel is greater than that of the specific pixel, the bit corresponding to the second neighboring pixel may be set to 1; 0 and 1 may also be swapped as required. For example, if a total of 8 neighboring pixels are set around a specific pixel using a 3 x 3 kernel, the binary pattern has 8 bits, each set to 0 or 1.
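The 8-bit, 3 x 3 local binary pattern described above might be computed as follows. The bit order (clockwise from the top-left neighbour) and the 0/1 polarity are illustrative choices; as the text notes, they may be swapped, and the patent's shoulder detector uses larger directional kernels (see the 7 x 7 example of FIG. 8).

```python
def local_binary_pattern(img, r, c):
    """8-bit local binary pattern at pixel (r, c) with a 3 x 3 kernel:
    each neighbouring pixel contributes bit 1 if its value exceeds the
    centre pixel's value, else bit 0, read clockwise from the top-left
    neighbour."""
    centre = img[r][c]
    # clockwise offsets starting at the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = 0
    for dr, dc in offsets:
        bits = (bits << 1) | (1 if img[r + dr][c + dc] > centre else 0)
    return bits
```

A centre pixel darker than all 8 neighbours yields the pattern 11111111 (255); a uniform patch yields 0.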

Here, the corner point detection step (S431) calculates a local binary pattern at each point on the edge, and if the calculated pattern is determined to belong to the set of binary patterns corresponding to shoulder corner points, that point can be detected as a corner point. The binary patterns corresponding to corner points can be set in advance according to the shape of the shoulder.

FIG. 8 is a reference view showing the local binary patterns used for detecting corner points corresponding to the shoulders, here with a 7 x 7 kernel. FIG. 8(a) corresponds to the left shoulder and FIG. 8(b) to the right shoulder, with directions numbered 0 to 15 as shown. As in FIG. 8(a), the binary pattern values of the kernel pixels located on the lines with directions 3, 4, 5, 7, 8, 9, and 10 may be set to 0 or 1; likewise, as in FIG. 8(b), the binary pattern values of the kernel pixels located on the lines with directions 6, 7, 8, 9, 11, 12, and 13 may be set to 0 or 1.

As described above, the corner point detection step S431 calculates a local binary pattern for each point on the edge, then compares it with the predefined binary patterns of shoulder corner points to decide whether the point is a corner point. Here, a plurality of corner points can be detected in the segmentation image and in each of the reduced images, and these corner points become the first-order candidates for the shoulder position.

Next, the corner point matching step S432 determines whether the positions of the detected corner points match each other at a reference resolution and, if they are determined to match, selects the matching corner points. To do so, the corner point matching step S432 may map the corner points detected in each reduced image onto the segmentation image, and may determine that corner points match when a corner point detected in the segmentation image and a mapped corner point lie within a predetermined distance of each other.

FIG. 9 is a reference diagram for explaining the operation of the corner point matching step S432. As shown in FIG. 9, the corner points detected in the image reduced by a factor of 2 and those detected in the image reduced by a factor of 4 are enlarged to the size of the segmentation image and mapped onto it. Corner points of the segmentation image that lie within a predetermined distance of the mapped corner points are considered matched and can be selected as second-order candidates for the shoulder position.

In an embodiment, the corner point matching step S432 selects as the second-order candidate group the corner points of the segmentation image that match the corner points of the reduced images mapped to the segmentation image size. Here, it is preferable to select only those corner points of the segmentation image that have matching corner points for all of the reduced images. For example, if two reduced images are generated, a corner point of the segmentation image is selected as a second-order candidate only if corner points detected in both reduced images map to within the predetermined distance of it.
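To make the matching rule concrete, here is a minimal sketch requiring a match from every reduced image; the coordinates, scales, and distance threshold are illustrative assumptions, not values from the patent:

```python
def match_corners(base_points, reduced_points_by_scale, max_dist=3.0):
    """Select second-order candidates: corner points of the
    full-resolution segmentation image that have a matching corner
    (within max_dist pixels) mapped up from EVERY reduced image.

    base_points: list of (x, y) in the segmentation image.
    reduced_points_by_scale: {scale: [(x, y), ...]} in reduced images.
    """
    selected = []
    for bx, by in base_points:
        matched_all = True
        for scale, pts in reduced_points_by_scale.items():
            # Map reduced-image coordinates back to full resolution.
            mapped = [(px * scale, py * scale) for px, py in pts]
            if not any((bx - mx) ** 2 + (by - my) ** 2 <= max_dist ** 2
                       for mx, my in mapped):
                matched_all = False
                break
        if matched_all:
            selected.append((bx, by))
    return selected

# Corner (40, 20) is confirmed at both scales; (90, 60) only at x2.
base = [(40, 20), (90, 60)]
reduced = {2: [(20, 10), (45, 31)], 4: [(10, 5)]}
print(match_corners(base, reduced))  # prints [(40, 20)]
```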

Next, the shoulder positioning step S433 determines the shoulder position according to the positions of the selected corner points. It may select one corner point for each of the left and right shoulders from among the corner points selected as the second-order candidate group in the corner point matching step S432, or it may calculate the shoulder position using the coordinates of the selected corner points. For example, the shoulder position may be determined by averaging the coordinates of the corner points selected for each of the left shoulder and the right shoulder, or by median filtering; filtering to remove outliers may also be performed in this process. Here, the selected corner points of the left shoulder and the right shoulder can be distinguished based on their coordinates.

In one embodiment, the shoulder positioning step S433 calculates the distances between the corner points selected in the corner point matching step S432, selects a corner point from among them based on the calculated distances, and determines the shoulder position according to the position of the selected corner point. For example, for each selected corner point, the sum of its distances to the other selected corner points is calculated, and the corner point with the smallest sum can be determined as the final shoulder position.
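The smallest-sum-of-distances rule can be sketched as follows (this is the medoid of the candidate set; the candidate coordinates below are made up for illustration):

```python
import math

def pick_shoulder_point(candidates):
    """Choose the candidate whose summed distance to all other
    selected corner points is smallest, as one way to settle on a
    single final shoulder position from the second-order candidates."""
    def total_dist(p):
        return sum(math.dist(p, q) for q in candidates)
    return min(candidates, key=total_dist)

points = [(10, 10), (11, 10), (30, 25)]
print(pick_shoulder_point(points))  # prints (11, 10)
```

The point near the cluster centre wins, so an isolated outlier such as (30, 25) cannot become the final position.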

FIG. 10 is a reference diagram for explaining the shoulder positioning step S433. Any one of the plurality of selected corner points M may be determined as the shoulder position F, as shown in FIG. 10.

A method for detecting a waist position in an image according to another embodiment of the present invention includes a reference body part detection step S1000, a waist region setting step S2000, an image segmentation step S3000, and a waist detection step S4000.

FIG. 11 is a flowchart of the waist position detecting method according to the present invention.

The reference body part detection step S1000 detects a reference body part in the image. In one embodiment, the face and the shoulders are detected as reference body parts. For this, the reference body part detection step S1000 may include a face detection step of detecting a face in the image and a shoulder detection step of detecting a shoulder in the image. Here, the face and the shoulder can be detected in the same manner as described above for the shoulder position detecting method according to the present invention.

The waist region setting step S2000 sets the waist region in the image according to the detected position of the reference body part. At this time, the waist region setting step S2000 may set as the waist region a block below the detected face whose size and position are determined, relative to the detected face position, by at least one of the width and the height of the detected shoulder. For example, the waist region W can be set according to the detected face region F and the position and size of the shoulder region S, as shown in the drawings. In one embodiment, the horizontal width of the waist region may be set to N times the shoulder width, and points located M1 to M2 times the shoulder width below the lower end of the face may be set as the upper and lower limits of the waist region in the vertical direction. For example, N may be 1, M1 may be 1, and M2 may be 1.75; these values may be set differently as required.
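Under the stated multipliers, the waist region might be computed roughly as follows; the (x, y, w, h) box convention and the centring of the region under the face are assumptions for illustration:

```python
def waist_region(face_box, shoulder_width, n=1.0, m1=1.0, m2=1.75):
    """Sketch of the waist-region rule described above, using the
    example multipliers N=1, M1=1, M2=1.75 (all adjustable).

    face_box: (x, y, w, h) of the detected face; the region is
    centred under the face, n * shoulder_width wide, and spans from
    m1 to m2 shoulder-widths below the bottom of the face.
    """
    fx, fy, fw, fh = face_box
    face_cx = fx + fw / 2.0
    face_bottom = fy + fh
    width = n * shoulder_width
    top = face_bottom + m1 * shoulder_width
    bottom = face_bottom + m2 * shoulder_width
    left = face_cx - width / 2.0
    return (left, top, width, bottom - top)  # (x, y, w, h)

print(waist_region((100, 50, 40, 40), shoulder_width=80))
```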

In the image segmentation step S3000, an image block corresponding to the waist region is subjected to image segmentation to generate a segmentation image divided into a plurality of regions. That is, the image block corresponding to the waist region is divided into a plurality of regions, and the image signal values of the pixels are set so that pixels included in the same region have image signal values within a predetermined range, to generate the segmentation image. Here, the image segmentation can be performed in the same manner as described above for the shoulder position detecting method according to the present invention.

In the waist detection step S4000, an edge is extracted from the segmentation image, vertical line points are detected on the extracted edge, and the waist position is detected according to the positions of the detected vertical line points.

Next, the operation of the waist detection step (S4000) will be described in more detail.

FIG. 12 is a detailed flowchart of the waist detection step S4000.

The waist detection step S4000 may include a multi-scale image generation step S4100, an edge extraction step S4200, and a waist position estimation step S4300.

In the multi-scale image generation step S4100, the resolution of the segmentation image is reduced by at least one ratio to generate at least one reduced image. Here, the reduced images can be generated in the same manner as described above for the shoulder position detecting method according to the present invention.
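One plausible way to build the reduced images is block averaging, shown below; the patent does not fix the resampling method, so this is only an assumed implementation:

```python
import numpy as np

def reduce_by(img, factor):
    """Generate a reduced image by block-averaging: resolution is
    divided by `factor` in each direction. Applying this with
    factors such as 2 and 4 yields the multi-scale set."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(reduce_by(img, 2).shape)  # prints (2, 2)
```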

The edge extraction step S4200 extracts edges from the segmentation image and from each of the reduced images. Here, the edges can also be extracted in the same manner as described above for the shoulder position detecting method according to the present invention.

The waist position estimation step S4300 detects vertical line points on the edges, matches the detected vertical line points, and estimates the waist position using the matched vertical line points. That is, the waist position estimation step S4300 may detect vertical line points on the edges, match the vertical line points detected in the segmentation image with those detected in each of the reduced images, select the matched vertical line points, and estimate the waist position according to the positions of the selected vertical line points.

For this, the waist position estimation step may include a vertical line point detection step S4310, a vertical line point matching step S4320, and a waist positioning step S4330.

FIG. 13 is a detailed flowchart of the waist position estimation step.

The vertical line point detection step S4310 detects vertical line points on the edges extracted from the segmentation image and from the reduced images. Here, a point on an edge may be defined as a vertical line point when the edge segments connected to it in both directions proceed at an angle within a predetermined range; this range may be set around 180 degrees, for example from 150 degrees to 210 degrees. In particular, in the case of the vertical line corresponding to the waist, the line extending upward and downward from the waist tends to proceed in the vertical direction. Accordingly, the vertical line point detection step S4310 preferably detects the edge portions corresponding to this line along the waist as vertical line points.
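A sketch of this test on three consecutive edge points: the near-180-degree range comes from the text (150 to 210 degrees), while the extra near-vertical tolerance is an assumed parameter, not a value given in the patent:

```python
import math

def is_vertical_line_point(prev_pt, pt, next_pt,
                           straight_lo=150.0, vert_tol=30.0):
    """Candidate test for a vertical line point on an edge: the two
    edge branches meeting at `pt` must form an angle near 180 degrees
    (>= straight_lo, per the 150-210 range in the text), and the line
    through them must run close to vertical (within vert_tol degrees).
    Points are (x, y) with y increasing downwards.
    """
    v1 = (prev_pt[0] - pt[0], prev_pt[1] - pt[1])
    v2 = (next_pt[0] - pt[0], next_pt[1] - pt[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return False
    cos_between = max(-1.0, min(1.0, dot / (n1 * n2)))
    between = math.degrees(math.acos(cos_between))
    if between < straight_lo:          # edge bends too sharply
        return False
    # Direction of the through-going line (next minus prev).
    dx, dy = next_pt[0] - prev_pt[0], next_pt[1] - prev_pt[1]
    slope_from_vertical = math.degrees(math.atan2(abs(dx), abs(dy)))
    return slope_from_vertical <= vert_tol

# Nearly straight, nearly vertical run of edge points is accepted:
print(is_vertical_line_point((10, 0), (11, 5), (10, 10)))  # prints True
# A sharp corner is rejected:
print(is_vertical_line_point((0, 0), (5, 5), (10, 5)))     # prints False
```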

Here, the vertical line point detection step S4310 may detect at least one vertical line point on the edge using a local binary pattern in which pattern values are set in the vertical direction. It calculates the local binary pattern at each point on the edge, and if the calculated pattern is determined to belong to the set of binary patterns corresponding to the vertical line of the waist, that point can be detected as a vertical line point. The local binary patterns corresponding to vertical line points can be set in advance according to the shape information of the waist. Here, a plurality of vertical line points can be detected in the segmentation image and in each of the reduced images, and these vertical line points become the first-order candidates for the waist position.

FIG. 14 is a reference diagram showing the local binary pattern set for detecting vertical line points, here for a 7 x 7 kernel. With directions numbered from 0 to 15 as shown in FIG. 14, the binary pattern values of the kernel pixels located on the lines with directions 0, 1, 7, 8, 9, and 15 can be set to 0 or 1 so as to be distinguished from the other pixels.

As described above, the vertical line point detection step S4310 calculates a local binary pattern for each point on the edge, then compares it with the predefined binary patterns of waist vertical line points to decide whether the point is a vertical line point.

The vertical line point matching step S4320 determines whether the positions of the detected vertical line points match each other at a reference resolution and, if they are determined to match, selects the matching vertical line points as second-order candidates for the waist position. Here, the vertical line point matching step S4320 may map the vertical line points detected in each reduced image onto the segmentation image, and may determine that vertical line points match when a vertical line point detected in the segmentation image and a mapped vertical line point lie within a predetermined distance of each other. That is, the second-order candidate group for the waist position can be selected by matching vertical line points in the same manner as the corner point matching described above for the shoulder position detecting method according to the present invention.

The waist positioning step S4330 determines the waist position according to the positions of the selected vertical line points.

Here, the method of detecting a waist position in an image according to the present invention may further include a reference point setting step S2500 of setting reference points in the waist region. A reference point marks a position where the waist is likely to be found, and a total of two can be set, one on each side of the waist. The horizontal (X-axis) coordinates of the reference points can be set to the X coordinates of the left and right sides of the face or shoulder detected in the reference body part detection step S1000, and the vertical (Y-axis) coordinate can be set to the Y-axis center of the waist region set in the waist region setting step S2000. Here, the X axis is the horizontal direction of the image and the Y axis is the vertical direction of the image.
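A minimal sketch of this reference point rule, assuming simple (x, y, w, h) boxes; taking the X coordinates from the shoulder box rather than the face box is one of the two options the text allows:

```python
def waist_reference_points(shoulder_box, waist_box):
    """Set two reference points, one per side of the waist: X from
    the left/right edges of the detected shoulder (or face) box, Y
    from the vertical centre of the waist region."""
    sx, sy, sw, sh = shoulder_box
    wx, wy, ww, wh = waist_box
    y_mid = wy + wh / 2.0
    return (sx, y_mid), (sx + sw, y_mid)

print(waist_reference_points((80, 90, 80, 40), (80, 170, 80, 60)))
# prints ((80, 200.0), (160, 200.0))
```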

FIG. 15 is a flowchart of the waist position detection method including the reference point setting step.

In this case, the waist positioning step S4330 may calculate the distances between the vertical line points selected in the vertical line point matching step S4320 and the set reference points, select vertical line points based on the calculated distances, and determine the waist position according to the positions of the selected vertical line points. In one embodiment, for each of the two reference points, the waist positioning step S4330 may set the second-order candidate vertical line point located closest to that reference point as the final waist position.
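The nearest-candidate rule can then be sketched as follows (candidate and reference coordinates are illustrative):

```python
import math

def waist_from_references(candidates, ref_left, ref_right):
    """Pick the final waist positions: for each of the two reference
    points, choose the second-order candidate vertical line point
    closest to it, as described above."""
    def nearest(ref):
        return min(candidates, key=lambda p: math.dist(p, ref))
    return nearest(ref_left), nearest(ref_right)

cands = [(78, 195), (150, 210), (120, 300)]
print(waist_from_references(cands, (80, 200.0), (160, 200.0)))
# prints ((78, 195), (150, 210))
```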

If necessary, the waist positioning step S4330 may select one vertical line point for each of the left and right sides of the waist from among the vertical line points selected as the second-order candidate group in the vertical line point matching step S4320, or may calculate the waist position using the coordinates of the selected vertical line points. For example, the waist position may be determined by averaging the coordinates of the vertical line points on each side of the waist, or by median filtering; filtering to remove outliers may also be performed in this process. The selected vertical line points on the left and right sides of the waist can be distinguished based on their coordinates.

A shoulder position detecting apparatus according to another embodiment of the present invention may include a reference body part detecting unit 100, a shoulder region setting unit 200, an image segmentation unit 300, and a shoulder detection unit 400. Here, the shoulder position detecting apparatus can operate in the same manner as the shoulder position detecting method according to the present invention described in detail with reference to FIGS. 1 to 10 above; overlapping descriptions are omitted or given briefly.

FIG. 16 is a block diagram of an apparatus for detecting a shoulder position in an image according to the above embodiment.

The reference body part detection unit 100 detects a reference body part from the image.

The shoulder region setting unit 200 sets a shoulder region in the image according to the detected position of the reference body part.

The image segmentation unit 300 generates a segmentation image by dividing an image block corresponding to the shoulder region into a plurality of regions by image segmentation.

The shoulder detection unit 400 extracts an edge from the segmentation image, detects an edge point on the extracted edge, and detects the shoulder position according to the detected position of the edge point.

An apparatus for detecting a waist position in an image according to another embodiment of the present invention may include a reference body part detecting unit 1000, a waist region setting unit 2000, an image segmentation unit 3000, and a waist detection unit 4000. Here, the waist position detecting apparatus can operate in the same manner as the waist position detecting method according to the present invention described in detail with reference to FIGS. 11 to 14 above; overlapping descriptions are omitted or given briefly.

FIG. 17 is a block diagram of an apparatus for detecting a waist position in an image according to the above embodiment.

The reference body part detecting unit 1000 detects a reference body part from the image.

The waist region setting unit 2000 sets a waist region in the image according to the detected position of the reference body part.

The image segmentation unit 3000 generates a segmentation image by dividing an image block corresponding to the waist region into a plurality of regions by image segmentation.

The waist detection unit 4000 extracts an edge from the segmentation image, detects vertical line points on the extracted edge, and detects the waist position according to the positions of the detected vertical line points.

According to the shoulder and waist detecting method and apparatus of the present invention, the shoulder and the waist can be detected quickly and reliably in an image, even when varied backgrounds are present in the image or the person is wearing clothes. Further, by using the image segmentation technique, the method and apparatus can distinguish the body even when the background and the clothes are similar in color; and by detecting corner points or vertical line points at multiple resolutions and matching them against the original segmentation image, the body parts can be detected more accurately.

Also, based on the detected positions of the shoulders and the waist, the shoulder and waist detecting method and apparatus according to the present invention can provide an augmented reality service that overlays a clothing image on a person in a two-dimensional image more accurately.

While all elements constituting the embodiments of the present invention have been described as combined into one or as operating in combination, the present invention is not necessarily limited to these embodiments; within the scope of the present invention, all of the components may instead be selectively combined with one or more of the others.

In addition, although all of the components may each be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of their functions on one or more pieces of hardware. Such a computer program may be stored in a computer-readable medium such as a USB memory, a CD, or a flash memory, and read and executed by a computer to implement an embodiment of the present invention. The recording medium of the computer program may include a magnetic recording medium, an optical recording medium, a carrier wave medium, and the like.

Furthermore, unless otherwise defined in the detailed description, all terms including technical and scientific terms have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used terms, such as those defined in dictionaries, should be interpreted consistently with their contextual meanings in the related art, and are not to be construed in an idealized or overly formal sense unless expressly so defined herein.

It will be apparent to those skilled in the art that various modifications and substitutions are possible without departing from the essential characteristics of the present invention. Therefore, the embodiments disclosed herein and the accompanying drawings are intended to illustrate, not to limit, the technical spirit of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments and drawings. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of their equivalents should be construed as falling within the scope of the present invention.

S100: Reference body part detection step
S200: shoulder region setting step
S300: Image Segmentation Step
S400: Shoulder detection step
S410: Multi-scale image generation step
S420: edge extraction step
S430: Shoulder position estimation step
S431: Corner point detection step
S432: Corner point matching step
S433: Shoulder positioning step
S1000: Reference body part detection step
S2000: Waist area setting step
S2500: Reference point setting step
S3000: Image segmentation step
S4000: Waist detection step
S4100: Multi-scale image generation step
S4200: Edge extraction step
S4300: waist position estimation step
S4310: Vertical line point detection step
S4320: Vertical line point matching step
S4330: Waist positioning step
100: Reference body part detecting part
200: shoulder area setting unit
300: Image Segmentation Unit
400: Shoulder detector
1000: Reference body part detecting part
2000: waist region setting section
3000: image segmentation section
4000: waist detector

Claims (20)

A reference body part detection step of detecting a reference body part in an image;
A shoulder region setting step of setting a shoulder region in the image according to the detected position of the reference body part;
An image segmentation step of segmenting an image block corresponding to the shoulder area into image segments to generate a segmentation image divided into a plurality of regions; And
And a shoulder detection step of extracting an edge from the segmentation image, detecting a corner point on the extracted edge, and detecting a shoulder position according to a position of the detected corner point,
The reference body part detecting step detects a face from the image to the reference body part,
Wherein the shoulder region setting step sets a block having a size and a position according to at least one of a width and a height of the detected face at a lower portion of the detected face based on the detected face position as the shoulder region Wherein the shoulder position detecting means detects the shoulder position in the image.
delete
The method according to claim 1,
The image segmentation step divides an image block corresponding to the shoulder area into a plurality of areas and sets the image signal values of the pixels so that pixels included in the same area have image signal values within a predetermined range , And generates the segmentation image.
The method according to claim 1,
A multi-scale image generation step of generating at least one reduced image by reducing the resolution of the segmentation image by at least one ratio;
An edge extraction step of extracting the edge from the segmentation image and each of the reduced images; And
And a shoulder position estimating step of detecting the edge point at the edge, matching the detected corner points, and estimating the shoulder position using the matched corner point. Detection method.
5. The method of claim 4,
Wherein the step of estimating the shoulder position comprises the steps of: detecting the corner point at the edge; matching the corner point detected in the segmentation image with the corner point detected in each of the reduced images, selecting the matched corner point; And estimating the shoulder position according to the position of the selected corner point.
6. The method according to claim 5,
A corner point detecting step of detecting the corner point from the edge extracted from the segmentation image and the edge extracted from the scaled image;
A corner point matching step of determining whether the positions of the detected corner points are matched with each other at a reference resolution and selecting the matched corner points when it is determined that the detected corner points are matched; And
And determining a shoulder position according to the position of the selected corner point.
The method according to claim 6,
Wherein the corner point detecting step detects at least one corner point on the edge using a Local Binary Pattern in which a pattern value is set in a downward direction and a leftward or rightward direction.
7. The method of claim 6,
Mapping the corner points detected in the reduced image to the segmentation image,
If the edge points detected in the segmentation image and the mapped corner points are within a predetermined distance, it is determined that the corner points match each other.
The method according to claim 6,
Wherein the shoulder positioning step calculates distances between the corner points selected in the corner point matching step, selects a corner point from the selected corner points based on the calculated distances, and determines the shoulder position according to the position of the selected corner point.
A reference body part detection step of detecting a reference body part in an image;
A waist region setting step of setting a waist region in the image according to the detected position of the reference body part;
An image segmentation step of segmenting an image block corresponding to the waist region into a plurality of segments to generate a segmentation image; And
A waist detecting step of extracting an edge from the segmentation image, detecting a vertical line point on the extracted edge, and detecting a waist position according to the position of the detected vertical line point,
Wherein the reference body part detection step comprises:
A face detecting step of detecting a face in the image; And
And a shoulder detecting step of detecting a shoulder in the image,
Wherein the waist region setting step includes a step of setting a block having a size and a position according to at least one of a width and a height of the detected shoulder at a lower portion of the detected face with reference to the detected face position as the waist region Wherein the position detecting means detects the position of the waist position in the image.
delete
11. The method of claim 10,
The image segmentation step divides an image block corresponding to the waist region into a plurality of areas and sets the image signal values of the pixels so that pixels included in the same area have image signal values within a predetermined range And generating the segmentation image.
11. The method according to claim 10,
A multi-scale image generation step of generating at least one reduced image by reducing the resolution of the segmentation image by at least one ratio;
An edge extraction step of extracting the edge from the segmentation image and each of the reduced images; And
And a waist position estimation step of detecting the vertical line point on the edge, matching the detected vertical line points, and estimating the waist position using the matched vertical line point. Of the waist position.
14. The method of claim 13,
Wherein the waist position estimating step comprises the steps of: detecting the vertical line point at the edge; matching the vertical line point detected in the segmentation image with the vertical line point detected in each of the reduced images, And estimating the position of the waist according to the position of the selected vertical line point.
15. The method of claim 14,
A vertical line image point detection step of detecting the vertical line image point at the edge extracted from the segmentation image and the edge extracted from the reduced image;
A vertical line point matching step of determining whether the positions of the detected vertical line points are matched with each other at a reference resolution and selecting the matched vertical line points when it is determined that the detected vertical line points are matched; And
And determining a position of the waist according to the position of the selected vertical line point.
16. The method of claim 15,
Wherein the vertical line point detection step detects at least one vertical line point on the edge using a Local Binary Pattern in which a pattern value is set in the vertical direction.
17. The method of claim 16, wherein the vertical line point matching step comprises:
Mapping the vertical line points detected in the reduced image to the segmentation image,
And determining that the vertical line points match each other when the vertical line point detected in the segmentation image and the mapped vertical line point are within a predetermined distance.
17. The method of claim 16,
Further comprising a reference point setting step of setting a reference point in the waist region,
Wherein the waist positioning step calculates distances between the vertical line points selected in the vertical line point matching step and the set reference point, selects a vertical line point from the selected vertical line points based on the calculated distances, and determines the waist position according to the position of the selected vertical line point.
A reference body part detector for detecting a reference body part in an image;
A shoulder region setting unit for setting a shoulder region in the image according to the detected position of the reference body region;
An image segmentation unit for segmenting an image block corresponding to the shoulder region into a plurality of regions to generate a segmentation image; And
And a shoulder detection unit for extracting an edge from the segmentation image, detecting a corner point on the extracted edge, and detecting a shoulder position according to a position of the detected corner point,
Wherein the reference body part detecting unit detects a face from the image to the reference body part,
Wherein the shoulder area setting unit sets a block having a size and a position according to at least one of a width and a height of the detected face on the lower side of the detected face based on the detected face position as the shoulder area Wherein the shoulder position detecting device detects a shoulder position in the image.
A reference body part detector for detecting a reference body part in an image;
A waist region setting unit for setting a waist region in the image according to the detected position of the reference body region;
An image segmentation unit for generating a segmentation image by dividing an image block corresponding to the waist region into image segments by image segmentation; And
And a waist detection unit for extracting an edge from the segmentation image, detecting a vertical line point on the extracted edge, and detecting a waist position according to the position of the detected vertical line point,
The reference body part detecting unit detects a face in the image, detects a shoulder in the image,
The waist region setting unit sets a block in which the size and position are set according to at least one of the width and the height of the detected shoulder in the lower portion of the detected face based on the detected face position as the waist region Wherein the position detecting means detects the position of the waist position in the image.
KR1020150173993A 2015-12-08 2015-12-08 Apparatus and Method of Body Part Detection in Image KR101749029B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150173993A KR101749029B1 (en) 2015-12-08 2015-12-08 Apparatus and Method of Body Part Detection in Image

Publications (2)

Publication Number Publication Date
KR20170067383A (en) 2017-06-16
KR101749029B1 (en) 2017-06-20

Family

ID=59278392

Country Status (1)

Country Link
KR (1) KR101749029B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102357763B1 (en) * 2019-12-02 2022-02-03 주식회사 알체라 Method and apparatus for controlling device by gesture recognition
CN112669342B (en) * 2020-12-25 2024-05-10 北京达佳互联信息技术有限公司 Training method and device of image segmentation network, and image segmentation method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101173853B1 (en) * 2012-04-18 2012-08-14 (주)한국알파시스템 Apparatus and method for Multi-Object Recognition
KR101307984B1 (en) * 2012-09-04 2013-09-26 전남대학교산학협력단 Method of robust main body parts estimation using descriptor and machine learning for pose recognition

Similar Documents

Publication Publication Date Title
US10719727B2 (en) Method and system for determining at least one property related to at least part of a real environment
US11037325B2 (en) Information processing apparatus and method of controlling the same
US9020251B2 (en) Image processing apparatus and method
Hu et al. Clothing segmentation using foreground and background estimation based on the constrained Delaunay triangulation
CN106504271A (en) Method and apparatus for eye tracking
US9443137B2 (en) Apparatus and method for detecting body parts
US10489640B2 (en) Determination device and determination method of persons included in imaging data
JP4964171B2 (en) Target region extraction method, apparatus, and program
EP2977932B1 (en) Image processing apparatus, image processing method and image processing program
JP2010117772A (en) Feature value extracting device, object identification device, and feature value extracting method
JP5656768B2 (en) Image feature extraction device and program thereof
US20150269778A1 (en) Identification device, identification method, and computer program product
US8891879B2 (en) Image processing apparatus, image processing method, and program
Ecins et al. Shadow free segmentation in still images using local density measure
KR101749029B1 (en) Apparatus and Method of Body Part Detection in Image
KR101749030B1 (en) Apparatus and Method of Body Part Detection in Image
Jacques et al. Head-shoulder human contour estimation in still images
JP2016081472A (en) Image processing device, and image processing method and program
US20170301091A1 (en) Image processing apparatus, image processing method, and storage medium
US20230071054A1 (en) Height estimation method, height estimation apparatus, and program
Jacques et al. Improved head-shoulder human contour estimation through clusters of learned shape models
Malashin et al. Restoring a silhouette of the hand in the problem of recognizing gestures by adaptive morphological filtering of a binary image
KR20110044392A (en) Image processing apparatus and method
US10878229B2 (en) Shape discrimination device, shape discrimination method and shape discrimination program
Prada et al. Improving object extraction with depth-based methods

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant