WO2011007993A2 - Apparatus for detecting person and method thereof - Google Patents

Apparatus for detecting person and method thereof

Info

Publication number
WO2011007993A2
WO2011007993A2 (PCT/KR2010/004525)
Authority
WO
WIPO (PCT)
Prior art keywords
person
image
information
attribute information
combined image
Prior art date
Application number
PCT/KR2010/004525
Other languages
French (fr)
Other versions
WO2011007993A3 (en)
Inventor
Hyunjong Ji
Original Assignee
Lg Innotek Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lg Innotek Co., Ltd. filed Critical Lg Innotek Co., Ltd.
Publication of WO2011007993A2 publication Critical patent/WO2011007993A2/en
Publication of WO2011007993A3 publication Critical patent/WO2011007993A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/174 Segmentation; Edge detection involving the use of two or more images
    • G06T 7/12 Edge-based segmentation
    • G06T 7/90 Determination of colour characteristics
    • G06T 7/97 Determining parameters from multiple pictures
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an apparatus for detecting a person and a method thereof, wherein a video image obtained using color information and distance information is divided into plural parts, and each part is compared with attribute information of pre-defined person parts to determine a person and to detect the color information of each part of the person and the posture of the person. The attribute information of the person parts includes 3D shape information of each part of the person based on various postures.

Description

APPARATUS FOR DETECTING PERSON AND METHOD THEREOF
The present invention relates to an apparatus for detecting a person and a method thereof.
A person detecting technique seeks the area occupied by a person in a video image or moving picture data that includes a person. Such a technique may be utilized for security and checking systems, customer management in large shopping malls, and personal guard systems.
The conventional person detecting technique focuses on how to detect a person in a video image or moving picture data. In general, it utilizes color information, edge information and distance information of a 2-dimensional (2D) video image obtained by a camera to detect a person, typically by detecting the head (head portion) of a person, or a body centered on the head.
Alternatively, a silhouette of an object person is extracted by removing image content farther than a predetermined distance from a camera and by comparing same-size blocks across the video images obtained by two cameras. That is, the conventional person detecting technique has largely been limited to extracting edges from a 2D image to detect a head, or to grasping a 2D person image using distance information.
The present invention is disclosed to provide an apparatus for detecting a person, configured to obtain color information of each portion comprising a person and posture information of the person using a color image and a distance image, and a method thereof.
Technical problems to be solved by the present invention are not restricted to those mentioned above, and any other technical problems not mentioned so far will be clearly appreciated from the following description by those skilled in the art.
An object of the invention is to solve at least one or more of the above problems and/or disadvantages, in whole or in part, and to provide at least the advantages described hereinafter. In order to achieve at least the above objects, in whole or in part, and in accordance with the purposes of the invention, as embodied and broadly described, in one general aspect of the present invention there is provided an apparatus for detecting a person, characterized by comprising: an image processor respectively dividing a color image showing a video image in color information and a distance image showing the video image in distance information into a plurality of parts based on respective pre-defined standards, and integrating the divided parts to generate a combined image comprising the plurality of parts; a comparator comparing each part of the combined image with shape information in attribute information of pre-stored person parts to detect each part of the combined image matching the attribute information as a person part; a determinator combining the parts of the combined image detected as person parts by the comparator to determine them as a person; and a detector using the color image of the image processor to detect color information of each part of the person determined by the determinator, and referring to posture information in the attribute information to detect posture information of the determined person, wherein the attribute information includes the shape information of each part of the person based on the posture of the person, and the posture information.
In another general aspect of the present invention, there is provided a method for detecting a person, characterized by comprising: respectively dividing a color image showing a video image in color information and a distance image showing the video image in distance information into a plurality of parts based on respective pre-defined standards, and combining the divided parts to generate a combined image comprising the plurality of parts; comparing each part of the combined image with shape information in attribute information of pre-stored person parts to detect each part of the combined image matching the attribute information as a person part; combining the parts of the combined image detected as person parts to determine them as a person; and using the color image to detect color information of each part of the determined person, and referring to posture information in the attribute information to detect posture information of the determined person, wherein the attribute information includes the shape information of each part of the person based on the posture of the person, and the posture information.
The apparatus for detecting a person and the method thereof according to the present invention have the advantageous effect that the detected posture of a person and detailed person information can be checked, unlike the conventional apparatus and method that simply detect a person, whereby a system capable of detecting an abnormally behaving person or checking the clothes of a visitor can be developed.
FIG.1 is a schematic block diagram illustrating an apparatus for detecting a person according to an exemplary embodiment of the present invention.
FIG.2 is a conceptual diagram illustrating a process of detecting an integrated image according to an exemplary embodiment of the present invention.
FIG.3 is a conceptual view illustrating a process of generating characteristic information of each part comprising a person according to an exemplary embodiment of the present invention.
FIG.4 is a conceptual view illustrating an example comparing an integrated image with characteristic information of a person according to an exemplary embodiment of the present invention.
FIG.5 is a schematic flowchart illustrating a method for detecting a person according to an exemplary embodiment of the present invention.
The disclosed embodiments and their advantages are best understood by referring to FIGS. 1-5 of the drawings, like numerals being used for like and corresponding parts of the various drawings. Other features and advantages of the disclosed embodiments will be, or will become, apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. It is intended that all such additional features and advantages be included within the scope of the disclosed embodiments and protected by the accompanying claims. Further, the illustrated figures are only exemplary and are not intended to assert or imply any limitation with regard to the environment, architecture, or process in which different embodiments may be implemented. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the scope and novel idea of the present invention.
Meanwhile, the terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. The terms "first," "second," and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another, and the terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item. That is, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
For example, a second constituent element may be denoted as a first constituent element without departing from the scope and spirit of the present disclosure, and similarly, a first constituent element may be denoted as a second constituent element.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items and may be abbreviated as "/".
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first region/layer could be termed a second region/layer, and, similarly, a second region/layer could be termed a first region/layer without departing from the teachings of the disclosure.
It will be further understood that the terms "comprises" and/or "comprising," or "includes" and/or "including" when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
Also, "exemplary" is merely meant to mean an example, rather than the best. If is also to be appreciated that features, layers and/or elements depicted herein are illustrated with particular dimensions and/or orientations relative to one another for purposes of simplicity and ease of understanding, and that the actual dimensions and/or orientations may differ substantially from that illustrated.
That is, in the drawings, the size and relative sizes of layers, regions and/or other elements may be exaggerated or reduced for clarity. Like numbers refer to like elements throughout and explanations that duplicate one another will be omitted.
Now, the present invention will be described in detail with reference to the accompanying drawings.
FIG.1 is a schematic block diagram illustrating an apparatus for detecting a person according to an exemplary embodiment of the present invention.
Referring to FIG.1, an apparatus for detecting a person (100; hereinafter referred to as "a person detecting apparatus") may include a color image processor (101), a distance image processor (103), an image processor (105), a comparator (107), an attribute information database (109), a generator (111), a determinator (113) and a detector (115).
The color image processor (101) obtains an image (hereinafter referred to as a "color image") composed of color (red, green, blue) information data through capturing means such as a camera. The distance image processor (103) obtains a distance image showing, as a pixel value, the distance between an object within the object space and the image processing means. Here, the image processing means refers to capturing means such as a camera, or to the person detecting apparatus itself, for example.
According to the exemplary embodiment of the present invention, the parallax between two cameras, i.e., a stereo camera pair, may be utilized to obtain the color image and the distance image, where the color image processor (101) and the distance image processor (103) may be implemented as constituent elements of the stereo camera.
In this case, the two cameras are used to capture an object inside the object space and to obtain the color image. A block correlation method is utilized to obtain the parallax between a first image captured by a first camera (reference camera) and a second image captured by a second camera, and triangulation is then applied to the parallax to determine the distance from the stereo camera for each pixel of the first and second images. The detected distance is matched to each pixel of the first image to generate a distance image in which distance is indicated as a pixel value.
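The following is a minimal sketch of this stereo stage in Python with OpenCV, assuming block matching as the block correlation method and the standard pinhole relation distance = focal length x baseline / disparity; the focal length and baseline values are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def distance_image_from_stereo(left_bgr, right_bgr,
                               focal_px=700.0, baseline_m=0.1):
    """Distance image matched to each pixel of the first (reference) camera."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Block correlation: compare same-size blocks along epipolar lines
    # to obtain a per-pixel parallax (disparity) map.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Triangulation: distance is inversely proportional to parallax
    # (assumed pinhole model; the parameters above are placeholders).
    distance = np.zeros_like(disparity)
    valid = disparity > 0
    distance[valid] = focal_px * baseline_m / disparity[valid]
    return distance
```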
According to another exemplary embodiment of the present invention, a camera may be used to obtain the color image, and a range finder, e.g., a laser or infrared device, may be utilized to obtain the distance image, where the camera serves as the color image processor (101) and the range finder serves as the distance image processor (103). Each color image and distance image obtained by the color image processor (101) and by the distance image processor (103) is inputted to the image processor (105).
The image processor (105) may extract the edges present within the color image inputted from the color image processor (101) to generate a first edge image, and may extract the edges present within the distance image inputted from the distance image processor (103) to generate a second edge image. Each of the first and second edge images is separated into parts based on a pre-defined standard. Here, the first edge image may be divided into parts of matching color based on color similarity, and the second edge image may be divided into parts of matching curvature based on the curvature of a shape. The first and second edge images divided into parts may then be integrated to generate an integrated image.
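A rough sketch of this edge-image stage follows, assuming the Canny operator as the edge extractor (the patent does not name a specific operator), a simple union as the integration step, and connected components as one crude way to split the result into parts.

```python
import cv2

def combined_edge_image(color_bgr, distance_img):
    # First edge image: edges present within the color image.
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    first_edges = cv2.Canny(gray, 50, 150)

    # Second edge image: edges present within the distance image
    # (rescaled to 8 bits so the same operator applies).
    dist_8u = cv2.normalize(distance_img, None, 0, 255,
                            cv2.NORM_MINMAX).astype("uint8")
    second_edges = cv2.Canny(dist_8u, 50, 150)

    # Integrate the two edge images into one combined image.
    combined = cv2.bitwise_or(first_edges, second_edges)

    # Divide into parts: label the regions enclosed by the edges.
    n_parts, parts = cv2.connectedComponents(cv2.bitwise_not(combined))
    return combined, parts
```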
The comparator (107) may compare each part comprising the integrated image generated by the image processor (105) with the attribute information of a person stored in the attribute information database (109) to detect whether each part is a part comprising a person, and to detect which part of the person corresponds to each part.
The attribute information database (109) may store the attribute information of the person, i.e., a plurality of 3D shapes of each pre-defined part comprising a person. Here, the pre-defined parts comprising a person may include a head part, a body part, an arm part, a leg part, etc. Furthermore, the attribute information may include shape information of each part of the person based on the posture of the person, and posture information.
The generator (111) may generate a plurality of 3D shapes of each part comprising a person and store the shapes in the attribute information database (109). The generator (111) may prepare the 3D shapes of each part comprising the person; for example, it may prepare each part comprising the person as well as parts of a person wearing glasses, a mask and the like.
Furthermore, the generator (111) may obtain images rotated from 0 to 360 degrees about each axis in a 3D space using the 3D shapes of each part of the person. Successively, the attributes of each image are obtained to form an attribute space. This process is executed for each part comprising the person to obtain a curve in the 3D attribute space. The curve thus obtained is traced by the continuous sequence of images produced through the rotation, so that it looks like a continuous line. The curve thus formed is stored in the attribute information database (109) as attribute information.
Here, the attribute information may include shape information of each part of a person based on a plurality of postures of the person. For example, the head part is rotated by a predetermined angle about a predetermined axis to obtain information on the head shape of a sitting person.
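An illustrative sketch of this attribute-curve generation: each 3D part shape is rotated through 0 to 360 degrees about each axis, a small attribute vector is computed for each rotated view, and the ordered samples trace a curve in attribute space. `render_view` and `attributes_of` are hypothetical helpers standing in for the rendering and feature-extraction steps the patent leaves unspecified.

```python
import numpy as np

def attribute_curve(part_shape, render_view, attributes_of, step_deg=10):
    """Sample an attribute-space curve for one person part.

    render_view(shape, axis, angle) -> 2D view of the rotated part (assumed).
    attributes_of(view) -> attribute vector, e.g. (size, shape, curvature).
    """
    samples = []
    for axis in ("x", "y", "z"):          # rotate about each axis in turn
        for angle in range(0, 360, step_deg):
            view = render_view(part_shape, axis, angle)
            samples.append(attributes_of(view))
    # Densely sampled, the ordered points look like a continuous line.
    return np.asarray(samples)
```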
The determinator (113) may combine the parts of the combined image determined by the comparator (107) to be similar to the shapes stored in the attribute information database (109). In this combination process, the comparator's (107) determination as to which part of the combined image corresponds to which part of the person is consulted. The determinator (113) may then select, from among the candidate combinations, the combination that has the highest likelihood of being a person, and determine that combination to be the person. If a plurality of similar shapes is present for parts of the combined image, the parts having the plurality of similar shapes are combined and determined to be separate, plural persons.
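A toy sketch of this selection step, assuming one candidate list per part label and a caller-supplied scoring function; the patent does not define how the person likelihood is computed, so `person_likelihood` is a placeholder.

```python
from itertools import product

def best_person(candidates_by_label, person_likelihood):
    """candidates_by_label: e.g. {'head': [...], 'body': [...], 'leg': [...]}."""
    labels = sorted(candidates_by_label)
    best, best_score = None, float("-inf")
    # Enumerate every combination of one candidate per labelled part and
    # keep the combination with the highest person likelihood.
    for combo in product(*(candidates_by_label[l] for l in labels)):
        score = person_likelihood(dict(zip(labels, combo)))
        if score > best_score:
            best, best_score = combo, score
    return dict(zip(labels, best)) if best is not None else None
```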
The detector (115) may detect the pre-defined information for each part comprising the combination determined to be a person by the determinator (113). That is, the results produced by the image processor (105) and the comparator (107) are consulted to detect the color information of each part comprising the person and the posture of the person. For example, the detector (115) may detect a sitting, fair-skinned person with black hair, wearing a green jacket, a pair of blue pants and a pair of glasses.
Therefore, unlike the conventional method of simply detecting a person, the present invention can check detailed information on a detected person and that person's posture, whereby it is possible to develop a system capable of detecting an abnormally behaving person and of checking the identity, number and attire of visitors.
FIG.2 is a conceptual diagram illustrating a process of detecting an integrated image according to an exemplary embodiment of the present invention, where FIG.2(a) shows a color image acquired by the color image processor (101), FIG.2(b) illustrates the first edge image generated by the image processor (105) extracting edges from the color image, FIG.2(c) shows a distance image obtained by the distance image processor (103), FIG.2(d) shows the second edge image generated by the image processor (105) extracting edges from the distance image, and FIG.2(e) depicts the combined image obtained by combining FIG.2(b) and FIG.2(d).
The combined image may be composed of edges (solid lines) that have the highest likelihood of separating objects, and edges (dotted lines) that have a high likelihood of separating objects yet may possibly not separate them.
FIG.3 is a conceptual view illustrating a process of generating characteristic information of each part comprising a person according to an exemplary embodiment of the present invention.
Referring to FIG.3(a), the generator (111) may rotate a 'head part' of a person in a 3D space comprised of a size axis (α), a shape axis (β) and a curvature axis (γ) to obtain a plurality of head shapes, as shown in FIG.3(c).
A curved line is formed using the attributes of the head shapes, as depicted in FIG.3(d). For example, an attribute curve is formed by collecting a head shape with no rotation at all, a head shape obtained by rotating the head by 10 degrees about the shape axis (β), and a head shape obtained by rotating the head by 30 degrees about the size axis (α).
FIG.4 is a conceptual view illustrating an example comparing an integrated image with characteristic information of a person according to an exemplary embodiment of the present invention.
Referring to FIG.4, the comparator (107) may compare each part (201, 203) of the combined image, as a comparative object, in the attribute space. Here, the part "201" of the combined image lies on the attribute curve, such that it is possible to detect the part "201" of the combined image as a person part. However, the part "203" of the combined image deviates from the attribute curve, so the part "203" is not a person part and is excluded.
Now, the detailed operation of the person detecting apparatus (100) in detecting a person will be described. FIG.5 is a schematic flowchart illustrating a method for detecting a person according to an exemplary embodiment of the present invention.
Referring to FIG.5, the image processor (105) may extract the first edge image from the color image obtained by the color image processor (101), and divide it into parts based on the pre-defined standard (S101, S103 and S105). Furthermore, the image processor (105) may extract the second edge image from the distance image obtained by the distance image processor (103), and divide it into parts based on the pre-defined standard (S107, S109 and S111).
The image processor (105) may combine the first and second edge images divided into parts in steps S105 and S111 to generate a combined image (S113). The comparator (107) may compare each part comprising the combined image with the attribute information stored in the attribute information database (109) to determine whether the similarity is within a particular threshold (S115, S117).
That is, when a relevant part of the combined image, which is a comparative object, is plotted against the attribute curve of FIG.3, a determination is made as to whether its value corresponds to a value on the attribute curve or, if it does not correspond, whether the deviation is within the particular threshold. If the value neither corresponds to a value on the attribute curve nor falls within the particular threshold, the relevant part of the combined image is excluded from the person parts (S119).
However, if the value corresponds to a value on the attribute curve, or deviates from it by less than the particular threshold, the relevant part of the combined image is determined to be a person part (S121). That is, when the value exhibited by the relevant part of the combined image in the attribute space differs from the value on the attribute curve by no more than the particular threshold, the relevant part of the combined image is treated as being positioned on the attribute curve.
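A minimal sketch of the S115-S121 test, approximating "within the threshold of the attribute curve" as the distance to the nearest sampled point on the stored curve:

```python
import numpy as np

def is_person_part(part_attributes, attribute_curve, threshold):
    """True if the part's point in attribute space lies near the curve."""
    point = np.asarray(part_attributes, dtype=float)
    deviations = np.linalg.norm(attribute_curve - point, axis=1)
    return bool(deviations.min() <= threshold)  # S121 if True, S119 if False
```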
Furthermore, the comparator (107) may determine whether any comparative object remains (S123); if one does, the flow restarts from step S115. If no comparative object remains, the determinator (113) may combine the relevant parts of the combined image determined to be person parts in S121, and determine them to be the person (S125).
The detector (115) may detect the pre-defined detailed information for each part comprising the person (S127). Here, the detector (115) may compare the color image inputted by the image processor (105) with each part comprising the person to detect the color information of each part of the person. Furthermore, the detector (115) may detect the posture of the person based on the comparator's (107) comparison between each part of the person and the attribute information, based on a plurality of postures, stored in the attribute information database (109).
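A simple sketch of the per-part color read-back in S127, assuming boolean pixel masks for each detected part are available from the earlier segmentation:

```python
import numpy as np

def part_colors(color_bgr, part_masks):
    """part_masks: dict of part name -> boolean mask over the image pixels."""
    return {name: color_bgr[mask].mean(axis=0)   # mean (B, G, R) per part
            for name, mask in part_masks.items()}
```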
The apparatus for detecting a person and the method thereof according to the present invention have industrial applicability in that the detected posture of a person and detailed person information can be checked, unlike the conventional apparatus and method that simply detect a person, whereby a system capable of detecting an abnormally behaving person or checking the clothes of a visitor can be developed.
Although the present invention has been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.

Claims (9)

  1. An apparatus for detecting a person, characterized by comprising: an image processor respectively dividing a color image showing a video image in color information and a distance image showing the video image in distance information into a plurality of parts based on respective pre-defined standards, and integrating the divided parts to generate a combined image comprising the plurality of parts; a comparator comparing each part of the combined image with shape information in attribute information of pre-stored person parts to detect each part of the combined image matching the attribute information as a person part; a determinator combining the parts of the combined image detected as person parts by the comparator to determine them as a person; and a detector using the color image of the image processor to detect color information of each part of the person determined by the determinator, and referring to posture information in the attribute information to detect posture information of the determined person, wherein the attribute information includes the shape information of each part of the person based on the posture of the person, and the posture information.
  2. The apparatus of claim 1, characterized by further comprising: a generator generating attribute information including a plurality of 3D shapes obtained by rotating each part of the person in a 3D space about each axis; and an attribute information database storing the attribute information.
  3. The apparatus of claim 2, characterized in that the generator generates, as the attribute information, an attribute curve formed by continuous images obtained by rotating each part of the person in the 3D space comprising axes of size, shape and curvature.
  4. The apparatus of any one of claims 1 through 3, characterized in that the comparator determines whether each part of the combined image matches the attribute information within the scope of a pre-set threshold, detects a part of the combined image as a person part if that part matches the attribute information within the scope of the pre-set threshold, and excludes a part of the combined image if that part does not match the attribute information within the scope of the pre-set threshold.
  5. The apparatus of claim 4, characterized in that the image processor divides a first edge image generated by extracting edges from the color image into parts having matching color, divides a second edge image generated by extracting edges from the distance image into parts having matching curvature of shape, and combines the first and second edge images each divided into plural parts to generate the combined image.
  6. A method for detecting a person, characterized by comprising: respectively dividing a color image showing a video image in color information and a distance image showing the video image in distance information into a plurality of parts based on respective pre-defined standards, and combining the divided parts to generate a combined image comprising the plurality of parts; comparing each part of the combined image with shape information in attribute information of pre-stored person parts to detect each part of the combined image matching the attribute information as a person part; combining the parts of the combined image detected as person parts to determine them as a person; and using the color image to detect color information of each part of the determined person, and referring to posture information in the attribute information to detect posture information of the determined person, wherein the attribute information includes the shape information of each part of the person based on the posture of the person, and the posture information.
  7. The method of claim 6, characterized in that the attribute information includes a plurality of 3D shapes obtained by rotating each part of the person in a 3D space about each axis.
  8. The method of claim 6 or 7, characterized in that the step of detecting the person part includes determining whether each part of the combined image matches the attribute information within the scope of a pre-set threshold by comparing each part of the combined image with the attribute information, detecting a part of the combined image as a person part if that part matches the attribute information within the scope of the pre-set threshold, and excluding a part of the combined image if that part does not match the attribute information within the scope of the pre-set threshold.
  9. The method of claim 8, characterized in that the step of generating the combined image includes generating a first edge image by extracting edges from the color image, dividing the first edge image into parts having matching color, generating a second edge image by extracting edges from the distance image, dividing the second edge image into parts having matching curvature of shape, and combining the first and second edge images respectively divided into plural parts to generate the combined image.
PCT/KR2010/004525 2009-07-14 2010-07-13 Apparatus for detecting person and method thereof WO2011007993A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020090064068A KR101607569B1 (en) 2009-07-14 2009-07-14 Apparatus for detecting person and method thereof
KR10-2009-0064068 2009-07-14

Publications (2)

Publication Number Publication Date
WO2011007993A2 true WO2011007993A2 (en) 2011-01-20
WO2011007993A3 WO2011007993A3 (en) 2011-03-31

Family

ID=43449950

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/004525 WO2011007993A2 (en) 2009-07-14 2010-07-13 Apparatus for detecting person and method thereof

Country Status (2)

Country Link
KR (1) KR101607569B1 (en)
WO (1) WO2011007993A2 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001039498A2 (en) * 1999-11-24 2001-05-31 Koninklijke Philips Electronics N.V. Method and apparatus for detecting moving objects in video conferencing and other applications
US20050152582A1 (en) * 2003-11-28 2005-07-14 Samsung Electronics Co., Ltd. Multiple person detection apparatus and method
KR20070008271A (en) * 2005-07-13 2007-01-17 엘지전자 주식회사 Detecting and tracking method to the person and robot using thereof
KR20090037275A (en) * 2007-10-10 2009-04-15 삼성전자주식회사 Detecting apparatus of human component and method of the same
KR20090069704A (en) * 2007-12-26 2009-07-01 주식회사 케이티 Method and apparatus for tracking human body in 3d space

Also Published As

Publication number Publication date
KR101607569B1 (en) 2016-03-30
KR20110006435A (en) 2011-01-20
WO2011007993A3 (en) 2011-03-31

Similar Documents

Publication Publication Date Title
CN106326832B (en) Device and method for processing image based on object region
US7734069B2 (en) Image processing method, image processor, photographic apparatus, image output unit and iris verify unit
TWI604332B (en) Method, system, and computer-readable recording medium for long-distance person identification
CN109711243A (en) A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN109670390B (en) Living body face recognition method and system
EP1434164A2 (en) Method of extracting teeth area from teeth image and personal identification method and apparatus using the teeth image
CN107358219B (en) Face recognition method and device
US20090294666A1 (en) IR Camera and Method for Presenting IR Information
JP2004192378A (en) Face image processor and method therefor
US20190332853A1 (en) 3d imaging recognition by stereo matching of rgb and infrared images
EP1514225A1 (en) Face-recognition using half-face images
GB2341231A (en) Face detection in an image
CN110956114A (en) Face living body detection method, device, detection system and storage medium
WO2015133159A1 (en) Image processing device, image processing method, and image processing program
WO2020147346A1 (en) Image recognition method, system and apparatus
CN112115886A (en) Image detection method and related device, equipment and storage medium
JPWO2020065852A1 (en) Authentication system, authentication method, and program
CN110717428A (en) Identity recognition method, device, system, medium and equipment fusing multiple features
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
Proença et al. Visible-wavelength iris/periocular imaging and recognition surveillance environments
WO2011007993A2 (en) Apparatus for detecting person and method thereof
CN110688967A (en) System and method for static human face living body detection
JP2011258209A (en) Back of hand authentication system and back of hand authentication method
JP5730000B2 (en) Face matching system, face matching device, and face matching method
JP6917293B2 (en) Image monitoring device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10800004

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10800004

Country of ref document: EP

Kind code of ref document: A2