WO2005071611A1 - Method and system for extracting human candidate regions in an image, human candidate region extraction program, and method, system, and program for determining the top-bottom orientation of a person image - Google Patents
- Publication number
- WO2005071611A1 (PCT/JP2005/001568)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- person
- human
- line
- vertical
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- Patent application title: Method and system for extracting human candidate regions in an image, human candidate region extraction program, and method, system, and program for determining the top-bottom orientation of a person image
- The present invention relates to pattern recognition and object recognition technology, and in particular to technology for determining the top-bottom orientation of an image containing a person image and for extracting regions of the image in which a person is likely to be present.
- Specifically, it relates to a method and system for extracting human candidate regions in an image, a human candidate region extraction program, and a method, system, and program for determining the top-bottom orientation of a person image.
- In JP-A-H09-105258, a skin-color region in the image is first extracted, mosaic data are created from the extracted region, and these are compared with an image dictionary to compute a degree of coincidence, by which the face image region is detected.
- However, the skin-color region does not necessarily coincide with the human face region: if the skin-color range is defined broadly, unnecessary regions are extracted, and if it is defined narrowly, faces are missed.
- The present invention was devised to solve these problems effectively. Its main purpose is to provide a novel method and system for extracting human candidate regions in an image, capable of determining the orientation of an image containing a person image and extracting the regions where the person image appears robustly and rapidly, together with a human candidate region extraction program, and a method, system, and program for determining the top-bottom orientation of a person image. Disclosure of the invention
- The human candidate region extraction method of Invention 1 is a method for extracting the region in which a person image is present from an image containing a person image. For each vertical line and each horizontal line of pixels making up the image, the variance of an image feature is computed; the regions in which the variance exceeds a threshold are selected in the vertical-line and horizontal-line directions respectively, and the region of the image where the selected line-direction regions overlap is extracted as the region in which the person exists.
- That is, the variance of the image feature is computed for each vertical line (a vertical column of pixels) and each horizontal line (a horizontal row of pixels) making up the image.
- As described in detail later, this variance tends to be high in regions where a person is present and low in background regions where no person is present.
- A threshold is therefore set on the line-direction variances obtained in the vertical and horizontal directions, the regions exceeding the threshold are selected in each direction, and the region of the image where the selected line-direction regions overlap is extracted.
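The selection-and-overlap step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `image` is assumed to be a 2-D array of per-pixel feature values (e.g. edge intensities), and `threshold` is assumed to be chosen empirically.

```python
import numpy as np

def candidate_region(image, threshold):
    """Select the column/row ranges whose per-line feature variance exceeds
    a threshold, and return their overlap as (top, bottom, left, right).
    A sketch: 'image' is a 2-D array of per-pixel feature values."""
    col_var = image.var(axis=0)   # variance of each vertical line
    row_var = image.var(axis=1)   # variance of each horizontal line
    cols = np.where(col_var > threshold)[0]
    rows = np.where(row_var > threshold)[0]
    if cols.size == 0 or rows.size == 0:
        return None               # no line exceeds the threshold
    # The overlap of the selected line-direction regions is the candidate box
    return (rows.min(), rows.max(), cols.min(), cols.max())
```

On a uniform background the per-line variances are near zero, so only the lines crossing the person survive the threshold in both directions.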
- The human candidate region extraction method of Invention 2 is likewise a method for extracting the region in which a person image is present from an image containing a person image. First, the top-bottom orientation of the person image is determined, and the image is rotated according to the result so that the person image matches the actual top-bottom direction. Then the variance of an image feature is computed for each vertical line and each horizontal line of pixels making up the corrected image, the regions whose variance exceeds a threshold are selected in the vertical-line and horizontal-line directions, and the region of the image where the selected line-direction regions overlap is extracted.
- That is, the top-bottom orientation of the person image in the image is first determined, and the image is corrected so that the person image matches its actual orientation.
- Then, as in Invention 1, the variance of the image feature is computed for each vertical line (column of pixels) and each horizontal line (row of pixels) of the image, a threshold is set on these variances, the regions exceeding the threshold are selected in each direction, and the region of the image where the selected line-direction regions overlap is identified.
- The top-bottom orientation of the person image is determined as follows. When the image is bounded by a polygonal frame (triangular or more) and the person image it contains is a portion centered on the upper body or head, the variance of the image feature is computed for the pixel row forming each side of the image frame, or a line near each side, and the side with the largest variance (or the side nearest that line) is taken as the ground direction.
- When the person image in the target image is centered on the upper body or head, part of the person image is normally cut off at one side of the image frame.
- Since the side whose pixel row (or nearby line) has the largest feature variance can be regarded as the side at which the person image is cut off, taking that side as the ground direction allows the top-bottom orientation of the image to be determined reliably and easily.
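A minimal sketch of this side-selection rule, under the same assumption that `image` holds per-pixel feature values; the side names are illustrative labels, not terms from the patent:

```python
import numpy as np

def ground_side(image):
    """Return which border line of a rectangular frame has the largest
    feature variance; that side is taken as the ground (bottom) direction,
    since the person image is cut off there."""
    sides = {
        "top":    image[0, :],    # pixel row on the top side
        "bottom": image[-1, :],   # pixel row on the bottom side
        "left":   image[:, 0],    # pixel column on the left side
        "right":  image[:, -1],   # pixel column on the right side
    }
    return max(sides, key=lambda s: sides[s].var())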
- In the human candidate region extraction method of Invention 3, a plurality of closely spaced lines of the image are selected, and the average of the per-line variances of the image feature is used.
- The human candidate region extraction method of Invention 5 is the method of any of Inventions 1 to 4 in which the vertical and horizontal lines used are lines taken at constant intervals across the width and height of the image.
- The variance of the image feature could be computed for every line across the width and height of the image, but by computing it only for lines at regular intervals, as in this invention, the amount of processing is greatly reduced, and the region in which a person image exists can be extracted faster.
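The regular-interval sampling can be sketched in one line per direction; `step` is an assumed parameter, not a value given in the patent:

```python
import numpy as np

def sampled_line_variances(feature_map, step=4):
    """Compute per-line variances only for every `step`-th line, reducing
    the work by roughly a factor of `step` in each direction (a sketch of
    the constant-interval sampling described above)."""
    col_var = feature_map[:, ::step].var(axis=0)  # every step-th vertical line
    row_var = feature_map[::step, :].var(axis=1)  # every step-th horizontal line
    return col_var, row_var
```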
- In the human candidate region extraction method of Invention 6,
- the image feature is computed from the edge intensity of each pixel in the pixel row forming each line, from its hue angle, or from both.
- Since image edges can be computed rapidly with a known edge-detection operator and are relatively insensitive to illumination, adopting edge intensity as the image feature allows the variance to be computed accurately and quickly.
- In the human candidate region extraction method of Invention 7,
- which depends on Invention 6, the edge intensity is computed with the Sobel edge-detection operator.
- The most representative way to detect rapid gray-level change in an image is to take derivatives. Since differentiation of a digital image is approximated by differences, a first-order differentiation of the image emphasizes the edge portions where the gray level changes rapidly.
- The present invention uses the well-known Sobel edge-detection operator, a first-derivative edge-detection operator (filter) with excellent detection performance, so that the outline of the person image can be detected reliably.
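The Sobel operator itself is standard; the following is a minimal NumPy sketch of first-derivative edge strength with the 3x3 Sobel kernels (a real system would use an optimized image library rather than this explicit loop):

```python
import numpy as np

def sobel_edge_strength(gray):
    """Edge strength (gradient magnitude) via the 3x3 Sobel operators.
    'gray' is a 2-D array of luminance values; borders are left at zero."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            out[y, x] = np.hypot(gx, gy)  # magnitude of the gradient
    return out
```

A vertical step edge of height 1 yields a response of 4 (the sum of the 1, 2, 1 weights) along the edge.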
- The human candidate region extraction system of Invention 8 is a system for extracting the region in which a person image is present from an image containing a person image. It comprises: image reading means for reading the image; image feature calculation means for computing the image features of the vertical and horizontal pixel lines making up the read image; variance calculation means for computing the variance of each image feature in the vertical-line and horizontal-line directions obtained by the image feature calculation means; and human candidate region detection means for detecting the region in which the person exists from the line-direction regions in which the variances exceed a threshold in the vertical-line and horizontal-line directions.
- The human candidate region extraction system of Invention 9
- is a system for extracting the region in which a person image is present from an image containing a person image. It comprises: image reading means for reading the image; top-bottom determination means for determining the orientation of the person image in the read image; top-bottom correction means for rotating the image according to the determination result so that the person image matches the actual top-bottom direction; image feature calculation means for computing the image features of the vertical and horizontal pixel lines of the corrected image; variance calculation means for computing the variance of each image feature in the vertical-line and horizontal-line directions; and human candidate region detection means for detecting the region in which the person exists from the line-direction regions in which the variances exceed a threshold.
- In Invention 10, the top-bottom determination means determines the orientation of the person image when the image is bounded by a polygonal frame (triangular or more):
- the variance of the image feature is computed for each side of the image frame, or for a line near each side, and the side with the largest variance (or the side nearest that line) is taken as the ground direction.
- Since this detects the side of the image frame at which the person image is cut off, the top-bottom orientation of the person image can be determined reliably and easily.
- In Invention 11, the top-bottom determination means for determining the orientation of the person image selects a plurality of closely spaced lines of the image and uses the average of the per-line variances of the image feature.
- In Invention 12, the image feature calculation means uses, as the vertical and horizontal lines, lines taken at constant intervals across the width and height of the image.
- As in Invention 5, this greatly reduces the amount of processing needed to obtain the variances, so the region in which a person image exists can be extracted faster.
- In Invention 13, the image feature calculation means computes the image feature from the edge intensity of each pixel in the pixel row forming each line, from its hue angle, or from both.
- In Invention 14, the image feature calculation means computes the edge intensity with the Sobel edge-detection operator.
- The human candidate region extraction program of this invention causes a computer to function as: image feature calculation means for computing the image features of the vertical and horizontal pixel lines making up the image; variance calculation means for computing the variance of each image feature in the vertical-line and horizontal-line directions obtained by the image feature calculation means;
- and human candidate region detection means for detecting the region in which the person exists from the line-direction regions in which the variances exceed a threshold in the vertical-line and horizontal-line directions.
- Another human candidate region extraction program causes a computer to function as: image feature calculation means for computing the image features of the vertical and horizontal pixel lines making up the image after the top-bottom correction means has corrected its orientation;
- variance calculation means for computing the variance of each image feature in the vertical-line and horizontal-line directions obtained by the image feature calculation means;
- and human candidate region detection means for detecting the region in which the person exists from the line-direction regions in which the variances exceed a threshold in the vertical-line and horizontal-line directions respectively.
- In the human candidate region extraction program of Invention 16, the top-bottom determination means determines the orientation of the person image when the image is bounded by a polygonal frame (triangular or more):
- the variance of the image feature is computed for the line on or near each side of the image frame, and the side with the largest variance (or the side nearest that line) is taken as the ground direction.
- In this program, the top-bottom determination means selects a plurality of closely spaced lines of the image and uses the average of the per-line variances of the image feature.
- In the human candidate region extraction program of Invention 19,
- the image feature calculation means uses, as the vertical and horizontal lines, lines taken at constant intervals across the width and height of the image.
- The image feature calculation means computes the image feature from the edge intensity of each pixel in the pixel row forming each line, from its hue angle, or from both.
- The top-bottom determination method for a person image of Invention 22
- is a method for determining the orientation of a person image, centered on the upper body or head, contained within a polygonal frame (triangular or more): the variance of the image feature is computed for the pixel row forming the line on or near each side of the frame, and the side with the largest variance (or the side nearest that line) is determined to be the ground direction.
- That is, the variance of the image feature is computed for the pixel row forming each side of the image frame or a line near each side; since the side with the largest variance (or the side nearest that line) can be regarded as the side at which the person image is cut off, that side is determined to be the ground direction.
- The top-bottom determination program of this invention, for a person image centered on the upper body or head within a polygonal frame (triangular or more), causes a computer to function as: variance calculation means for computing the variance of the image feature of the pixel row forming the line on or near each side of the frame; and top-bottom determination means for determining that the side with the largest variance obtained by the variance calculation means (or the side nearest that line) faces the ground.
- This allows the top-bottom orientation of the person image to be determined reliably and easily, with the software implemented on a general-purpose computer (hardware) such as a personal computer (PC).
- FIG. 1 is a block diagram showing an embodiment of a person candidate area extraction system according to the present invention.
- FIG. 2 is a block diagram showing hardware available in the system of the present invention.
- FIG. 3 is a flowchart showing the process flow of the person candidate area extraction method of the present invention.
- FIG. 4 is a view showing an example of an image to be a human candidate extraction area.
- FIG. 5 is a diagram showing the Sobel operator.
- FIG. 6 is a diagram showing changes in the image feature quantity variance value for each line in the vertical and horizontal directions.
- FIG. 7 is a conceptual diagram showing a state in which regions where the variance value exceeds the threshold overlap each other.
- FIG. 1 shows an embodiment of a person candidate region extraction system 100 in an image according to the present invention.
- As shown, the person candidate region extraction system 100 mainly comprises: image reading means 10 for reading an image G from which person candidate regions are to be extracted; top-bottom determination means 11 for determining the top-bottom orientation of the person image in the image G read by the image reading means 10; top-bottom correction means 12 for correcting the person image to the actual top-bottom direction based on the result of the top-bottom determination means 11; image feature calculation means 13 for computing the image features of the image G after the top-bottom correction means 12 has corrected its orientation; variance calculation means 14 for computing the variance of the image features obtained by the image feature calculation means 13; and person candidate region detection means 15 for detecting the region in which a person exists based on the variances obtained by the variance calculation means 14.
- The image reading means 10 reads an image, such as an identification photograph for a passport or driver's license or a portion of a snapshot, that contains a person image centered on the upper body or head within a rectangular frame (edge) against a uniform or complicated background, and captures it with an imaging sensor such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor as digital image data consisting of R (red), G (green), and B (blue) pixel data.
- Specifically, it may be a CCD or CMOS camera such as a digital still camera or digital video camera, a vidicon camera, an image scanner, a drum scanner, or the like; the image G read optically by the imaging sensor is A/D-converted, and the digital image data are sent sequentially to the top-bottom determination means 11.
- The image reading means 10 may also be provided with a data storage function, so that the read image data can be stored in a storage device such as a hard disk drive (HDD) or on a storage medium such as a DVD-ROM. When the image G is supplied as digital image data via a network, storage medium, or the like, the image reading means 10 becomes unnecessary, or instead functions as communication means (I/F) such as a DCE (Data Circuit-terminating Equipment), CCU (Communication Control Unit), or CCP (Communication Control Processor).
- The top-bottom determination means 11 determines the top-bottom orientation of an image G in which a person image centered on the upper body or head is present within a polygonal frame (triangular or more).
- Specifically, the side of the image frame at which the person image is cut off is detected, and the orientation of the image G is determined with that side as the ground direction. For example, the image G read by the image reading means 10 may contain a person image of the upper body, roughly from the chest up, within a rectangular image frame F as shown in FIG. 4.
- In this case, the variance of the image feature is obtained from the pixel rows forming the four sides f1, f2, f3, f4 of the image frame F, or four lines L1, L2, L3, L4 set in their vicinity,
- and the orientation of the person image is determined based on these variances.
- The specific method of computing the variance of the image feature from the pixel rows forming the lines on or near each side of the image frame F is described in detail later.
- The top-bottom correction means 12 corrects the person image to the actual top-bottom direction based on the result of the top-bottom determination means 11. For example, in the example of FIG. 4, the entire image is rotated 90° counterclockwise (or 270° clockwise) so that the left side, that is, the side f4 at which the person image is cut off, faces the ground direction, thereby making the top-bottom orientation of the image match the actual one.
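The rotation step above can be sketched with `np.rot90`; the side labels are the illustrative names used earlier, not terms from the patent:

```python
import numpy as np

def correct_orientation(image, ground_side):
    """Rotate the image so that the detected ground side faces down.
    'ground_side' is one of the assumed labels top/bottom/left/right."""
    # Number of 90° counterclockwise turns that brings each side to the bottom:
    turns = {"bottom": 0, "left": 1, "top": 2, "right": 3}
    return np.rot90(image, k=turns[ground_side])
```

For a left-side cut, one 90° counterclockwise turn moves the left column to the bottom row, matching the example in FIG. 4.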
- The image feature calculation means 13 computes the image features of the pixels making up the image G after the top-bottom correction means 12 has corrected its orientation. Specifically, its edge calculation section 20 or hue-angle calculation section 21 detects the edge intensity and/or hue angle of each pixel in the pixel rows forming the vertical and horizontal lines of the image, and the image feature of each pixel is computed from the detected values.
- The variance calculation means 14 computes the variance of the image features obtained by the image feature calculation means 13; a specific example is described later.
- The person candidate region detection means 15 detects the region in which a person is present based on the variances obtained by the variance calculation means 14: the regions in which the variances exceed the threshold in the vertical-line and horizontal-line directions are selected respectively, and the region of the image where the selected line-direction regions overlap is extracted as the region in which the person exists.
- Each of the means 10, 11, 12, 13, 14, and 15 constituting the person candidate region extraction system 100 of the present invention is actually realized, as shown in FIG. 2, by a computer system such as a personal computer (PC) consisting of hardware such as a CPU and RAM, together with a dedicated computer program (software) describing algorithms such as the one shown in FIG. 3.
- The hardware realizing this person candidate region extraction system 100 comprises: a CPU (Central Processing Unit) 40, the central processing unit responsible for various control and arithmetic processing; a main storage device 41 using RAM; a ROM (Read-Only Memory) 42; an auxiliary storage device 43 such as a hard disk drive (HDD) or semiconductor memory; an output device 44 such as a monitor (LCD (Liquid Crystal Display) or CRT (Cathode Ray Tube)); an input device 45 such as an image scanner, keyboard, mouse, or an imaging sensor such as a CCD (Charge-Coupled Device) or CMOS sensor; and an input/output interface (I/F) 46. These are interconnected by various buses 47 (processor bus, memory bus, system bus, I/O bus, and the like), such as a PCI (Peripheral Component Interconnect) bus or an ISA (Industry Standard Architecture) bus.
- Control programs and data supplied via a storage medium such as a CD-ROM, DVD-ROM, or flexible disk (FD), or via a communication network (LAN, WAN, the Internet, etc.),
- are installed in the auxiliary storage device 43; the programs and data are loaded into the main storage device 41 as necessary, and the CPU 40 performs various control and arithmetic processing, making full use of the system's resources, according to the program loaded in the main storage device 41.
- The processing results (processed data) are output to the output device 44 via the bus 47 and displayed, and the data are stored and updated as appropriate in a database formed in the auxiliary storage device 43.
- FIG. 3 is a flow chart showing an example of the flow of processing relating to person candidate area extraction according to the present invention.
- First, the variance of the image feature is computed for each side forming the image frame of the image G, or a line near each side.
- For example, when the image G read by the image reading means 10 has a rectangular image frame F as shown in FIG. 4,
- the frame F has four sides f1, f2, f3, f4; for each pixel forming those four sides,
- or each pixel forming the lines L1, L2, L3, L4 that lie near and parallel to the respective sides, the image feature is computed from the edge intensity, the hue angle, or both.
- The terms "edge strength" and "hue angle" in the present invention have the same meanings as generally defined in the image-processing field.
- The edge strength can be computed easily by first computing the luminance (Y) of each pixel and then applying a known first-derivative (differential) edge-detection operator, such as the Sobel edge-detection operator, to the luminance values.
- FIGS. 5(a) and 5(b) show this Sobel edge-detection operator. The operator (filter) shown in FIG. 5(a) adjusts, among the eight pixel values surrounding the pixel of interest, the three pixel values in each of the left and right columns, thereby enhancing edges in the horizontal direction; the operator shown in FIG. 5(b) similarly adjusts the rows above and below to enhance edges in the vertical direction.
- The hue angle is an attribute indicating differences in color:
- it means the angle from a reference color to the position of the target color on a color index chart such as the Munsell hue circle or Munsell color solid. For example, on the Munsell hue circle, if the background color used as the reference is Blue, then a target hue of Yellow or Red gives a larger angle (a larger image feature) than a hue of Green.
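The patent describes hue in Munsell terms; for illustration only, a hue angle can be approximated with the standard HSV conversion from the Python standard library (an assumption, not the patent's chart-based definition):

```python
import colorsys

def hue_angle(r, g, b):
    """Hue angle in degrees (0-360) for an RGB pixel with 0-255 channels,
    via the standard HSV conversion. Red maps to 0°, green to 120°,
    blue to 240°."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0
```

The angular distance between a background reference hue and a pixel's hue can then serve as the per-pixel feature value.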
- Next, in step S102, the side or nearby line among the sides f1, f2, f3, f4 or the lines L1, L2, L3, L4 with the largest variance of the computed image feature is selected, and the part of the frame with that side or line is regarded as the ground direction.
- In the example of FIG. 4, every side or line other than the side f4 or the line L4 consists only of background, whereas part of the person (the chest) lies on the side f4 or the line L4, so the variance of the image feature is largest for f4 or L4. As a result, the side f4 on the left of the figure, or the line L4, is regarded as the ground direction.
- The process then proceeds to step S104, and the image G is rotated to align with the top-bottom direction.
- That is, since the side f4 or the line L4 has been regarded as the ground direction, the image is rotated so that this side faces the actual ground direction, as shown by the arrow in the figure.
- The process then proceeds to the next step S106, and the variance of the image feature is obtained for all vertical lines.
- That is, the image feature is computed for each vertical line consisting of 3000 pixels arranged in the vertical direction, and the variance is obtained for each of the 500 vertical lines in total.
- The process then proceeds to step S108, where a threshold is set for the variances. Vertical lines whose variances exceed the threshold are searched inward, in order, from the left side (left frame) and the right side (right frame) of the image G; the region whose lines exceed the threshold is regarded as the person candidate region, and the center of its horizontal extent is taken as the center of the person candidate region.
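The inward search of step S108 might be sketched as follows (the helper name `horizontal_extent` and the return convention are assumptions): scanning from each frame toward the center, the first vertical lines over the threshold mark the horizontal extent (the points a and b referred to later).

```python
import numpy as np

def horizontal_extent(col_var, threshold):
    """Search inward from the left and right frames for the first vertical
    line whose variance exceeds the threshold; return (a, b) or None."""
    over = col_var > threshold
    if not over.any():
        return None
    a = int(np.argmax(over))                        # first exceeding line from the left
    b = len(over) - 1 - int(np.argmax(over[::-1]))  # first exceeding line from the right
    return a, b

col_var = np.array([0.0, 0.1, 0.2, 5.0, 7.0, 6.0, 0.2, 0.1])
print(horizontal_extent(col_var, 1.0))  # → (3, 5)
```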
- The process then proceeds to step S110, where, in the same manner as described above, the variance is determined for every horizontal line; a variance is obtained for each of the 300 horizontal lines in total.
- A threshold is then set for these variances in the same manner as in step S108 above, and the process proceeds to the next step S112.
- In step S112, the horizontal lines are searched downward in order from the upper side (upper frame) of the image G for a line whose variance exceeds the threshold; since the threshold is first exceeded at point c, the region below point c is identified as the person candidate region in the vertical direction.
- The point c is thus identified as the top of the figure, that is, the top of the head. Note that, as described above, a region occupied by part of the person always lies in the ground direction of this image G, so a search from the lower side is unnecessary.
- The process then proceeds to step S114; once the regions whose variances exceed the threshold have been identified in the vertical and horizontal directions respectively, the area where these regions overlap is identified and extracted as the person candidate region.
- In this example, the overlap of the region between points a and b and the region below point c is extracted as the person candidate region.
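Putting the two line-direction searches together, the overlap step can be sketched end to end (again a hypothetical helper, `person_candidate_box`, under the same assumptions as above): columns a..b come from the left/right search, row c from the top-down search, and their intersection is the candidate box.

```python
import numpy as np

def person_candidate_box(feat, threshold):
    """Intersect the vertical-line and horizontal-line selections:
    columns a..b (variance over threshold, searched inward from the
    left and right frames) and rows below c (first row over threshold,
    searched downward from the top)."""
    col_over = feat.var(axis=0) > threshold
    row_over = feat.var(axis=1) > threshold
    if not col_over.any() or not row_over.any():
        return None
    a = int(np.argmax(col_over))
    b = len(col_over) - 1 - int(np.argmax(col_over[::-1]))
    c = int(np.argmax(row_over))        # top of the head
    return a, b, c                      # overlap: rows >= c, columns a..b

rng = np.random.default_rng(2)
feat = np.full((100, 160), 1.0)                         # uniform background
feat[30:, 50:110] = rng.uniform(0.0, 8.0, size=(70, 60))  # person region
print(person_candidate_box(feat, 0.5))  # → (50, 109, 30)
```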
- As described above, the present invention exploits the fact that the variance of the image feature value tends to be high in regions where a person is present and low in regions, such as the background, where no person is present. Regions whose variances exceed the threshold are selected in each line direction, and the area where the selected regions overlap is extracted as the person candidate region; the region containing the human figure in the image G can therefore be extracted robustly and rapidly, while the top-bottom orientation of G is also corrected.
- Although the case of an image G with a rectangular image frame F has been described, the top-bottom direction can of course be determined and the person candidate region extracted by the same method for triangular and other polygonal image frames. In the case of a circular or elliptical image frame F, dividing its outer edge at constant intervals and judging each segment, as with a polygonal image frame F, likewise makes it possible to determine the top-bottom direction and extract the person candidate region.
- The specific method of setting the threshold is not particularly limited. As shown in FIG. 6, when there is a marked difference in the variance of the image feature value between the background portion and the person portion, setting the threshold low allows the person candidate region to be extracted more accurately; when no significant difference appears because something other than the person is shown in the background, however, the threshold needs to be set higher.
- In general, a threshold of about the variance value around the image frame plus roughly one quarter of the maximum variance value is considered appropriate.
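That rule of thumb is a one-liner; the sketch below simply encodes it (the function name is an assumption):

```python
import numpy as np

def rule_of_thumb_threshold(line_variances, frame_variance):
    """Threshold = variance value around the image frame plus one quarter
    of the maximum per-line variance, as suggested in the text."""
    return frame_variance + line_variances.max() / 4.0

line_var = np.array([0.2, 0.3, 8.0, 9.0, 0.2])
print(rule_of_thumb_threshold(line_var, 0.25))  # → 2.5
```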
- Edge intensity as the image feature value can be calculated at high speed and reduces the effect of illumination, but it is not appropriate when something other than the person appears in the background and the difference in edge intensity between the background and the person is small; in such cases, using the hue angle instead contributes more reliably to the extraction of the person candidate region.
- In the above description, the determination and correction of the orientation of the human image are performed before the person region is extracted, but these judgments and corrections can be omitted when the orientation of the human image in the target image already matches the actual top-bottom direction.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05704381A EP1710747A4 (en) | 2004-01-27 | 2005-01-27 | METHOD FOR EXTRACTING A PERSONAL CANDIDATE AREA IN A PICTURE, PERSONAL CANDIDATE RANGE EXTRACTION SYSTEM, PERSONAL CANDIDATE RANGE EXTRACTION PROGRAM, METHOD FOR DETERMINING ABOVE AND BELOW A PERSON PICTURE, SYSTEM FOR DECISIONS ABOVE AND BELOW AND PROGRAM FOR DECISIONS ABOVE AND BELOW |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004018376A JP3714350B2 (ja) | 2004-01-27 | 2004-01-27 | 画像中の人物候補領域抽出方法及び人物候補領域抽出システム並びに人物候補領域抽出プログラム |
JP2004-018376 | 2004-01-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005071611A1 true WO2005071611A1 (ja) | 2005-08-04 |
Family
ID=34805567
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/001568 WO2005071611A1 (ja) | 2004-01-27 | 2005-01-27 | 画像中の人物候補領域抽出方法及び人物候補領域抽出システム、人物候補領域抽出プログラム、並びに人物像の天地判定方法及び天地判定システム、天地判定プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20050196044A1 (ja) |
EP (1) | EP1710747A4 (ja) |
JP (1) | JP3714350B2 (ja) |
KR (1) | KR20060129332A (ja) |
CN (1) | CN1910613A (ja) |
WO (1) | WO2005071611A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101894272A (zh) * | 2010-08-10 | 2010-11-24 | 福州展旭电子有限公司 | 两凝胶图像间的蛋白质点自动匹配方法 |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8345918B2 (en) * | 2004-04-14 | 2013-01-01 | L-3 Communications Corporation | Active subject privacy imaging |
US7386150B2 (en) * | 2004-11-12 | 2008-06-10 | Safeview, Inc. | Active subject imaging with body identification |
JP4992212B2 (ja) * | 2005-09-06 | 2012-08-08 | ソニー株式会社 | 画像処理装置、画像判定方法及びプログラム |
JP2007101573A (ja) * | 2005-09-30 | 2007-04-19 | Fujifilm Corp | プリント注文受付装置 |
EP2024929A2 (en) * | 2006-04-21 | 2009-02-18 | Koninklijke Philips Electronics N.V. | Picture enhancing increasing precision smooth profiles |
US7995106B2 (en) * | 2007-03-05 | 2011-08-09 | Fujifilm Corporation | Imaging apparatus with human extraction and voice analysis and control method thereof |
CN101937563B (zh) * | 2009-07-03 | 2012-05-30 | 深圳泰山在线科技有限公司 | 一种目标检测方法和设备及其使用的图像采集装置 |
KR101636370B1 (ko) | 2009-11-10 | 2016-07-05 | 삼성전자주식회사 | 영상 처리 장치 및 방법 |
CN102375988B (zh) * | 2010-08-17 | 2013-12-25 | 富士通株式会社 | 文件图像处理方法和设备 |
CN102831425B (zh) * | 2012-08-29 | 2014-12-17 | 东南大学 | 一种人脸图像快速特征提取方法 |
JP6265640B2 (ja) * | 2013-07-18 | 2018-01-24 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法及びプログラム |
JP6423625B2 (ja) * | 2014-06-18 | 2018-11-14 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
JP6017005B2 (ja) * | 2015-11-04 | 2016-10-26 | キヤノン株式会社 | 画像検索装置、画像検索方法及びプログラム |
JP6841232B2 (ja) * | 2015-12-18 | 2021-03-10 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
US9996902B2 (en) | 2016-01-19 | 2018-06-12 | Google Llc | Image upscaling |
CN105741300B (zh) * | 2016-02-03 | 2018-07-20 | 浙江科澜信息技术有限公司 | 一种区域分割截图方法 |
CN105761253B (zh) * | 2016-02-03 | 2018-07-24 | 浙江科澜信息技术有限公司 | 一种三维空间虚拟数据高清截图方法 |
CN112686851B (zh) * | 2020-12-25 | 2022-02-08 | 合肥联宝信息技术有限公司 | 一种图像检测方法、装置及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08138024A (ja) * | 1994-11-04 | 1996-05-31 | Konica Corp | 画像の向き判定方法 |
JP2001014455A (ja) * | 1999-07-01 | 2001-01-19 | Nissha Printing Co Ltd | 画像処理方法およびこれに用いる画像処理装置、記録媒体 |
JP2001111806A (ja) * | 1999-10-05 | 2001-04-20 | Minolta Co Ltd | 画像作成装置並びに画像の上下を認識する装置及び方法 |
JP2002501234A (ja) * | 1998-01-08 | 2002-01-15 | シャープ株式会社 | 人間の顔の追跡システム |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4870694A (en) * | 1987-03-24 | 1989-09-26 | Fuji Photo Film Co., Ltd. | Method of determining orientation of image |
US5150433A (en) * | 1989-12-01 | 1992-09-22 | Eastman Kodak Company | Histogram/variance mechanism for detecting presence of an edge within block of image data |
US5781665A (en) * | 1995-08-28 | 1998-07-14 | Pitney Bowes Inc. | Apparatus and method for cropping an image |
US6137893A (en) * | 1996-10-07 | 2000-10-24 | Cognex Corporation | Machine vision calibration targets and methods of determining their location and orientation in an image |
US6173087B1 (en) * | 1996-11-13 | 2001-01-09 | Sarnoff Corporation | Multi-view image registration with application to mosaicing and lens distortion correction |
US6055326A (en) * | 1997-06-16 | 2000-04-25 | Lockheed Martin Management | Method for orienting electronic medical images |
US6128397A (en) * | 1997-11-21 | 2000-10-03 | Justsystem Pittsburgh Research Center | Method for finding all frontal faces in arbitrarily complex visual scenes |
US6434271B1 (en) * | 1998-02-06 | 2002-08-13 | Compaq Computer Corporation | Technique for locating objects within an image |
JP2000048184A (ja) * | 1998-05-29 | 2000-02-18 | Canon Inc | 画像処理方法及び顔領域抽出方法とその装置 |
AUPP400998A0 (en) * | 1998-06-10 | 1998-07-02 | Canon Kabushiki Kaisha | Face detection in digital images |
US6263113B1 (en) * | 1998-12-11 | 2001-07-17 | Philips Electronics North America Corp. | Method for detecting a face in a digital image |
FR2795206B1 (fr) * | 1999-06-17 | 2001-08-31 | Canon Kk | Procede de modification d'orientation geometrique d'une image |
JP2001134772A (ja) * | 1999-11-04 | 2001-05-18 | Honda Motor Co Ltd | 対象物認識装置 |
JP4778158B2 (ja) * | 2001-05-31 | 2011-09-21 | オリンパス株式会社 | 画像選出支援装置 |
AUPR541801A0 (en) * | 2001-06-01 | 2001-06-28 | Canon Kabushiki Kaisha | Face detection in colour images with complex background |
US6915025B2 (en) * | 2001-11-27 | 2005-07-05 | Microsoft Corporation | Automatic image orientation detection based on classification of low-level image features |
US7430303B2 (en) * | 2002-03-29 | 2008-09-30 | Lockheed Martin Corporation | Target detection method and system |
US7116823B2 (en) * | 2002-07-10 | 2006-10-03 | Northrop Grumman Corporation | System and method for analyzing a contour of an image by applying a Sobel operator thereto |
US7983446B2 (en) * | 2003-07-18 | 2011-07-19 | Lockheed Martin Corporation | Method and apparatus for automatic object identification |
2004
- 2004-01-27 JP JP2004018376A patent/JP3714350B2/ja not_active Expired - Fee Related
2005
- 2005-01-26 US US11/043,908 patent/US20050196044A1/en not_active Abandoned
- 2005-01-27 CN CNA2005800031263A patent/CN1910613A/zh active Pending
- 2005-01-27 KR KR1020067015241A patent/KR20060129332A/ko not_active Application Discontinuation
- 2005-01-27 WO PCT/JP2005/001568 patent/WO2005071611A1/ja active Application Filing
- 2005-01-27 EP EP05704381A patent/EP1710747A4/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08138024A (ja) * | 1994-11-04 | 1996-05-31 | Konica Corp | 画像の向き判定方法 |
JP2002501234A (ja) * | 1998-01-08 | 2002-01-15 | シャープ株式会社 | 人間の顔の追跡システム |
JP2001014455A (ja) * | 1999-07-01 | 2001-01-19 | Nissha Printing Co Ltd | 画像処理方法およびこれに用いる画像処理装置、記録媒体 |
JP2001111806A (ja) * | 1999-10-05 | 2001-04-20 | Minolta Co Ltd | 画像作成装置並びに画像の上下を認識する装置及び方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1710747A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101894272A (zh) * | 2010-08-10 | 2010-11-24 | 福州展旭电子有限公司 | 两凝胶图像间的蛋白质点自动匹配方法 |
Also Published As
Publication number | Publication date |
---|---|
US20050196044A1 (en) | 2005-09-08 |
EP1710747A1 (en) | 2006-10-11 |
JP2005215760A (ja) | 2005-08-11 |
JP3714350B2 (ja) | 2005-11-09 |
KR20060129332A (ko) | 2006-12-15 |
CN1910613A (zh) | 2007-02-07 |
EP1710747A4 (en) | 2008-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005071611A1 (ja) | 画像中の人物候補領域抽出方法及び人物候補領域抽出システム、人物候補領域抽出プログラム、並びに人物像の天地判定方法及び天地判定システム、天地判定プログラム | |
JP4505362B2 (ja) | 赤目検出装置および方法並びにプログラム | |
EP2833288B1 (en) | Face calibration method and system, and computer storage medium | |
KR100480781B1 (ko) | 치아영상으로부터 치아영역 추출방법 및 치아영상을이용한 신원확인방법 및 장치 | |
JP5456159B2 (ja) | 背景から前景の頭頂部を分離するための方法および装置 | |
US8526684B2 (en) | Flexible image comparison and face matching application | |
US20090245625A1 (en) | Image trimming device and program | |
US10404912B2 (en) | Image capturing apparatus, image processing apparatus, image capturing system, image processing method, and storage medium | |
JP6265592B2 (ja) | 顔特徴抽出装置および顔認証システム | |
JP2016018422A (ja) | 画像処理方法、画像処理装置、プログラム、記録媒体、生産装置、及び組立部品の製造方法 | |
JP6873644B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
WO2005055143A1 (ja) | 人物顔の頭頂部検出方法及び頭頂部検出システム並びに頭頂部検出プログラム | |
JP6107372B2 (ja) | 画像処理装置、画像処理方法および画像処理プログラム | |
CN115049689A (zh) | 一种基于轮廓检测技术的乒乓球识别方法 | |
US20040161166A1 (en) | Image processing method, image processing apparatus, storage medium and program | |
JP2006127056A (ja) | 画像処理方法、画像処理装置 | |
WO2005055144A1 (ja) | 人物顔のあご検出方法及びあご検出システム並びにあご検出プログラム | |
US20110097000A1 (en) | Face-detection Processing Methods, Image Processing Devices, And Articles Of Manufacture | |
WO2011065419A1 (ja) | 家屋倒壊領域抽出システム、家屋倒壊領域抽出方法、及び家屋倒壊領域抽出プログラム | |
JP2005309717A (ja) | マーカ処理方法、マーカ処理装置、プログラム、および、記録媒体 | |
JP5095501B2 (ja) | 注目度算出装置および方法並びにプログラム | |
WO2024047847A1 (ja) | 検出装置、検出方法、および検出プログラム | |
JP2005346663A (ja) | オブジェクト画像判別方法およびオブジェクト画像判別システム、オブジェクト画像判別プログラム、並びに誤検出判別方法、誤検出判別システム、誤検出判別プログラム | |
KR101373405B1 (ko) | 식물 영상에 대한 객체 검출 방법 및 장치 | |
JP6558978B2 (ja) | 画像処理装置、画像処理方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005704381 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580003126.3 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067015241 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2005704381 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067015241 Country of ref document: KR |