CN112470165B - Image processing apparatus and image processing method - Google Patents

Image processing apparatus and image processing method

Info

Publication number
CN112470165B
CN112470165B (application CN202080002690.8A)
Authority
CN
China
Prior art keywords
target
pixel
image
color
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202080002690.8A
Other languages
Chinese (zh)
Other versions
CN112470165A (en)
Inventor
徳永将之 (Tokunaga Masayuki)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Toshiba Visual Solutions Corp
Original Assignee
Hisense Visual Technology Co Ltd
Toshiba Visual Solutions Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd, Toshiba Visual Solutions Corp filed Critical Hisense Visual Technology Co Ltd
Publication of CN112470165A publication Critical patent/CN112470165A/en
Application granted granted Critical
Publication of CN112470165B publication Critical patent/CN112470165B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern

Abstract

Embodiments of the present invention relate to an image processing apparatus and an image processing method that perform target detection by machine learning and perform effective image quality adjustment processing for a target by using information of a color space. According to an embodiment, an image processing apparatus includes: a reduction unit that reduces an input image and outputs a reduced image; a target detection unit that detects a predetermined target object from the reduced image; an area determination unit configured to determine an object candidate area including the object in the input image, based on a detection result of the object detection unit; a color space determination unit configured to determine whether or not the target candidate region is a region corresponding to the target object, based on information on a color space corresponding to the target object; and an image processing circuit that controls image processing on the input image based on a determination result of the color space determination unit.

Description

Image processing apparatus and image processing method
This application claims priority to Japanese patent application No. 2019-120131, entitled "image processing apparatus and image processing method", filed with the Japan Patent Office on June 27, 2019, the entire contents of which are incorporated by reference in the present application.
Technical Field
Embodiments of the present application relate to an image processing apparatus and an image processing method.
Background
Conventionally, various image processing techniques such as super-resolution processing, sharpening processing, and noise reduction processing have been used to improve the quality of an image. In an image processing apparatus that performs such image quality improvement processing, better image quality is achieved by performing image processing adapted to the objects in the image.
For example, the face of a person, which is important as a recognition target, may be detected, and super-resolution processing, noise reduction processing, and the like may be performed taking the detected face region into account. In recent years, processing using deep learning is sometimes employed as a method of face detection. In this case, in order to reduce the amount of computation for face detection, inference of the face region may be performed on a reduced image.
However, there is a problem that an accurate face region cannot be determined from a face-region detection result obtained on the reduced image, and thus sufficient image quality improvement cannot be achieved.
Prior art documents
Patent document
Patent document 1: japanese laid-open patent publication No. 2019-40382
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing apparatus and an image processing method that can perform effective image quality adjustment processing for a target by performing target detection by machine learning and using information of a color space.
An image processing device according to an embodiment of the present application includes: a reduction unit that reduces an input image and outputs a reduced image; a target detection unit that detects a predetermined target object from the reduced image; a region determination unit configured to determine a target candidate region including the target object in the input image, based on a detection result of the target detection unit; a color space determination unit configured to determine whether or not the target candidate region is a region corresponding to the target object, based on information of a color space corresponding to the target object; and an image processing circuit that controls image processing for the input image based on a determination result of the color space determination unit.
Drawings
Fig. 1 is a block diagram showing an image processing apparatus according to an embodiment of the present application;
fig. 2 is an explanatory diagram for explaining an example of the processing of the target detection unit 4;
fig. 3 is an explanatory diagram for explaining an example of the processing of the target detection unit 4;
fig. 4 is a flowchart for explaining the operation of the embodiment.
Description of the reference numerals
1…reduction circuit; 2…region determination circuit; 3…image quality improvement processing circuit; 4…target detection section; 5…color space determination section.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the drawings.
Fig. 1 is a block diagram showing an image processing apparatus according to an embodiment of the present application. In the present embodiment, a target in a moving image is detected by a detector using an inference model obtained by machine learning, and the color space of the detected target region is then examined, so that image quality improvement processing for the target and its vicinity can be controlled accurately. This improves the image quality of a target such as a human face in an image.
The image processing apparatus of the present embodiment can be used for various apparatuses that perform image processing. For example, the image processing apparatus according to the present embodiment can be used in a television receiver, a video recorder, or the like, and can improve the image quality of various targets in an image of a broadcast program, and as a result, a high-quality moving image can be obtained in the entire image. Further, for example, the image processing apparatus according to the present embodiment may be used for a monitoring camera, an in-vehicle camera, or the like, and the image quality of various objects in a captured moving image may be improved, and as a result, the accuracy of recognition of an object such as a person may be improved.
In fig. 1, an input image is supplied to a reduction circuit 1, an area determination circuit 2, and an image quality improvement processing circuit 3. The input image is a moving image based on a predetermined frame rate, a predetermined resolution, and a predetermined standard. For example, the moving image may be a moving image based on a broadcast signal received by a television receiver or the like, or may be a moving image obtained from a predetermined camera system.
The reduction circuit 1 as a reduction section performs reduction processing on the input image. For example, the reduction circuit 1 may employ various known reduction algorithms such as bilinear or bicubic interpolation; the algorithm is not particularly limited. The reduction circuit 1 generates a reduced image from the input image; the reduction ratio depends on the input image size and the computation speed of the target detection unit 4. The reduction circuit 1 sequentially outputs the reduced images, generated at a predetermined frame rate, to the target detection unit 4.
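The reduction step can be sketched as follows. This is a minimal NumPy illustration of bilinear reduction for a grayscale image (function and variable names are ours, not from the patent); a real reduction circuit or library routine would be used in practice.

```python
import numpy as np

def bilinear_reduce(img, scale):
    """Reduce a 2-D grayscale image by `scale` (e.g. 0.5) using
    bilinear interpolation. Minimal sketch of the reduction step."""
    h, w = img.shape
    oh, ow = max(1, int(h * scale)), max(1, int(w * scale))
    # Fractional source coordinates for each output pixel
    ys = np.linspace(0, h - 1, oh)
    xs = np.linspace(0, w - 1, ow)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Interpolate horizontally on the two bracketing rows, then vertically
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = bilinear_reduce(np.arange(16, dtype=float).reshape(4, 4), 0.5)
```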
The target detection unit 4 performs processing for detecting the object to be detected (hereinafter referred to as the target object) from the input reduced image using a machine learning technique. The target object may be a predetermined target. In the target detection unit 4, a predetermined network constituting an inference model for detecting the target object is implemented in hardware or software.
The inference model of the target detection unit 4 is obtained by training a predetermined network on a large amount of training data, created by attaching to each reduced image, as a label, information indicating the range of the target object in that image. For an input reduced image, the inference model outputs information indicating the range of the target object together with its reliability. As the predetermined network, a DNN (deep neural network) may be used. As a machine learning method, the target detection unit 4 may also use a method other than a deep neural network, for example, a method using Haar-like features.
Fig. 2 and 3 are explanatory diagrams for explaining an example of the processing of the object detection unit 4. Fig. 2 and 3 show an example of detection processing in the case where the target object is a human face.
The reduced image Pin in fig. 2 and 3 represents the reduced image input to the target detection unit 4. The reduced image Pin includes images of persons O1 and O2, and a circle indicates the image of a face portion, which is the target object. In the example of fig. 2, the target detection unit 4 detects, by inference processing, a region DR1 including the face portion of person O1 and a region DR2 including the face portion of person O2 as the detection regions of the target object, as shown in the reduced image Pout. For example, the target detection unit 4 detects a face portion and sets a rectangular region of a predetermined size, centered on the coordinates of the center of the detected face, as the detection region. The target detection unit 4 outputs information on the regions DR1 and DR2 as the detection result of the target object.
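The "fixed-size rectangle centered on the detected face" step can be sketched as follows; names and the clipping behaviour at image borders are our illustrative assumptions, since the patent does not prescribe a concrete implementation.

```python
def detection_region(cx, cy, box_w, box_h, img_w, img_h):
    """Return (x0, y0, x1, y1): a box of size box_w x box_h centered
    on the detected face center (cx, cy), clipped to the image."""
    x0 = max(0, cx - box_w // 2)
    y0 = max(0, cy - box_h // 2)
    x1 = min(img_w, x0 + box_w)
    y1 = min(img_h, y0 + box_h)
    return x0, y0, x1, y1
```

For a face centered at (10, 10) in a 64x64 reduced image, an 8x8 detection region spans (6, 6) to (14, 14); near the border the box is clipped rather than shifted.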
On the other hand, fig. 3 shows an example of detecting the range of the target object in small regions (hereinafter referred to as determination small regions) into which the reduced image Pin is divided by a grid. In this case, the inference model constituting the target detection unit 4 can be obtained by learning, as training data, reduced images in which each determination small region carries a label indicating whether or not it is part of the target object.
Accordingly, the target detection unit 4 detects, by inference processing, the region DR3 consisting of the 2 determination small regions detected as the face portion of person O1 and the region DR4 consisting of the 4 determination small regions detected as the face portion of person O2, as indicated in the reduced image Pout. The target detection unit 4 outputs information on the regions DR3 and DR4 as the detection result of the target object.
The target detection unit 4 outputs information on the detection region to the region determination circuit 2. In both the examples of fig. 2 and fig. 3, the region determination circuit 2 as a region determination unit converts the detection region detected on the reduced image into a region having a position and size corresponding to the size of the input image (hereinafter referred to as a target inclusion region).
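Mapping the detection box back to input-image coordinates amounts to dividing by the reduction factor. A minimal sketch, assuming a uniform reduction factor on both axes (the patent does not fix the coordinate convention):

```python
def to_inclusion_region(box, scale):
    """Map a detection box (x0, y0, x1, y1) found on the reduced image
    back to input-image coordinates. `scale` is the reduction factor
    (reduced size / input size), assumed uniform in both axes."""
    x0, y0, x1, y1 = box
    return (int(x0 / scale), int(y0 / scale),
            int(x1 / scale), int(y1 / scale))
```

For example, with a reduction factor of 0.25, the box (2, 3, 5, 7) on the reduced image corresponds to (8, 12, 20, 28) on the input image.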
The region determination circuit 2 obtains candidates for the region considered to constitute the target object (hereinafter referred to as a target candidate region) from the input image within the target inclusion region. For example, it determines, for each pixel of the input image within the target inclusion region, whether the pixel is a candidate for a pixel constituting the target object (hereinafter referred to as a target pixel candidate).
For example, the region determination circuit 2 may use the reliability score obtained when the detection region was determined as a score (hereinafter referred to as a region score) for judging whether each pixel of the target inclusion region is a target pixel candidate. In this case, in the example of fig. 2, all pixels in the target inclusion region corresponding to region DR1 have the same region score, as do all pixels in the target inclusion region corresponding to region DR2. In the example of fig. 3, all pixels have the same region score within each target inclusion region corresponding to one of the determination small regions of DR3 and DR4.
The region determination circuit 2 may determine the region score using other information in addition to the reliability score obtained when the detection region was determined. The region determination circuit 2 may then set pixels whose region score exceeds a predetermined threshold as target pixel candidates.
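The region-score map and thresholding can be sketched as below: every pixel inside an inclusion region inherits that region's detection reliability, and pixels above the threshold become target pixel candidates. The use of the maximum where regions overlap is our assumption; the patent does not specify overlap handling.

```python
import numpy as np

def target_pixel_candidates(img_shape, boxes_with_scores, threshold):
    """Build a per-pixel region-score map from (box, score) pairs,
    then threshold it to get the target-pixel-candidate mask."""
    score = np.zeros(img_shape, dtype=float)
    for (x0, y0, x1, y1), s in boxes_with_scores:
        # Each pixel in the inclusion region gets the region's score;
        # keep the maximum where regions overlap (an assumption).
        score[y0:y1, x0:x1] = np.maximum(score[y0:y1, x0:x1], s)
    return score > threshold

mask = target_pixel_candidates((8, 8), [((1, 1, 4, 4), 0.9)], 0.5)
```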
In the present embodiment, in order to obtain a target pixel that is a pixel constituting a target object, a target pixel candidate is supplied to the color space determination unit 5. The target pixel is a pixel for performing image processing using a processing parameter for the target object.
The color space determination unit 5 determines the target pixels based on whether each target pixel candidate holds information corresponding to the color space of the target object. For example, when the target object is a human face and the color information of a target pixel candidate indicates human skin color (face color), that pixel can be determined to hold information corresponding to the color space of the target object.
For example, the color space determination unit 5 may convert each target pixel candidate in the input image into information of a predetermined color space and judge its color. For example, the color space determination unit 5 converts each target pixel candidate into the HSV color space and judges, for each pixel, whether its color lies within a predetermined range corresponding to the color of the target object (hereinafter referred to as the target color range) in the HSV color space, thereby determining the target pixels. A target pixel may also be determined by whether at least one of hue (H), saturation (S), and value (V) in the HSV color space lies within the target color range.
Alternatively, the color space determination unit 5 may determine target pixels by converting each target pixel candidate in the input image into the YCbCr color space and judging, for each pixel, whether its color lies within a target color range in the YCbCr color space. In this case as well, a target pixel may be determined by whether at least one of the YCbCr components lies within the target color range.
The color space used by the color space determination unit 5 is not limited to the HSV and YCbCr color spaces described above; various color spaces such as the RGB color space may be used. When a human face is the target, the target color range differs depending on the individual or the like; the color space determination unit 5 may therefore set a plurality of target color ranges when determining the target pixels.
In the above description, target pixels were determined by whether the color of each target pixel candidate lies within the target color range. Alternatively, the color space determination unit 5 may set a reference point within the target color range, assign each pixel a color score corresponding to the distance from the reference point to the point of that pixel's color, and set pixels whose color score exceeds a predetermined threshold as target pixels. The in-range determination described above can then be regarded as the special case in which the color score takes its maximum value inside the target color range and its minimum value outside it.
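The color-score idea can be sketched with the standard-library `colorsys` module. The reference point `REF_HSV` and the linear distance-to-score mapping are illustrative assumptions only; the patent gives no concrete skin-tone values.

```python
import colorsys

# Hypothetical skin-tone reference point in HSV (H, S, V in [0, 1]).
# These numbers are illustrative, not taken from the patent.
REF_HSV = (0.06, 0.45, 0.80)

def color_score(r, g, b):
    """Score in [0, 1]: 1 at the reference point, decreasing with
    Euclidean distance from it in HSV space."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    d = ((h - REF_HSV[0]) ** 2 + (s - REF_HSV[1]) ** 2
         + (v - REF_HSV[2]) ** 2) ** 0.5
    return max(0.0, 1.0 - d)

def is_target_pixel(r, g, b, threshold=0.8):
    """Pixels whose color score exceeds the threshold are target pixels."""
    return color_score(r, g, b) > threshold
```

With these assumed values, a skin-like pixel such as RGB (220, 170, 130) scores above the threshold, while a saturated blue pixel does not.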
For example, in the example of fig. 3, according to the region score, every pixel of the regions obtained by enlarging regions DR3 and DR4 (which contain the circular face portions) to the size of the input image may become a target pixel candidate. However, as described above, when the reliability score at the time of detection is used as the region score, all pixels within a target inclusion region or determination small region share the same region score. As a result, particularly around the outline of the face, pixels belonging to areas other than the face (the background) also become target pixel candidates.
In the present embodiment, a color score is obtained for each target pixel candidate, and by using the color score, such background pixels can be excluded from the target pixels.
The color space determination unit 5 outputs, for each target pixel candidate, either the determination result of whether it is a target pixel or the color score information to the image quality improvement processing circuit 3. Since the binary determination result can be expressed as color score information, the following description assumes that color score information is supplied to the image quality improvement processing circuit 3.
The image quality improvement processing circuit 3 constituting the image processing circuit performs predetermined image quality processing on the input image to perform image quality improvement processing. In the present embodiment, the image quality improvement processing circuit 3 may set the processing parameters of the image quality processing for each pixel based on the information of the color score with respect to the input image or the target pixel candidate in the input image.
For example, the image quality improvement processing circuit 3 may perform sharpening by setting processing parameters suitable for sharpening for the target pixels, i.e., pixels whose color score exceeds a predetermined threshold. It may also perform noise reduction by setting processing parameters suitable for noise reduction for the remaining pixels of the input image, or for target pixel candidates whose color score is at or below the threshold. Aliasing (folding) noise is likely to occur at the boundary between a textured object such as a human face and a relatively smooth background; by removing such noise and sharpening the target, the image quality improvement processing circuit 3 can improve the image quality of the target object.
The image quality improvement processing circuit 3 is not limited to sharpening and noise reduction; it may perform various kinds of image processing such as super-resolution processing. In super-resolution processing, the per-pixel processing parameters may be varied according to the color score. Although the example above sets processing parameters according to only the color score of target pixel candidates whose region score exceeds the predetermined threshold, processing parameters may also be set per pixel according to both the region score and the color score. The processing parameters may also be changed per predetermined region rather than per pixel.
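The per-pixel parameter control can be sketched as below: target pixels receive a sharpening kernel, the rest a smoothing (noise-reduction) kernel. The specific 3x3 kernels are common textbook examples, not the patent's parameters.

```python
import numpy as np

def conv3(img, k):
    """3x3 convolution with edge replication (minimal helper)."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Illustrative kernels: Laplacian-style sharpening and box smoothing.
SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)
SMOOTH = np.full((3, 3), 1.0 / 9.0)

def enhance(img, target_mask):
    """Sharpen where target_mask is True, smooth (denoise) elsewhere."""
    sharp, smooth = conv3(img, SHARPEN), conv3(img, SMOOTH)
    return np.where(target_mask, sharp, smooth)
```

On a uniform image both branches leave the values unchanged, while at a bright target pixel the sharpening branch amplifies the local contrast.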
Next, an operation of the embodiment configured as described above will be described with reference to fig. 4. Fig. 4 is a flowchart for explaining the operation of the embodiment.
A moving image or the like is input as an input image to the reduction circuit 1, the area determination circuit 2, and the image quality improvement processing circuit 3. The flowchart of fig. 4 shows the processing for each frame of a moving image to be input, and the circuits of fig. 1 execute the processing of fig. 4 for a predetermined frame.
The reduction circuit 1 performs reduction processing in step S1 of fig. 4. The input image is converted into a reduced image by a predetermined reduction algorithm. The reduced image is supplied to the object detection unit 4.
The target detection unit 4 detects the target object using a machine learning technique (step S2). For example, the target detection unit 4 obtains a rectangular detection region as the image region of the target object. The detection result of the target detection unit 4 is supplied to the region determination circuit 2, and the region determination circuit 2 obtains the target inclusion region by enlarging the detection region to the position and size of the original input image (step S3).
The region determination circuit 2 obtains, for each pixel in the target inclusion region, a region score for judging whether the pixel is a candidate for a pixel constituting the target object (step S4). The region determination circuit 2 determines pixels whose region score exceeds the threshold as target pixel candidates (step S5).
The information on the target pixel candidates is supplied to the color space determination unit 5. The color space determination unit 5 obtains a color score for each target pixel candidate (step S6). For example, the color space determination unit 5 obtains the color score from the relationship between the candidate pixel's color in a predetermined color space and the target color range. That is, the larger the color score, the closer the pixel's color is to the color of the target object in that color space. By using the color score, whether each target pixel candidate is a pixel of the target object can therefore be determined with higher accuracy.
The color space determination unit 5 outputs information of the color score of each pixel of the target pixel candidate to the image quality improvement processing circuit 3. The image quality improvement processing circuit 3 sets, for example, a processing parameter for image quality processing on the input image for each pixel based on the color score (step S7), and performs the image quality improvement processing (step S8).
For example, the image quality improvement processing circuit 3 sets a processing parameter suitable for sharpening processing for a target pixel having a color score higher than a predetermined threshold value, sets a processing parameter suitable for noise reduction processing for pixels other than the target pixel, and performs image quality improvement processing. This can improve the image quality of a target object such as a human face.
In the present embodiment, the image quality improvement processing circuit 3 sets different processing parameters for the target pixels and the other pixels, and may thereby lower the image quality of the non-target portion while improving that of the target. For example, the image quality of objects other than a predetermined target object may be reduced, in which case the visibility of the target object is relatively improved.
As described above, in the present embodiment, by not only detecting the target object in a moving image with an inference model obtained by machine learning but also examining the color space of the detected target region, the image quality improvement processing applied to the target and its surroundings can be controlled with high accuracy. This improves the image quality of a target such as a human face, improves the visibility of the target in the moving image, and improves the accuracy of target recognition.
In the above-described embodiment, a human face is taken as an example of the target object, but the target object is not particularly limited. For example, animals such as dogs and cats, automobiles, balls, and the like may be set as the target object. When a golf ball is set as the target object, for instance, the image quality of the ball in a moving image tracking it can be improved, realizing image quality improvement processing in which even fine surface details such as dimples are displayed clearly.
In the circuits (the reduction circuit 1, the area determination circuit 2, the image quality improvement processing circuit 3, the object detection unit 4, and the color space determination unit 5) of the above-described embodiment, each part constituting each circuit may be configured as an electronic circuit or may be configured as a circuit block in an integrated circuit. Each circuit may be configured to include 1 or more CPUs. Further, each circuit may be configured to read a program for executing the function of each section from a storage medium such as a memory and perform an action corresponding to the read program.
The present invention is not limited to the above-described embodiments, and various modifications can be made in the implementation stage without departing from the scope of the present invention. The above embodiment includes inventions at various stages, and various inventions are extracted by appropriately combining a plurality of disclosed constituent elements. For example, even if some components are eliminated from all the components disclosed in the embodiment, if the problems described in the section of the problems to be solved by the invention can be solved and the effects described in the section of the effects of the invention can be obtained, the configuration in which the components are eliminated can be extracted as the invention.

Claims (3)

1. An image processing apparatus comprising:
a reduction unit configured to reduce an input image and output a reduced image;
a target detection unit configured to detect a predetermined target object from the reduced image;
a region determination unit configured to receive information on a detection region output by the target detection unit, convert the detection region detected in the reduced image into a target inclusion region, and determine, for the input image within the target inclusion region, whether pixels in the target inclusion region constitute target pixel candidates; wherein the target inclusion region is a region having a position and a size corresponding to the size of the input image;
a color space determination unit configured to determine whether each pixel of the target pixel candidates is a target pixel based on information of a color space of the target object; wherein the color space determination unit sets a reference point in a target color range, sets a color score corresponding to the distance from the reference point to the point of each pixel's color, and determines pixels having a color score larger than a preset threshold as target pixels; and
an image processing circuit configured to control image processing for the input image based on a determination result of the color space determination unit, including: sharpening the target pixels in the input image, and performing noise reduction on pixels in the input image whose color score is less than or equal to the preset threshold.
2. The image processing apparatus according to claim 1,
the target detection unit detects the target object from the reduced image by inference processing using a neural network.
3. An image processing method comprising:
the input image is reduced to output a reduced image,
detecting a predetermined target object from the reduced image,
receiving information related to a detection region, converting the detection region detected in the reduced image into a target inclusion region, and determining, for the input image within the target inclusion region, whether pixels in the target inclusion region constitute target pixel candidates; wherein the target inclusion region is a region having a position and a size corresponding to the size of the input image,
determining whether each pixel of the target pixel candidates is a target pixel according to information of a color space of the target object; wherein a reference point is set in a target color range, a color score corresponding to the distance from the reference point to the point of each pixel's color is set, pixels having a color score larger than a preset threshold are determined as target pixels, and
controlling image processing for the input image based on a determination result using the information of the color space, including: sharpening the target pixels in the input image, and performing noise reduction on pixels in the input image whose color score is less than or equal to the preset threshold.
CN202080002690.8A 2019-06-27 2020-06-24 Image processing apparatus and image processing method Active CN112470165B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019-120131 2019-06-27
JP2019120131A JP2021005320A (en) 2019-06-27 2019-06-27 Image processing system and image processing method
PCT/CN2020/098169 WO2020259603A1 (en) 2019-06-27 2020-06-24 Image processing apparatus and method

Publications (2)

Publication Number Publication Date
CN112470165A CN112470165A (en) 2021-03-09
CN112470165B true CN112470165B (en) 2023-04-04

Family

ID=74059858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080002690.8A Active CN112470165B (en) 2019-06-27 2020-06-24 Image processing apparatus and image processing method

Country Status (3)

Country Link
JP (1) JP2021005320A (en)
CN (1) CN112470165B (en)
WO (1) WO2020259603A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1713209A (en) * 2004-06-24 2005-12-28 诺日士钢机株式会社 Photographic image processing method and equipment
CN101964874A (en) * 2009-07-23 2011-02-02 卡西欧计算机株式会社 Image processing apparatus and image processing method
CN102780887A (en) * 2011-05-09 2012-11-14 索尼公司 Image processing apparatus and image processing method
CN102812710A (en) * 2010-05-21 2012-12-05 夏普株式会社 Colour determination device, colour determination method, image processing circuit and program
CN104244802A (en) * 2012-04-23 2014-12-24 奥林巴斯株式会社 Image processing device, image processing method, and image processing program
CN104751136A (en) * 2015-03-11 2015-07-01 西安理工大学 Face recognition based multi-camera video event retrospective trace method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000076454A (en) * 1998-08-31 2000-03-14 Minolta Co Ltd Three-dimensional shape data processor
US7120279B2 (en) * 2003-01-30 2006-10-10 Eastman Kodak Company Method for face orientation determination in digital color images
JP2010152521A (en) * 2008-12-24 2010-07-08 Toshiba Corp Apparatus and method for performing stereographic processing to image
CN103473564B (en) * 2013-09-29 2017-09-19 公安部第三研究所 A kind of obverse face detection method based on sensitizing range
IL252657A0 (en) * 2017-06-04 2017-08-31 De Identification Ltd System and method for image de-identification
CN107909551A (en) * 2017-10-30 2018-04-13 珠海市魅族科技有限公司 Image processing method, device, computer installation and computer-readable recording medium

Also Published As

Publication number Publication date
WO2020259603A1 (en) 2020-12-30
CN112470165A (en) 2021-03-09
JP2021005320A (en) 2021-01-14

Similar Documents

Publication Publication Date Title
EP1596323B1 (en) Specified object detection apparatus
EP3477931A1 (en) Image processing method and device, readable storage medium and electronic device
US8605955B2 (en) Methods and apparatuses for half-face detection
US8977056B2 (en) Face detection using division-generated Haar-like features for illumination invariance
US10506174B2 (en) Information processing apparatus and method for identifying objects and instructing a capturing apparatus, and storage medium for performing the processes
US20070076951A1 (en) Image recognition device
US20110211233A1 (en) Image processing device, image processing method and computer program
WO2009095168A1 (en) Detecting facial expressions in digital images
JP2015099559A (en) Image processing apparatus, image processing method, and program
US11900664B2 (en) Reading system, reading device, reading method, and storage medium
CN112396050B (en) Image processing method, device and storage medium
WO2017061106A1 (en) Information processing device, image processing system, image processing method, and program recording medium
CN111368698B (en) Main body identification method, main body identification device, electronic equipment and medium
US7403636B2 (en) Method and apparatus for processing an image
JP5201184B2 (en) Image processing apparatus and program
CN112470165B (en) Image processing apparatus and image processing method
JP2018109824A (en) Electronic control device, electronic control system, and electronic control method
US11915498B2 (en) Reading system, reading device, and storage medium
US20150279039A1 (en) Object detecting apparatus and method
JP4789526B2 (en) Image processing apparatus and image processing method
JP6276504B2 (en) Image detection apparatus, control program, and image detection method
CN111275045A (en) Method and device for identifying image subject, electronic equipment and medium
JP4253265B2 (en) Shadow detection apparatus, shadow detection method and shadow detection program, image processing apparatus using shadow detection apparatus, image processing method using shadow detection method, and image processing program using shadow detection program
US20230126046A1 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
JP2011159097A (en) Method and device for detecting object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant