CN110378182A - Image analysis apparatus, method for analyzing image and recording medium - Google Patents

Publication number
CN110378182A
Authority
CN
China
Prior art keywords
face
image
partial image
extracted
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910179678.3A
Other languages
Chinese (zh)
Other versions
CN110378182B (en)
Inventor
青位初美
相泽知祯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Europa Corp
Original Assignee
Europa Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Europa Corp filed Critical Europa Corp
Publication of CN110378182A publication Critical patent/CN110378182A/en
Application granted granted Critical
Publication of CN110378182B publication Critical patent/CN110378182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/14: Image acquisition
    • G06V30/144: Image acquisition using a slot moved over the image; using discrete sensing elements at predetermined points; using automatic curve following means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Abstract

The application provides an image analysis apparatus, an image analysis method, and a recording medium that can detect a detection target from image data accurately and with little processing time. A base position determining unit (113) detects, by coarse search, feature points of facial organs such as the eyes and nose from the image region containing the driver's face that a face area extraction unit (112) has extracted with a rectangular frame (E1). From the detected feature points of each organ, it detects the position of the glabella of the driver's face and determines that position as the base position (B) of the face. A face area re-extraction unit (114) then corrects the position of the rectangular frame relative to the image data so that the determined base position (B) of the face comes to the center of the rectangular frame, and re-extracts the image region containing the face from the image data using the position-corrected rectangular frame.

Description

Image analysis apparatus, method for analyzing image and recording medium
Technical field
Embodiments of the present invention relate, for example, to an image analysis apparatus, an image analysis method, and a program for detecting a detection target, such as a face, from a captured image.
Background art
For example, in monitoring fields such as driver monitoring, the following technique has been proposed: a face is detected from an image captured by a camera, the positions of organs such as the eyes, nose, and mouth are detected in the detected face, and the orientation of the face and the like are inferred from the detection results.
As methods for detecting a face from a captured image, well-known image processing techniques such as template matching are available. In a first such method, the position of a template is moved stepwise relative to the captured image at intervals of a specified number of pixels, an image region whose degree of agreement with the template image is at or above a threshold is detected from the captured image, and the detected image region is extracted with, for example, a rectangular frame, thereby detecting the face.
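The first method can be sketched as a stride-based template scan. This is a minimal sketch under stated assumptions: the stride, the threshold, and the mean-absolute-difference similarity measure are illustrative choices, since the patent does not fix a particular measure of agreement.

```python
import numpy as np

def match_template_stride(image, template, stride=8, threshold=0.9):
    """Slide `template` over `image` in steps of `stride` pixels and
    return the top-left corners (x, y) of windows whose similarity with
    the template is at or above `threshold` (1.0 = identical).
    The similarity measure (mean absolute difference scaled to [0, 1])
    is an illustrative assumption."""
    ih, iw = image.shape
    th, tw = template.shape
    hits = []
    for y in range(0, ih - th + 1, stride):
        for x in range(0, iw - tw + 1, stride):
            window = image[y:y + th, x:x + tw]
            sim = 1.0 - np.abs(window - template).mean() / 255.0
            if sim >= threshold:
                hits.append((x, y, sim))
    return hits
```

Because the scan advances by `stride` pixels rather than one, the detected frame can miss the true face position by up to `stride - 1` pixels, which is exactly the deviation problem the patent goes on to discuss.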
In a second method, for example, the glabella of the face is searched for using a template prepared in advance for glabella detection, and a target image is extracted by a rectangular frame of a prescribed size centered on the position of the glabella found (see, for example, Patent Document 1).
Patent Document 1: Japanese Patent Laid-Open No. 2004-185611
In the first method, however, to reduce the number of template matching operations and thereby shorten the detection time, the step interval of the template position relative to the captured image is usually set larger than the pixel interval of the captured image. The positional relationship between the rectangular frame and the face extracted by it may therefore deviate. If the position of the face within the rectangular frame deviates, then when the positions of organs such as the eyes, nose, mouth, and facial contour are inferred from the extracted face image, the organs needed for the inference may not all be detected, or may be falsely detected, reducing the inference accuracy.
In the second method, on the other hand, the face is extracted centered on the position of the glabella in the captured image, so the positional relationship between the rectangular frame and the face is unlikely to deviate, and each facial organ can be extracted stably. However, the template matching processing for detecting the glabella requires many processing steps and much processing time, so the processing load on the apparatus increases and detection delays readily occur.
Summary of the invention
The present invention was conceived in view of the above circumstances, and its object is to provide a technique that can detect a detection target from image data accurately and with little processing time.
To solve the above technical problem, in a first aspect of the image analysis apparatus according to the present invention, or of the image analysis method executed by that apparatus, an image obtained by shooting a range that includes a detection target is acquired; a partial image of the region where the detection target is present is extracted from the acquired image with an extraction frame of a prescribed size surrounding that partial image; a base position of the detection target is determined from the extracted partial image; the extraction position of the partial image extracted by the extraction frame is corrected based on the determined base position; the partial image is extracted again by the extraction frame at the corrected extraction position; and the state of the detection target is detected from the re-extracted partial image.
According to the first aspect, even if the extraction position of the partial image extracted with the extraction frame deviates, the extraction position can be corrected based on the base position of the detection target and the partial image re-extracted at the corrected position. The influence of the deviation of the extraction position is thus mitigated, which improves the detection accuracy when the state of the detection target is detected from the partial image. In addition, the base position of the detection target is determined from the partial image extracted in the deviated state. Compared with searching the entire acquired image for the base position of the detection target, the processing time and processing load needed to extract the partial image can therefore be shortened and reduced.
In a second aspect of the apparatus according to the present invention, an image acquiring unit acquires an image obtained by shooting a range that includes a face, and a partial image extraction unit extracts a partial image of the region where the face is present from the acquired image with an extraction frame of a prescribed size surrounding that partial image. A base position determining unit then detects, from the extracted partial image, the positions of feature points corresponding to multiple organs of the face, and, based on the detected feature point positions, determines some position on the center line of the face as the base position. A re-extraction unit corrects the extraction position of the partial image extracted by the extraction frame, based on the determined base position, so that the base position of the partial image comes to the center of the extraction frame, and then extracts the partial image again with the extraction frame at the corrected extraction position; a state detecting unit detects the state of the face from the re-extracted partial image.
As an example, the base position determining unit determines, as the base position, any one of: the position of the glabella of the face; the tip of the nose; the center point of the mouth; the midpoint between the glabella position and the nose tip; the midpoint between the glabella position and the mouth center point; and the mean position of the glabella position, the nose tip, and the mouth center point.
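The candidate base positions listed above are simple functions of three landmark coordinates. A minimal sketch under stated assumptions: the function name, the (x, y) tuple representation, and the dictionary keys are hypothetical, introduced only for illustration.

```python
def base_position_candidates(glabella, nose_tip, mouth_center):
    """Return the candidate face base positions named in the text,
    each as an (x, y) tuple, given three landmark coordinates."""
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    gx, gy = glabella
    nx, ny = nose_tip
    mx, my = mouth_center
    return {
        "glabella": glabella,
        "nose_tip": nose_tip,
        "mouth_center": mouth_center,
        "glabella_nose_mid": midpoint(glabella, nose_tip),
        "glabella_mouth_mid": midpoint(glabella, mouth_center),
        "mean": ((gx + nx + mx) / 3, (gy + ny + my) / 3),
    }
```

All six candidates lie on the center line for a frontal face, which is what makes each of them usable for horizontally re-centering the extraction frame.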
According to the second aspect, when a face is detected in order to detect its state, as in driver monitoring, even if the extraction position of the face image extracted with the extraction frame deviates, the extraction position can be corrected using some position on the center line of the face as the base position, and the face image re-extracted at the corrected position. The influence of the deviation of the extraction position is thus mitigated, and the state of the face can be detected with high accuracy. In addition, the position on the center line of the face is determined from the partial image extracted in the deviated state. Compared with searching the entire acquired image for a position on the center line of the face, the processing time needed for the search can therefore be shortened and the processing load on the apparatus reduced.
In a third aspect of the apparatus according to the present invention, a base position determining unit searches the extracted partial image for the positions of feature points of the detection target with a first search precision and determines the base position of the detection target based on the feature points found, while a state detecting unit searches the re-extracted partial image for feature points of the detection target with a second search precision higher than the first search precision and detects the state of the detection target based on the feature points found.
According to the third aspect, the processing that searches the partial image for the positions of feature points in order to determine the base position of the detection target is performed with lower precision than the processing that searches for feature points in order to detect the state of the detection target. The processing time and processing load needed for the feature point search that determines the base position can therefore be further shortened and reduced.
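One way to read the two search precisions is as a coarse-to-fine scheme: a cheap low-precision pass first, then a full-precision pass only around the coarse result. The sketch below illustrates this on a one-dimensional score array; the step sizes and the score function are assumptions for illustration, not values from the patent.

```python
import numpy as np

def coarse_to_fine_peak(score, coarse_step=8, fine_radius=8):
    """Find the argmax of a 1-D score array in two passes: a coarse
    pass over every `coarse_step`-th sample (low search precision),
    then a fine pass over every sample within `fine_radius` of the
    coarse winner (high search precision)."""
    n = len(score)
    coarse_idx = np.arange(0, n, coarse_step)
    best = coarse_idx[np.argmax(score[coarse_idx])]   # coarse pass
    lo = max(0, best - fine_radius)
    hi = min(n, best + fine_radius + 1)
    return lo + int(np.argmax(score[lo:hi]))          # fine pass
```

For `n` positions this evaluates roughly `n / coarse_step + 2 * fine_radius` scores instead of `n`, which is the kind of time and load saving the third aspect describes.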
A fourth aspect of the apparatus according to the present invention further includes an output unit that outputs information indicating the detected state of the detection target.
According to the fourth aspect, based on the information indicating the state of the detection target, an external device, for example, can grasp the state of the detection target and take measures suited to that state.
A recording medium according to a fifth aspect of the present invention stores a program that causes a hardware processor included in the image analysis apparatus of any one of the first to fourth aspects to execute the processing of each unit of the image analysis apparatus.
That is, according to the aspects of the present invention, a technique can be provided that detects a detection target from image data accurately and with little processing time.
Brief description of the drawings
Fig. 1 is a diagram illustrating an application example of the image analysis apparatus according to one embodiment of the present invention.
Fig. 2 is a block diagram showing an example of the hardware configuration of the image analysis apparatus according to one embodiment of the present invention.
Fig. 3 is a block diagram showing an example of the software configuration of the image analysis apparatus according to one embodiment of the present invention.
Fig. 4 is a flowchart showing an example of the steps and contents of the learning processing of the image analysis apparatus shown in Fig. 3.
Fig. 5 is a flowchart showing an example of the processing steps and contents of the image analysis processing of the image analysis apparatus shown in Fig. 3.
Fig. 6 is a flowchart showing an example of the processing steps and contents of the feature point search processing in the image analysis processing shown in Fig. 5.
Fig. 7 is a diagram for explaining an operation example of the face area extraction unit of the image analysis apparatus shown in Fig. 3.
Fig. 8 is a diagram showing an example of the face area extracted by the face area extraction unit of the image analysis apparatus shown in Fig. 3.
Fig. 9 is a diagram showing an example of the base position determined by the base position determining unit of the image analysis apparatus shown in Fig. 3.
Fig. 10 is a diagram showing an example of the face area re-extracted by the face area re-extraction unit of the image analysis apparatus shown in Fig. 3.
Fig. 11 is a diagram showing an example of the feature points extracted from a face image.
Fig. 12 is a diagram showing an example of the feature points extracted from a face image, displayed three-dimensionally.
Description of symbols
1 camera; 2 image analysis apparatus; 3 image acquiring unit; 4 face detection unit; 4a face area extraction unit; 4b base position determining unit; 4c face area re-extraction unit; 5 face state detecting unit; 11 control unit; 11A hardware processor; 11B program memory; 12 bus; 13 data memory; 14 camera interface; 15 external interface; 111 image acquisition control unit; 112 face area extraction unit; 113 base position determining unit; 114 face area re-extraction unit; 115 face state detecting unit; 116 output control unit; 131 image storage unit; 132 template storage unit; 133 face area storage unit.
Specific embodiment
Hereinafter, embodiments according to the present invention will be described with reference to the drawings.
[application examples]
First, an application example of the image analysis apparatus according to an embodiment of the present invention will be described.
The image analysis apparatus according to an embodiment of the present invention is used, for example, in a driver monitoring device that monitors the state of a driver's face (for example, the orientation of the face), and is configured as illustrated in Fig. 1.
An image analysis apparatus 2 is connected to a camera 1 and includes an image acquiring unit 3 that acquires the image signal output from the camera 1, a face detection unit 4, and a face state detecting unit 5. The camera 1 is installed, for example, at a position facing the driver's seat, shoots a prescribed range that includes the face of the driver sitting in the driver's seat at a fixed frame period, and outputs the resulting image signal.
The image acquiring unit 3, for example, sequentially receives the image signal output from the camera 1, converts the received image signal frame by frame into image data composed of a digital signal, and stores it in an image memory.
The face detection unit 4 includes a face area extraction unit 4a, a base position determining unit 4b, and a face area re-extraction unit 4c. The face area extraction unit 4a reads the image data acquired by the image acquiring unit 3 from the image memory frame by frame and extracts an image region (partial image) containing the driver's face from the image data. For example, the face area extraction unit 4a uses template matching: it moves the position of a reference template stepwise relative to the image data at intervals of a specified number of pixels, detects from the image data an image region whose degree of agreement with the reference template image is at or above a threshold, and extracts that image region with a rectangular frame.
The base position determining unit 4b first detects, by coarse search, feature points of prescribed facial organs, such as the eyes and nose, from the image region containing the face extracted by the rectangular frame. It then detects the position of the glabella of the face from the positions of the detected feature points of each organ, and determines the glabella position as the base position of the face.
The coarse search limits the feature points of the detection target to a small number, for example only the eyes and nose, and uses a three-dimensional face shape model whose feature point arrangement vector has few dimensions. By projecting this coarse-search three-dimensional face shape model onto the face image region extracted with the rectangular frame, feature quantities of the organs are acquired from the face image region, and based on the error between the acquired feature quantities and their correct values, and on the three-dimensional face shape model at the time this error falls within a threshold, the approximate positions of the defined feature points in the face image region are inferred.
The face area re-extraction unit 4c corrects the position of the rectangular frame relative to the image data according to the base position determined by the base position determining unit 4b. For example, the face area re-extraction unit 4c corrects the position of the rectangular frame relative to the image data so that the glabella position detected by the base position determining unit 4b becomes the horizontal center of the rectangular frame. It then re-extracts from the image data the image region contained in the position-corrected rectangular frame.
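The frame correction performed by the face area re-extraction unit 4c amounts to shifting the rectangular frame so that the detected glabella x-coordinate becomes its horizontal center. This is a minimal sketch under stated assumptions: the (left, top, width, height) frame representation and the clamping to the image bounds are added assumptions, not details from the patent.

```python
def recenter_frame(frame, glabella_x, image_width):
    """Shift a (left, top, width, height) frame horizontally so that
    `glabella_x` lies at the frame's horizontal center, clamped so the
    frame stays inside an image `image_width` pixels wide."""
    left, top, width, height = frame
    new_left = int(round(glabella_x - width / 2))
    new_left = max(0, min(new_left, image_width - width))
    return (new_left, top, width, height)
```

Re-extracting with the returned frame then yields an image region in which the face is horizontally centered, regardless of the stride-induced offset of the original extraction.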
The face state detecting unit 5 detects, for example by detailed search, the positions of multiple organs of the driver's face, such as the eyes, nose, mouth, and facial contour, and the orientation of the face, from the image region containing the face re-extracted by the face area re-extraction unit 4c. It then outputs information indicating the detected positions of the facial organs and the orientation of the face as information indicating the state of the driver's face.
The detailed search sets a larger number of feature points for the detection target, for example the eyes, nose, mouth, and cheekbones, and uses a three-dimensional face shape model whose feature point arrangement vector has many dimensions. By projecting this detailed-search three-dimensional face shape model onto the re-extracted face image region, feature quantities of the organs are acquired from the face image region, and based on the error between the acquired feature quantities and their correct values, and on the three-dimensional face shape model at the time this error falls within a threshold, the positions of the larger number of feature points in the face image region are inferred.
With the above configuration, in the image analysis apparatus 2, the face area extraction unit 4a first extracts the image region containing the driver's face from the image data acquired by the image acquiring unit 3 with a rectangular frame E1, for example by template matching. Here, the step interval of the template is often set to a coarse interval equivalent to, for example, multiple pixels. As a result, the extraction position of the rectangular frame E1 for the image region containing the face sometimes deviates by an amount caused by this step interval, and depending on the size of the deviation, some facial organs may not be included in the rectangular frame E1, as shown in Fig. 1.
In the image analysis apparatus 2, however, the base position determining unit 4b detects, by coarse search, feature points of multiple facial organs (for example, the eyes and nose) from the image region containing the face extracted with the rectangular frame E1, and from the detected feature points of each organ detects, for example, the position B of the glabella of the face as illustrated in Fig. 1. The face area re-extraction unit 4c then corrects the rectangular frame E1 using the determined glabella position B as the base position of the face. For example, it corrects the position of the rectangular frame relative to the image data so that the glabella position B becomes the horizontal center of the rectangular frame, and then re-extracts the image region containing the face from the image data using the position-corrected rectangular frame. E2 in Fig. 1 shows an example of the position of the corrected rectangular frame.
Then, in the image analysis apparatus 2, the face state detecting unit 5 detects the positions of the eyes, nose, mouth, facial contour, and so on of the driver's face and the orientation of the face from the re-extracted image region containing the face, and outputs information indicating the detected organ positions and face orientation as information indicating the state of the driver's face.
Therefore, in one embodiment of the present invention, even when the extraction position of the image region containing the face extracted by the rectangular frame deviates, so that some facial organs are not included in the rectangular frame, the base position can be determined from the positions of the facial organs contained in the image region extracted at that time, and the position of the rectangular frame relative to the image data can be corrected based on that base position to re-extract the image region containing the face. The facial organs needed for detecting the face orientation and the like are thus not omitted from the image region extracted by the rectangular frame, so the state of the face, such as its orientation, can be detected with high accuracy. In addition, the coarse search is used to detect the facial organs needed for determining the base position. Compared with searching the captured image data directly for the base position of the face, the base position can therefore be determined in a short time with a small amount of image processing.
[First embodiment]
(Configuration example)
(1) System
The image analysis apparatus according to one embodiment of the present invention is used, for example, in a driver monitoring system that monitors the state of a driver's face. In this example, the driver monitoring system includes a camera 1 and an image analysis apparatus 2.
The camera 1 is placed, for example, at a position on the instrument panel facing the driver. The camera 1 uses, for example, a CMOS (complementary metal oxide semiconductor) image sensor capable of receiving near-infrared light as its imaging device. The camera 1 shoots a prescribed range that includes the driver's face and sends the resulting image signal to the image analysis apparatus 2, for example over a signal cable. Other solid-state imaging devices, such as a CCD (charge-coupled device), may also be used as the imaging device. The installation position of the camera 1 may be set anywhere, such as on the windshield or the rear-view mirror, as long as it faces the driver.
(2) Image analysis apparatus
The image analysis apparatus 2 detects the driver's face image region from the image signal obtained by the camera 1 and detects the state of the driver's face, for example the orientation of the face, from the face image region.
(2-1) Hardware configuration
Fig. 2 is a block diagram showing an example of the hardware configuration of the image analysis apparatus 2.
The image analysis apparatus 2 has a hardware processor 11A such as a CPU (central processing unit), to which a program memory 11B, a data memory 13, a camera interface 14, and an external interface 15 are connected via a bus 12.
The camera interface 14 receives the image signal output from the camera 1 over a signal cable. The external interface 15 outputs information indicating the detection result of the face state to external devices such as a driver state assessment device that judges inattention or drowsiness, or an automatic driving control device that controls the operation of the vehicle.
When an in-vehicle wired network such as a LAN (local area network), or an in-vehicle wireless network using a low-power wireless data communication standard such as Bluetooth (registered trademark), is available in the vehicle, the signal transmission between the camera 1 and the camera interface 14 and between the external interface 15 and the external devices may be performed over that network.
The program memory 11B uses, as storage media, a nonvolatile memory that can be written and read at any time, such as an HDD (hard disk drive) or SSD (solid state drive), together with a nonvolatile memory such as a ROM, and stores the programs needed to execute the various control processes according to an embodiment.
The data memory 13 combines, as storage media, a nonvolatile memory that can be written and read at any time, such as an HDD or SSD, with a volatile memory such as a RAM, and is used to store the various data acquired, detected, and calculated in the course of executing the various processes according to an embodiment, as well as template data and the like.
(2-2) software sharing
Fig. 3 is the block diagram for showing the software sharing of image analysis apparatus 2 involved in one embodiment of the present invention.
Image storage part 131, template storage unit 132 and facial regions are provided in the storage region of data storage 13 Domain storage unit 133.Image storage part 131 is for temporarily storing the image data obtained from camera 1.In template storage unit 132 It stores and photographed the reference templates of the image-region of face, for the image from the face extracted for extracting from image data The coarse search of the position of the regulation organ of region detection face with and detailed search each three-dimensional facial contours model.Face Region storage unit 133 is used to temporarily store the image-region of the face extracted again from image data.
Control unit 11 includes above-mentioned hardware processor 11A and above procedure memory 11B, and obtains and control including image Portion 111 processed, face area extraction unit 112, base position determining section 113, face area extraction unit 114, face state inspection again Survey portion 115 and output control unit 116 are used as software-based processing function portion.These processing function portions are above-mentioned hard by making Program that part processor 11A executive memory 11B is stored is realized.
The image signal output frame by frame from the camera 1 is received by the camera interface 14 and converted into image data composed of digital signals. The image acquisition control unit 111 takes in this image data frame by frame from the camera interface 14 and stores it in the image storage unit 131 of data storage 13.
The face area extraction unit 112 reads the image data frame by frame from the image storage unit 131 and, using the reference template of the face stored in the template storage unit 132, extracts from the read image data the image region in which the driver's face is photographed. For example, the face area extraction unit 112 moves the reference template stepwise over the image data at a preset interval of multiple pixels (for example, 8 pixels) and, at each position, calculates the correlation between the brightness of the reference template and that of the image data. The calculated correlation is then compared with a preset threshold, and the image region at a step position whose correlation is equal to or greater than the threshold is extracted, by means of a rectangular frame, as the face area in which the driver's face is photographed. The size of the rectangular frame is preset according to the size of the driver's face in the photographed image.
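As an illustration only, the stepwise template search described above can be sketched roughly as follows. The function name, the use of a normalized brightness correlation, and the single-best-match behavior are assumptions of this sketch, not necessarily the patented implementation:

```python
import numpy as np

def find_face_region(image, template, stride=8, threshold=0.5):
    """Slide the reference template over the image at a coarse stride and
    return the top-left corner (x, y) of the best-matching region, or None
    if no position reaches the correlation threshold."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    best_pos, best_corr = None, threshold
    for y in range(0, ih - th + 1, stride):
        for x in range(0, iw - tw + 1, stride):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            corr = (p * t).sum() / denom if denom > 0 else 0.0
            if corr >= best_corr:
                best_pos, best_corr = (x, y), corr
    return best_pos
```

Because the template only moves at the coarse stride, far fewer positions are evaluated than in a one-pixel exhaustive search, which is the point of this stage.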
It should be noted that, as the reference template of the face, for example a reference template corresponding to the contour of the entire face, or templates based on the individual organs of the face (eyes, nose, mouth, and so on), can be used. Further, as template-matching-based face extraction methods, there are, for example, a method of detecting a vertex such as the top of the head by chroma-key processing and detecting the face based on that vertex, and a method of detecting a region close to skin color and treating that region as the face. Furthermore, the face area extraction unit 112 may be configured to perform supervised learning using a neural network and detect a face-like region as the face. In addition, the face detection processing performed by the face area extraction unit 112 may be realized by applying other existing techniques.
The base position determining section 113 uses, for example, the three-dimensional face shape model for coarse search stored in the template storage unit 132 to detect, from the image region (partial image data) extracted with the rectangular frame by the face area extraction unit 112, feature points related to prescribed organs of the driver's face, such as the eyes and the nose.
In the coarse search, the feature points to be detected are restrictively set, for example, only to the eyes and the nose, or only to the eyes, and a three-dimensional face shape model whose feature point configuration vector has few dimensions is used. The three-dimensional face shape model for coarse search is generated, for example, by learning processing so as to correspond to the actual face of the driver. It should be noted that a model in which average initial parameters obtained from general face images are set may also be used as the three-dimensional face shape model for coarse search.
In the coarse search, the above-mentioned three-dimensional face shape model for coarse search is projected onto the face image region extracted with the rectangular frame by the face area extraction unit 112, sampling based on this three-dimensional face shape model is carried out, and sampling feature quantities are acquired from the face image region. Then, the error between the acquired sampling feature quantities and the correct-answer model parameters is calculated, and the model parameters at the time when this error is equal to or less than a threshold are output as the inference result for the sampled feature points. In the coarse search, this threshold is set to a larger value than in the detailed search, i.e., a value with a large tolerance for error.
It should be noted that, as the three-dimensional face shape model for coarse search, a model shaped such that prescribed nodes of the face shape model are arranged at prescribed positions relative to an arbitrary vertex (for example, the upper-left corner) of the rectangular frame used by the face area extraction unit 112 may also be used.
The base position determining section 113 determines a reference point of the driver's face based on the positions of the feature points, detected by the coarse search, that relate to the prescribed organs of the driver's face. For example, the base position determining section 113 infers the position of the glabella from the positions of the feature points of the driver's two eyes and the feature point of the nose, and determines this glabella position as the base position of the driver's face.
The face area re-extraction unit 114 corrects the position of the rectangular frame relative to the image data according to the base position determined by the base position determining section 113. For example, the face area re-extraction unit 114 corrects the position of the rectangular frame relative to the image data so that the glabella position detected by the base position determining section 113 lies at the center of the rectangular frame in the left-right direction. Then, the face area re-extraction unit 114 re-extracts from the image data the image region surrounded by the rectangular frame whose position has been corrected.
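The glabella-based re-centering can be illustrated with a minimal sketch. Treating the glabella simply as the midpoint between the two eye feature points is an assumption of this example, not necessarily the inference actually used by the base position determining section 113:

```python
def recenter_frame(left_eye, right_eye, frame_w):
    """Estimate the glabella as the midpoint between the two eye centres
    (a simplifying assumption) and return it together with the x coordinate
    of the left edge of a rectangular frame of width frame_w whose
    horizontal centre passes through the glabella."""
    glabella = ((left_eye[0] + right_eye[0]) / 2.0,
                (left_eye[1] + right_eye[1]) / 2.0)
    frame_left = glabella[0] - frame_w / 2.0
    return glabella, frame_left
```

The frame is then re-applied at the corrected position and the enclosed region is extracted again for the detailed search.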
The face state detection section 115 uses, for example, the three-dimensional face shape model for detailed search to detect, from the face image region re-extracted by the face area re-extraction unit 114, the positions of multiple feature points relating to multiple organs of the driver's face, such as the eyes, nose, and mouth. The detection processing here uses the detailed search.
In the detailed search, for example, a larger number of feature points corresponding to the eyes, nose, mouth, cheekbones, and so on are set as detection targets, and a three-dimensional face shape model whose feature point configuration vector has many dimensions is used. In addition, multiple models corresponding to multiple orientations of the driver's face are prepared as the three-dimensional face shape models for the detailed search. For example, models corresponding to representative face orientations, such as the frontal direction, obliquely right, obliquely left, obliquely upward, and obliquely downward, are prepared. It should be noted that face orientations may also be defined at fixed angular intervals about the two axes, horizontal and vertical, and three-dimensional face shape models corresponding to all combinations of the angles about these axes may be prepared.
Furthermore, since the face image region is extracted with a rectangular frame in one embodiment, the three-dimensional face shape model may also be shaped such that each feature point set as a detection target is arranged at a prescribed position relative to an arbitrary vertex (for example, the upper-left corner) of the rectangular frame.
In the detailed search, for example, the above-mentioned three-dimensional face shape model for detailed search is projected onto the face image region re-extracted with the rectangular frame by the face area re-extraction unit 114, sampling based on a retina structure is carried out, and sampling feature quantities are acquired from the face image region. A retina structure is a structure in which sampling points are arranged discretely and radially around a certain feature point (node) of interest.
The detailed search calculates the amount of error between the acquired sampling feature quantities and the correct-answer model parameters, and outputs the model parameters at the time when this amount of error is equal to or less than a threshold as the inference result for the sampled feature points. In the detailed search, a value with a small tolerance for error is used as this threshold.
The face state detection section 115 infers the orientation of the face from the inferred positions of the detected feature points of the face, and stores information indicating the inferred positions of the feature points and the orientation of the face in the face area storage unit 133 as information indicating the state of the face.
The output control unit 116 reads from the face area storage unit 133 the information indicating the inferred positions of the detected nodes of the face and the orientation of the face, and outputs this information from the external interface 15 to, for example, a device that judges driver states such as drowsiness or inattention, or an automatic driving control device that switches the vehicle between manual and automatic driving modes.
(Operation example)
Next, an operation example of the image analysis apparatus 2 configured as described above will be described.
It should be noted that, in this example, the reference template of the face used in the processing of detecting the image region including the face from the captured image data is described as being pre-stored in the template storage unit 132.
(1) Learning processing
First, the learning processing required to make the image analysis apparatus 2 operate will be described. In order to detect the positions of feature points from image data with the image analysis apparatus 2, this learning processing needs to be carried out in advance.
The learning processing is executed by a learning processing program (not illustrated) installed in advance in the image analysis apparatus 2. It should be noted that the learning processing may also be executed on an information processing apparatus other than the image analysis apparatus 2, for example a server provided on a network, in which case the learning results are downloaded to the image analysis apparatus 2 via the network and stored in the template storage unit 132.
The learning processing includes, for example, acquisition processing of a three-dimensional face shape model, processing of projecting the three-dimensional face shape model onto an image plane, feature quantity sampling processing, and acquisition processing of an error inference matrix.
In the learning processing, multiple face images for learning (hereinafter referred to as "face images" in the description of the learning processing) and the three-dimensional coordinates of the feature points in each face image are prepared. The feature points can be obtained, for example, by techniques such as a laser scanner or a stereo camera, but any other technique may also be used. In order to improve the precision of the learning processing, it is preferable that this feature point extraction processing is carried out on human faces.
Fig. 11 illustrates the positions, in a two-dimensional plane, of the feature points (nodes) that are detection targets of the face, and Fig. 12 shows these feature points as three-dimensional coordinates. In the examples of Fig. 11 and Fig. 12, the feature points are set at the two ends (inner corner and outer corner) and center of each eye, the left and right cheekbone parts (orbital floor points), the apex and left and right endpoints of the nose, the left and right corners of the mouth, the center of the mouth, and the midpoints between the left and right endpoints of the nose and the corresponding corners of the mouth.
Fig. 4 is a flowchart showing an example of the processing procedure and processing contents of the learning processing executed by the image analysis apparatus 2.
(1-1) Acquisition of the three-dimensional face shape model
First, the image analysis apparatus 2 defines a variable i in step S01 and substitutes 1 into it. Then, in step S02, it reads from the image storage unit 131 the i-th face image (Img_i) among the face images for learning for which the three-dimensional positions of the feature points have been obtained in advance. Here, since 1 has been substituted into i, the first face image (Img_1) is read. Then, in step S03, the set of correct-answer coordinates of the feature points of the face image Img_i is read, a correct-answer model parameter kopt is obtained, and a correct-answer model of the three-dimensional face shape is created. Then, in step S04, the image analysis apparatus 2 creates a deviated arrangement model parameter kdif based on the correct-answer model parameter kopt, and creates a deviated arrangement model. This deviated arrangement model is preferably created by generating random numbers so that it deviates from the correct-answer model within a prescribed range.
The above processing will now be described in detail. First, the coordinates of each feature point pi are expressed as pi(xi, yi, zi). Here, i takes values from 1 to n (where n is the number of feature points). Then, for each face image, a feature point configuration vector X is defined as shown in [mathematical expression 1]. The feature point configuration vector of a certain face image j is expressed as Xj. It should be noted that the number of dimensions of X is 3n.
[mathematical expression 1]
X = [x1, y1, z1, x2, y2, z2, …, xn, yn, zn]T
However, in an embodiment of the present invention, both a three-dimensional face shape model for coarse search and a three-dimensional face shape model for detailed search are required. Since the three-dimensional face shape model for coarse search searches only for a small number of feature points relating to prescribed organs such as the eyes and the nose, the number of dimensions of its feature point configuration vector X corresponds to this small number of feature points.
On the other hand, as shown for example in Fig. 11 and Fig. 12, the three-dimensional face shape model for detailed search searches for a large number of feature points relating to the eyes, nose, mouth, cheekbones, and so on, and the number of dimensions of its feature point configuration vector X therefore corresponds to this larger number of feature points.
Then, the image analysis apparatus 2 normalizes all the acquired feature point configuration vectors X based on an appropriate criterion. The normalization criterion at this time may be appropriately determined by the designer.
A specific example of normalization is described below. For example, for the feature point configuration vector Xj of a certain face image j, when the barycentric coordinates of the points p1 to pn are denoted pG, each point is first moved into a coordinate system whose origin is the center of gravity pG, after which the size can be normalized using Lm defined in [mathematical expression 2]. Specifically, the size can be normalized by dividing the moved coordinate values by Lm. Here, Lm is the average of the linear distances from the center of gravity to each point.
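A minimal sketch of this normalization step, assuming 3-D feature points and the Lm scale factor defined above (the function name is illustrative):

```python
import numpy as np

def normalize_points(points):
    """Translate a set of 3-D feature points so their centroid pG is at the
    origin, then divide by Lm, the mean distance from the centroid to each
    point, so that the normalized point set has mean distance 1."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    lm = np.linalg.norm(centered, axis=1).mean()
    return centered / lm
```

After this step, all learning shapes share a common position and scale, so the subsequent principal component analysis captures only shape variation.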
[mathematical expression 2]
Lm = (1/n) Σ (i = 1 to n) ||pi − pG||
In addition, with respect to rotation, normalization can be carried out, for example, by applying a rotation transformation to the feature point coordinates so that the straight line connecting the centers of the two eyes points in a certain direction. Since the above processing can be expressed by a combination of rotation, enlargement, and reduction, the feature point configuration vector x after normalization can be expressed (as a similarity transformation) as in [mathematical expression 3].
[mathematical expression 3]
x = sRxRyRzX + t
Then, the image analysis apparatus 2 carries out principal component analysis on the set of normalized feature point configuration vectors. Principal component analysis can be carried out, for example, as follows. First, an average vector is obtained according to the formula shown in [mathematical expression 4] (the average vector is indicated by a horizontal bar over x). It should be noted that, in mathematical expression 4, N denotes the number of face images, i.e., the number of feature point configuration vectors.
[mathematical expression 4]
x̄ = (1/N) Σ (j = 1 to N) Xj
In addition, as shown in [mathematical expression 5], a difference vector x' is obtained by subtracting the average vector from each normalized feature point configuration vector. The difference vector for image j is expressed as x'j.
[mathematical expression 5]
x'j = Xj − x̄
As a result of the above principal component analysis, 3n pairs of eigenvectors and eigenvalues are obtained. An arbitrary normalized feature point configuration vector can then be expressed by the formula shown in [mathematical expression 6].
[mathematical expression 6]
x = x̄ + Pb
Here, P denotes the eigenvector matrix and b denotes the shape parameter vector. The respective values are as shown in [mathematical expression 7]. It should be noted that ei denotes an eigenvector.
[mathematical expression 7]
P = [e1, e2, …, e3n]T
b = [b1, b2, …, b3n]
In practice, an arbitrary normalized feature point configuration vector x can be approximately expressed, as shown in [mathematical expression 8], by using only the values up to the k-th dimension with the largest eigenvalues. Hereinafter, ei is referred to as the i-th principal component, in descending order of eigenvalue.
[mathematical expression 8]
x ≈ x̄ + P'b', where
P' = [e1, e2, …, ek]T
b' = [b1, b2, …, bk]
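The approximation by the top-k principal components can be sketched numerically as follows. The function name is illustrative, and eigen-decomposition of the sample covariance matrix is one standard way of obtaining the eigenvectors ei:

```python
import numpy as np

def pca_approximate(X, k):
    """Given N normalized feature-point configuration vectors (rows of X),
    compute the mean vector and the top-k eigenvectors of the covariance
    matrix, and return a function that projects a vector onto the k shape
    parameters b' and reconstructs its approximation x ~ mean + P' b'."""
    mean = X.mean(axis=0)
    diffs = X - mean
    cov = diffs.T @ diffs / len(X)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    P = vecs[:, ::-1][:, :k]              # columns = top-k principal components
    def reconstruct(x):
        b = P.T @ (x - mean)              # shape parameters b'
        return mean + P @ b               # approximation of x
    return mean, P, reconstruct
```

When the learning shapes really lie in a k-dimensional subspace, the reconstruction is exact; otherwise the discarded components contribute the approximation error.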
It should be noted that, when the face shape model is applied (fitted) to an actual face image, a similarity transformation (translation, rotation) is performed on the normalized feature point configuration vector x. If the parameters of the similarity transformation are taken to be sx, sy, sz, sθ, sφ, sψ, then, together with the shape parameters, a model parameter k can be expressed as shown in [mathematical expression 9].
[mathematical expression 9]
k = [sx, sy, sz, sθ, sφ, sψ, b1, b2, …, bk]T
When the three-dimensional face shape model expressed by the model parameter k substantially accurately coincides with the feature point positions on a certain face image, this parameter is referred to as the three-dimensional correct-answer model parameter of that face image. Whether the coincidence is accurate is determined based on a threshold or criterion set by the designer.
(1-2) Projection processing
Next, in step S05, the image analysis apparatus 2 projects the deviated arrangement model onto the learning image.
By projecting the three-dimensional face shape model onto a two-dimensional plane, processing on a two-dimensional image becomes possible. As methods of projecting a three-dimensional shape onto a two-dimensional plane, there are various methods such as the parallel projection method and perspective projection. Here, one-point perspective projection among the perspective projection methods is described as an example. However, the same effect can be obtained with any other method. The one-point perspective projection matrix onto the z = 0 plane is as shown in [mathematical expression 10].
[mathematical expression 10]
    | 1 0 0 0 |
    | 0 1 0 0 |
    | 0 0 0 r |
    | 0 0 0 1 |
Here, r = −1/zc, where zc denotes the center of projection on the z axis. As a result, the three-dimensional coordinates [x, y, z] are transformed as shown in [mathematical expression 11], and are expressed in the coordinate system on the z = 0 plane as shown in [mathematical expression 12].
[mathematical expression 11]
[x, y, z, 1] → [x, y, 0, rz + 1]
[mathematical expression 12]
[x/(rz + 1), y/(rz + 1)]
By the above processing, the three-dimensional face shape model is projected onto the two-dimensional plane.
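Under the conventions above (r = −1/zc, projection onto the z = 0 plane), the one-point perspective projection can be sketched as follows; the function name is illustrative:

```python
def project_to_plane(points, zc):
    """One-point perspective projection of 3-D points onto the z = 0 plane
    with the centre of projection at (0, 0, zc) on the z axis.  Each point
    [x, y, z, 1] maps to [x, y, 0, r*z + 1] with r = -1/zc, i.e. the 2-D
    coordinates are (x / (r*z + 1), y / (r*z + 1))."""
    r = -1.0 / zc
    out = []
    for x, y, z in points:
        w = r * z + 1.0          # homogeneous scale factor
        out.append((x / w, y / w))
    return out
```

A point already on the z = 0 plane projects to itself, while points closer to the centre of projection are magnified, as expected from the perspective geometry.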
(1-3) Sampling of feature quantities
Next, in step S06, the image analysis apparatus 2 executes sampling using a retina structure based on the two-dimensional face shape onto which the deviated arrangement model has been projected, and obtains a sampling feature quantity f_i.
The sampling of feature quantities is carried out by combining a variable retina structure with the face shape model projected onto the image. A retina structure is a structure in which sampling points are arranged discretely and radially around a certain feature point (node) of interest. By carrying out sampling based on a retina structure, the information around a feature point can be sampled efficiently and with low dimensionality. In this learning processing, sampling based on the retina structure is carried out at the projection points (each point p) of the nodes of the face shape model projected from the three-dimensional face shape model onto the two-dimensional plane (hereinafter referred to as the two-dimensional face shape model). It should be noted that sampling based on a retina structure means carrying out sampling at the sampling points determined according to the retina structure.
If the coordinates of the i-th sampling point are denoted qi(xi, yi), the retina structure can be expressed as shown in [mathematical expression 13].
[mathematical expression 13]
Thus, for example, the retina feature quantity fp obtained by carrying out sampling based on the retina structure for a certain point p(xp, yp) can be expressed as shown in [mathematical expression 14].
[mathematical expression 14]
fp = [f(p + q1), …, f(p + qm)]T
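Sampling at the retina offsets around a point of interest, using raw image brightness as the feature quantity (one of the options named below), can be sketched as follows; the function name and offset layout are assumptions of this sketch:

```python
import numpy as np

def retina_sample(image, p, offsets):
    """Sample the image feature (here raw brightness) at a set of retina
    offsets q_i arranged around the point of interest p = (px, py),
    producing the retina feature vector f_p = [f(p+q_1), ..., f(p+q_m)]^T."""
    px, py = p
    return np.array([image[py + dy, px + dx] for dx, dy in offsets])
```

In the actual processing the offsets would be arranged radially around the node, and the same sampling is repeated at every projected node of the shape model.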
Here, f(p) denotes the feature quantity at the point p (at the sampling point p). The feature quantity at each sampling point in the retina structure is obtained as, for example, the brightness of the image, a Sobel filter feature quantity, a Haar Wavelet feature quantity, a Gabor Wavelet feature quantity, or a value obtained by combining these. When the feature quantity is multidimensional, as in the case of carrying out the detailed search, the retina feature quantity can be expressed as shown in [mathematical expression 15].
[mathematical expression 15]
Here, D denotes the number of dimensions of the feature quantity, and fd(p) denotes the feature quantity of the d-th dimension at the point p. In addition, qi(d) denotes the i-th sampling coordinate of the retina structure for the d-th dimension.
It should be noted that the size of the retina structure can be varied according to the scale of the face shape model. For example, the size of the retina structure can be varied in inverse proportion to the translation parameter sz. At this time, the retina structure r can be expressed as shown in [mathematical expression 16]. It should be noted that α is an appropriate fixed value. The retina structure may also be rotated or deformed according to other parameters of the face shape model. The retina structure may also be set so that its shape (structure) differs for each node of the face shape model. Furthermore, the retina structure may be a structure with only a center point; that is, a structure in which only the feature point (node) itself is the sampling point is also included among retina structures.
[mathematical expression 16]
In a three-dimensional face shape model determined by a certain model parameter, the vector formed by lining up the retina feature quantities obtained by carrying out the above sampling at each of the projection points of the nodes projected onto the plane is referred to as the sampling feature quantity f in that three-dimensional face shape model. The sampling feature quantity f can be expressed as shown in [mathematical expression 17]. In [mathematical expression 17], N denotes the number of nodes in the face shape model.
[mathematical expression 17]
f = [fp1T, fp2T, …, fpNT]T
It should be noted that normalization is carried out at each node during sampling. For example, normalization is carried out by performing scale conversion so that the feature quantities fall within the range of 0 to 1. Normalization may also be carried out by performing conversion so as to obtain a certain mean or variance. It should be noted that, depending on the feature quantity, normalization may sometimes be unnecessary.
(1-4) Acquisition of the error inference matrix
Next, in step S07, the image analysis apparatus 2 obtains the error (deviation) dp_i of the shape based on the correct-answer model parameter kopt and the deviated arrangement model parameter kdif. Then, in step S08, it is judged whether processing has been completed for all the face images for learning. This judgment can be carried out, for example, by comparing the value of i with the number of face images for learning. If there is an unprocessed face image, the image analysis apparatus 2 increments the value of i in step S09 and executes step S02 and the subsequent processing based on the new incremented value of i.
On the other hand, when it is judged that processing has been completed for all the face images, the image analysis apparatus 2, in step S10, executes canonical correlation analysis (Canonical Correlation Analysis) on the set of sampling feature quantities f_i obtained for each face image and the errors dp_i of the three-dimensional face shape models. Then, in step S11, unnecessary correlation matrices corresponding to eigenvalues smaller than a predetermined threshold are deleted, and in step S12 the final error inference matrix is obtained.
The error inference matrix is acquired by using canonical correlation analysis. Canonical correlation analysis is one method of obtaining the correlation between two variables of different dimensions. By canonical correlation analysis, when the nodes of the face shape model are placed at wrong positions (positions different from the feature points that should be detected), a learning result can be obtained that expresses the correlation indicating in which direction corrections should be made.
First, the image analysis apparatus 2 creates a three-dimensional face shape model from the three-dimensional position information of the feature points of the face images for learning. Alternatively, a three-dimensional face shape model is created from the two-dimensional correct-answer coordinate points of the face images for learning. Then, correct-answer model parameters are created from the three-dimensional face shape model. By making these correct-answer model parameters deviate within a certain range using random numbers or the like, a deviated arrangement model in which at least one node deviates from the three-dimensional position of its feature point is created. Then, the sampling feature quantities acquired based on the deviated arrangement model and the difference between the deviated arrangement model and the correct-answer model are treated as one set, and a learning result concerning their correlation is obtained. The specific processing is described below.
First, the image analysis apparatus 2 defines two sets of variable vectors x and y as shown in [mathematical expression 18]. x denotes the sampling feature quantities for the deviated arrangement model. y denotes the difference between the correct-answer model parameter (kopt) and the deviated arrangement model parameter (the parameter expressing the deviated arrangement model: kdif).
[mathematical expression 18]
x = [x1, x2, …, xp]T
y = [y1, y2, …, yq]T = kopt − kdif
The two sets of variable vectors are normalized in advance so that, for each dimension, the average value is 0 and the variance is 1. The parameters used for normalization (the average value and variance of each dimension) are necessary in the feature point detection processing described later. Hereinafter, they are denoted xave, xvar, yave, yvar, and referred to as normalization parameters.
Then, when linear transformations of the two variables are defined as shown in [mathematical expression 19], a and b that maximize the correlation between u and v are obtained.
[mathematical expression 19]
u = a1x1 + … + apxp = aTx
v = b1y1 + … + bqyq = bTy
The above a and b are obtained as the eigenvectors corresponding to the maximum eigenvalues when solving the general eigenvalue problems shown in [mathematical expression 21], considering the joint distribution of x and y and defining its variance-covariance matrix Σ as shown in [mathematical expression 20].
[mathematical expression 20]
Σ = | ΣXX ΣXY |
    | ΣYX ΣYY |
[mathematical expression 21]
ΣXY ΣYY^(-1) ΣYX a = λ² ΣXX a
ΣYX ΣXX^(-1) ΣXY b = λ² ΣYY b
The eigenvalue problem with the lower dimension among them is solved first. For example, when the maximum eigenvalue obtained by solving the first equation is λ1 and the corresponding eigenvector is a1, the vector b1 is obtained by the formula shown in [mathematical expression 22].
[mathematical expression 22]
b1 = (1/λ1) ΣYY^(-1) ΣYX a1
The λ1 obtained in this way is called the first canonical correlation coefficient. In addition, u1 and v1 shown in [mathematical expression 23] are called the first canonical variables.
[mathematical expression 23]
u1 = a1Tx, v1 = b1Ty
In the following, canonical variables are obtained in descending order of eigenvalue: the second canonical variable corresponding to the second-largest eigenvalue, the third canonical variable corresponding to the third-largest eigenvalue, and so on. It should be noted that the vectors used in the feature point detection processing described later are the vectors up to the M-th canonical variable, whose eigenvalues are equal to or greater than a certain value (threshold). This threshold may be appropriately determined by the designer. Hereinafter, the transformation vector matrices up to the M-th canonical variable are denoted A' and B' and referred to as error inference matrices. A' and B' can be expressed as shown in [mathematical expression 24].
[mathematical expression 24]
A' = [a1, …, aM]
B' = [b1, …, bM]
In general, B' will not be a square matrix. However, since an inverse matrix is needed in the feature point detection processing, zero vectors are hypothetically appended to B' to make it a square matrix B''. The square matrix B'' can be expressed as shown in [mathematical expression 25].
[mathematical expression 25]
B'' = [b1, …, bM, 0, …, 0]
It should be noted that the error inference matrices can also be obtained by using analysis methods such as linear regression, linear multiple regression, or nonlinear multiple regression. However, by using canonical correlation analysis, the influence of variables corresponding to small eigenvalues can be ignored. Therefore, the influence of factors that have no effect on error inference can be excluded, and more stable error inference can be realized. Accordingly, if such an effect is not needed, the error inference matrices can also be acquired using the other analysis methods mentioned above instead of canonical correlation analysis. The error inference matrices can also be obtained by methods such as SVM (Support Vector Machine).
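A rough numerical sketch of canonical correlation analysis as used here, solving the eigenvalue problem for the a vectors and deriving each b vector from the corresponding a (the function name and the handling of near-zero eigenvalues are assumptions of this sketch):

```python
import numpy as np

def cca_transforms(X, Y, M):
    """Canonical correlation analysis between paired samples (rows of X and
    rows of Y): solve Sxx^-1 Sxy Syy^-1 Syx a = lambda^2 a for the a_i, then
    derive b_i = (1/lambda_i) Syy^-1 Syx a_i, returning the first M pairs."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = Xc.T @ Xc / n
    Syy = Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n
    Mx = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(Mx)
    order = np.argsort(-vals.real)           # descending squared correlations
    A, B = [], []
    for i in order[:M]:
        a = vecs[:, i].real
        lam = np.sqrt(max(vals[i].real, 1e-12))
        b = np.linalg.solve(Syy, Sxy.T @ a) / lam
        A.append(a)
        B.append(b)
    return np.array(A), np.array(B)
```

Keeping only the leading M canonical variables, as the text describes, corresponds to truncating the returned lists at the eigenvalue threshold.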
In the learning processing described above, only one deviated arrangement model is created for each face image for learning, but multiple deviated arrangement models may also be created. This is realized by repeatedly carrying out the above steps S03 to S07 for each learning image multiple times (for example, 10 to 100 times). It should be noted that the above learning processing is described in detail in Japanese Patent No. 4093273.
(2) Detection of the face state of the driver
Using the three-dimensional face shape models obtained by the above learning processing, the image analysis apparatus 2 executes the processing of detecting the face state of the driver as follows.
Fig. 5 is a flowchart showing an example of the processing procedure and processing contents of the face state detection processing.
(2-1) Acquisition of image data including the driver's face
For example, by camera 1 from front shooting drive in driver appearance, thus picture signal obtained is from phase Machine 1 is sent to image analysis apparatus 2.Image analysis apparatus 2 receives above-mentioned picture signal by camera interface 14, turns by every frame It is changed to the image data being made of digital signal.
Image analysis apparatus 2 corresponds to each frame in step S20 and is taken under the control that image obtains control unit 111 Above-mentioned image data stores the image storage part 131 of data storage 13 successively.It should be noted that can be any Ground setting is stored in the frame period of the image data of image storage part 131.
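A minimal sketch of this storage step, assuming a simple in-memory buffer rather than the patent's data memory 13 (all names and the interval/capacity values are illustrative):

```python
from collections import deque

class ImageStore:
    """Keep every `interval`-th frame, bounded by `capacity`.

    A sketch of the image storage step; frames are stored in a ring
    buffer so old frames are discarded when capacity is reached.
    """
    def __init__(self, interval=1, capacity=100):
        self.interval = interval
        self.frames = deque(maxlen=capacity)
        self._count = 0

    def push(self, frame):
        # store only every `interval`-th frame, matching the arbitrary
        # frame interval mentioned in the text
        if self._count % self.interval == 0:
            self.frames.append(frame)
        self._count += 1

store = ImageStore(interval=3, capacity=10)
for i in range(12):
    store.push(i)          # integers stand in for decoded image data
print(list(store.frames))  # every 3rd frame: [0, 3, 6, 9]
```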
(2-2) Extraction of the face region
Next, under the control of the face region extraction unit 112, in step S21 the image analysis apparatus 2 reads image data from the image storage unit 131 frame by frame. Using the face reference template stored in advance in the template storage unit 132, it detects from the read image data the image region in which the driver's face was captured, and extracts that image region with a rectangular frame.
For example, the face region extraction unit 112 steps the face reference template across the image data at a preset interval of several pixels (for example, 8 pixels). Fig. 7 shows an example of this; D in the figure indicates the pixels at the four corners of the reference template. Each time the reference template is moved by one step, the face region extraction unit 112 calculates the correlation between the luminance of the reference template and the image data, compares the calculated correlation with a preset threshold, and detects the region corresponding to each step position whose correlation is at or above the threshold as a face image region containing a face.
That is, in this example the face image region is detected by a coarse search whose step interval is larger than moving the reference template one pixel at a time. The face region extraction unit 112 then extracts the detected face image region from the image data with the rectangular frame and stores it in a face image region storage unit (not shown) in the data memory 13. Fig. 8 shows an example of the positional relationship between the extracted face image and the rectangular frame E1.
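The stride-based template scan described above can be sketched as follows, assuming grayscale images as NumPy arrays and a normalized-correlation score. The reference template, threshold and 8-pixel step are stand-ins for the stored values:

```python
import numpy as np

def coarse_face_search(image, template, stride=8, threshold=0.5):
    """Step the template across the image `stride` pixels at a time and
    return the top-left corners whose normalized correlation with the
    image patch meets the threshold (a sketch of step S21)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for y in range(0, image.shape[0] - th + 1, stride):
        for x in range(0, image.shape[1] - tw + 1, stride):
            patch = image[y:y+th, x:x+tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            if (p * t).mean() >= threshold:
                hits.append((x, y))
    return hits

# embed the template pattern itself at (x=16, y=24) in a noisy image
tmpl = np.zeros((16, 16)); tmpl[4:12, 4:12] = 1.0
img = 0.01 * np.random.default_rng(1).normal(size=(64, 64))
img[24:40, 16:32] += tmpl
print(coarse_face_search(img, tmpl))  # → [(16, 24)]
```

Because the scan visits only every 8th position, the result is approximate, which is exactly why the later steps re-determine the base position and re-extract the frame.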
(2-3) Coarse search for face organs
Next, under the control of the base position determining section 113, in step S22 the image analysis apparatus 2 first uses the three-dimensional face shape model stored in the template storage unit 132 to detect, from the face image region extracted with the rectangular frame by the face region extraction unit 112, a plurality of feature points set for the driver's face organs. In this example, a coarse search is used for the detection of these feature points. As described earlier, the coarse search uses a low-dimensional three-dimensional face shape model whose feature-point placement vector restricts the feature points of the detection target to, for example, only the eyes and nose, or only the eyes.
An example of the feature point detection process using the coarse search is described below.
Fig. 6 is a flowchart showing an example of its processing steps and contents.
In step S30, the base position determining section 113 first reads the face image region extracted with the rectangular frame from the face image region storage unit of the data memory 13, one frame of image data at a time. Then, in step S31, it places the three-dimensional face shape model based on the initial parameter kinit at the initial position in the face image region. In step S32 it defines a variable i and substitutes "1" into it, and defines ki and substitutes the initial parameter kinit into it.
For example, when the base position determining section 113 acquires sampling feature values for the first time for the face image region extracted with the rectangular frame, it first determines the three-dimensional position of each feature point in the three-dimensional face shape model and obtains the parameter (initial parameter) kinit of that three-dimensional face shape model. The three-dimensional face shape model is set, for example, to the following shape: because the coarse-search model defines only a small number of organs (nodes) such as the eyes and nose, its few feature points are placed at specified positions relative to an arbitrary vertex of the rectangular frame (for example, its upper-left corner). The three-dimensional face shape model may also be shaped so that its center coincides with the center of the face image region extracted with the rectangular frame.
The initial parameter kinit is the model parameter k expressed by [mathematical expression 9] with initial values substituted. An appropriate value may be set as the initial parameter kinit. However, by setting average values obtained from general face images as the initial parameter kinit, the model can cope with various face directions, changes of expression and so on. For example, for the similarity-transformation parameters sx, sy, sz, sθ, sφ and sψ, the averages of the correct model parameters of the face images used in the learning process may be used. The shape parameter b may, for example, be set to zero. When information on the face direction has been obtained by the face region extraction unit 112, the initial parameter may also be set using that information. Other values obtained empirically by the designer may also be used as the initial parameter.
Then, in step S33, the base position determining section 113 projects the coarse-search three-dimensional face shape model represented by ki onto the face image region being processed. In step S34, it performs sampling based on a retina structure using the projected face shape and obtains a sampling feature value f. In step S35, it executes error inference processing using this sampling feature value f.
On the other hand, when sampling feature values are acquired for the second or subsequent time for the face image region extracted by the face region extraction unit 112, the base position determining section 113 obtains the sampling feature value f for the face shape model represented by the new model parameter k obtained by the error inference processing (that is, the inferred value ki+1 of the correct model parameter). In this case as well, the error inference processing in step S35 is executed using the acquired sampling feature value f.
In the error inference processing, the inference error kerr between the three-dimensional face shape model ki and the correct model parameter is calculated based on the acquired sampling feature value f and the error-inference matrix, normalization parameters and the like stored in the template storage unit 132. In step S36, the inferred value ki+1 of the correct model parameter is calculated based on the inference error kerr. Furthermore, in step S37, Δk is calculated as the difference between ki+1 and ki, and in step S38, E is calculated as the square of Δk.
The error inference processing also performs the end judgment of the search process. By executing the inference of the error amount, a new model parameter k is obtained. A concrete example of the error inference processing is described below.
First, the acquired sampling feature value f is normalized using the normalization parameters (xave, xvar) to obtain the vector x used for the canonical correlation analysis. Then, the first to M-th canonical variates are calculated based on the formula shown in [mathematical expression 26], yielding the variable u.
[mathematical expression 26]
u = [u1, ..., uM]^T = A'^T x
Next, the normalized error inference amount y is calculated using the formula shown in [mathematical expression 27]. It should be noted that when B' in [mathematical expression 27] is not a square matrix, B'^(T-1) is the pseudo-inverse matrix of B'.
[mathematical expression 27]
y = B'^(T-1) u
Then, the normalized error inference amount y calculated above is restored using the normalization parameters (yave, yvar), yielding the error inference amount kerr. The error inference amount kerr is the inferred error from the current face shape model parameter ki to the correct model parameter kopt. The inferred value ki+1 of the correct model parameter can therefore be obtained by adding the error inference amount kerr to the current model parameter ki. However, kerr may itself contain error. For more stable detection, the inferred value ki+1 of the correct model parameter is therefore obtained by the formula shown in [mathematical expression 28]. In [mathematical expression 28], σ is an appropriate fixed value that may be determined by the designer, and σ may also be varied, for example, according to the change of i.
[mathematical expression 28]
ki+1 = ki + σ · kerr
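Putting [mathematical expression 26] to [mathematical expression 28] together, one error-inference step might look like the following sketch. The matrix names follow the text, the pseudo-inverse handles a non-square B', and sigma is the damping value mentioned above; this is an illustrative reading, not the patent's exact implementation:

```python
import numpy as np

def infer_error(f, x_ave, x_var, A_p, B_p, y_ave, y_var, k_i, sigma=0.5):
    """One error-inference step:
    normalize the sampled features, map to canonical variates u,
    invert to the normalized error y, denormalize, then apply a
    damped parameter update."""
    x = (f - x_ave) / np.sqrt(x_var)          # normalize features
    u = A_p.T @ x                             # [26]: u = A'^T x
    y = np.linalg.pinv(B_p.T) @ u             # [27]; pinv covers non-square B'
    k_err = y * np.sqrt(y_var) + y_ave        # undo the output normalization
    return k_i + sigma * k_err                # [28]: damped update toward kopt
```

With identity matrices and unit normalization this reduces to k_i + sigma * f, which makes the damping role of sigma easy to see.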
In the error inference processing, the feature-value sampling and the error inference described above are preferably repeated so that the inferred value ki of the correct model parameter approaches the correct parameter. When such iteration is performed, an end judgment is made every time an inferred value ki is obtained.
In the end judgment, step S39 first judges whether the obtained value of ki+1 is within the normal range. If the judgment result is that the value of ki+1 is not within the normal range, in step S40 an error is output to a display device or the like (not shown), and the image analysis apparatus 2 ends the search process.
Suppose, on the other hand, that the judgment in step S39 finds the value of ki+1 to be within the normal range. In this case, step S41 judges whether the value of E calculated in step S38 exceeds the threshold ε. When E is smaller than the threshold ε, the processing is judged to have converged, and kest is output in step S42. After kest is output, the image analysis apparatus 2 ends the face state detection processing for that one frame of image data.
When E exceeds the threshold ε, on the other hand, a new three-dimensional face shape model is created based on the value of ki+1 in step S43. Then, in step S44, i is incremented and the processing returns to step S33. The image data of the next frame is then taken as the processing target image, and step S33 and the subsequent series of processing are repeated based on the new three-dimensional face shape model.
It should be noted that the processing is ended, for example, when the value of i exceeds a threshold. The processing may also be ended, for example, when the value of Δk expressed by [mathematical expression 29] is at or below a threshold. Furthermore, in the error inference processing, the end judgment may be made based on whether the obtained value of ki+1 is within the normal range. For example, when the obtained value of ki+1 clearly does not represent the correct position in an image of a human face, the processing is ended with an error output. The processing is likewise ended with an error output when a node represented by the obtained ki+1 partly overflows the image being processed.
[mathematical expression 29]
Δk = ki+1 − ki
When it is judged that the processing should continue, the inferred value ki+1 of the correct model parameter obtained in the error inference processing is passed to the feature-value sampling processing. When it is judged that the processing should end, the inferred value ki (or ki+1) of the correct model parameter obtained at that point is output in step S42 as the final inferred parameter kest.
It should be noted that the face feature point search process described above is described in detail in Japanese Patent No. 4093273.
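The overall search loop of steps S33 to S44, with its convergence and range checks, can be sketched as follows. The step function and range test are placeholders for the sampling and error-inference processing; the thresholds are illustrative:

```python
def fit_model(k_init, step_fn, in_range, eps=1e-4, max_iter=50):
    """Iterate the sampling/error-inference step until the squared update E
    falls below eps, the parameter leaves its valid range, or max_iter is
    reached (a sketch of the loop of steps S33-S44)."""
    k = k_init
    for _ in range(max_iter):
        k_next = step_fn(k)                  # one inference update (S33-S36)
        if not in_range(k_next):             # S39/S40: range check -> error
            raise ValueError("parameter left the normal range")
        E = sum((a - b) ** 2 for a, b in zip(k_next, k))   # S37/S38
        k = k_next
        if E < eps:                          # S41/S42: converged
            return k                         # output k_est
    return k                                 # iteration cap reached

# toy step that halves the distance to a known optimum
opt = [1.0, 2.0]
step = lambda k: [(a + b) / 2 for a, b in zip(k, opt)]
k_est = fit_model([0.0, 0.0], step,
                  in_range=lambda k: all(abs(v) < 10 for v in k))
print([round(v, 3) for v in k_est])  # → [0.996, 1.992]
```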
(2-4) Determination of the base position
In step S23, based on the result of the coarse search for face organs, the base position determining section 113 detects the positions of the feature points of the searched face organs, and determines the base position of the face image based on the distances between the detected feature points. For example, the base position determining section 113 obtains the distance between the feature points of the driver's two eyes, and infers the position of the glabella from the position coordinates of the midpoint of that distance and the position coordinates of the feature point of the nose. As shown in Fig. 9, for example, the inferred position of the glabella is determined as the base position B of the driver's face.
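A simplified sketch of this step: taking the glabella as the midpoint between the two eye feature points. The patent additionally consults the nose feature point, and the coordinates here are illustrative:

```python
def estimate_glabella(left_eye, right_eye):
    """Base position B as the midpoint of the two eye feature points
    (a simplified reading of step S23)."""
    return ((left_eye[0] + right_eye[0]) / 2,
            (left_eye[1] + right_eye[1]) / 2)

print(estimate_glabella((40, 52), (72, 50)))  # → (56.0, 51.0)
```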
(2-5) Re-extraction of the face image region
Next, under the control of the face region re-extraction unit 114, in step S24 the image analysis apparatus 2 corrects the position of the rectangular frame relative to the image data according to the base position determined by the base position determining section 113. For example, as shown in Fig. 10, the face region re-extraction unit 114 corrects the position of the rectangular frame relative to the image data from E1 to E2, so that the position of the glabella detected by the base position determining section 113 (base position B) becomes the center of the rectangular frame in the vertical and horizontal directions. The face region re-extraction unit 114 then re-extracts from the image data the face image region surrounded by the position-corrected rectangular frame E2.
As a result, even if the extraction position of the face image region based on the rectangular frame E1 deviates, the deviation is corrected, and a face image that includes, without omission, the major face organs needed for the detailed search can be obtained.
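The E1 → E2 correction can be sketched as recentering the frame on the base position while keeping it inside the image. Frame and image sizes here are illustrative:

```python
def recenter_frame(base, frame_w, frame_h, img_w, img_h):
    """Move the extraction frame so the base position sits at its center,
    clamping to the image bounds (a sketch of the E1 -> E2 correction).
    Returns (x, y, w, h) of the corrected frame."""
    x = min(max(base[0] - frame_w // 2, 0), img_w - frame_w)
    y = min(max(base[1] - frame_h // 2, 0), img_h - frame_h)
    return (x, y, frame_w, frame_h)

# a base position near the left edge still yields a frame inside the image
print(recenter_frame((10, 60), 64, 64, 640, 480))  # → (0, 28, 64, 64)
```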
(2-6) Detailed search for face organs
When the re-extraction of the face image region is complete, the image analysis apparatus 2 proceeds to step S25. Under the control of the face state detection unit 115, it infers, from the face image region re-extracted by the face region re-extraction unit 114, the positions of at least a set number of feature points defined for the multiple organs of the driver's face, using the three-dimensional face shape model for the detailed search.
As described earlier, the detailed search takes as detection targets at least a set number of feature points for the eyes, nose, mouth, cheekbones and so on of the face, and searches for those feature points using a three-dimensional face shape model whose feature-point placement vector has a dimension corresponding to these feature points. In addition, multiple three-dimensional face shape models are prepared for the detailed search, corresponding to multiple face directions of the driver. For example, models are prepared corresponding to representative face directions of several types, such as frontal, diagonally right, diagonally left, diagonally upward and diagonally downward.
Using the multiple three-dimensional face shape models prepared for the detailed search, the face state detection unit 115 executes the process of detecting the larger number of feature points of the target organs from the face image region re-extracted with the rectangular frame E2. The processing steps and contents of the detailed search differ from those of the coarse search in that the dimension of the feature-point placement vector of the three-dimensional face shape model is set larger than in the coarse search, in that multiple three-dimensional face shape models prepared for the respective face directions are used, and in that the decision threshold for the inference error value is set smaller than in the coarse search; otherwise they are essentially the same as the coarse-search processing described earlier with reference to Fig. 6.
(2-7) Inference of the face direction
When the detailed search is complete, next, under the control of the face state detection unit 115, in step S26 the image analysis apparatus 2 infers the direction of the driver's face based on the search result for the feature points of each face organ from the detailed search. For example, the face direction can be inferred from the positions of the eyes and nose relative to the position of the face contour and the position of the mouth. The direction of the face may also be inferred from whichever of the multiple three-dimensional face shape models prepared for the respective face directions has the smallest amount of error with respect to the image data. The face state detection unit 115 stores information indicating the inferred face direction and information indicating the positions of the multiple feature points of each organ in the face region storage unit 133 as information indicating the state of the driver's face.
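As one illustrative heuristic (not the model-residual method described above), a horizontal face-direction cue can be derived from where the nose feature point sits between the two eyes:

```python
def estimate_yaw(left_eye_x, right_eye_x, nose_x):
    """Rough face-direction cue in [-1, 1]: 0 means frontal; positive
    means the nose is shifted toward the right eye, i.e. the face is
    turned. A hypothetical heuristic for illustration only."""
    center = (left_eye_x + right_eye_x) / 2
    half_span = (right_eye_x - left_eye_x) / 2
    return (nose_x - center) / half_span

print(estimate_yaw(40, 72, 56))  # → 0.0 (frontal)
print(estimate_yaw(40, 72, 64))  # → 0.5 (turned)
```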
(2-8) Output of the face state
Under the control of the output control unit 116, in step S27 the image analysis apparatus 2 reads from the face region storage unit 133 the information indicating the inferred face direction and the information indicating the positions of the multiple feature points of each face organ. The read information is then output from the external interface 15 to an external device.
From the face direction information and the presence or absence of detection of each face organ, the external device can judge, for example, whether the driver is looking aside or drowsy. The information can also be used to judge whether the driving mode of the vehicle can be switched between manual and automatic.
(Effects)
As described above in detail, in one embodiment, the base position determining section 113 detects, by coarse search, feature points of, for example, the two eyes and nose of the face from the image region including the driver's face extracted with the rectangular frame E1 by the face region extraction unit 112, detects the position of the glabella of the driver's face from the detected feature points of each organ, and determines it as the base position B of the face. The face region re-extraction unit 114 then corrects the position of the rectangular frame relative to the image data so that the determined base position B of the face is at the center of the rectangular frame, and re-extracts the image region including the face from the image data using the position-corrected rectangular frame.
Therefore, even if the extraction position of the image region including the face deviates so that some of the face organs are not contained in the rectangular frame, the position of the rectangular frame relative to the image data is corrected and the image region including the face is re-extracted. The image region extracted by the rectangular frame can thus contain, without omission, the face organs needed for detecting the face direction and the like, so the face state such as the face direction can be detected with high accuracy. Moreover, the coarse search is used to detect the face organs needed to determine the base position, so compared with searching for the base position of the face directly in the captured image data, the base position can be determined with a small amount of image processing and in a short time.
[variation]
(1) In the embodiment, only the position of the rectangular frame relative to the image data is corrected based on the base position B of the face detected by the coarse search. The invention is not limited to this, however; the size of the rectangular frame relative to the image data may also be corrected. This can be realized, for example, as follows: the left, right, upper and lower contours of the face, which are among the tentative feature points of the face, are detected by the coarse search from the face image region extracted with the rectangular frame, and when an undetected contour is found, the rectangular frame is enlarged in the direction of the undetected contour. Determining the glabella of the face as the base position is the same as in the first embodiment.
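This size correction can be sketched as follows, assuming the frame is stored as (x, y, w, h) and the side whose contour was not detected is known. The 16-pixel growth step is illustrative:

```python
def enlarge_toward(frame, missing_side, step=16):
    """Grow the extraction frame toward the side whose face contour was
    not detected (variation (1)). frame = (x, y, w, h); missing_side is
    one of 'left', 'right', 'top', 'bottom'."""
    x, y, w, h = frame
    if missing_side == "left":
        return (x - step, y, w + step, h)   # extend leftward
    if missing_side == "right":
        return (x, y, w + step, h)          # extend rightward
    if missing_side == "top":
        return (x, y - step, w, h + step)   # extend upward
    return (x, y, w, h + step)              # extend downward ('bottom')

print(enlarge_toward((100, 80, 64, 64), "left", step=16))  # → (84, 80, 80, 64)
```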
(2) The embodiment has been described taking as an example the inference of the positions of multiple feature points related to multiple organs of the driver's face from the input image data. The detection target is not limited to this, however; it may be any object for which a shape can be set. For example, the detection target may be a full-length image of a person, an X-ray image, or an image of internal organs obtained by a tomographic imaging apparatus such as CT (Computed Tomography). In other words, this technique can be applied to detection targets that vary in size between individuals and to detection targets that deform without changing their basic shape. Furthermore, since a shape can be set even for rigid detection targets that do not deform, such as industrial products including vehicles, electric appliances, electronic devices and circuit boards, the technique can be applied to them as well.
(3) The embodiment has been described taking as an example the detection of the face state for every frame of image data, but the face state may instead be detected every preset number of frames. The configuration of the image analysis apparatus, the processing steps and contents of the coarse and detailed searches for the feature points of the detection target, the shape and size of the extraction frame and so on may also be modified in various ways without departing from the spirit of the invention.
(4) The embodiment has been described taking as an example the determination of the detected position of the glabella of a person's face as the base position. The base position is not limited to this, however; any one of the apex of the nose, the center point of the mouth, the midpoint between the position of the glabella and the apex of the nose, the midpoint between the position of the glabella and the center point of the mouth, and the average position of the glabella, the apex of the nose and the center point of the mouth may also be detected and determined as the base position. In short, any point on the center line of a person's face may be detected and determined as the reference point.
The embodiments of the present invention have been described above in detail, but the description is in every respect merely an example of the invention. Needless to say, various improvements and modifications can be made without departing from the scope of the invention. That is, in carrying out the invention, a specific configuration corresponding to the embodiment may be adopted as appropriate.
In short, the invention is not limited to the above embodiments as they stand; at the implementation stage, the constituent parts may be modified and embodied without departing from the spirit of the invention. Various inventions can be formed by appropriately combining the constituent parts disclosed in the above embodiments. For example, some constituent parts may be deleted from all the constituent parts shown in an embodiment, and constituent parts from different embodiments may be combined as appropriate.
[annex]
Part or all of the above embodiments may also be described, in addition to the claims, as shown in the following annexes, but they are not limited to these.
(annex 1)
An image analysis apparatus comprising a hardware processor (11A) and a memory (11B), the image analysis apparatus being configured to, by the hardware processor (11A) executing a program stored in the memory (11B): acquire an image obtained by photographing a range including a detection target (111); extract, from the acquired image, a partial image of a region in which the detection target is present, using an extraction frame of a predetermined size surrounding the partial image (112); detect positions of feature points of the detection target from the extracted partial image and determine a base position of the detection target based on the positions of the feature points (113); correct, based on the determined base position, the extraction position of the partial image extracted by the extraction frame, and re-extract the partial image by the extraction frame at the corrected extraction position (114); and detect a state of the detection target from the re-extracted partial image (115).
(annex 2)
An image analysis method executed by a device comprising a hardware processor (11A) and a memory (11B) storing a program to be executed by the hardware processor (11A), the image analysis method comprising: a step (S20) in which the hardware processor (11A) acquires an image obtained by photographing a range including a detection target; a step (S21) in which the hardware processor (11A) extracts, from the acquired image, a partial image of a region in which the detection target is present, using an extraction frame of a predetermined size surrounding the partial image; steps (S22, S23) in which the hardware processor (11A) detects positions of feature points of the detection target from the extracted partial image and determines a base position of the detection target based on the positions of the feature points; a step (S24) in which the hardware processor (11A) corrects, based on the determined base position, the extraction position of the partial image extracted by the extraction frame and re-extracts the partial image by the extraction frame at the corrected extraction position; and a step (S25) in which the hardware processor (11A) detects, from the re-extracted partial image, information indicating features of the detection target.

Claims (7)

1. An image analysis apparatus comprising:
an image acquisition unit that acquires an image obtained by photographing a range including a detection target;
a partial image extraction unit that extracts, from the acquired image, a partial image of a region in which the detection target is present, using an extraction frame of a predetermined size surrounding the partial image;
a base position determining section that detects positions of feature points of the detection target from the extracted partial image and determines a base position of the detection target based on the positions of the feature points;
a re-extraction unit that corrects, based on the determined base position, the extraction position of the partial image extracted by the extraction frame, and re-extracts the partial image by the extraction frame at the corrected extraction position; and
a state detection unit that detects a state of the detection target from the re-extracted partial image.
2. The image analysis apparatus according to claim 1, wherein
the image acquisition unit acquires an image obtained by photographing a range including a face,
the partial image extraction unit extracts, from the acquired image, a partial image of a region in which the face is present, using the extraction frame of a predetermined size surrounding the partial image,
the base position determining section detects the positions of feature points corresponding to a plurality of organs of the face from the extracted partial image, and determines an arbitrary position on the center line of the face as the base position based on the positions of the detected feature points,
the re-extraction unit corrects, based on the determined base position, the extraction position of the partial image extracted by the extraction frame so that the base position of the partial image is at the center of the extraction frame, and re-extracts the partial image by the extraction frame at the corrected extraction position, and
the state detection unit detects a state of the face from the re-extracted partial image.
3. The image analysis apparatus according to claim 2, wherein
the base position determining section determines, as the base position, any one of the position of the glabella of the face, the apex of the nose, the center point of the mouth, the midpoint between the position of the glabella and the apex of the nose, the midpoint between the position of the glabella and the center point of the mouth, and the average position of the position of the glabella, the apex of the nose and the center point of the mouth.
4. The image analysis apparatus according to any one of claims 1 to 3, wherein
the base position determining section searches for the positions of the feature points of the detection target from the extracted partial image with a first search precision, and determines the base position of the detection target based on the feature points found, and
the state detection unit searches for the feature points of the detection target from the re-extracted partial image with a second search precision higher than the first search precision, and detects the state of the detection target based on the feature points found.
5. The image analysis apparatus according to any one of claims 1 to 3, wherein
the image analysis apparatus further comprises an output unit that outputs information indicating the state of the detection target detected by the state detection unit.
6. An image analysis method executed by an image analysis apparatus comprising a hardware processor and a memory, the image analysis method comprising the following steps:
the image analysis apparatus acquiring an image obtained by photographing a range including a detection target;
the image analysis apparatus extracting, from the acquired image, a partial image of a region in which the detection target is present, using an extraction frame of a predetermined size surrounding the partial image;
the image analysis apparatus detecting positions of feature points of the detection target from the extracted partial image and determining a base position of the detection target based on the positions of the feature points;
the image analysis apparatus correcting, based on the determined base position, the extraction position of the partial image extracted by the extraction frame, and re-extracting the partial image by the extraction frame at the corrected extraction position; and
the image analysis apparatus detecting a state of the detection target from the re-extracted partial image.
7. A recording medium storing a program that causes a hardware processor included in the image analysis apparatus according to any one of claims 1 to 5 to execute the processing of each unit included in the image analysis apparatus.
CN201910179678.3A 2018-04-12 2019-03-11 Image analysis device, image analysis method, and recording medium Active CN110378182B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018076730A JP6919619B2 (en) 2018-04-12 2018-04-12 Image analyzers, methods and programs
JP2018-076730 2018-04-12

Publications (2)

Publication Number Publication Date
CN110378182A true CN110378182A (en) 2019-10-25
CN110378182B CN110378182B (en) 2023-09-22

Family

ID=68052837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910179678.3A Active CN110378182B (en) 2018-04-12 2019-03-11 Image analysis device, image analysis method, and recording medium

Country Status (4)

Country Link
US (1) US20190318152A1 (en)
JP (1) JP6919619B2 (en)
CN (1) CN110378182B (en)
DE (1) DE102019106398A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163552A (en) * 2020-10-14 2021-01-01 北京达佳互联信息技术有限公司 Labeling method and device for key points of nose, electronic equipment and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376684B (en) * 2018-11-13 2021-04-06 广州市百果园信息技术有限公司 Face key point detection method and device, computer equipment and storage medium
JP2022517152A (en) 2019-12-16 2022-03-07 Beijing Didi Infinity Technology and Development Co., Ltd. Systems and methods for distinguishing between the driver and the occupants in an image captured inside a vehicle
CN111931630B (en) * 2020-08-05 2022-09-09 重庆邮电大学 Dynamic expression recognition method based on facial feature point data enhancement
CN112418054A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN112416134A (en) * 2020-12-10 2021-02-26 华中科技大学 Device and method for quickly generating hand key point data set

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1423487A (en) * 2001-12-03 2003-06-11 Microsoft Corporation Automatic detection and tracing for multiple people using various clues
US6687386B1 (en) * 1999-06-15 2004-02-03 Hitachi Denshi Kabushiki Kaisha Object tracking method and object tracking apparatus
US20040161134A1 (en) * 2002-11-21 2004-08-19 Shinjiro Kawato Method for extracting face position, program for causing computer to execute the method for extracting face position and apparatus for extracting face position
US20090196475A1 (en) * 2008-02-01 2009-08-06 Canfield Scientific, Incorporated Automatic mask design and registration and feature detection for computer-aided skin analysis
US20100183218A1 (en) * 2008-06-20 2010-07-22 Aisin Seiki Kabushiki Kaisha Object determining device and program thereof
US7916904B2 (en) * 2007-03-19 2011-03-29 Aisin Seiki Kabushiki Kaisha Face region detecting device, method, and computer readable recording medium
JP2012015727A (en) * 2010-06-30 2012-01-19 Nikon Corp Electronic camera
US20150186748A1 (en) * 2012-09-06 2015-07-02 The University Of Manchester Image processing apparatus and method for fitting a deformable shape model to an image using random forest regression voting
CN106909880A (en) * 2017-01-16 2017-06-30 北京龙杯信息技术有限公司 Facial image preprocess method in recognition of face
CN107564049A (en) * 2017-09-08 2018-01-09 北京达佳互联信息技术有限公司 Faceform's method for reconstructing, device and storage medium, computer equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4093273B2 (en) 2006-03-13 2008-06-04 オムロン株式会社 Feature point detection apparatus, feature point detection method, and feature point detection program
JP6851183B2 (en) 2016-11-11 2021-03-31 Giken Seisakusho Co., Ltd. Bucket device and method for removing obstacles in a pipe



Also Published As

Publication number Publication date
CN110378182B (en) 2023-09-22
US20190318152A1 (en) 2019-10-17
JP2019185469A (en) 2019-10-24
DE102019106398A1 (en) 2019-10-17
JP6919619B2 (en) 2021-08-18

Similar Documents

Publication Publication Date Title
CN110378182A (en) Image analysis apparatus, method for analyzing image and recording medium
DE102015005267B4 (en) Information processing apparatus, method therefor and measuring apparatus
US11010924B2 (en) Method and device for determining external parameter of stereoscopic camera
CN110378181A (en) Image analysis apparatus, method for analyzing image and recording medium
CN111414798A (en) Head posture detection method and system based on RGB-D image
US8842906B2 (en) Body measurement
DE112013003214T5 (en) Method for registering data
DE102016013274A1 (en) IMAGE PROCESSING DEVICE AND METHOD FOR RECOGNIZING AN IMAGE OF AN OBJECT TO BE DETECTED FROM ENTRY DATA
EP2886043A1 (en) Method for continuing recordings to detect three-dimensional geometries of objects
DE112011100652T5 (en) THREE-DIMENSIONAL MEASURING DEVICE, PROCESSING METHOD AND NON-VOLATILE COMPUTER-READABLE STORAGE MEDIUM
CN104471612B (en) Image processing apparatus and image processing method
CN106323241A (en) Method for measuring three-dimensional information of person or object through monitoring video or vehicle-mounted camera
CN111854620A (en) Monocular camera-based actual pupil distance measuring method, device and equipment
US20220027602A1 (en) Deep Learning-Based Three-Dimensional Facial Reconstruction System
CN108615256A (en) A kind of face three-dimensional rebuilding method and device
KR101454780B1 (en) Apparatus and method for generating texture for three dimensional model
DE102012222361B4 (en) Environment recognition device
JPWO2008041518A1 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
JP2019091122A (en) Depth map filter processing device, depth map filter processing method and program
CN107403448B (en) Cost function generation method and cost function generation device
CN114980800B (en) Refractive pattern generation method, device and computer readable storage medium
CN109978810A (en) Detection method, system, equipment and the storage medium of mole
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN111951295A (en) Method and device for determining flight trajectory based on polynomial fitting high precision and electronic equipment
DE102014117172A1 (en) Device and method for determining a foot and mobile measuring device for a podoscope

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant