CN109145864A - Method, apparatus, storage medium and terminal device for determining a visibility region - Google Patents
Method, apparatus, storage medium and terminal device for determining a visibility region Download PDF Info
- Publication number
- CN109145864A (application number CN201811046284.2A)
- Authority
- CN
- China
- Prior art keywords
- driver
- visibility region
- face
- head pose
- characteristic point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention proposes a method, apparatus, storage medium and terminal device for determining a visibility region. The method includes: acquiring a face image of a driver; extracting feature points of the driver's face region and eye region from the face image; tracking an initial picture sequence of the face image and iteratively computing on the feature points of the face region to obtain the driver's head pose; estimating, from the feature points of the eye region, a classification result for the visibility region the driver is observing; and correcting the classification result of the visibility region according to the driver's head pose, taking the corrected visibility region as the driver's visibility region. With the present invention, the driver's visibility region can be obtained accurately.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method, apparatus, storage medium and terminal device for determining a visibility region.
Background art
In recent years, with the popularization of vehicles, traffic safety has increasingly become a safety concern for everyone. Avoiding traffic accidents depends not only on objective requirements such as obeying traffic rules, but also on human subjective factors, chief among them the driver's attention and driving posture. It is therefore particularly critical to prevent accidents caused by a lapse in the driver's attention.
While driving, changes in the driver's attention can be studied by detecting changes in the driver's visibility region. In traditional schemes for determining the driver's visibility region, video composed of two-dimensional color images is used to detect the rotation of the driver's head pose, from which the change of the visibility region is inferred.
However, the above scheme has the following disadvantages:
1. When the cabin light is weak, the illumination is uneven, or the space is narrow, it is difficult to accurately detect the driver's head pose.
2. When the driver is not looking straight ahead, determining the driver's visibility region from the head pose alone is inaccurate.
Summary of the invention
Embodiments of the present invention provide a method, apparatus, storage medium and terminal device for determining a visibility region, so as to solve or alleviate one or more of the above technical problems in the prior art.
In a first aspect, the present invention provides a method for determining a visibility region, comprising: acquiring a face image of a driver; extracting feature points of the driver's face region and eye region from the face image; tracking an initial picture sequence of the face image and iteratively computing on the feature points of the face region to obtain the driver's head pose; estimating, from the feature points of the eye region, a classification result for the visibility region the driver is observing; and correcting the classification result of the visibility region according to the driver's head pose, taking the corrected visibility region as the driver's visibility region.
With reference to the first aspect, in a first embodiment of the first aspect, the face image includes a depth image and a color image; and extracting the feature points of the driver's face region and eye region from the face image comprises: extracting a foreground region from the depth image; judging whether the foreground region contains a human face; when it does, locating the position of the face in the depth image; and extracting the feature points of the face region and the eye region from the corresponding position of the face in the color image.
With reference to the first aspect, in a second embodiment of the first aspect, tracking the initial picture sequence of the face image and iteratively computing on the feature points of the face region to obtain the driver's head pose comprises: tracking the initial picture sequence of the face image to obtain a particle filter estimate of the head pose, where the particle filter estimate is used to estimate the head pose; determining, according to the particle filter estimate, the visibility region the head pose points at from among the visibility regions into which the observable range of the cab is divided; and iteratively computing on the feature points of the face region, based on the visibility region the head pose points at and the particle filter estimate, to obtain the head pose.
With reference to the first aspect or any embodiment thereof, in a third embodiment of the first aspect, estimating the classification result of the visibility region observed by the driver from the feature points of the eye region comprises: constructing an eye appearance from the feature points of the eye region, and determining the straight-ahead position of the pupil center; locating the pupil center position from the feature points within the eye appearance; calculating, from the pupil center position and its straight-ahead position, the offset of the pupil center relative to the straight-ahead position; and inputting the offset and the eye appearance into a linear classifier to obtain the classification result of the driver's visibility region; wherein the observable range of the cab is divided into multiple visibility regions in advance.
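The offset-and-linear-classifier step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature layout (offset x, offset y, eye width, eye height), the three-region setup, and all weights are invented for the example; a real linear classifier would be trained on eye sample data.

```python
def pupil_offset(pupil, straight_ahead):
    """Offset of the detected pupil center relative to its straight-ahead position."""
    return (pupil[0] - straight_ahead[0], pupil[1] - straight_ahead[1])

def linear_classify(features, weights, biases):
    """Score each candidate visibility region with a linear model and
    return the index of the highest-scoring region."""
    scores = [sum(w * f for w, f in zip(ws, features)) + b
              for ws, b in zip(weights, biases)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 3-region model; weights/biases are illustrative, not trained values.
W = [(-1.0, 0.0, 0.0, 0.0),   # region 0: pupil shifted left
     ( 0.0, 0.0, 0.0, 0.0),   # region 1: roughly straight ahead
     ( 1.0, 0.0, 0.0, 0.0)]   # region 2: pupil shifted right
B = [0.0, 0.5, 0.0]

dx, dy = pupil_offset(pupil=(36.0, 20.0), straight_ahead=(32.0, 20.0))
region = linear_classify([dx, dy, 28.0, 12.0], W, B)  # eye width/height appended
```

A pupil shifted 4 pixels right of its straight-ahead position scores highest for the "right" region under these toy weights.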
In a second aspect, an embodiment of the present invention also provides an apparatus for determining a visibility region, comprising: a face image acquisition module for acquiring a face image of a driver; a feature point extraction module for extracting feature points of the driver's face region and eye region from the face image; a head pose acquisition module for tracking an initial picture sequence of the face image and iteratively computing on the feature points of the face region to obtain the driver's head pose; a visibility region estimation module for estimating, from the feature points of the eye region, a classification result for the visibility region the driver is observing; and a visibility region correction module for correcting the classification result of the visibility region according to the driver's head pose and taking the corrected visibility region as the driver's visibility region.
With reference to the second aspect, in a first embodiment of the second aspect, the face image includes a depth image and a color image; and the feature point extraction module comprises: a foreground region extraction unit for extracting a foreground region from the depth image; a face judging unit for judging whether the foreground region contains a human face; a face position locating unit for locating the position of the face in the depth image when the foreground region contains one; and a face and eye feature extraction unit for extracting the feature points of the face region and the eye region from the corresponding position of the face in the color image.
With reference to the second aspect, in a second embodiment of the second aspect, the head pose acquisition module comprises: a particle filter unit for tracking the initial picture sequence of the face image to obtain a particle filter estimate of the head pose, where the particle filter estimate is used to estimate the head pose; a current gaze area determination unit for determining, according to the particle filter estimate, the visibility region the head pose points at from among the visibility regions into which the observable range of the cab is divided; and a head pose iteration unit for iteratively computing on the feature points of the face region with an iterative closest point algorithm, based on the visibility region the head pose points at and the particle filter estimate, to obtain the head pose.
With reference to the second aspect or any embodiment thereof, in a third embodiment of the second aspect, the visibility region estimation module comprises: an eye appearance construction unit for constructing an eye appearance from the feature points of the eye region and determining the straight-ahead position of the pupil center; a pupil center locating unit for locating the pupil center position from the feature points within the eye appearance; an offset calculation unit for calculating, from the pupil center position and its straight-ahead position, the offset of the pupil center relative to the straight-ahead position; and a classification unit for inputting the offset and the eye appearance into a linear classifier to obtain the classification result of the driver's visibility region; wherein the observable range of the cab is divided into multiple visibility regions in advance.
The functions of the apparatus may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the structure of the apparatus for determining a visibility region includes a processor and a memory, the memory storing a program that enables the apparatus to execute the above method of determining a visibility region, and the processor being configured to execute the program stored in the memory. The apparatus may further include a communication interface for communicating with other devices.
In a third aspect, an embodiment of the present invention also provides a computer-readable storage medium for storing the computer software instructions used by the apparatus for determining a visibility region, including a program for executing the above method of determining a visibility region.
One of the above technical solutions has the following advantage or beneficial effect: by detecting eye features, an embodiment of the present invention uses a classifier to estimate a classification of the visibility region, and corrects this classification with the head pose, so that the driver's visibility region can be obtained accurately.
Another of the above technical solutions has the following advantage or beneficial effect: by combining the depth image and the color image, an embodiment of the present invention is unaffected by conditions such as weak cabin light, uneven illumination and narrow space, and can accurately obtain face and eye features, thereby improving the accuracy of visibility region detection.
The above summary is for illustration only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the present invention will be readily apparent from the drawings and the following detailed description.
Brief description of the drawings
In the drawings, unless otherwise specified, identical reference numerals denote the same or similar components or elements throughout the several figures. The drawings are not necessarily to scale. It should be understood that they depict only some embodiments disclosed according to the present invention and should not be taken as limiting its scope.
Fig. 1 is a flow diagram of an embodiment of the method for determining a visibility region provided by the present invention;
Fig. 2 is a schematic diagram of an embodiment of the visibility region division of an actual cab provided by the present invention;
Fig. 3 is a schematic diagram of an embodiment of the visibility region division of a simulated cab provided by the present invention;
Fig. 4 is a schematic diagram of an embodiment of the correction process of a visibility region provided by the present invention;
Fig. 5 is a flow diagram of an embodiment of the process of extracting the feature points of the face region and eye region provided by the present invention;
Fig. 6 is a flow diagram of an embodiment of the head pose acquisition process provided by the present invention;
Fig. 7 is a flow diagram of an embodiment of the visibility region classification process provided by the present invention;
Fig. 8 is a schematic diagram of an eye appearance provided by the present invention;
Fig. 9 is a schematic diagram of the offset when the pupil center position deviates from looking straight ahead, provided by the present invention;
Fig. 10 is a schematic diagram of an application example of the device for determining a visibility region provided by the present invention;
Fig. 11 is a structural schematic diagram of another embodiment of the apparatus for determining a visibility region provided by the present invention;
Fig. 12 is a structural schematic diagram of an embodiment of a terminal device provided by the present invention.
Detailed description of the embodiments
Hereinafter, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative rather than restrictive.
Referring to Fig. 1, an embodiment of the present invention provides a method for determining a visibility region. The method may be applied, without limitation, to vehicles such as automobiles, ships and aircraft. This embodiment includes steps S100 to S500, as follows:
S100: acquire a face image of the driver.
In this embodiment, cameras may be installed in the cab to record the driver while driving. The cameras may include a common color camera, an infrared camera and so on. For example, the color camera can capture a color image and the infrared camera can capture a depth image, so the two-dimensional and three-dimensional face data of the driver can be integrated from the two.
S200: extract the feature points of the driver's face region and eye region from the face image.
This embodiment may use the ASM (Active Shape Model) or AAM (Active Appearance Model) algorithm to obtain the facial feature points.
Taking the AAM method as an example, it is a feature point extraction method widely used in the field of pattern recognition. In building the face model, AAM-based facial feature localization considers not only local feature information but also the global shape and texture information, establishing a mixed face model: the final AAM model is obtained by statistical analysis of the facial shape features and texture features. During image matching, in order to locate facial features both quickly and accurately, an image-matching fitting approach is adopted when locating the feature points of the tested face, which can be summarized as a loop of "match, compare, adjust, then match and compare again". The AAM algorithm is roughly divided into two parts: AAM modeling and AAM matching computation. AAM modeling builds an active appearance model of the object; an appearance model is a face model that combines the AAM shape with the texture information extracted from the face object, while the word "active" is embodied in the AAM matching computation.
First, the dynamic change of the feature points describing the shape is modeled with the Principal Component Analysis (PCA) method; the feature points indicate the positions of the facial features. Second, an energy function is defined from the mean square error between a specific AAM model instance and the input image, and is used to evaluate how well the AAM model matches. During face localization and matching, the matching algorithm can efficiently exploit the linear representation of the model to vary the model parameters, controlling the position changes of the shape feature points and generating a new current AAM model instance. Then, the model parameters are updated using the current value of the energy function, iterating repeatedly to minimize it. Once the model instance matches the input image, the resulting feature point positions describe the feature point positions of the current face image.
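The "match, compare, adjust" loop that minimizes the energy function can be illustrated with a toy one-parameter model. This is only a sketch of iterative energy minimization under invented data (a single scale parameter fitted by gradient descent), not the actual AAM parameter update.

```python
def energy(p, base, target):
    """Mean-square error between a model instance (base scaled by p) and the target."""
    return sum((p * b - t) ** 2 for b, t in zip(base, target)) / len(base)

def fit(base, target, p=0.0, lr=0.1, steps=200):
    """Repeatedly adjust p to shrink the energy, mimicking the
    'match -> compare -> adjust -> match again' loop."""
    n = len(base)
    for _ in range(steps):
        grad = sum(2 * (p * b - t) * b for b, t in zip(base, target)) / n
        p -= lr * grad  # move the parameter against the gradient of the energy
    return p

base = [1.0, 2.0, 3.0]
target = [2.0, 4.0, 6.0]   # target is base scaled by 2
p_hat = fit(base, target)  # converges toward p = 2
```

Real AAM fitting updates a whole vector of shape and texture parameters, but the structure of the iteration is the same: evaluate the energy, adjust the parameters, regenerate the model instance.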
When the driver makes larger facial movements such as opening the mouth or closing the eyes, the AAM method with a non-rigid head model cannot detect the point cloud data that forms the face from the face image. In this case, the AAM method can use a rigid head model of the three-dimensional head image in place of the non-rigid head model, avoiding erroneous feature points and improving the accuracy of head pose determination.
S300: track the initial picture sequence of the face image and iteratively compute on the feature points of the face region to obtain the driver's head pose.
This embodiment may estimate the head pose by combining methods such as the particle filter algorithm and the Iterative Closest Point (ICP) algorithm. The Particle Filter (PF) algorithm approximately represents a probability density function by finding a set of random samples propagated in the state space, replaces the integral operation with a sample mean, and thereby obtains the minimum-variance estimate of the sample state. Such a sample can be vividly called a "particle", hence the name particle filter. Basic particle filter algorithms include the optimal Bayesian estimation algorithm, the sequential importance sampling algorithm, the auxiliary sampling-resampling algorithm, the regularized sampling algorithm, the adaptive particle filter algorithm and so on. This embodiment may iteratively compute the head pose with an ICP algorithm, for example a point-to-point search algorithm, a point-to-plane search algorithm, a point-to-projection search algorithm, etc. Using ICP, after the set of closest points corresponding to the measurement point set is determined, a registration algorithm for free-form surfaces computes a new closest point set, until the objective value formed by the residual sum of squares no longer changes, at which point the iteration terminates.
The initial picture sequence may include the first frame, or the first several consecutive frames, captured before the face image, and can be denoted y1:t = {y1, ..., yt}. The head pose can be expressed as a three-dimensional angle or in vector form.
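A minimal point-to-point ICP sketch, restricted to a 2D translation for brevity (the full rigid registration would also estimate rotation on 3D point clouds): each iteration pairs every source point with its nearest target point, shifts the source by the mean residual, and stops once the residual sum of squares no longer changes. The point coordinates are invented for the example.

```python
def icp_translation(source, target, iters=50, tol=1e-9):
    """Translation-only point-to-point ICP: pair each source point with its
    nearest target point, shift the source by the mean residual, and stop
    when the residual sum of squares no longer changes."""
    src = list(source)
    prev_err = float("inf")
    for _ in range(iters):
        # Nearest-neighbour correspondence for every source point.
        pairs = [((sx, sy),
                  min(target, key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2))
                 for sx, sy in src]
        err = sum((tx - sx) ** 2 + (ty - sy) ** 2 for (sx, sy), (tx, ty) in pairs)
        if abs(prev_err - err) < tol:  # residual sum of squares unchanged
            break
        prev_err = err
        n = len(pairs)
        dx = sum(tx - sx for (sx, _), (tx, _) in pairs) / n
        dy = sum(ty - sy for (_, sy), (_, ty) in pairs) / n
        src = [(x + dx, y + dy) for x, y in src]
    return src

target = [(1.0, 1.0), (2.0, 1.0), (2.0, 2.0)]
source = [(0.8, 0.8), (1.8, 0.8), (1.8, 1.8)]  # the target shifted by (-0.2, -0.2)
aligned = icp_translation(source, target)
```

Note that ICP only converges to the right alignment when the initial guess is close enough for the nearest-neighbour pairing to be correct, which is why the text combines it with a particle filter estimate.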
S400: estimate the classification result of the visibility region observed by the driver from the feature points of the eye region.
In this embodiment, the range that the driver can observe in the cab may be divided in advance into multiple visibility regions. As shown in Fig. 2, taking an actual cab as an example, the observation area of the cab can be divided into multiple visibility regions, for example 5, 9 or 12. For a simulated cab, the observation area can be divided as shown in Fig. 3, which contains 12 regions (Zone1 to Zone12). When the driver observes different locations, such as the left side mirror, right side mirror, rearview mirror, instrument board, center console, driver, back windows, windshield, head camera and road scene camera regions, the driver's head pose and eye gaze in the face image differ.
In this embodiment, classifiers such as random forest, Bayes or KNN (k-Nearest Neighbor) may classify the input eye region features to obtain the classification result of the visibility region they belong to. The classifier can be trained in advance with training data, which may include eye sample data such as eye region features, appearance and pupil, together with the visibility region corresponding to each eye sample.
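Of the classifiers named above, KNN is simple enough to sketch in full. The training pairs below are invented stand-ins for the eye sample data (pupil-offset features labelled with Fig. 3 zone names); a real system would use many more samples and richer appearance features.

```python
def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote: find the k training eye-feature vectors
    closest to the query and return the majority visibility-region label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# (pupil-offset-x, pupil-offset-y) -> region label; values are illustrative.
train = [((-5.0, 0.0), "Zone1"), ((-4.0, 1.0), "Zone1"), ((-6.0, -1.0), "Zone1"),
         (( 0.0, 0.0), "Zone8"), (( 1.0, 0.0), "Zone8"), ((-1.0, 1.0), "Zone8"),
         (( 5.0, 0.0), "Zone2"), (( 4.0, -1.0), "Zone2"), (( 6.0, 1.0), "Zone2")]
region = knn_predict(train, (4.5, 0.0))
```

For the query (4.5, 0.0) the three nearest samples all carry the "Zone2" label, so the vote is unanimous.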
S500: correct the classification result of the visibility region according to the driver's head pose, and take the corrected visibility region as the driver's visibility region.
In this embodiment, the classification result of the visibility region can be combined with the angle of the head pose. Alternatively, the driver's head pose can be obtained first to determine the visibility region it projects onto or points at; the determined visibility region is then combined with the aforementioned classification result to obtain the driver's visibility region.
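One possible form of this correction step is a compatibility check: keep the eye-based classification when it lies in the set of regions the head pose could plausibly cover, and fall back to the head-pose region otherwise. The neighbourhood table is invented for illustration; the exact combination rule is left open here.

```python
# Regions each head-pose region can plausibly cover (illustrative neighbourhoods).
NEIGHBOURS = {
    "Zone1": {"Zone1", "Zone2", "Zone4"},
    "Zone2": {"Zone1", "Zone2", "Zone3"},
    "Zone3": {"Zone2", "Zone3", "Zone6"},
}

def correct_region(eye_region, head_region):
    """If the eye-based classification is compatible with the region the
    head pose points at, keep it; otherwise fall back to the head pose."""
    return eye_region if eye_region in NEIGHBOURS.get(head_region, set()) else head_region

final = correct_region(eye_region="Zone3", head_region="Zone2")   # kept: compatible
final2 = correct_region(eye_region="Zone1", head_region="Zone3")  # overridden
```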
By detecting eye features, this embodiment of the present invention uses a classifier to estimate a classification of the visibility region and corrects this classification with the head pose, so that the driver's visibility region can be obtained accurately.
Fig. 4 shows the correction or calibration process of a visibility region: the gaze point is C before calibration and D after calibration.
In one possible implementation, the face image may include a depth image and a color image, and may contain the driver's upper body. The depth image and the color image are upper-body images of the driver captured at the same moment and from the same shooting angle. On this basis, the above process of acquiring the feature points of the face region and eye region, as shown in Fig. 5, may include steps S110 to S140, as follows:
S110: extract the foreground region from the depth image.
In this embodiment, the depth image is formed of points, each a value between 0 and 255 representing the distance from the corresponding image point to the depth camera, so the distance of each point to the camera can be obtained from its value. Therefore, the foreground region can be extracted from the depth image by exploiting the depth difference between the foreground region and the background region.
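The depth-difference idea can be sketched as a simple band threshold on the 0-255 depth values. The near/far cut-offs are invented, and the sketch assumes smaller values mean closer to the camera; a real foreground segmentation would pick thresholds adaptively rather than fixing them.

```python
def extract_foreground(depth, near=0, far=120):
    """Keep pixels whose depth value (0-255) falls inside the near/far band;
    everything else is marked as background (None)."""
    return [[d if near <= d <= far else None for d in row] for row in depth]

depth = [[ 40,  50, 200],   # driver (close) in the upper-left corner,
         [ 45, 210, 220],   # cab background (far) elsewhere
         [230, 235, 240]]
mask = extract_foreground(depth)
```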
Illustratively, in the image data acquisition stage, the RGB image and the depth image can be obtained by the different cameras of a Kinect (a somatosensory peripheral produced by Microsoft): the RGB image is obtained through the Kinect's CCD (Charge-Coupled Device) lens, while the depth image is obtained through the infrared detector lens.
S120: judge whether the foreground region contains a human face.
Using methods such as AAM or ASM to detect whether the foreground region contains a face, this embodiment can quickly judge in a short time whether the RGB image contains a human face.
S130: when the foreground region contains a human face, locate the position of the face in the depth image. The position of the face can be determined from the depth variation of each facial pixel.
S140: extract the feature points of the face region and the eye region from the position of the face in the color image.
In this embodiment, the AAM (Active Appearance Model) or ASM (Active Shape Model) method can be used to extract the feature points from the color image. For example, the AAM method can use least squares and, after the iterative process of matching, comparing and adjusting, then matching and comparing again, quickly fit a new image. Matching with a rigid head model yields rigid head point cloud data; "rigid" means the face shows no expression such as closed eyes, an open mouth or expression lines. The feature points obtained with a rigid head model are more accurate than those obtained with a non-rigid model. The modeling and iterative processes of the AAM method are conventional and are not described in detail here.
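The least-squares fit of a rigid template has a closed form in the translation-only case, which is enough to illustrate the idea: the optimal shift is simply the difference of the two centroids. Rotation (and the full 3D point-cloud case) is omitted, and the landmark coordinates are invented.

```python
def best_translation(model, observed):
    """Least-squares translation aligning a rigid landmark template to
    observed landmark positions: the optimum is the centroid difference."""
    n = len(model)
    mx = sum(x for x, _ in model) / n
    my = sum(y for _, y in model) / n
    ox = sum(x for x, _ in observed) / n
    oy = sum(y for _, y in observed) / n
    return ox - mx, oy - my

model = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]      # rigid landmark template
observed = [(3.0, 4.0), (5.0, 4.0), (4.0, 5.5)]   # same shape, shifted
dx, dy = best_translation(model, observed)
```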
The feature point extraction process of this embodiment, by combining the depth image and the color image, is unaffected by conditions such as weak cabin light, uneven illumination and narrow space, and can accurately obtain face and eye features, greatly improving the accuracy of visibility region detection.
In a possible embodiment, as shown in Fig. 6, the head pose acquisition process of the above step S300 may include:
S310: track the initial picture sequence of the face image to obtain the particle filter estimate of the head pose, where the particle filter estimate is used to estimate the head pose.
This embodiment may use a particle filter algorithm to estimate the head pose, which can reduce the number of iterations of the head pose calibration process in the subsequent step S330 and improve the calibration accuracy. The particle filter process may include the following steps:
In the first step, n initial pose samples are drawn starting from the first frame picture of the initial picture sequence. The weight of each initial pose sample is 1/n; the prior density of each initial particle pose sample is a preset value, denoted p(b0); and the initial pose samples themselves are denoted b0(i), i = 1, ..., n.
In the second step, particle pose samples are resampled from the current frame picture according to the ratio between the weights of the particle pose samples drawn from the previous frame picture.
Assuming the first frame picture is the 0th frame and the current frame is the t-th frame, sampling can be performed according to the ratio formed by the weights of the particle pose samples drawn from the (t-1)-th frame. For example, if the weights of particle 1, particle 2 and particle 3 are in the ratio 2:3:5, then particle 1 can be sampled with an overall sampling ratio of 0.2, particle 2 with an overall sampling ratio of 0.3, and particle 3 with an overall sampling ratio of 0.5.
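The 2:3:5 example above is multinomial resampling over the cumulative weight distribution. A sketch, with arbitrary pose labels standing in for real particle pose samples:

```python
import random

def resample(particles, weights, rng, n=None):
    """Multinomial resampling: draw n new particles, each chosen with
    probability proportional to its previous-frame weight."""
    n = len(particles) if n is None else n
    total = sum(weights)
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    out = []
    for _ in range(n):
        u = rng.random()
        for p, c in zip(particles, cumulative):
            if u <= c:
                out.append(p)
                break
        else:  # guard against floating-point round-off at the top end
            out.append(particles[-1])
    return out

rng = random.Random(0)  # seeded so the sketch is reproducible
samples = resample(["pose1", "pose2", "pose3"], [2.0, 3.0, 5.0], rng, n=1000)
```

With weights 2:3:5, roughly 20%, 30% and 50% of the 1000 draws land on the three particles.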
In the third step, the weight of each newly sampled particle pose sample is determined from the relationship between the head pose vector of the previous frame picture and that of the current frame picture.
In this embodiment, the relationship between the two head pose vectors can be expressed with a likelihood function. For example, for the head pose vector bt of the current frame picture, the likelihood function p(xt|bt) can be used, which quantifies the texture consistency of x(bt) using the statistical texture of the eigenvector representation, where x(·) denotes the shape-free texture. In this likelihood function, c is a positive constant, e is the reconstruction error, the eigenvalues are those associated with the first M eigenvectors, ξ is the estimator of the likelihood function, and ρ is the arithmetic mean of the remaining eigenvalues.
In turn, a dynamic model p(bt|bt-1) can be used to describe the relationship between the head pose vector bt-1 of the previous frame picture and the head pose vector bt of the current frame picture.
In the fourth step, the weights of the newly sampled particle pose samples are computed with the maximum a posteriori estimation formula to obtain the prediction of the head pose vector of the next frame picture.
Here, w_t(j) denotes the weight of the j-th particle pose sample drawn at frame t, j ∈ {1, ..., n}.
Except for the 0th frame picture, each frame picture can be processed with the above second and third steps, until the prediction of the head pose vector of the last frame picture is computed. Steps S320 and S330 can then carry out the subsequent computation using the predicted head pose vector.
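A common way to turn the weighted particles into a single head-pose prediction is a weight-normalised average of the particle pose vectors. This sketch assumes a (yaw, pitch, roll) parameterisation and invented values; the actual estimation formula above is the maximum a posteriori form.

```python
def weighted_mean_pose(poses, weights):
    """Approximate the head-pose estimate as the weight-normalised
    average of the particle pose vectors (here yaw, pitch, roll)."""
    total = sum(weights)
    dims = len(poses[0])
    return tuple(sum(w * p[i] for p, w in zip(poses, weights)) / total
                 for i in range(dims))

poses = [(10.0, 0.0, 0.0), (20.0, 5.0, 0.0), (30.0, 10.0, 0.0)]
weights = [0.2, 0.3, 0.5]
estimate = weighted_mean_pose(poses, weights)  # pulled toward the heavy particle
```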
Specifically, the above third step may include the following.
First, the approximate predictive distribution p(b_t | x_{1:(t-1)}) of the newly sampled particle pose samples is drawn according to the dynamic model, where x_{1:(t-1)} denotes the shape-free textures of the 1st frame to the (t-1)-th frame.
Then, according to the approximate predictive distribution p(b_t | x_{1:(t-1)}), the geometric similarity feature x(b_t) of each newly sampled particle pose sample is calculated.
Next, the geometric similarity features of the newly sampled particle pose samples are quantified using the likelihood function to obtain their likelihood values. For the j-th particle pose sample, the likelihood value can be expressed as:
Finally, the weight of each newly sampled particle pose sample is determined according to the proportion of its likelihood value. For the j-th particle pose sample, the determined weight is:
By weighting the n particles sampled up to the current frame, the weighted particles approximate the posterior distribution p(b_{t-1} | x_{1:(t-1)}) of the head pose vector.
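For illustration only (this sketch is not part of the original disclosure), the propagate-and-weight loop of the second and third steps can be written as follows; `sample_dynamic` and `likelihood` are hypothetical stand-ins for the dynamic model p(b_t | b_{t-1}) and the texture likelihood p(x_t | b_t):

```python
import numpy as np

def propagate_and_weight(particles, weights, frame, sample_dynamic, likelihood):
    """One particle-filter step: resample pose particles, propagate them
    through the dynamic model, weight them by the texture likelihood, and
    return the new particles, normalized weights, and a weighted estimate."""
    n = len(particles)
    # Resample according to the previous weights, then propagate each
    # particle through the dynamic model p(b_t | b_{t-1}).
    idx = np.random.choice(n, size=n, p=weights)
    new_particles = np.array([sample_dynamic(particles[i]) for i in idx])
    # Weight each new particle by the likelihood p(x_t | b_t) of the
    # current frame, then normalize so the weights sum to one.
    lik = np.array([likelihood(frame, b) for b in new_particles])
    new_weights = lik / lik.sum()
    # Weighted mean of the particles as the head-pose estimate.
    estimate = (new_weights[:, None] * new_particles).sum(axis=0)
    return new_particles, new_weights, estimate
```

Called once per frame, this keeps the particle set approximating the posterior distribution of the six-dimensional head pose vector.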
In S320, the visibility region toward which the head pose is directed is determined, according to the particle filter estimate, from among the visibility regions into which the observation range of the driver's cabin is divided.
The manner in which the present embodiment divides the visibility regions can be as described above and is not repeated here. When the visibility regions are divided, a head pose template can be established in advance for each visibility region. For example, a self-learning algorithm is used to classify the head poses of each pre-divided visibility region and build an index. Each visibility region may have one or more head pose templates, and each head pose template can correspond to one head pose vector. Since the particle filter estimate of the head pose is in fact also expressed as a head pose vector, the spatial distance between the particle filter estimate and the head pose vector of each head pose template of each visibility region can be calculated, yielding a distance probability distribution over the visibility regions. The visibility region toward which the head pose is directed can then be determined from this distance probability distribution.
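Assuming each visibility region has one or more template head pose vectors, the distance-to-probability step described above might be sketched as follows; the exponential conversion of distances into a probability distribution is an illustrative choice, since the text does not fix one:

```python
import numpy as np

def select_region(pf_estimate, region_templates):
    """region_templates: dict mapping region id -> array of template
    head-pose vectors (k_m x 6). Returns the region whose templates lie
    closest to the particle-filter estimate, together with a
    distance-based probability distribution over the regions."""
    dists = {}
    for region, templates in region_templates.items():
        # Spatial distance from the estimate to each template; keep the minimum.
        d = np.linalg.norm(np.asarray(templates, dtype=float) - pf_estimate, axis=1)
        dists[region] = d.min()
    regions = list(dists)
    d = np.array([dists[r] for r in regions])
    # Convert distances to probabilities: closer templates -> higher probability.
    p = np.exp(-d) / np.exp(-d).sum()
    best = regions[int(np.argmax(p))]
    return best, dict(zip(regions, p))
```

The returned distribution can also be kept around to rank adjacent regions, as the later ICP refinement uses the neighboring regions' templates.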
Specifically, this step may include: after the particle filter estimate is obtained, acquiring the head pose templates corresponding to each pre-divided visibility region; then measuring the distance between each point in each head pose template and the particle filter estimate, and determining the distance probability distribution of each visibility region. The visibility region toward which the head pose is directed can then be determined from the distance probability distributions of the visibility regions.
In S330, iterative calculation is performed on the feature points of the face region, based on the visibility region toward which the head pose is directed and the particle filter estimate, to obtain the head pose.
In the present embodiment, setting aside forward/backward movement and scaling of the head, the head pose vector is defined as a six-dimensional column vector b, where θ_x, θ_y, θ_z are the angles in the three directions Yaw, Pitch and Roll, and t_x, t_y, t_z are the offsets along the x, y and z axes. Yaw refers to rotation about the Y axis, Pitch to rotation about the X axis, and Roll to rotation about the Z axis.
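A six-dimensional pose vector of this form can be converted into a 4x4 rigid transformation matrix. The sketch below assumes a Roll·Yaw·Pitch composition order, which is an illustrative choice the text does not specify:

```python
import numpy as np

def pose_to_matrix(b):
    """b = (yaw, pitch, roll, tx, ty, tz), angles in radians.
    Yaw rotates about Y, pitch about X, roll about Z, per the embodiment."""
    yaw, pitch, roll, tx, ty, tz = b
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw: Y axis
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch: X axis
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll: Z axis
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # composition order is an assumption
    T[:3, 3] = [tx, ty, tz]
    return T
```

A matrix of this form is what a particle filter estimate would be converted into before being handed to an iterative registration step.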
The present embodiment can perform the iterative calculation using the ICP (Iterative Closest Point) algorithm. Since a particle filter estimate is in fact a head pose vector, it can be converted into a corresponding initial transformation matrix and substituted into the ICP algorithm. Accordingly, the implementation of the above step S330 can be as follows:
(1) An initial transformation matrix is calculated according to the particle filter estimate.
In the present embodiment, the iterative process can take the head pose vector of the first frame of the face image, or of the preceding frames of a continuous picture sequence, as the initial head pose vector.
For an initial picture sequence, expressed as y_{1:t} = {y_1, ..., y_t}, the particle filter estimate of the initial head pose vector can be obtained by the particle filter algorithm. This particle filter estimate can then be converted into the initial transformation matrix.
(2) The head pose templates of the visibility region toward which the head pose is directed, and of the adjacent visibility regions, are obtained.
In the present embodiment, the angles of the head pose can be represented in a Euclidean angle space, that is, expressed by Yaw, Pitch and Roll. By calculating the Euclidean angles between the head pose templates of the visibility region toward which the head pose is directed and those of the other regions, the adjacent sight regions are matched accurately. Assuming the total number of visibility regions is 9, the head pose templates of the visibility regions may include: P_1, P_2, ..., P_m, ..., P_9.
(3) Based on the initial transformation matrix, the optimal transformation matrix corresponding to each head pose template is calculated, where the optimal transformation matrix minimizes the error between the two point sets of the head pose template and the rigid point cloud data. The calculation of step (3) can be carried out step by step, as follows:
(3.1) For each point in the rigid point cloud matrix, the point in the template matrix nearest to that point is determined. Here, the initial rigid point cloud matrix Q represents the rigid point cloud data, and the template matrix P_m represents the head pose template.
Specifically, the nearest points of the two matrices can be calculated by an NNSP (Nearest Neighbor Search Point) algorithm. The formula is as follows:
where P_m denotes the template matrix, p_j is the j-th point in the template matrix P_m, q_i is the i-th point in the rigid point cloud matrix Q, and m is the serial number of the visibility region toward which the head pose is directed.
(3.2) The optimal transformation matrix is calculated so as to minimize the error function between each point of the rigid point cloud matrix and its corresponding nearest point in the template matrix. Specifically, the error function is as follows:
where the left-hand side denotes the optimal transformation matrix obtained in the current calculation, and (R, t) denotes the optimal transformation matrix from the previous iteration; the first calculation is based on the initial transformation matrix.
(3.3) If the error function result is greater than the preset error threshold, the rigid point cloud matrix is updated according to the optimal transformation matrix and the template matrix, and the optimal transformation matrix is recalculated.
Applying the optimal transformation matrix calculated in step (3.2) to the matrix Q yields the updated rigid point cloud matrix Q. Steps (3.2) and (3.3) are iterated continuously until the change in the coupling error of the error function falls within the set threshold, at which point the iteration stops. The threshold condition is: e_{i-1} - e_i < τ.
(3.4) If the error function result is less than the preset error threshold, the currently calculated optimal transformation matrix is output and the iteration stops.
For the head pose templates of the visibility regions adjacent to visibility region m, the above steps (3.1) to (3.4) can likewise be performed to obtain their corresponding optimal transformation matrices (R_neighbor, t_neighbor).
(4) The optimal transformation matrices are weighted and averaged to obtain the angles of the head pose.
Assuming that the head pose template of the visibility region toward which the head pose is currently determined to be directed, and the head pose template of one adjacent region, are used, the corresponding optimal transformation matrices can be denoted respectively as the current matrix and (R_neighbor, t_neighbor). The two can then be weighted and averaged to calculate the exact values of the head pose angles θ_x, θ_y, θ_z.
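A minimal sketch of the iteration in steps (3.1) to (3.4), using a brute-force nearest-neighbor search for step (3.1) and the standard SVD closed form for the optimal rigid transform in step (3.2); the SVD solver is a common ICP choice and an assumption here, as the text does not specify one:

```python
import numpy as np

def icp(Q, P, max_iter=50, tau=1e-6):
    """Align rigid point cloud Q (n x 3) to template point set P (m x 3).
    Returns the accumulated (R, t) minimizing the point-to-point error."""
    Q = np.asarray(Q, dtype=float).copy()
    P = np.asarray(P, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # (3.1) nearest template point for each cloud point.
        d = np.linalg.norm(Q[:, None, :] - P[None, :, :], axis=2)
        nearest = P[d.argmin(axis=1)]
        # (3.2) optimal rigid transform via SVD of the cross-covariance.
        mq, mp = Q.mean(axis=0), nearest.mean(axis=0)
        H = (Q - mq).T @ (nearest - mp)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mp - R @ mq
        # (3.3) update the cloud and accumulate the transform.
        Q = Q @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.linalg.norm(Q - nearest, axis=1) ** 2)
        # (3.4) stop once the error change drops below the threshold tau.
        if prev_err - err < tau:
            break
        prev_err = err
    return R_total, t_total
```

Running this once against the matched region's template and once against each neighbor's template yields the transformation matrices that step (4) then weights and averages.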
In one possible implementation, as shown in Fig. 7, the classification process of the visibility region in the above step S400 may include steps S410 to S440, as follows.
In S410, an eye appearance is constructed according to the feature points of the eye region, and the direct-view position of the pupil center is determined.
In the present embodiment, six feature points of the eye can be selected from the feature points according to the hexagonal model of the human eye, that is, the positions of six points on the eye edge as shown in Fig. 8, including the two eye corners and feature points on the upper and lower eyelid edges. The present embodiment is not limited to a hexagon; shapes such as an octagon that can describe the key features of the eye may also be used. The acquired feature points are connected in positional order to obtain a closed polygon, for example a hexagon. The center point of this closed polygon is then calculated, yielding the direct-view position B of the pupil center.
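As a sketch, the center of the closed eye polygon (interpreted here as the vertex centroid, one plausible reading of "center point") and a point-in-polygon test for later restricting the pupil search can be written as:

```python
import numpy as np

def eye_polygon_center(eye_points):
    """Vertex centroid of the closed eye polygon, used as the
    direct-view position B of the pupil center."""
    return np.asarray(eye_points, dtype=float).mean(axis=0)

def inside_polygon(point, polygon):
    """Ray-casting test: True if point lies inside the closed polygon.
    Can be used to restrict the pupil search range to the eye contour."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

The same two helpers cover both S410 (center point B) and the search-range restriction described in S420.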
In S420, the pupil center position is located according to the feature points within the range of the eye appearance.
After the closed hexagon is obtained, the range for detecting the pupil center position is limited to within this closed hexagon, which can speed up the localization.
Further, a gradient-based method can also be used for the localization, as follows.
Let C denote a possible pupil center and g_i the gradient vector at point x_i. The normalized displacement vector d_i has the same direction as the gradient vector g_i. Thus, the gradient vector field of the image can be exploited by computing the inner products between the normalized displacement vectors and the gradient vectors g_i, where x_i, i ∈ {1, ..., N}, are the pixel positions relative to the circular optimal center point c on the image. This calculation is given by the following formula:
Further, to reduce the time complexity of the algorithm, only the principal components of the gradient vectors can be considered, ignoring regions of uniform gradient. To obtain the gradients of the image, the partial derivatives are calculated:
Since the pupil is usually much darker than the skin and sclera, a weight w_c is assigned to each center point c, with darker center points weighted higher than brighter regions. The calculation can be expressed by the following formula:
where w_c = I*(c_x, c_y) is the gray value of the smoothed and inverted input image at point (c_x, c_y).
It should be noted that the image needs to first undergo a Gaussian smoothing operation, which can suppress some specular bright spots.
Through the above calculation, the position C of the pupil center can be obtained.
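The weighted gradient objective above resembles the well-known means-of-gradients approach to pupil localization. A brute-force sketch under that interpretation (the thresholding of weak gradients and the exact form of the weighting are illustrative assumptions) might be:

```python
import numpy as np

def pupil_center(gray):
    """Locate the pupil center as the point c maximizing
    w_c * (1/N) * sum_i (d_i . g_i)^2, where d_i is the unit vector from
    c to pixel x_i and g_i the unit image gradient at x_i. The input is
    assumed to be an already Gaussian-smoothed grayscale array."""
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)              # partial derivatives
    mag = np.hypot(gx, gy)
    # Ignore regions of uniform gradient: keep only significant gradients.
    ys, xs = np.nonzero(mag > mag.mean())
    gxu = gx[ys, xs] / mag[ys, xs]
    gyu = gy[ys, xs] / mag[ys, xs]
    # Darkness weight w_c: inverted gray value, so dark pupil pixels score higher.
    w = gray.max() - gray
    best, best_score = (0, 0), -1.0
    h, wid = gray.shape
    for cy in range(h):
        for cx in range(wid):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy)
            ok = norm > 0
            dot = (dx[ok] * gxu[ok] + dy[ok] * gyu[ok]) / norm[ok]
            score = w[cy, cx] * np.mean(dot ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best
```

In practice the candidate loop would run only over pixels inside the eye polygon rather than the whole image, as S420 describes.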
In S430, the offset of the pupil center position relative to the direct-view position is calculated according to the pupil center position and the direct-view position of the pupil center.
Illustratively, in Fig. 9, point A is the pupil center position and point B is the direct-view position of the pupil center; the vector from point B to point A is then the offset of the pupil center position A relative to the direct-view position B.
In S440, the offset and the eye appearance are input into a linear classifier to obtain the classification result of the driver's visibility region, where the observation range in which the driver observes the driver's cabin is divided into multiple visibility regions in advance.
In the present embodiment, different eye appearances, and different offsets relative to the same eye appearance, may correspond to different visibility regions. Training data can be established in advance and used to train a corresponding classifier, thereby establishing the mapping relationship between eye appearance and offset on the one hand and visibility region on the other. Thereafter, the classifier can be used to classify directly.
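As a sketch of the training and classification described here, using a least-squares one-vs-all linear classifier (the feature layout of eye-appearance coordinates concatenated with the offset, and the solver, are illustrative assumptions not fixed by the text):

```python
import numpy as np

def train_linear_classifier(features, labels, n_classes):
    """One-vs-all linear classifier fit by least squares.
    features: (n, d) rows of [eye-appearance coords..., offset_x, offset_y];
    labels: (n,) visibility-region indices in [0, n_classes)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    Y = np.eye(n_classes)[labels]                           # one-hot targets
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def classify_region(W, feature):
    """Return the visibility-region index with the highest linear score."""
    x = np.append(np.asarray(feature, dtype=float), 1.0)
    return int(np.argmax(x @ W))
```

Any linear classifier (e.g. an SVM) trained on the same (eye appearance, offset) features would fill the same role; the mapping it learns is exactly the eye-appearance/offset-to-region relationship described above.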
Alternatively, the mapping relationship between eye appearance, offset and visibility region can be established directly from measurement data; in the classification process, this mapping relationship can then be indexed to obtain the corresponding visibility region.
Referring to Fig. 10, which is a schematic diagram of one application example of the equipment for determining a visibility region provided in this embodiment. The equipment can track changes in the driver's gaze region and raise an automatic alarm when gaze deviation is detected. It includes a model trained by the method provided above, together with a main body 1, a transformer 2 and a power supply 3 for executing the method. The main body 1 integrates input/output devices and a processor.
The main body 1 includes a display screen 1.1, a depth camera 1.2 and an infrared camera 1.3. The display screen 1.1 is a touch screen supporting human-computer interaction; it handles input/output operations and can display the depth image captured by the camera or the gaze-point region after sight calibration. The fixed-focal-length depth camera 1.2 and the infrared camera 1.3 can be installed on the side of the main body 1 to acquire head and eye motion information of the driver. The cameras may be mounted on the main body, or installed separately, that is, connected to the main body 1 by cable with the camera mounted on the interior rear-view mirror. The cameras can also be installed at positions such as the top of the instrument panel or the top of the display panel of the automobile.
Through the joint operation of the depth camera and the infrared transmitter, a random speckle pattern is formed; the positions of the speckles are recorded by the infrared camera, and a 3D depth image is obtained after calculation by the processor.
The transformer 2 provides the 220V operating voltage for the depth camera.
The power supply 3 supplies power to this hardware structure independently of the control system of the automobile, and is used to start and shut down the hardware structure.
Multiple buttons 1.4 can be provided on the display screen 1.1. These may include a "set" button for setting the I/O mode and content, for example inputting the parameters of the camera (focal length, resolution, etc.); the gaze-point region can also be output and real-time tracking configured. A "switch" button may be included for switching the input and output content. A "mode" button may also be included for selecting among a calibration mode, an operational mode, an image information display mode, and the like. A "mute" button may further be included to control whether sound is emitted.
In addition, a loudspeaker 1.5 can also be mounted on the main body 1, doubling as an audible alarm unit for automatically raising an alarm when the driver's gaze region deviates from the predetermined region.
The information acquisition part of this hardware structure can be a depth camera with an infrared light source. The infrared light source ensures that the system can also be used normally at night or in the dark, and the infrared light is invisible to the driver and therefore does not affect driving.
Referring to Fig. 11, an embodiment of the present invention also provides an apparatus for determining a visibility region, comprising:
a face image acquisition module 100, configured to acquire a face image of a driver;
a feature point extraction module 200, configured to extract feature points of the face region and the eye region of the driver from the face image;
a head pose acquisition module 300, configured to track an initial picture sequence of the face image and iteratively calculate on the feature points of the face region, to obtain the head pose of the driver;
a visibility region estimation module 400, configured to estimate, according to the feature points of the eye region, the classification result of the visibility region observed by the driver; and
a visibility region correction module 500, configured to correct the classification result of the visibility region according to the head pose of the driver, and to take the corrected visibility region as the visibility region of the driver.
In one possible implementation, the face image includes a depth image and a color image; and the feature point extraction module includes:
a foreground region extraction unit, configured to extract a foreground region from the depth image;
a human face judging unit, configured to judge whether the foreground region includes a human face;
a face position locating unit, configured to locate the position of the human face in the depth image when the foreground region includes a human face; and
a face and eye feature extraction unit, configured to extract the feature points of the face region and the eye region from the position of the human face in the color image.
In one possible implementation, the head pose acquisition module includes:
a particle filter unit, configured to track the initial picture sequence of the face image to obtain a particle filter estimate of the head pose, where the particle filter estimate is used to estimate the head pose;
a current sight region determination unit, configured to determine, according to the particle filter estimate, the visibility region toward which the head pose is directed from among the visibility regions into which the observation range of the driver's cabin is divided; and
a head pose iteration unit, configured to iteratively calculate on the feature points of the face region, based on the visibility region toward which the head pose is directed and the particle filter estimate, to obtain the head pose.
In one possible implementation, the visibility region estimation module includes:
an eye appearance construction unit, configured to construct an eye appearance according to the feature points of the eye region, and to determine the direct-view position of the pupil center;
a pupil center position determination unit, configured to locate the pupil center position according to the feature points within the range of the eye appearance;
an offset calculation unit, configured to calculate the offset of the pupil center position relative to the direct-view position, according to the pupil center position and the direct-view position of the pupil center; and
a classification calculation unit, configured to input the offset and the eye appearance into a linear classifier to obtain the classification result of the visibility region of the driver, where the observation range in which the driver observes the driver's cabin is divided into multiple visibility regions in advance.
The functions of the apparatus can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the structure for determining a visibility region includes a processor and a memory, the memory being used to store a program that enables the apparatus for determining a visibility region to execute the method for determining a visibility region in the above first aspect, and the processor being configured to execute the program stored in the memory. The apparatus for determining a visibility region may also include a communication interface for communication between the apparatus and other devices or communication networks.
An embodiment of the present invention also provides a terminal device for determining a visibility region. As shown in Fig. 12, the device includes a memory 21 and a processor 22, the memory 21 storing a computer program executable on the processor 22. When the processor 22 executes the computer program, the method for determining a visibility region in the above embodiments is implemented. There may be one or more memories 21 and processors 22.
The device further includes:
a communication interface 23, for communication between the processor 22 and external devices.
The memory 21 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory.
If the memory 21, the processor 22 and the communication interface 23 are implemented independently, they can be interconnected and communicate with one another through a bus. The bus can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is used in Fig. 12, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 21, the processor 22 and the communication interface 23 are integrated on a single chip, the memory 21, the processor 22 and the communication interface 23 can communicate with one another through an internal interface.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art can combine the features of different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined as "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically limited.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, segment or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes other implementations, in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute the instructions). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
The computer-readable medium of the embodiments of the present invention can be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection portion (electronic device) having one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable storage medium can even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner when necessary, and then stored in a computer memory.
In the embodiments of the present invention, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code included on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, electric wire, optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
It should be appreciated that each part of the present invention can be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist physically alone, or two or more units can be integrated in one module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. The storage medium can be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various changes or replacements within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for determining a visibility region, characterized by comprising:
acquiring a face image of a driver;
extracting feature points of a face region and an eye region of the driver from the face image;
tracking an initial picture sequence of the face image, and iteratively calculating on the feature points of the face region, to obtain a head pose of the driver;
estimating, according to the feature points of the eye region, a classification result of the visibility region observed by the driver; and
correcting the classification result of the visibility region according to the head pose of the driver, and taking the corrected visibility region as the visibility region of the driver.
2. The method according to claim 1, characterized in that the face image includes a depth image and a color image; and the extracting of the feature points of the face region and the eye region of the driver from the face image comprises:
extracting a foreground region from the depth image;
judging whether the foreground region includes a human face;
when the foreground region includes a human face, locating the position of the human face in the depth image; and
extracting the feature points of the face region and the eye region from the position of the human face in the color image.
3. The method according to claim 1, characterized in that the tracking of the initial picture sequence of the face image, and the iterative calculation on the feature points of the face region, to obtain the head pose of the driver, comprises:
tracking the initial picture sequence of the face image to obtain a particle filter estimate of the head pose, wherein the particle filter estimate is used to estimate the head pose;
determining, according to the particle filter estimate, the visibility region toward which the head pose is directed, from among the visibility regions into which the observation range of the driver's cabin is divided; and
iteratively calculating on the feature points of the face region, based on the visibility region toward which the head pose is directed and the particle filter estimate, to obtain the head pose.
4. The method according to any one of claims 1 to 3, characterized in that the estimating, according to the feature points of the eye region, of the classification result of the visibility region observed by the driver comprises:
constructing an eye appearance according to the feature points of the eye region, and determining the direct-view position of the pupil center;
locating the pupil center position according to the feature points within the range of the eye appearance;
calculating the offset of the pupil center position relative to the direct-view position, according to the pupil center position and the direct-view position of the pupil center; and
inputting the offset and the eye appearance into a linear classifier to obtain the classification result of the visibility region of the driver, wherein the observation range in which the driver observes the driver's cabin is divided into multiple visibility regions in advance.
5. An apparatus for determining a visibility region, characterized by comprising:
a face image acquisition module, configured to acquire a face image of a driver;
a feature point extraction module, configured to extract feature points of a face region and an eye region of the driver from the face image;
a head pose acquisition module, configured to track an initial picture sequence of the face image, and to iteratively calculate on the feature points of the face region, to obtain a head pose of the driver;
a visibility region estimation module, configured to estimate, according to the feature points of the eye region, a classification result of the visibility region observed by the driver; and
a visibility region correction module, configured to correct the classification result of the visibility region according to the head pose of the driver, and to take the corrected visibility region as the visibility region of the driver.
6. The apparatus according to claim 5, characterized in that the face image includes a depth image and a color image; and the feature point extraction module includes:
a foreground region extraction unit, configured to extract a foreground region from the depth image;
a human face judging unit, configured to judge whether the foreground region includes a human face;
a face position locating unit, configured to locate the position of the human face in the depth image when the foreground region includes a human face; and
a face and eye feature extraction unit, configured to extract the feature points of the face region and the eye region from the position of the human face in the color image.
7. The device of claim 5, wherein the head pose acquisition module comprises:
a particle filter unit, configured to track the initial picture sequence of the face image to obtain a particle filter estimate of the head pose, wherein the particle filter estimate is used to estimate the head pose;
a current visibility region determination unit, configured to determine, according to the particle filter estimate, the visibility region toward which the head pose is directed, from among the visibility regions into which the observation scope of the driver's cabin is divided; and
a head pose iteration unit, configured to iteratively compute on the feature points of the face region, based on the visibility region toward which the head pose is directed and the particle filter estimate, to obtain the head pose.
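One predict/weight/resample cycle of the particle filter unit can be sketched as below. This is heavily simplified relative to the claim: it tracks head yaw only, in one dimension, and the Gaussian process noise and Gaussian-shaped weighting are assumptions rather than the patent's observation model.

```python
import math
import random

def particle_filter_yaw(observed_yaw, particles, noise_std=5.0):
    """One particle filter update over head-yaw particles (degrees).
    The weighted mean of the particles is the particle filter estimate
    of the head pose used by the downstream units."""
    # Predict: propagate each particle with process noise.
    moved = [p + random.gauss(0, noise_std) for p in particles]
    # Weight: particles consistent with the observed yaw score higher.
    weights = [math.exp(-((p - observed_yaw) ** 2) / (2 * noise_std ** 2))
               for p in moved]
    total = sum(weights)
    estimate = sum(w * p for w, p in zip(weights, moved)) / total
    # Resample proportionally to weight for the next frame.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return estimate, resampled
```

Because the prior particles and the observation disagree, the estimate lands between them, which is the behavior the iterative refinement of claim 7 relies on.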
8. The device of any one of claims 5 to 7, wherein the visibility region estimation module comprises:
an eye appearance construction unit, configured to construct an eye appearance according to the feature points of the eye region, and to determine the front-gaze position of the pupil center;
a pupil center location determination unit, configured to locate the pupil center location according to the feature points within the scope of the eye appearance;
an offset calculation unit, configured to calculate, according to the pupil center location and the front-gaze position of the pupil center, the offset of the pupil center location relative to the front-gaze position; and
a classification calculation unit, configured to input the offset and the eye appearance into a linear classifier to obtain the classification result of the visibility region of the driver, wherein the observation scope in which the driver observes the driver's cabin is divided into a plurality of visibility regions in advance.
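The pupil center location determination unit of claim 8 can be illustrated with one common heuristic: take the centroid of the darkest pixels inside the eye-appearance region. The grayscale list-of-rows patch format and the darkness threshold are assumptions; the patent itself specifies only that the location is derived from feature points within the eye appearance.

```python
def pupil_center(eye_patch, dark_threshold=60):
    """Centroid (row, col) of pixels darker than dark_threshold in a
    grayscale eye patch (list of rows of 0-255 intensities); the pupil
    is typically the darkest structure inside the eye region."""
    dark = [(r, c)
            for r, row in enumerate(eye_patch)
            for c, v in enumerate(row)
            if v < dark_threshold]
    if not dark:
        return None  # no pupil-like pixels found
    n = len(dark)
    return (sum(r for r, _ in dark) / n, sum(c for _, c in dark) / n)
```

The returned location, compared against the front-gaze position, yields the offset fed to the linear classifier.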
9. A terminal device for determining a visibility region, the terminal device comprising:
one or more processors; and
a storage device, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method of determining a visibility region of any one of claims 1 to 4.
10. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the method of determining a visibility region of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811046284.2A CN109145864A (en) | 2018-09-07 | 2018-09-07 | Determine method, apparatus, storage medium and the terminal device of visibility region |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109145864A true CN109145864A (en) | 2019-01-04 |
Family
ID=64823995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811046284.2A Pending CN109145864A (en) | 2018-09-07 | 2018-09-07 | Determine method, apparatus, storage medium and the terminal device of visibility region |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109145864A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008007781A1 (en) * | 2006-07-14 | 2008-01-17 | Panasonic Corporation | Visual axis direction detection device and visual line direction detection method |
CN102510480A (en) * | 2011-11-04 | 2012-06-20 | 大连海事大学 | Automatic calibrating and tracking system of driver sight line |
CN102830793A (en) * | 2011-06-16 | 2012-12-19 | 北京三星通信技术研究有限公司 | Sight tracking method and sight tracking device |
CN103942527A (en) * | 2013-01-18 | 2014-07-23 | 通用汽车环球科技运作有限责任公司 | Method for determining eye-off-the-road condition by using road classifier |
CN104766059A (en) * | 2015-04-01 | 2015-07-08 | 上海交通大学 | Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning |
CN104850228A (en) * | 2015-05-14 | 2015-08-19 | 上海交通大学 | Mobile terminal-based method for locking watch area of eyeballs |
CN104951808A (en) * | 2015-07-10 | 2015-09-30 | 电子科技大学 | 3D (three-dimensional) sight direction estimation method for robot interaction object detection |
CN105740846A (en) * | 2016-03-02 | 2016-07-06 | 河海大学常州校区 | Horizontal visual angle estimation and calibration method based on depth camera |
CN106598221A (en) * | 2016-11-17 | 2017-04-26 | 电子科技大学 | Eye key point detection-based 3D sight line direction estimation method |
CN106778687A (en) * | 2017-01-16 | 2017-05-31 | 大连理工大学 | Method for viewing points detecting based on local evaluation and global optimization |
CN106843492A (en) * | 2017-02-08 | 2017-06-13 | 大连海事大学 | A kind of many people's viewpoint calibration systems and method |
CN107193383A (en) * | 2017-06-13 | 2017-09-22 | 华南师范大学 | A kind of two grades of Eye-controlling focus methods constrained based on facial orientation |
CN107818310A (en) * | 2017-11-03 | 2018-03-20 | 电子科技大学 | A kind of driver attention's detection method based on sight |
CN107862246A (en) * | 2017-10-12 | 2018-03-30 | 电子科技大学 | A kind of eye gaze direction detection method based on various visual angles study |
2018-09-07 CN CN201811046284.2A patent/CN109145864A/en active Pending
Non-Patent Citations (4)
Title |
---|
Zhang, Bowen: "Driver Head Pose Analysis Based on Depth Maps", China Masters' Theses Full-text Database, Information Science and Technology *
Hong, Wenxue et al.: "Information Fusion and Pattern Recognition Techniques Based on Multivariate Statistical Graph Representation", 31 January 2008 *
Wen, Qingchuan et al.: "Calibration of a Gaze Tracking System Based on Binocular Stereo Vision", Acta Optica Sinica *
Hu, Fangqin: "Tracking On-Screen Regions of Interest Based on Gaze Detection", China Masters' Theses Full-text Database, Information Science and Technology *
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109808464A (en) * | 2019-01-24 | 2019-05-28 | 北京梧桐车联科技有限责任公司 | A kind of windshield light transmittance adjusting method and device |
CN109835260A (en) * | 2019-03-07 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | A kind of information of vehicles display methods, device, terminal and storage medium |
CN109835260B (en) * | 2019-03-07 | 2023-02-03 | 百度在线网络技术(北京)有限公司 | Vehicle information display method, device, terminal and storage medium |
CN110263641A (en) * | 2019-05-17 | 2019-09-20 | 成都旷视金智科技有限公司 | Fatigue detection method, device and readable storage medium storing program for executing |
CN110490235A (en) * | 2019-07-23 | 2019-11-22 | 武汉大学 | A kind of Vehicle Object view prediction towards 2D image and threedimensional model restoration methods and device |
CN110490235B (en) * | 2019-07-23 | 2021-10-22 | 武汉大学 | Vehicle object viewpoint prediction and three-dimensional model recovery method and device facing 2D image |
CN110414427A (en) * | 2019-07-26 | 2019-11-05 | Oppo广东移动通信有限公司 | Light measuring method and Related product based on eyeball tracking |
EP3789848A1 (en) * | 2019-09-05 | 2021-03-10 | Smart Eye AB | Determination of gaze direction |
WO2021043931A1 (en) * | 2019-09-05 | 2021-03-11 | Smart Eye Ab | Determination of gaze direction |
CN110645677A (en) * | 2019-10-24 | 2020-01-03 | 宁波奥克斯电气股份有限公司 | Lamp panel display method and device and air conditioner indoor unit |
CN111196536A (en) * | 2019-11-26 | 2020-05-26 | 恒大智慧科技有限公司 | Method, apparatus and storage medium for capacity-based control of elevators in intelligent community |
TWI753419B (en) * | 2020-02-28 | 2022-01-21 | 英華達股份有限公司 | Apparatus and method of achieving driving habit by sight tracking |
CN111402256B (en) * | 2020-04-13 | 2020-10-16 | 视研智能科技(广州)有限公司 | Three-dimensional point cloud target detection and attitude estimation method based on template |
CN111402256A (en) * | 2020-04-13 | 2020-07-10 | 视研智能科技(广州)有限公司 | Three-dimensional point cloud target detection and attitude estimation method based on template |
CN112799515A (en) * | 2021-02-01 | 2021-05-14 | 重庆金康赛力斯新能源汽车设计院有限公司 | Visual interaction method and system |
CN113361441A (en) * | 2021-06-18 | 2021-09-07 | 山东大学 | Sight line area estimation method and system based on head posture and space attention |
CN113361441B (en) * | 2021-06-18 | 2022-09-06 | 山东大学 | Sight line area estimation method and system based on head posture and space attention |
CN116704589A (en) * | 2022-12-01 | 2023-09-05 | 荣耀终端有限公司 | Gaze point estimation method, electronic device and computer readable storage medium |
CN116704589B (en) * | 2022-12-01 | 2024-06-11 | 荣耀终端有限公司 | Gaze point estimation method, electronic device and computer readable storage medium |
CN117984897A (en) * | 2024-04-02 | 2024-05-07 | 长城汽车股份有限公司 | Vehicle rearview mirror control method and device, vehicle and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109145864A (en) | Determine method, apparatus, storage medium and the terminal device of visibility region | |
CN109271914A (en) | Detect method, apparatus, storage medium and the terminal device of sight drop point | |
CN111414798B (en) | Head posture detection method and system based on RGB-D image | |
CN107230218B (en) | Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras | |
CN108229366B (en) | Deep learning vehicle-mounted obstacle detection method based on radar and image data fusion | |
CN108171673B (en) | Image processing method and device, vehicle-mounted head-up display system and vehicle | |
CN105574518B (en) | Method and device for detecting living human face | |
CN109299643B (en) | Face recognition method and system based on large-posture alignment | |
CN102830793B (en) | Sight tracing and equipment | |
CN104200192B (en) | Driver's gaze detection system | |
WO2019006760A1 (en) | Gesture recognition method and device, and movable platform | |
CN111144207B (en) | Human body detection and tracking method based on multi-mode information perception | |
US6757571B1 (en) | System and process for bootstrap initialization of vision-based tracking systems | |
CN109949375A (en) | A kind of mobile robot method for tracking target based on depth map area-of-interest | |
CN109255329A (en) | Determine method, apparatus, storage medium and the terminal device of head pose | |
CN108985210A (en) | A kind of Eye-controlling focus method and system based on human eye geometrical characteristic | |
CN108731587A (en) | A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model | |
CN110232389A (en) | A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance | |
CN108596087B (en) | Driving fatigue degree detection regression model based on double-network result | |
Pathak et al. | Fast 3D mapping by matching planes extracted from range sensor point-clouds | |
CN102930252A (en) | Sight tracking method based on neural network head movement compensation | |
CN108491810A (en) | Vehicle limit for height method and system based on background modeling and binocular vision | |
CN108021926A (en) | A kind of vehicle scratch detection method and system based on panoramic looking-around system | |
WO2023272453A1 (en) | Gaze calibration method and apparatus, device, computer-readable storage medium, system, and vehicle | |
CN111476077A (en) | Multi-view gait recognition method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |