CN109725721A - Human-eye positioning method and system for naked eye 3D display system - Google Patents

Human-eye positioning method and system for naked eye 3D display system

Info

Publication number
CN109725721A
CN109725721A
Authority
CN
China
Prior art keywords: face, human, region, eye, human eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811636813.4A
Other languages
Chinese (zh)
Other versions
CN109725721B (en)
Inventor
刘功琴
方勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yiweishi Technology Co Ltd
Original Assignee
Shanghai Yiweishi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yiweishi Technology Co Ltd
Priority to CN201811636813.4A
Publication of CN109725721A
Application granted
Publication of CN109725721B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a human-eye positioning method and system for a naked-eye 3D display system. The human-eye positioning method includes: a face training step; a face detection step, in which candidate face regions in the image are obtained using the classifier models trained in the face training step and the face locations are marked; an eye-region training step; a human-eye region estimation step, in which the face region obtained by the face detection step is extracted and the frontal and side face regression models trained in the eye-region training step are applied; a human-eye region correction step, in which the left and right eye regions are computed from the pixel position coordinates describing the eye regions obtained in the estimation step, and the regions of the left and right eyes are corrected; and an eyeball localization step, in which the left- and right-eye regions are extracted and the position of the eye center is obtained. The present invention improves the accuracy of eyeball localization, enhances the stability of a naked-eye 3D display system, and provides users with a better 3D viewing experience.

Description

Human-eye positioning method and system for naked eye 3D display system
Technical field
The invention belongs to the technical field of naked-eye 3D display and relates to a naked-eye 3D display system, in particular to a human-eye positioning method and system for a naked-eye 3D display system.
Background art
Naked-eye 3D display systems represent the future direction of three-dimensional display devices: they allow multiple people at multiple positions to view three-dimensional content without wearing glasses, matching the way people are used to watching movies and television. Accurately locating the position of the eyeballs plays a critical role in such a system.
Existing human-eye positioning methods for naked-eye 3D display devices have the following problems: first, the display effect degrades beyond the best viewing distance; second, the real-time performance is insufficient, so fast movement produces ghosting; third, eye detection fails when hair covers the eyebrows, the face is turned sideways, the user moves left and right or too fast, or the lighting is insufficient; fourth, multiple different eye positions are detected even when the user is still, interfering with the detection results. There are currently many eyeball localization methods, including localization with wearable devices, localization with the Hough transform, and gray-point detection. Although wearable devices can locate the eyeball positions relatively accurately, they are uncomfortable for the user and lack robustness; localization with the Hough transform or gray-point detection is inefficient and prone to missed or false localizations.
In view of this, there is an urgent need to design a human-eye positioning method that overcomes the above drawbacks of existing human-eye positioning methods.
Summary of the invention
The present invention provides a human-eye positioning method and system for a naked-eye 3D display system, which can improve the accuracy of eyeball localization, enhance the stability of the naked-eye 3D display system, and provide users with a better 3D viewing experience.
In order to solve the above technical problems, according to one aspect of the present invention, the following technical solution is adopted:
A human-eye positioning method for a naked-eye 3D display system, comprising the following steps:
Step S1: preprocess the real-time image obtained by the camera, including color-space conversion, normalization, and Gaussian denoising. The color-space conversion converts the RGB image obtained by the camera into LUV format. An LUV image is obtained from the CIE XYZ color space by a simple transformation and is perceptually uniform. RGB is first converted to CIE XYZ by the standard linear transform:
X = 0.4124·R + 0.3576·G + 0.1805·B
Y = 0.2126·R + 0.7152·G + 0.0722·B
Z = 0.0193·R + 0.1192·G + 0.9505·B
CIE XYZ is then converted to CIE LUV:
u* = 13L*·(u′ − u′_n)
v* = 13L*·(v′ − v′_n)
where (u′_n, v′_n) are the chromaticity coordinates of the reference white point.
Normalization adjusts the image size using cubic interpolation.
Gaussian denoising convolves the image with a Gaussian kernel to eliminate some of the noise present in the image.
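As an illustration of this preprocessing stage, the following is a minimal sketch using OpenCV; the kernel size and target resolution are illustrative assumptions, not values specified by the patent:

```python
import cv2

def preprocess(frame_bgr, target_size=(640, 480)):
    """Step S1 sketch: color-space conversion, normalization, Gaussian denoising."""
    # Color-space conversion: OpenCV converts BGR to CIE LUV via CIE XYZ internally.
    luv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2Luv)
    # Normalization: adjust the image size with cubic interpolation.
    luv = cv2.resize(luv, target_size, interpolation=cv2.INTER_CUBIC)
    # Gaussian denoising: convolve the image with a Gaussian kernel.
    return cv2.GaussianBlur(luv, (5, 5), 1.0)
```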
Step S2: obtain candidate face regions through training; compute Haar features from the HOG values and LUV component values inside these regions; feed them into the trained face detection models respectively to obtain the corresponding classification results; select the best classification result using non-maximum suppression; and mark the region corresponding to that result.
Obtaining candidate face regions through training means that, within a window (e.g., an 80×80-pixel region), many feature regions are obtained using Haar features; after these feature regions are processed by the face detection model, a portion of them is removed, and the regions that best represent facial characteristics are obtained.
The HOG values are the vertical and horizontal gradients of the image together with the corresponding magnitude information.
The LUV component values are the values of the individual channels of the LUV image.
The face detection model here consists of a series of AdaBoost classifiers; different models are obtained by training with different features.
So that the algorithm can also detect tilted faces, model training is additionally performed on data containing tilted faces, yielding a face detection model for tilted faces.
One region processed by multiple models yields multiple results; non-maximum suppression is applied to these results to obtain a single final result, and comparing this result with a threshold decides whether the region contains a face.
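The non-maximum suppression referred to above can be sketched as the standard greedy, overlap-based procedure below; the overlap threshold is an assumption, as the patent does not give one:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.3):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```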
Step S3: using the labeled positions of the eyes, eyebrows, nose, etc. in a public data set, train a regression model to obtain the regression model R1 for frontal faces.
For training, suppose the samples are {(I_i, Ŝ_i)}, i = 1, 2, …, N, where I_i is a training image and Ŝ_i its ground-truth shape.
Load the sample data, normalize the ground-truth shapes Ŝ_i of the N samples, and compute the mean shape S̄.
For the first-layer regressors R_t, t ∈ {1, 2, …, T}: for each regressor R_t, the current regressed shape of each sample over the N training images is S_i^(t−1) and the ground-truth shape of each sample is Ŝ_i. For each sample, taking S̄ as the reference, randomly select P pixels and compute the gray-value difference between every two of them; at the same time compute the difference between each sample's current shape and its target shape, ΔS_i = Ŝ_i − S_i^(t−1), and project it onto a random direction to obtain a scalar. Compute the correlation coefficients between the P(P−1)/2 gray-value differences and this variable, and select the pixel pairs with the largest correlation as shape-indexed features, i.e., the local coordinates of these pixel pairs.
For the second-layer regressors r_k, k ∈ {1, 2, …, K}: for each regressor r_k, randomly set a group of thresholds on the shape-indexed features obtained by the previous layer, dividing all training samples into 2^F classes (bins). The output of each fern bin is computed from the shape residuals of the samples falling into it, δS_b = Σ_{i∈Ω_b}(Ŝ_i − S_i) / (|Ω_b|·(1 + β/|Ω_b|)); the output of the bin a sample belongs to is then added to its current shape as the updated current shape, i.e., S_i ← S_i + δS_{b(i)}. Every time K second-layer regressors are completed, the first-layer index increases by 1.
The mean shape and the information corresponding to the T·K regressors obtained from training are stored in a model file.
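As a sketch of the second-layer fern regressor described above, under the standard explicit-shape-regression reading of the text (the shrinkage constant beta is an assumption):

```python
import numpy as np

def fern_train(features, residuals, thresholds, beta=1000.0):
    """One second-layer fern regressor (training pass).

    features:   (N, F) shape-indexed pixel-difference features
    residuals:  (N, D) shape residuals, ground truth minus current shape
    thresholds: (F,)   random split thresholds
    Returns the bin index of each sample and the 2**F bin outputs."""
    n, f = features.shape
    # Threshold the F features to drop each sample into one of 2**F bins.
    bins = ((features > thresholds) * (2 ** np.arange(f))).sum(axis=1)
    outputs = np.zeros((2 ** f, residuals.shape[1]))
    for b in range(2 ** f):
        members = residuals[bins == b]
        if len(members):
            # Shrinkage-regularized mean of this bin: sum / (|bin| + beta).
            outputs[b] = members.sum(axis=0) / (len(members) + beta)
    return bins, outputs
```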
Step S4: collect some side-face data, label the positions of the eyes, eyebrows, nose, etc., and train a regression model to obtain the regression model R2 for side faces.
The training method here is identical to step S3; the only difference between the two steps is the type of data they use.
Step S5: extract the face region detected in step S2 and apply the face regression models R1 and R2 trained in steps S3 and S4 to obtain the positions of the eyes, eyebrows, nose, etc. in the face region.
Load the trained regression models and randomly select initialization shapes.
For the first-layer regressors R_t, t ∈ {1, 2, …, T}, each regressor R_t consists of K second-layer regressors; for each second-layer regressor r_k, k ∈ {1, 2, …, K}, read the feature point positions and thresholds corresponding to the current regressor, compute the bin of each sample, and add the output δS_b of the corresponding bin to the current shape as the new current shape, i.e., S ← S + δS_b. The final result is obtained after completing all T·K regressors. Each initialization shape yields one output; to obtain a better output, these results are sorted and the mean of the middle values is taken as the final eye-region estimate.
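A sketch of this inference loop with multiple random initializations follows; run_cascade stands in for the evaluation of the T·K regressors and is a hypothetical helper, and the sorting criterion is one plausible reading of "sorting the results":

```python
import numpy as np

def estimate_shape(image, model, init_shapes, run_cascade):
    """Run the cascaded regressor from several initial shapes and average
    the middle-ranked outputs, as the text describes."""
    outputs = [run_cascade(image, model, s0) for s0 in init_shapes]
    # Sort the candidate shapes by a scalar summary (one plausible criterion)
    # and keep the middle half, discarding the extremes on both sides.
    outputs.sort(key=lambda s: float(np.mean(s)))
    quarter = len(outputs) // 4
    middle = outputs[quarter: len(outputs) - quarter] or outputs
    return np.mean(middle, axis=0)
```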
Step S6: compute the left and right eye regions from the feature points obtained in step S5; then correct the regions of the left and right eyes using the region-correction algorithm.
Obtain the positions Loc1 of the eyes, eyebrows, nose, etc. in the face region using regression model R1.
Obtain the positions Loc2 of the eyes, eyebrows, nose, etc. in the face region using regression model R2.
Judge, according to some prior knowledge, whether the eye-region positions Loc1 and Loc2 are plausible.
Compute the gradient corresponding to each pair of points in the eye region and analyze the gradient information of the entire eye region: compute from Loc1 and Loc2 the gradient values of the corresponding left-eye and right-eye regions and compare the two; if the difference exceeds some threshold, readjust the positions of the left- and right-eye regions; otherwise, output the left-eye and right-eye regions with the smaller difference.
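The left/right gradient-consistency check can be sketched as follows; the Sobel gradients and the threshold value are assumptions, since the patent fixes neither:

```python
import cv2
import numpy as np

def gradient_energy(gray, box):
    """Mean gradient magnitude inside a region box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    roi = gray[y1:y2, x1:x2].astype(np.float32)
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1)
    return float(np.mean(np.hypot(gx, gy)))

def check_eye_regions(gray, left_box, right_box, thresh=25.0):
    """Accept the pair when left/right gradient energies agree; otherwise
    signal that the regions need to be readjusted."""
    diff = abs(gradient_energy(gray, left_box) - gradient_energy(gray, right_box))
    return diff <= thresh  # True: accept; False: readjust the regions
```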
Step S7: extract the left- and right-eye regions and obtain the eye center positions using the eyeball localization algorithm.
Extract and preprocess the eye regions to remove interference noise, such as the influence of glasses.
Compute the vertical gradient, horizontal gradient, and gradient magnitude of the eye region.
Apply Gaussian blur to the eye region and use the value as a weight. The purpose of this is to emphasize the pixel values of the dark parts of the region and to weaken the gradients represented by the bright parts.
According to the formula
c* = argmax_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }
compute the position coordinates at which c* attains its maximum; this is the position of the eye center.
A human-eye positioning method for a naked-eye 3D display system, the human-eye positioning method comprising:
Step S1: preprocessing the real-time image obtained by the camera;
Step S2: obtaining candidate face regions through training; computing Haar feature values from the HOG values and LUV values of the pixels in the region; feeding them into the trained face detection models respectively for computation; obtaining the corresponding classification results; each time selecting the maximum value among them and marking the region corresponding to that value;
Step S3: using the labeled eye, eyebrow, and nose data in a public data set, training a regression model to obtain the regression model for frontal faces;
Step S4: collecting some side-face data with labeled eye, eyebrow, and nose data, and training a regression model to obtain the regression model for side faces;
Step S5: extracting the face region detected in step S2 and using the frontal and side face regression models trained in steps S3 and S4 to obtain the positions of the eyes, eyebrows, and nose in the face region;
Step S6: computing the left and right eye regions from the feature points obtained in step S5; then correcting the regions of the left and right eyes with the region-correction algorithm;
Step S7: extracting the left- and right-eye regions and obtaining the position of the eye center with the eye-center localization algorithm.
As one embodiment of the present invention, a face model can be computed for each feature; assuming n features are extracted and m scales are used, there are n·m models in total, and a suitable face model is output using non-maximum suppression.
As one embodiment of the present invention, in step S4, during training, the training samples are augmented to improve the robustness of the model: samples are drawn randomly from other labeled images, and the data points in the sampled images serve as the initial shape of each training sample.
For a test sample, multiple initializations are selected, yielding multiple regression results; these results are sorted, the middle ones are selected, and their mean is taken as the initial position.
As one embodiment of the present invention, in step S4: (1) for a part of the test data, say N samples, the face shapes are adjusted and normalized using statistical shape analysis; these data points are then clustered with the K-means method, and the cluster centers are used as the initial positions of the feature points in the face;
(2) some of the data sets used are randomly selected from existing test data sets and processed according to the method introduced in (1); this is executed Q times, yielding the feature point positions corresponding to Q groups of data sets;
(3) the regression model equation of the eye region is obtained by linearly combining the feature points obtained from the training in (1) and (2).
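One plausible reading of the K-means initialization in (1) is sketched below, assuming scikit-learn's KMeans and treating each normalized landmark set as one sample; the number of initial shapes is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_shapes(norm_shapes, n_init_shapes=5):
    """norm_shapes: (N, L, 2) landmark sets normalized by statistical
    shape analysis. Cluster the shape vectors with K-means and use each
    cluster center as one initial feature-point configuration."""
    flat = norm_shapes.reshape(len(norm_shapes), -1)       # (N, 2L)
    km = KMeans(n_clusters=n_init_shapes, n_init=10).fit(flat)
    return km.cluster_centers_.reshape(n_init_shapes, -1, 2)
```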
As one embodiment of the present invention, in step S6, the regression model trained on frontal face images is used to obtain the positions Loc1 of the eyes, eyebrows, and nose in the face region;
the regression model trained on side-face images is used to obtain the positions Loc2 of the eyes, eyebrows, and nose in the face region;
whether the eye-region positions Loc1 and Loc2 are plausible is judged according to some prior knowledge;
the gradient corresponding to each pair of points in the eye region is computed and the gradient information of the entire eye region is analyzed: the corresponding left-eye and right-eye regions are computed from Loc1 and Loc2, the gradient values of the pixels inside the regions are then computed, and the difference between the left-eye region and the right-eye region is compared; if the difference exceeds some threshold, the positions of the left- and right-eye regions are readjusted.
As one embodiment of the present invention, in step S7, the eye regions are extracted and preprocessed to remove interference noise;
the eye region is computed, including the vertical gradient, horizontal gradient, and gradient magnitude;
Gaussian blur is applied to the eye region to emphasize the pixel values of the dark parts of the region while weakening the gradients represented by the bright parts.
A human-eye positioning method for a naked-eye 3D display system, the method comprising:
a face training step: extracting image features from frontal face data and training with these features to obtain the face classifier model for frontal faces; extracting image features from side-face data and training with these features to obtain the face classifier model for side faces;
a face detection step: obtaining candidate face regions in the image using the classifier models trained in the face training step, and marking the face locations;
an eye-region training step: using the labeled positions of the eyes, eyebrows, and nose in a public data set, extracting shape-indexed features and training regression models to obtain the regression model for frontal faces and the regression model for side faces;
a human-eye region estimation step: extracting the face region obtained in the face detection step and using the frontal and side face regression models trained in the eye-region training step to obtain the positions of the key parts in the face region;
a human-eye region correction step: computing the left and right eye regions from the pixel position coordinates describing the eye regions obtained in the estimation step; correcting the regions of the left and right eyes;
an eyeball localization step: extracting the left- and right-eye regions and obtaining the position of the eye center.
A human-eye positioning method for a naked-eye 3D display system, the method comprising the following steps:
(1) a face detection step;
Face detection locates faces in an image or video and is commonly used in applications such as face recognition, face tracking, and expression recognition. The face detection process, shown in Fig. 2, includes the following steps:
preprocess the image captured by the camera to remove the influence of image noise, so that the image is better suited to the subsequent image processing;
after preprocessing, extract image features, including the HOG values and the three component values of the LUV color space, and use these values to compute the Haar feature values within the region;
feed the Haar feature values into multiple classifier models for classification, obtaining multiple classification results;
for the multiple classification results, apply non-maximum suppression and each time select the region with the maximum value;
if the maximum value is greater than some threshold, the corresponding region is considered a face region; otherwise it is judged to be a non-face region;
(2) a human-eye region estimation step;
Eye-region estimation finds the positions of the left eye and the right eye within the face region;
the eye positions relative to the face differ greatly between frontal faces and side faces; to improve the robustness of the method of the invention, two regression models R1 and R2 are trained for the two different types of face data (frontal faces and side faces);
S_i: (p_1, p_2, …, p_{n−1}, p_n), n = 21, p_k = (x, y)
R: (R_1, R_2, …, R_t, …, R_T), t = 1, 2, …, T
where R denotes the regression model, R_t denotes the t-th regressor in the regression model, S_i denotes the set of eye-related data points of the i-th test sample, and each data point p_k is described by its x and y coordinates;
R_t(I, S_i^{t−1}) denotes the shape increment computed by regressor R_t from the input sample image I and the previous shape S_i^{t−1}, so that S_i^t = S_i^{t−1} + R_t(I, S_i^{t−1});
for the face region obtained by the face detection method, two groups of eye position coordinates are obtained using regression models R1 and R2 respectively;
the max–min method is applied to the eye position coordinates: taking the minimum of the top and left boundaries and the maximum of the bottom and right boundaries over all detected eye position coordinates yields two groups of candidate eye regions;
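The max–min computation of an eye region from a group of detected coordinates reads directly as the following sketch:

```python
def eye_region_from_points(points):
    """points: iterable of (x, y) eye-related landmark coordinates.
    Returns the bounding box (left, top, right, bottom): the minimum of
    the top/left boundaries and the maximum of the bottom/right boundaries."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```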
(3) a human-eye region correction step;
Since the result obtained by the eye-region estimation may contain only part of the eye, the eye regions need to be corrected;
the eye-region correction steps are as follows:
obtain the positions of the eyes, eyebrows, nose, etc. in the face region using regression model R1, and compute the left-eye region eyeLeft1 and the right-eye region eyeRight1;
obtain the positions of the eyes, eyebrows, nose, etc. in the face region using regression model R2, and compute the left-eye region eyeLeft2 and the right-eye region eyeRight2;
judge, according to some prior knowledge, whether the eye regions eyeLeft1, eyeRight1, eyeLeft2, and eyeRight2 are plausible; if not, discard the region directly; otherwise, perform gradient-information analysis on the region;
the gradient-information analysis compares the gradient difference between the corresponding left-eye and right-eye regions: if the difference is very large, the left-eye and right-eye region positions are problematic and need to be readjusted; otherwise, no further processing is done and the corresponding left-eye and right-eye regions are used directly;
(4) an eyeball localization step;
Eyeball localization comprises the following steps:
extract and preprocess the eye regions to remove interference noise, such as the influence of glasses;
compute the vertical gradient, horizontal gradient, and gradient magnitude of the eye region;
apply Gaussian blur to the eye region, invert the result, and use the value as the weight w. The purpose of this is to emphasize the pixel values of the dark parts of the region and to weaken the gradients represented by the bright parts;
use the maximum-search algorithm, i.e., the formula
c* = argmax_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }
and compute the position coordinates at which c* attains its maximum; this is the position of the eye center;
d_i denotes the normalized displacement vector pointing from the candidate center to the i-th pixel in the image; x_i denotes the position coordinates of the i-th pixel in the image; c denotes the assumed position of the eye center; g_i denotes the gradient value at x_i; w_c denotes the weight at the candidate center position c; N is the total number of pixels in the image.
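A brute-force sketch of this objective, in the spirit of gradient-based eye-center localization, follows; the blur size and the gradient-magnitude filter are assumptions:

```python
import cv2
import numpy as np

def locate_eye_center(eye_gray):
    """Return the candidate (x, y) maximizing (1/N) * sum_i w_c * (d_i . g_i)^2."""
    eye = eye_gray.astype(np.float32)
    gx = cv2.Sobel(eye, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(eye, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    strong = mag > mag.mean()                 # keep significant gradients only
    ux, uy = gx[strong] / mag[strong], gy[strong] / mag[strong]
    ys, xs = np.nonzero(strong)
    # Inverted Gaussian blur as the weight w: dark pixels (pupil) score higher.
    w = 255.0 - cv2.GaussianBlur(eye, (5, 5), 0)
    best, center = -1.0, (0, 0)
    for cy in range(eye.shape[0]):
        for cx in range(eye.shape[1]):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9    # normalized displacement d_i
            dot = (dx / norm) * ux + (dy / norm) * uy
            score = w[cy, cx] * np.mean(dot ** 2)
            if score > best:
                best, center = score, (cx, cy)
    return center
```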
A human-eye positioning system for a naked-eye 3D display system, the system comprising:
a face training module for extracting image features from frontal face data and training with these features to obtain the face classifier model for frontal faces, and for extracting image features from side-face data and training with these features to obtain the face classifier model for side faces;
a face detection module for obtaining candidate face regions in the image using the classifier models trained by the face training module, and marking the face locations;
an eye-region training module for extracting shape-indexed features from the labeled positions of the eyes, eyebrows, and nose in a public data set and training regression models to obtain the regression model for frontal faces and the regression model for side faces;
a human-eye region estimation module for extracting the face region obtained by the face detection module and using the frontal and side face regression models trained by the eye-region training module to obtain the positions of the key parts in the face region;
a human-eye region correction module for computing the left and right eye regions from the pixel position coordinates describing the eye regions obtained by the estimation module, and correcting the regions of the left and right eyes;
an eyeball localization module for extracting the left- and right-eye regions and obtaining the position of the eye center.
The present invention solves the following problems of existing human-eye positioning methods for naked-eye 3D display devices: first, the display effect degrades beyond the best viewing distance; second, the real-time performance is insufficient, so fast movement produces ghosting; third, eye detection fails when hair covers the eyebrows, the face is turned sideways, the user moves left and right or too fast, or the lighting is insufficient; fourth, multiple different eye positions are detected even when the user is still, interfering with the detection results.
The beneficial effects of the present invention are: the human-eye positioning method and system for a naked-eye 3D display system proposed by the present invention solve the problems of eyeball localization in existing naked-eye 3D display devices, namely poor display beyond the best viewing distance; eye detection errors caused by hair covering the eyebrows, side faces, left-right movement, overly fast movement, or insufficient light; and the detection of multiple different eye positions for a still user, which interferes with the detection results. The present invention can accurately locate the eyeball positions of frontal, side, and tilted faces, improves the stability of the naked-eye 3D display system, and provides users with a better 3D viewing experience.
Brief description of the drawings
Fig. 1 is a flowchart of the human-eye positioning method for a naked-eye 3D display system in one embodiment of the invention.
Fig. 2 is a flowchart of the face detection process of the human-eye positioning method in one embodiment of the invention.
Fig. 3 is a flowchart of the eye-region estimation and correction process of the human-eye positioning method in one embodiment of the invention.
Fig. 4 is a flowchart of the eye-center localization process of the human-eye positioning method in one embodiment of the invention.
Fig. 5 is a schematic diagram of the human-eye positioning system for a naked-eye 3D display system in one embodiment of the invention.
Fig. 6 is a schematic diagram used to describe the eye-region pixels in one embodiment of the invention.
Detailed description of the embodiments
The preferred embodiments of the invention will now be described in detail with reference to the accompanying drawings.
For a further understanding of the present invention, preferred embodiments of the invention are described below; it should be appreciated, however, that these descriptions only further explain the features and advantages of the present invention and do not limit the claims of the present invention.
The description in this part covers only several typical embodiments, and the present invention is not limited to the scope described by these embodiments. Mutual replacement of the same or similar prior-art means and of some technical features in the embodiments is also within the scope protected by the present invention.
The present invention discloses a human-eye positioning method for a naked-eye 3D display system. Fig. 1 is a flowchart of the human-eye positioning method for a naked-eye 3D display system in one embodiment of the invention. Referring to Fig. 1, in one embodiment of the invention, the human-eye positioning method includes:
a face training step: extracting image features from frontal face data and training with these features to obtain the face classifier model for frontal faces; extracting image features from side-face data and training with these features to obtain the face classifier model for side faces;
a face detection step: obtaining candidate face regions in the image using the classifier models trained in the face training step, and marking the face locations;
an eye-region training step: using the labeled positions of the eyes, eyebrows, and nose in a public data set, extracting shape-indexed features and training regression models to obtain the regression model for frontal faces and the regression model for side faces;
a human-eye region estimation step: extracting the face region obtained in the face detection step and using the frontal and side face regression models trained in the eye-region training step to obtain the positions of the key parts in the face region;
a human-eye region correction step: using the pixel position coordinates describing the eye regions obtained in the estimation step (Fig. 6 is a schematic diagram of the eye-region pixels in one embodiment of the invention; as shown in Fig. 6, the pixel position coordinates may be, for example, (x1, y1), (x2, y2), i.e., the position corresponding to each point in Fig. 6), computing the left and right eye regions; correcting the regions of the left and right eyes;
an eyeball localization step: extracting the left- and right-eye regions and obtaining the position of the eye center.
Fig. 2 is a flowchart of the face detection process of the human-eye positioning method in one embodiment of the invention; Fig. 3 is a flowchart of the eye-region estimation and correction process of the human-eye positioning method in one embodiment of the invention; Fig. 4 is a flowchart of the eye-center localization process of the human-eye positioning method in one embodiment of the invention. Referring to Figs. 2 to 4, in one embodiment of the invention, the human-eye positioning method for a naked-eye 3D display system includes a face detection step, a human-eye region estimation step, a human-eye region correction step, and an eyeball localization step; the detailed process of each step is introduced below.
[Face detection step]
Face detection locates faces in an image or video and is commonly used in applications such as face recognition, face tracking, and expression recognition. The face detection process, shown in Fig. 2, includes the following steps:
preprocess the image captured by the camera to remove the influence of image noise, so that the image is better suited to the subsequent image processing;
after preprocessing, extract image features, including the HOG values and the three component values of the LUV color space, and use these values to compute the Haar feature values within the region;
feed the Haar feature values into multiple classifier models for classification, obtaining multiple classification results;
for the multiple classification results, apply non-maximum suppression and each time select the region with the maximum value;
if the maximum value is greater than some threshold, the corresponding region is considered a face region; otherwise it is judged to be a non-face region.
[Human-eye region estimation step]
Eye-region estimation finds the positions of the left eye and the right eye within the face region;
the eye positions relative to the face differ greatly between frontal faces and side faces; to improve the robustness of the method of the invention, two regression models R1 and R2 are trained for the two different types of face data (frontal faces and side faces);
S_i: (p_1, p_2, …, p_{n−1}, p_n), n = 21, p_k = (x, y)
R: (R_1, R_2, …, R_t, …, R_T), t = 1, 2, …, T
where R denotes the regression model, R_t denotes the t-th regressor in the regression model, S_i denotes the set of eye-related data points of the i-th test sample, and each data point p_k is described by its x and y coordinates;
R_t(I, S_i^{t−1}) denotes the shape increment computed by regressor R_t from the input sample image I and the previous shape S_i^{t−1}, so that S_i^t = S_i^{t−1} + R_t(I, S_i^{t−1});
for the face region obtained by the face detection method, two groups of eye position coordinates are obtained using regression models R1 and R2 respectively;
the max–min method is applied to the eye position coordinates: taking the minimum of the top and left boundaries and the maximum of the bottom and right boundaries over all detected eye position coordinates yields two groups of candidate eye regions.
[Human-eye region correction step]
Since the result obtained by the eye-region estimation may contain only part of the eye, the eye regions need to be corrected;
the eye-region correction steps are as follows:
obtain the positions of the eyes, eyebrows, nose, etc. in the face region using regression model R1, and compute the left-eye region eyeLeft1 and the right-eye region eyeRight1;
obtain the positions of the eyes, eyebrows, nose, etc. in the face region using regression model R2, and compute the left-eye region eyeLeft2 and the right-eye region eyeRight2;
judge, according to some prior knowledge, whether the eye regions eyeLeft1, eyeRight1, eyeLeft2, and eyeRight2 are plausible; if not, discard the region directly; otherwise, perform gradient-information analysis on the region;
the gradient-information analysis compares the gradient difference between the corresponding left-eye and right-eye regions: if the difference is very large, the left-eye and right-eye region positions are problematic and need to be readjusted; otherwise, no further processing is done and the corresponding left-eye and right-eye regions are used directly.
[Eyeball localization step]
Eyeball localization comprises the following steps:
extract and preprocess the eye regions to remove interference noise, such as the influence of glasses;
compute the vertical gradient, horizontal gradient, and gradient magnitude of the eye region;
apply Gaussian blur to the eye region, invert the result, and use the value as the weight w. The purpose of this is to emphasize the pixel values of the dark parts of the region and to weaken the gradients represented by the bright parts;
use the maximum-search algorithm, i.e., the formula
c* = argmax_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }
and compute the position coordinates at which c* attains its maximum; this is the position of the eye center.
d_i denotes the normalized displacement vector pointing from the candidate center to the i-th pixel in the image; x_i denotes the position coordinates of the i-th pixel in the image; c denotes the assumed position of the eye center; g_i denotes the gradient value at x_i; w_c denotes the weight at the candidate center position c; N is the total number of pixels in the image.
In one embodiment of the invention, the human-eye positioning method for a naked-eye 3D display system comprises the following steps:
Step S1: preprocess the real-time image obtained by the camera, including color-space conversion, normalization, and Gaussian denoising. The color-space conversion converts the RGB image obtained by the camera into LUV format. An LUV image is obtained from the CIE XYZ color space by a simple transformation and is perceptually uniform. RGB is first converted to CIE XYZ by the standard linear transform:
X = 0.4124·R + 0.3576·G + 0.1805·B
Y = 0.2126·R + 0.7152·G + 0.0722·B
Z = 0.0193·R + 0.1192·G + 0.9505·B
CIE XYZ is then converted to CIE LUV:
u* = 13L*·(u′ − u′_n)
v* = 13L*·(v′ − v′_n)
where (u′_n, v′_n) are the chromaticity coordinates of the reference white point.
Normalization adjusts the image size using cubic interpolation.
Gaussian denoising convolves the image with a Gaussian kernel to eliminate some of the noise present in the image.
Step S2: obtain candidate face regions through training; compute Haar features from the HOG values and LUV component values inside these regions; feed them into the trained face detection models respectively to obtain the corresponding classification results; select the best classification result using non-maximum suppression; and mark the region corresponding to that result.
Obtaining candidate face regions through training means that, within a window (e.g., an 80×80-pixel region), many feature regions are obtained using Haar features; after these feature regions are processed by the face detection model, a portion of them is removed, and the regions that best represent facial characteristics are obtained.
The HOG values are the vertical and horizontal gradients of the image together with the corresponding magnitude information.
The LUV component values are the values of the individual channels of the LUV image.
The face detection model here consists of a series of AdaBoost classifiers; different models are obtained by training with different features.
So that the algorithm can also detect tilted faces, model training is additionally performed on data containing tilted faces, yielding a face detection model for tilted faces.
One region processed by multiple models yields multiple results; non-maximum suppression is applied to these results to obtain a single final result, and comparing this result with a threshold decides whether the region contains a face.
Step S3: using the labeled positions of the eyes, eyebrows, nose, etc. in a public data set, train a regression model to obtain the regression model R1 for frontal faces.
For training, suppose the samples are {(I_i, Ŝ_i)}, i = 1, 2, …, N, where I_i is a training image and Ŝ_i its ground-truth shape.
Load the sample data, normalize the ground-truth shapes Ŝ_i of the N samples, and compute the mean shape S̄.
For the first-layer regressors R_t, t ∈ {1, 2, …, T}: for each regressor R_t, the current regressed shape of each sample over the N training images is S_i^(t−1) and the ground-truth shape of each sample is Ŝ_i. For each sample, taking S̄ as the reference, randomly select P pixels and compute the gray-value difference between every two of them; at the same time compute the difference between each sample's current shape and its target shape, ΔS_i = Ŝ_i − S_i^(t−1), and project it onto a random direction to obtain a scalar. Compute the correlation coefficients between the P(P−1)/2 gray-value differences and this variable, and select the pixel pairs with the largest correlation as shape-indexed features, i.e., the local coordinates of these pixel pairs.
For the second-layer regressors r_k, k ∈ {1, 2, …, K}: for each regressor r_k, randomly set a group of thresholds on the shape-indexed features obtained by the previous layer, dividing all training samples into 2^F classes (bins). The output of each fern bin is computed from the shape residuals of the samples falling into it, δS_b = Σ_{i∈Ω_b}(Ŝ_i − S_i) / (|Ω_b|·(1 + β/|Ω_b|)); the output of the bin a sample belongs to is then added to its current shape as the updated current shape, i.e., S_i ← S_i + δS_{b(i)}. Every time K second-layer regressors are completed, the first-layer index increases by 1.
The mean shape and the information corresponding to the T·K regressors (e.g., the regression output information) obtained from training are stored in a model file.
Step S4: collect some side-face data, label the positions of the eyes, eyebrows, nose, etc., and train a regression model to obtain the regression model R2 for side faces.
The training method here is identical to step S3; the only difference between the two steps is the type of data they use.
Step S5: extract the face region detected in step S2 and apply the face regression models R1 and R2 trained in steps S3 and S4 to obtain the positions of the eyes, eyebrows, nose, etc. in the face region.
Load the trained regression models and randomly select initialization shapes.
For the first-layer regressors R_t, t ∈ {1, 2, …, T}, each regressor R_t consists of K second-layer regressors; for each second-layer regressor r_k, k ∈ {1, 2, …, K}, read the feature point positions and thresholds corresponding to the current regressor, compute the bin of each sample, and add the output δS_b of the corresponding bin to the current shape as the new current shape, i.e., S ← S + δS_b. The final result is obtained after completing all T·K regressors. Each initialization shape yields one output; to obtain a better output, these results are sorted and the mean of the middle values is taken as the final eye-region estimate.
Step S6: compute the left and right eye regions from the feature points obtained in step S5; then correct the regions of the left and right eyes using the region-correction algorithm.
Obtain the positions Loc1 of the eyes, eyebrows, nose, etc. in the face region using regression model R1.
Obtain the positions Loc2 of the eyes, eyebrows, nose, etc. in the face region using regression model R2.
Judge, according to some prior knowledge, whether the eye-region positions Loc1 and Loc2 are plausible.
Compute the gradient corresponding to each pair of points in the eye region and analyze the gradient information of the entire eye region: compute from Loc1 and Loc2 the gradient values of the corresponding left-eye and right-eye regions and compare the two; if the difference exceeds some threshold, readjust the positions of the left- and right-eye regions; otherwise, output the left-eye and right-eye regions with the smaller difference.
Step S7: extract the left- and right-eye regions and obtain the eye center positions using the eyeball localization algorithm.
Extract and preprocess the eye regions to remove interference noise, such as the influence of glasses.
Compute the vertical gradient, horizontal gradient, and gradient magnitude of the eye region.
Apply Gaussian blur to the eye region and use the value as a weight. The purpose of this is to emphasize the pixel values of the dark parts of the region and to weaken the gradients represented by the bright parts.
According to the formula
c* = argmax_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }
compute the position coordinates at which c* attains its maximum; this is the position of the eye center.
d_i denotes the normalized displacement vector pointing from the candidate center to the i-th pixel in the image; x_i denotes the position coordinates of the i-th pixel in the image; c denotes the assumed position of the eye center; g_i denotes the gradient value at x_i; w_c denotes the weight at the candidate center position c; N is the total number of pixels in the image.
The present invention also discloses a human-eye positioning system for a naked-eye 3D display system, the system comprising: a face training module 1, a face detection module 2, an eye-region training module 3, a human-eye region estimation module 4, a human-eye region correction module 5, and an eyeball localization module 6.
The face training module 1 extracts image features from frontal face data and trains with these features to obtain the face classifier model for frontal faces; it also extracts image features from side-face data and trains with these features to obtain the face classifier model for side faces.
The face detection module 2 obtains candidate face regions in the image using the classifier models trained by the face training module, and marks the face locations.
The eye-region training module 3 extracts shape-indexed features from the labeled positions of the eyes, eyebrows, and nose in a public data set and trains regression models to obtain the regression model for frontal faces and the regression model for side faces.
The human-eye region estimation module 4 extracts the face region obtained by the face detection module and uses the frontal and side face regression models trained by the eye-region training module to obtain the positions of the key parts in the face region.
The human-eye region correction module 5 computes the left and right eye regions from the pixel position coordinates describing the eye regions obtained by the estimation module, and corrects the regions of the left and right eyes.
The eyeball localization module 6 extracts the left- and right-eye regions and obtains the position of the eye center.
For the composition and implementation of each module above, refer to the descriptions of the corresponding processing flows in the embodiments above; they are not repeated here.
In conclusion the human-eye positioning method and system proposed by the present invention for naked eye 3D display system, solves existing There is Ins location in naked eye 3D display device existing poor more than best observed range display effect;Hair blocks eyebrow, side face, a left side It moves right, move too fast, insufficient light and detect multiple and different human eyes when will appear human eye detection mistake and motionless people The problems such as position, interference detection results.The present invention can be oriented accurately in a front surface and a side surface face and inclination face Eyeball position, promoted naked eye 3D display system stability, provide the user with better 3D viewing experience.
Description and application of the invention herein are illustrative, is not wishing to limit the scope of the invention to above-described embodiment In.The deformation and change of embodiments disclosed herein are possible, the realities for those skilled in the art The replacement and equivalent various parts for applying example are well known.It should be appreciated by the person skilled in the art that not departing from the present invention Spirit or essential characteristics in the case where, the present invention can in other forms, structure, arrangement, ratio, and with other components, Material and component are realized.Without departing from the scope and spirit of the present invention, can to embodiments disclosed herein into The other deformations of row and change.

Claims (10)

1. A human-eye positioning method for a naked-eye 3D display system, characterized in that the positioning method comprises the following steps:
Step S1: preprocessing the real-time image obtained by the camera, including color-space conversion, normalization, and Gaussian denoising; the color-space conversion converts the RGB image obtained by the camera into LUV format; an LUV image is obtained from the CIE XYZ color space by a simple transformation and is perceptually uniform; RGB is first converted to CIE XYZ by the standard linear transform:
X = 0.4124·R + 0.3576·G + 0.1805·B
Y = 0.2126·R + 0.7152·G + 0.0722·B
Z = 0.0193·R + 0.1192·G + 0.9505·B
and CIE XYZ is then converted to CIE LUV:
u* = 13L*·(u′ − u′_n)
v* = 13L*·(v′ − v′_n)
where (u′_n, v′_n) are the chromaticity coordinates of the reference white point;
normalization adjusts the image size using cubic interpolation;
Gaussian denoising convolves the image with a Gaussian kernel to eliminate some of the noise present in the image;
Step S2: obtaining candidate face regions through training; computing Haar features from the HOG values and LUV component values inside these regions; feeding them into the trained face detection models respectively to obtain the corresponding classification results; selecting the best classification result using non-maximum suppression; and marking the region corresponding to that result;
obtaining candidate face regions through training means that, within a window region, many feature regions are obtained using Haar features; after these feature regions are processed by the face detection model, a portion of them is removed, and the regions that best represent facial characteristics are obtained;
the HOG values are the vertical and horizontal gradients of the image together with the corresponding magnitude information;
the LUV component values are the values of the individual channels of the LUV image;
the face detection model consists of a series of AdaBoost classifiers, and different models are obtained by training with different features;
in order to enable the algorithm to detect tilted faces, model training is also performed on data containing tilted faces, yielding a face detection model for tilted faces;
one region processed by multiple models yields multiple results; non-maximum suppression is applied to these results to obtain a single final result, and comparing this result with a threshold decides whether the region contains a face;
Step S3: using the labeled positions of the eyes, eyebrows, and nose in a public data set, training a regression model to obtain the regression model R1 for frontal faces;
for training, suppose the samples are {(I_i, Ŝ_i)}, i = 1, 2, …, N, where I_i is a training image and Ŝ_i its ground-truth shape;
load the sample data, normalize the ground-truth shapes Ŝ_i of the N samples, and compute the mean shape S̄;
for the first-layer regressors R_t, t ∈ {1, 2, …, T}: for each regressor R_t, the current regressed shape of each sample over the N training images is S_i^(t−1) and the ground-truth shape of each sample is Ŝ_i; for each sample, taking S̄ as the reference, randomly select P pixels and compute the gray-value difference between every two of them; at the same time compute the difference between each sample's current shape and its target shape, ΔS_i = Ŝ_i − S_i^(t−1), and project it onto a random direction to obtain a scalar; compute the correlation coefficients between the P(P−1)/2 gray-value differences and this variable, and select the pixel pairs with the largest correlation as shape-indexed features, i.e., the local coordinates of these pixel pairs;
for the second-layer regressors r_k, k ∈ {1, 2, …, K}: for each regressor r_k, randomly set a group of thresholds on the shape-indexed features obtained by the previous layer, dividing all training samples into 2^F classes (bins); compute the output of each fern bin from the shape residuals of the samples falling into it, δS_b = Σ_{i∈Ω_b}(Ŝ_i − S_i) / (|Ω_b|·(1 + β/|Ω_b|)); then add to the current shape of each sample the output of the bin it belongs to as the updated current shape, i.e., S_i ← S_i + δS_{b(i)}; every time K second-layer regressors are completed, the first-layer index increases by 1;
the mean shape and the information corresponding to the T·K regressors obtained from training are stored in a model file;
Step S4: collecting some side-face data, labeling the positions of the eyes, eyebrows, and nose, and training a regression model to obtain the regression model R2 for side faces;
the training method here is identical to step S3; the only difference between the two steps is the type of data they use;
Step S5: extracting the face region detected in step S2 and applying the face regression models R1 and R2 trained in steps S3 and S4 to obtain the positions of the eyes, eyebrows, nose, etc. in the face region;
load the trained regression models and randomly select initialization shapes;
for the first-layer regressors R_t, t ∈ {1, 2, …, T}, each regressor R_t consists of K second-layer regressors; for each second-layer regressor r_k, k ∈ {1, 2, …, K}, read the feature point positions and thresholds corresponding to the current regressor, compute the bin of each sample, and add the output δS_b of the corresponding bin to the current shape as the new current shape, i.e., S ← S + δS_b; the final result is obtained after completing all T·K regressors; each initialization shape yields one output; to obtain a better output, these results are sorted and the mean of the middle values is taken as the final eye-region estimate;
Step S6: computing the left and right eye regions from the feature points obtained in step S5; then correcting the regions of the left and right eyes with the region algorithm;
obtain the first positions Loc1 of the eyes, eyebrows, and nose in the face region using regression model R1;
obtain the second positions Loc2 of the eyes, eyebrows, and nose in the face region using regression model R2;
judge, according to some prior knowledge, whether the eye-region positions Loc1 and Loc2 are plausible;
compute the gradient corresponding to each pair of points in the eye region and analyze the gradient information of the entire eye region: compute from the first positions Loc1 and the second positions Loc2 the gradient values of the corresponding left-eye and right-eye regions and compare the two; if the difference exceeds some threshold, readjust the positions of the left- and right-eye regions; otherwise, output the left-eye and right-eye regions with the smaller difference;
Step S7: extracting the left- and right-eye regions and obtaining the eye center positions using the eyeball localization algorithm;
extract and preprocess the eye regions to remove interference noise;
compute the vertical gradient, horizontal gradient, and gradient magnitude of the eye region;
apply Gaussian blur to the eye region and use the value as a weight, so as to emphasize the pixel values of the dark parts of the region and weaken the gradients represented by the bright parts;
according to the formula
c* = argmax_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }
compute the position coordinates at which c* attains its maximum, which is the position of the eye center;
d_i denotes the normalized displacement vector pointing from the candidate center to the i-th pixel in the image; x_i denotes the position coordinates of the i-th pixel in the image; c denotes the assumed position of the eye center; g_i denotes the gradient value at x_i; w_c denotes the weight at the candidate center position c; N is the total number of pixels in the image.
2. A human-eye positioning method for a naked-eye 3D display system, characterized in that the human-eye positioning method comprises:
Step S1: preprocessing the real-time image obtained by the camera;
Step S2: obtaining candidate face regions through training; computing Haar feature values from the HOG values and LUV values of the pixels in the region; feeding them into the trained face detection models respectively for computation; obtaining the corresponding classification results; each time selecting the maximum value among them and marking the region corresponding to that value;
Step S3: using the labeled eye, eyebrow, and nose data in a public data set, training a regression model to obtain the regression model for frontal faces;
Step S4: collecting some side-face data with labeled eye, eyebrow, and nose data, and training a regression model to obtain the regression model for side faces;
Step S5: extracting the face region detected in step S2 and using the frontal and side face regression models trained in steps S3 and S4 to obtain the positions of the eyes, eyebrows, and nose in the face region;
Step S6: computing the left and right eye regions from the feature points obtained in step S5; then correcting the regions of the left and right eyes with the region-correction algorithm;
Step S7: extracting the left- and right-eye regions and obtaining the position of the eye center with the eye-center localization algorithm.
3. The human-eye positioning method for a naked-eye 3D display system according to claim 2, characterized in that:
a face model can be computed for each feature; assuming n features are extracted and m scales are used, there are n·m models in total, and a suitable face model is output using non-maximum suppression.
4. The human-eye positioning method for a naked-eye 3D display system according to claim 2, characterized in that:
in step S4, during training, to improve the robustness of the model, the training samples are augmented: sampling randomly from other annotated images and using the data points of the sampled images as the initial shape of each training sample;
at test time, multiple initializations are selected and multiple regression outputs are obtained; these outputs are sorted, the intermediate ones are selected, and their mean is taken as the initial position (see the sketch after this claim).
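A hedged reading of this test-time procedure: run the trained cascade from several random initializations, rank the outputs, keep the middle ones, and average them. In the sketch below, run_cascade is a hypothetical stand-in for the trained regressor, and ranking by distance to the consensus mean is an assumption.

```python
import numpy as np

def predict_shape(image, init_shapes, run_cascade, keep_frac=0.5):
    """Run the shape regressor from several initializations, rank the
    outputs by distance to the overall mean, keep the middle fraction,
    and return their average as the final shape estimate."""
    results = np.stack([run_cascade(image, s) for s in init_shapes])  # (M, n_pts, 2)
    center = results.mean(axis=0)
    dist = np.linalg.norm((results - center).reshape(len(results), -1), axis=1)
    k = max(1, int(len(results) * keep_frac))
    middle = np.argsort(dist)[:k]          # the k shapes closest to the consensus
    return results[middle].mean(axis=0)
```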
5. The human-eye positioning method for a naked-eye 3D display system according to claim 2, characterized in that:
in step S4, (1) for a portion of the test data, say N samples, the face shapes are normalized using statistical shape analysis, the data points are then clustered using the K-means method, and the cluster centers obtained are used as the initial positions of the facial feature points;
(2) some of the data sets used are randomly selected from the existing test data and processed according to the method introduced in (1); this is executed Q times, yielding the feature point positions corresponding to Q groups of data sets;
(3) the regression model equation of the human eye region is obtained by linearly combining the feature points trained in (1) and (2) (see the sketch after this claim).
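A minimal sketch of the initialization in (1), assuming scikit-learn is available: normalize the annotated shapes for translation and scale (a simple stand-in for full statistical shape analysis), cluster them with K-means, and use the cluster centers as initial landmark configurations. The shape count, the 21-landmark layout, and k are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_landmarks(shapes, k=5):
    """shapes: (N, 21, 2) annotated landmark sets. Normalize each shape
    to zero mean and unit scale, cluster the shape vectors with K-means,
    and return the cluster centers as candidate initial shapes."""
    shapes = np.asarray(shapes, dtype=np.float64)
    centered = shapes - shapes.mean(axis=1, keepdims=True)     # remove translation
    scale = np.linalg.norm(centered, axis=(1, 2), keepdims=True)
    normalized = centered / (scale + 1e-9)                     # remove scale
    flat = normalized.reshape(len(shapes), -1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    return km.cluster_centers_.reshape(k, -1, 2)               # k initial shapes
```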
6. The human-eye positioning method for a naked-eye 3D display system according to claim 2, characterized in that:
in step S6, the regression model trained on frontal face images is used to obtain the first position Loc1 of the corresponding human eyes, eyebrows, and nose in the face region;
the regression model trained on side-face images is used to obtain the second position Loc2 of the corresponding human eyes, eyebrows, and nose in the face region;
whether the first position Loc1 and the second position Loc2 of the human eye region are plausible is judged according to prior knowledge;
the gradient corresponding to each point of the human eye region is computed and the gradient information of the entire region is analyzed; the corresponding left-eye and right-eye regions are computed from the first position Loc1 and the second position Loc2, the gradient values of the pixels within the regions are then computed, and the difference between the left-eye and right-eye regions is compared; if the difference exceeds a threshold, the positions of the left and right eye regions are readjusted (a sketch of this check follows this claim).
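The gradient-consistency check here might be summarized as comparing a gradient statistic of the two candidate eye regions; in this sketch the statistic (mean gradient magnitude) and the threshold are assumptions:

```python
import cv2
import numpy as np

def region_gradient(gray, box):
    """Mean gradient magnitude inside box = (x, y, w, h)."""
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w].astype(np.float64)
    gx = cv2.Sobel(roi, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(roi, cv2.CV_64F, 0, 1)
    return np.hypot(gx, gy).mean()

def eye_regions_consistent(gray, left_box, right_box, thresh=25.0):
    """True if the left/right eye regions are gradient-consistent; a
    large discrepancy suggests at least one region is mislocalized."""
    diff = abs(region_gradient(gray, left_box) - region_gradient(gray, right_box))
    return diff <= thresh
```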
7. The human-eye positioning method for a naked-eye 3D display system according to claim 2, characterized in that:
in step S7, the human eye region is extracted and preprocessed to remove interference noise;
the vertical gradient, horizontal gradient, and gradient magnitude of the human eye region are computed;
Gaussian blur is applied to the human eye region to emphasize the dark pixel values within the region while weakening the gradients contributed by the bright portions (see the sketch after this claim).
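A minimal sketch of this preprocessing, assuming an 8-bit grayscale eye patch; the median filter and kernel sizes are assumptions:

```python
import cv2
import numpy as np

def preprocess_eye_region(eye_gray):
    """Denoise the eye patch, compute horizontal/vertical gradients and
    their magnitude, and derive a dark-pixel weight map by inverting a
    Gaussian-blurred copy of the patch."""
    denoised = cv2.medianBlur(eye_gray, 3)              # suppress interference noise
    gx = cv2.Sobel(denoised, cv2.CV_64F, 1, 0)          # horizontal gradient
    gy = cv2.Sobel(denoised, cv2.CV_64F, 0, 1)          # vertical gradient
    mag = np.hypot(gx, gy)                              # gradient magnitude
    blurred = cv2.GaussianBlur(denoised, (5, 5), 0)
    weight = 255.0 - blurred.astype(np.float64)         # emphasize dark pupil pixels
    return gx, gy, mag, weight
```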
8. A human-eye positioning method for a naked-eye 3D display system, characterized in that the method comprises:
a face training step: extracting image features from frontal face data and training on these features to obtain the face classifier model corresponding to frontal faces; extracting image features from side-face data and training on these features to obtain the face classifier model corresponding to side faces;
a face detection step: obtaining candidate face regions in the image using the classifier models trained in the face training step, and marking the face positions;
an eye-region training step: extracting shape-indexed features using the position annotations of human eyes, eyebrows, and noses in public data sets, and training regression models to obtain the regression model corresponding to frontal faces and the regression model corresponding to side faces;
a human eye region estimation step: extracting the face region obtained by the face detection step, and using the frontal and side face regression models trained in the eye-region training step to obtain the positions of key parts in the face region;
a human eye region correction step: calculating the left and right eye regions from the pixel position coordinates describing the eye region obtained in the human eye region estimation step, and correcting the regions of the left and right eye positions;
an eye-center localization step: extracting the left and right eye regions and obtaining the position of the human eye center point.
9. A human-eye positioning method for a naked-eye 3D display system, characterized in that the method comprises the following steps:
(1) a face detection step;
face detection locates the position of the face in an image or video and comprises the following steps:
preprocessing the image captured by the camera to remove the influence of image noise, so that the image is better suited to the subsequent image processing;
after preprocessing, extracting image features, including the HOG value and the three component values of the LUV color space, and using these values to compute the Haar feature values of a region;
feeding the Haar feature values into multiple classifier models for classification, obtaining multiple classification results;
for the multiple classification results, using the method of non-maximum suppression to select, each time, the region whose value is the maximum;
if the maximum value of the corresponding region is greater than a threshold, regarding the region as a face region, and otherwise judging it as a non-face region;
(2) a human eye region estimation step;
human eye region estimation finds the positions of the left eye and the right eye within the face region;
since the eye positions relative to the face differ greatly between frontal faces and side faces, two regression models R1 and R2 are trained for the two types of face data, i.e. frontal faces and side faces respectively;
$S_i : (p_1, p_2, \ldots, p_{n-1}, p_n), \quad n = 1, 2, \ldots, 21, \quad p_k = (x, y)$
$R : (R_1, R_2, \ldots, R_t, \ldots, R_T), \quad t = 1, 2, \ldots, T$
where R denotes the regression model, R_t the t-th regressor in the model, and S_i the set of eye-related data points of the i-th test sample, each data point p_k being described by its x and y coordinates;
R_t(I, S_i^{t-1}) denotes the shape increment computed by regressor R_t from the input sample image I and the previous shape S_i^{t-1};
for the face region obtained by the face detection method, two groups of eye position coordinates are obtained using the regression models R1 and R2 respectively;
a max-min method is applied to the eye position coordinates, i.e. the minimum of the upper and left boundaries and the maximum of the lower and right boundaries of all detected eye position coordinates are taken, yielding two different candidate eye regions (both steps are sketched below);
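Two illustrative sketches for this step, under the definitions above: the cascaded update S^t = S^(t-1) + R_t(I, S^(t-1)), with each R_t a hypothetical callable returning a shape increment, and the max-min bounding region over the eye-related points.

```python
import numpy as np

def apply_cascade(image, init_shape, regressors):
    """Cascaded shape regression: S^t = S^(t-1) + R_t(I, S^(t-1))."""
    shape = init_shape.copy()
    for r_t in regressors:                 # R_1 ... R_T
        shape = shape + r_t(image, shape)  # each regressor returns a shape increment
    return shape

def eye_box_max_min(points):
    """Max-min bounding region of eye-related points: the minimum of the
    top/left coordinates and the maximum of the bottom/right ones."""
    pts = np.asarray(points, dtype=np.float64)    # (n, 2) rows of (x, y)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return x0, y0, x1 - x0, y1 - y0               # (x, y, w, h)
```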
(3) a human eye region correction step;
since the result obtained by the human eye region estimation method may contain only part of the human eye, the human eye region needs to be corrected;
the human eye region correction step is as follows:
obtaining the positions of the corresponding human eyes, eyebrows, and nose in the face region using regression model R1, and calculating the left eye region eyeLeft1 and the right eye region eyeRight1;
obtaining the positions of the corresponding human eyes, eyebrows, and nose in the face region using regression model R2, and calculating the left eye region eyeLeft2 and the right eye region eyeRight2;
judging, according to prior knowledge, whether the eye regions eyeLeft1, eyeRight1, eyeLeft2, and eyeRight2 are plausible; if not, discarding the region directly; otherwise performing gradient information analysis on the region;
the gradient information analysis compares the gradient difference between the corresponding left-eye and right-eye regions; if the difference is large, the localization of the left-eye and right-eye regions is problematic and needs to be readjusted; otherwise no processing is performed and the corresponding left-eye and right-eye regions are used directly;
(4) an eye-center localization step;
eye-center localization comprises the following steps:
extracting and preprocessing the human eye region to remove interference noise, such as the influence of glasses;
computing the vertical gradient, horizontal gradient, and gradient magnitude of the human eye region;
applying Gaussian blur to the human eye region, then inverting the result and using it as the weight w; the purpose of this arrangement is to emphasize the dark pixel values in the region and weaken the gradients contributed by the bright portions;
using a maximum-search algorithm, i.e. the formula

$c^{*} = \arg\max_{c} \left\{ \frac{1}{N} \sum_{i=1}^{N} w_{c} \left( d_{i}^{\top} g_{i} \right)^{2} \right\}, \quad d_{i} = \frac{x_{i} - c}{\lVert x_{i} - c \rVert}$

the position coordinates $c^{*}$ that maximize the objective are computed; this is the position of the human eye center point;
where d_i denotes the displacement vector corresponding to the i-th pixel in the image, x_i the position coordinates of the i-th pixel, c the hypothesized position of the human eye center point, g_i the gradient value at x_i, w_c the weight of the candidate center position c, and N the total number of pixels in the image.
10. A human eye positioning system for a naked-eye 3D display system, characterized in that the system comprises:
a face training module, configured to extract image features from frontal face data and train on these features to obtain the face classifier model corresponding to frontal faces, and to extract image features from side-face data and train on these features to obtain the face classifier model corresponding to side faces;
a face detection module, configured to obtain candidate face regions in the image using the classifier models trained by the face training module, and to mark the face positions;
an eye-region training module, configured to extract shape-indexed features using the position annotations of human eyes, eyebrows, and noses in public data sets, and to train regression models to obtain the regression model corresponding to frontal faces and the regression model corresponding to side faces;
a human eye region estimation module, configured to extract the face region obtained by the face detection module and use the frontal and side face regression models trained by the eye-region training module to obtain the positions of key parts in the face region;
a human eye region correction module, configured to calculate the left and right eye regions from the pixel position coordinates describing the eye region obtained by the human eye region estimation module, and to correct the regions of the left and right eye positions;
an eye-center localization module, configured to extract the left and right eye regions and obtain the position of the human eye center point.
CN201811636813.4A 2018-12-29 2018-12-29 Human eye positioning method and system for naked eye 3D display system Active CN109725721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811636813.4A CN109725721B (en) 2018-12-29 2018-12-29 Human eye positioning method and system for naked eye 3D display system

Publications (2)

Publication Number Publication Date
CN109725721A (en) 2019-05-07
CN109725721B CN109725721B (en) 2022-03-11

Family

ID=66297944

Country Status (1)

Country Link
CN (1) CN109725721B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298963A (en) * 2014-09-11 2015-01-21 浙江捷尚视觉科技股份有限公司 Robust multi-pose fatigue monitoring method based on face shape regression model
US20170286823A1 (en) * 2015-04-13 2017-10-05 Boe Technology Group Co., Ltd. Electronic certificate and display method therefor
CN106990839A (en) * 2017-03-21 2017-07-28 张文庆 A kind of eyeball identification multimedia player and its implementation
CN108174182A (en) * 2017-12-30 2018-06-15 上海易维视科技股份有限公司 Three-dimensional tracking mode bore hole stereoscopic display vision area method of adjustment and display system
CN108600733A (en) * 2018-05-04 2018-09-28 成都泰和万钟科技有限公司 A kind of bore hole 3D display method based on tracing of human eye

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287797A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 A kind of dioptric screening technique based on mobile phone
CN110287796A (en) * 2019-05-24 2019-09-27 北京爱诺斯科技有限公司 A kind of dioptric screening method based on mobile phone and external equipment
CN110287796B (en) * 2019-05-24 2020-06-12 北京爱诺斯科技有限公司 Refractive screening method based on mobile phone and external equipment
CN110490065A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 Face identification method and device, storage medium, computer equipment
CN110490065B (en) * 2019-07-11 2024-02-02 平安科技(深圳)有限公司 Face recognition method and device, storage medium and computer equipment
CN111160292A (en) * 2019-12-31 2020-05-15 上海易维视科技有限公司 Human eye detection method
CN111160291A (en) * 2019-12-31 2020-05-15 上海易维视科技有限公司 Human eye detection method based on depth information and CNN
CN111160292B (en) * 2019-12-31 2023-09-22 上海易维视科技有限公司 Human eye detection method
CN111160291B (en) * 2019-12-31 2023-10-31 上海易维视科技有限公司 Human eye detection method based on depth information and CNN

Also Published As

Publication number Publication date
CN109725721B (en) 2022-03-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant