CN106218409A - Naked-eye 3D automobile instrument display method and device with human-eye tracking - Google Patents
Naked-eye 3D automobile instrument display method and device with human-eye tracking Download PDF Info
- Publication number
- CN106218409A (application CN201610575172.0A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B — PERFORMING OPERATIONS; TRANSPORTING
- B60 — VEHICLES IN GENERAL
- B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00 — Arrangement of adaptations of instruments
- B60K35/211
- B60K35/213
- B60K35/81
Abstract
The invention discloses a naked-eye 3D automobile instrument display method and device with human-eye tracking. The method comprises: calibrating a binocular camera; acquiring face images in real time; extracting the human eyes from the images; computing the spatial position of the eyes relative to the instrument display screen; and adjusting the distance between the display panel and the lenticular lens according to the viewing zone in which the eyes lie, thereby changing the emergent light and achieving an ideal naked-eye 3D display. The method detects the human eyes automatically and accurately and provides their spatial position, so that the naked-eye 3D vehicle instrument display device can be adjusted accordingly and the current eye position always falls in the optimal viewing zone.
Description
Technical field
The present invention relates to 3D automobile instrument display technology, and in particular to a naked-eye 3D automobile instrument display method and device with human-eye tracking.
Background art
Compared with a conventional two-dimensional instrument, a 3D instrument display better matches human visual characteristics, giving the viewer a stronger sense of depth and immersion while presenting the vehicle's traffic and operating information more intuitively and in real time. Naked-eye 3D technology lets the driver observe three-dimensional vehicle data directly, without wearing glasses, and therefore does not affect driving safety. Current naked-eye 3D technology falls into three main types: parallax barrier, lenticular lens, and directional backlight. Because both the parallax-barrier and directional-backlight approaches suffer from the fatal defect of low picture brightness, which greatly degrades the viewing experience, the lenticular-lens technique is at present the most suitable; its greatest advantage is that picture brightness does not drop because of the conversion to 3D. The principle of lenticular 3D is to place a layer of lenticular lenses in front of the LCD so that the image plane of the LCD lies on the focal plane of the lenses. Each pixel of the on-screen image is divided into several sub-pixels, which the lenses project in different directions; when the lenticular lenses are set at an angle to the LCD pixel columns, each group of sub-pixels is projected repeatedly into the viewing zones, so a 3D image can be seen from several different zones. The drawback of lenticular technology is that the 3D image can be watched only from these designated zones, and because the sub-pixel subdivision sharply reduces image resolution and harms the viewing effect, a multi-view 3D effect cannot be achieved. In other words, once a naked-eye 3D display is built, its ideal viewing distance and angle are fixed; the user cannot change the viewing distance at will, which seriously harms the viewing experience.
Summary of the invention
To solve the problems described above, the invention provides the following technical scheme.
A naked-eye 3D automobile instrument display method with human-eye tracking comprises the following steps:
Step 1: install a binocular camera at the front of the vehicle cab and calibrate its position; use the binocular camera to capture a left frame and a right frame at the same instant, and rectify the two frames to obtain corrected images.
Step 2: extract the face region from the corrected images.
Step 3: extract the eye region within the face region, optimize the extracted eye region, and detect the iris in the optimized eye region.
Step 4: compute the spatial position of the iris relative to the camera; using the positional relationship between the camera and the instrument display screen together with that iris position, compute the spatial position of the eyes relative to the instrument display screen.
Further, step 2 comprises the following sub-steps:
Step 21: binarize the corrected image to obtain a binary image.
Step 22: taking the horizontal direction of the binary image as the X axis and the direction perpendicular to it as the Y axis, determine the start and end points of the face region along the X axis.
Step 23: determine the start and end points of the face region along the Y axis.
Step 24: from the X-axis start and end points and the Y-axis start and end points, obtain the rectangle bounding the face region.
Further, determining the start and end points of the face region along the X axis in step 22 comprises:
Step 221: set X = b, with b = 0.
Step 222: at X = b, count the white points in that column of the binary image, denoted SumTempx, and the total number of points, denoted Sum_p_c. Set a threshold pLThresh1. If the ratio SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the start point of the face region along the X axis, i.e. the left-boundary X coordinate of the face, denoted x_L. If the ratio is less than pLThresh1, add the step size x_p to b and repeat step 222 until the left-boundary X coordinate is found.
Step 223: set X = b, with b = 0.
Step 224: at X = b, count SumTempx and Sum_p_c as above. If SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the end point of the face region along the X axis, i.e. the right-boundary X coordinate of the face, denoted x_R. If the ratio is less than pLThresh1, add the step size −x_p to b and repeat step 224 until the right-boundary X coordinate is found.
Further, determining the start and end points of the face region along the Y axis in step 23 comprises:
Step 231: set Y = c, with c = 0.
Step 232: at Y = c, count the white points in that row of the binary image, denoted SumTempy, and the total number of points, denoted Sum_p_r. Set a threshold pLThresh2. If SumTempy / Sum_p_r is greater than or equal to pLThresh2, then Y = c is the start point of the face region along the Y axis, i.e. the upper-boundary Y coordinate of the face, denoted y_U. If the ratio is less than pLThresh2, add the step size y_p to c and repeat step 232 until the upper-boundary Y coordinate is found.
Step 233: with x_R, x_L and y_U known, the lower-boundary coordinate of the face is obtained from:
y_D = y_U − 1.36 × (x_R − x_L)
Further, the eye region is extracted from the face region in step 3 as follows:
M_h(y) = Σ_{x = x_L}^{x_R} G(x, y)    (formula 1)
where G(x, y) is the gray value of the binary image at coordinate (x, y) and M_h(y) is the horizontal integral projection of the binary image over the region [x_L, x_R].
Find the trough of the horizontal integral projection curve corresponding to the eyes, take the two wave crests adjacent to that trough, and find the Y-axis coordinates k_1 and k_2 corresponding to these two crest points.
Let y_1 = k_2 − (3/5)(k_2 − k_1) and y_2 = k_2 + (3/5)(k_2 − k_1); the band between y_1 and y_2 is the eye region.
Further, optimizing the extracted eye region in step 3 means:
Step 31: filter the eye region with a Gaussian filter to obtain a smoothed image.
Step 32: compute the gradient magnitude and direction at each pixel of the smoothed image, then perform non-maximum suppression to obtain a non-maximum-suppression image. The concrete operations are as follows: take each pixel of the smoothed image in turn as the current pixel; if the magnitude of the current pixel is greater than the magnitudes of its two neighbours along the gradient direction, the current pixel is a local maximum; otherwise set its gray value to 0. After removing all pixels whose gray value is 0 from the smoothed image, the remaining pixels form the non-maximum-suppression image.
Step 33: set two thresholds L and H with L = H/2. Take each pixel of the non-maximum-suppression image in turn as the current pixel: if its magnitude is greater than or equal to L it is a low-threshold local maximum point, otherwise its gray value is set to 0; if its magnitude is greater than or equal to H it is a high-threshold local maximum point, otherwise its gray value is set to 0.
Step 34: all low-threshold local maximum points form the low-threshold edge image; all high-threshold local maximum points form the high-threshold edge image.
Step 35: if a breakpoint appears in an edge of the high-threshold edge image, look up the pixel at that breakpoint's coordinate in the low-threshold edge image, search its eight-neighbourhood for a pixel that can connect the breakpoint of the high-threshold edge image, and connect that pixel to the breakpoint.
Step 36: repeat step 35 until the edges of the high-threshold edge image are closed; the high-threshold edge image obtained at this point is the optimized eye region.
Further, detecting the iris in the optimized eye region in step 3 means:
Step 37: using the extreme edge points of the eye region obtained in step 36 in the four directions (up, down, left, right), estimate the centre and radius of the eye region with the minimum-bounding-rectangle method, and thereby obtain the parametric equation of the eye region.
Step 38: apply a Hough transform over the radius to the parametric equation to obtain a transformation space containing several circles of radius R.
Step 39: choose any unmarked circle in the transformation space as the current circle, traverse all circles in the space, count the number of circles whose centre coordinates coincide with those of the current circle (its concentric count), and mark the current circle.
Step 310: repeat step 39 until every circle in the transformation space has been marked as the current circle.
Step 311: find the current circle with the largest concentric count; its centre coordinates are the iris coordinates.
The invention also provides a naked-eye 3D vehicle instrument display device comprising a display panel and a lenticular lens, characterised in that it further comprises a microprocessor; a hydraulic adjustment device is arranged between the display panel and the lenticular lens and is connected to the microprocessor.
Further, four hydraulic adjustment devices are provided; they are distributed on the four corners of the bottom surface of the display panel in a dual-loop diagonal arrangement and are connected to the lenticular lens.
Further, the display panel and the lenticular lens are connected by an annular elastic connector.
Compared with the prior art, the present invention has the following technical effects:
1. The invention detects the human eyes automatically and accurately and provides their spatial position, so the naked-eye 3D vehicle instrument display device is adjusted according to that position and the current eye position lies in the optimal viewing zone.
2. The invention is fast in processing and high in recognition accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the naked-eye 3D automobile instrument display method and system with human-eye tracking of the present invention;
Fig. 2 is a schematic diagram of determining the eye position;
Fig. 3 is the one-dimensional transformation diagram of the processed image;
Fig. 4 is the binary image;
Fig. 5 is the face-region extraction diagram;
Fig. 6 is the horizontal integral projection of the face;
Fig. 7 is the eye extraction diagram;
Fig. 8 is the eye-recognition result diagram;
Fig. 9 is a schematic diagram of a lenticular-lens naked-eye 3D display device;
Fig. 10 is a structural schematic diagram of the present invention;
Fig. 11 is a schematic diagram of the hydraulic pipeline arrangement;
In the figures the reference numerals denote: 1 display panel, 2 elastic connector, 3 hydraulic adjustment device, 4 lenticular lens.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawings and embodiments.
A naked-eye 3D automobile instrument display method with human-eye tracking uses images captured by a binocular camera mounted at the front of the vehicle cab to realise an eye-tracking automobile instrument display, and specifically comprises the following steps:
Step 1: install the binocular camera and calibrate its position; use the binocular camera to capture a left frame and a right frame at the same instant, and rectify the two frames so that they can be matched.
The binocular positioning principle is shown in Fig. 2; the concrete implementation steps are:
Step 11: measure the relative position between the two cameras by calibration, i.e. the 3D translation t and rotation R of the right camera relative to the left camera. The external parameters of the binocular cameras C1 and C2 relative to the world coordinate system are the rotation matrices R1 and R2 and the translation vectors t1 and t2, so the positions of the cameras relative to the world coordinate system are
z_c1 = R1 z_w + t1,  z_c2 = R2 z_w + t2.
From these the positional relationship between the two cameras is obtained:
z_c1 = R1 R2⁻¹ z_c2 + t1 − R1 R2⁻¹ t2.
Thus the geometric relationship between the two cameras can be expressed with R and t:
R = R1 R2⁻¹,  t = t1 − R1 R2⁻¹ t2.
Step 12: compute the disparity formed by a target point on the left and right views. The corresponding pixels of the point must first be matched between the two views, but searching for corresponding points in two-dimensional space is very time-consuming, so the epipolar constraint is introduced to reduce the search range and turn the matching of corresponding points from a two-dimensional search into a linear one, as shown in Fig. 3.
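Under the formulas above, the relative pose of the stereo pair follows from the two cameras' world extrinsics in a few lines of linear algebra. A minimal pure-Python sketch (the example matrices and the 0.12 m baseline are invented for illustration, not taken from the patent):

```python
# Relative pose of a stereo pair from the two cameras' world extrinsics:
# R = R1 * R2^-1, t = t1 - R1 * R2^-1 * t2.
# For a rotation matrix the inverse equals the transpose.

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_pose(R1, t1, R2, t2):
    """Pose of camera 2 expressed in camera 1's frame."""
    R2_inv = transpose(R2)                   # inverse of a rotation = transpose
    R = matmul(R1, R2_inv)                   # R = R1 R2^-1
    Rt2 = matvec(R, t2)
    t = [t1[i] - Rt2[i] for i in range(3)]   # t = t1 - R1 R2^-1 t2
    return R, t

# Example: camera 1 at the world origin, camera 2 shifted 0.12 m along X
# (a typical stereo baseline) with identical orientation.
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
R, t = relative_pose(I, [0.0, 0.0, 0.0], I, [0.12, 0.0, 0.0])
print(R)  # identity
print(t)  # [-0.12, 0.0, 0.0]
```

In practice the extrinsics R1, t1, R2, t2 come from a calibration routine; the point of the sketch is only the algebra of combining them.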
Step 2: extract the face region from the corrected image.
The concrete method for extracting the face region in step 2 is:
Step 21: binarize the corrected image, converting it into a binary image composed of black points and white points.
Step 22: taking the horizontal direction of the binary image as the X axis and the direction perpendicular to it as the Y axis, determine the start and end points of the face region along the X axis:
Step 221: set X = b, with b = 0.
Step 222: at X = b, count the white points in that column of the binary image, denoted SumTempx, and the total number of points, denoted Sum_p_c. Set a threshold pLThresh1. If the ratio SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the start point of the face region along the X axis, i.e. the left-boundary X coordinate of the face, denoted x_L. If the ratio is less than pLThresh1, add the step size x_p to b and repeat step 222 until the left-boundary X coordinate is found.
Step 223: set X = b, with b = 0.
Step 224: at X = b, count SumTempx and Sum_p_c as above. If SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the end point of the face region along the X axis, i.e. the right-boundary X coordinate of the face, denoted x_R. If the ratio is less than pLThresh1, add the step size −x_p to b and repeat step 224 until the right-boundary X coordinate is found.
Step 23: determine the start and end points of the face region along the Y axis:
Step 231: set Y = c, with c = 0.
Step 232: at Y = c, count the white points in that row of the binary image, denoted SumTempy, and the total number of points, denoted Sum_p_r. Set a threshold pLThresh2. If SumTempy / Sum_p_r is greater than or equal to pLThresh2, then Y = c is the start point of the face region along the Y axis, i.e. the upper-boundary Y coordinate of the face, denoted y_U. If the ratio is less than pLThresh2, add the step size y_p to c and repeat step 232 until the upper-boundary Y coordinate is found.
Step 233: with x_R, x_L and y_U known, and given that in a face of standard proportions the ratio of the hairline-to-chin distance to the face width is about 1.36, the lower-boundary coordinate of the face is
y_D = y_U − 1.36 × (x_R − x_L).
The rectangle bounding the face region can then be obtained from x_R, x_L, y_U and y_D.
Step 24: from the X-axis start and end points obtained in step 22 and the Y-axis start and end points obtained in step 23, obtain the rectangle bounding the face region.
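The boundary scan of steps 221–233 can be sketched as follows, assuming the binary image is a list of rows of 0/1 values; the threshold, step size and the 6×8 toy mask are invented for illustration, and the right boundary is scanned inward from the image's right edge, which the negative step size −x_p implies:

```python
def column_ratio(img, x):
    """Fraction of white points (1s) in column x of a binary image."""
    col = [row[x] for row in img]
    return sum(col) / len(col)

def face_bounds_x(img, thresh=0.2, step=1):
    """Scan columns inward from both sides until the white-point ratio
    first reaches the threshold (steps 221-224 of the method)."""
    width = len(img[0])
    x_L = next(x for x in range(0, width, step)
               if column_ratio(img, x) >= thresh)
    x_R = next(x for x in range(width - 1, -1, -step)
               if column_ratio(img, x) >= thresh)
    return x_L, x_R

# Tiny 6x8 binary "face mask": the white region occupies columns 2..5.
img = [[1 if 2 <= x <= 5 else 0 for x in range(8)] for _ in range(6)]
x_L, x_R = face_bounds_x(img)
print(x_L, x_R)  # 2 5
# Lower boundary from the facial-proportion formula; whether it reads
# y_U + 1.36*w or y_U - 1.36*w depends on the Y-axis direction chosen.
y_U = 0
y_D = y_U + 1.36 * (x_R - x_L)
print(round(y_D, 2))  # 4.08
```

On real images the threshold and step size would need tuning, and the scan would be restricted to a region of interest rather than the full frame.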
Step 3: extract the eye region within the face region, optimize the extracted eye region, and detect the iris in the optimized eye region.
The method for extracting the eye region from the face region in step 3 is:
M_h(y) = Σ_{x = x_L}^{x_R} G(x, y)    (formula 1)
where G(x, y) is the gray value of the binary image at coordinate (x, y) and M_h(y) is the horizontal integral projection of the binary image over the region [x_L, x_R].
The horizontal projection curve of the binary image obtained by formula 1 is shown in Fig. 6. The more conspicuous troughs in the curve correspond to the facial features. Observed along the curve, four troughs are apparent: the first corresponds to the eyebrows, the second to the eyes, the third to the nose, and the fourth to the mouth. It is therefore necessary to locate the second and third wave crest points of this curve and find the Y-axis coordinates k_1 and k_2 corresponding to these two crest points in the binary image.
Let y_1 = k_2 − (3/5)(k_2 − k_1) and y_2 = k_2 + (3/5)(k_2 − k_1); the band between y_1 and y_2 is the eye region, as shown in Fig. 7.
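The projection of formula 1 and the band computation can be illustrated on a toy binary mask; this is a sketch under the assumption that 1-pixels mark dark facial features, and the crest coordinates k_1 and k_2 are given directly, since peak picking on real curves needs smoothing:

```python
def horizontal_projection(img, x_L, x_R):
    """M_h(y): sum of the binary image over columns [x_L, x_R], row by row."""
    return [sum(row[x_L:x_R + 1]) for row in img]

def eye_band(k_1, k_2):
    """Eye-region band from the two crest coordinates (formula in the text)."""
    y_1 = k_2 - 3 / 5 * (k_2 - k_1)
    y_2 = k_2 + 3 / 5 * (k_2 - k_1)
    return y_1, y_2

# Toy 8-row mask whose rows 2 and 5 contain feature pixels.
img = [[0] * 10 for _ in range(8)]
img[2] = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0]
img[5] = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
proj = horizontal_projection(img, 1, 8)
print(proj)  # [0, 0, 8, 0, 0, 6, 0, 0]

y1, y2 = eye_band(0, 5)
print(round(y1, 3), round(y2, 3))  # 2.0 8.0
```

The band is deliberately wider than the eyes themselves (3/5 of the crest spacing on each side of k_2), which gives the later edge-detection stage some margin.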
The method for optimizing the eye region is:
Step 31: filter the noise in the eye-region image with a Gaussian filter, i.e. smoothing.
Step 32: compute the gradient magnitude and direction at each pixel of the smoothed image, then perform non-maximum suppression to obtain a non-maximum-suppression image: take each pixel of the smoothed image in turn as the current pixel; if its magnitude is greater than the magnitudes of its two neighbours along the gradient direction, it is a local maximum; otherwise set its gray value to 0. After removing all pixels whose gray value is 0 from the smoothed image, the remaining pixels form the non-maximum-suppression image.
Step 33: set two thresholds L and H with L = H/2. Take each pixel of the non-maximum-suppression image in turn as the current pixel: if its magnitude is greater than or equal to L it is a low-threshold local maximum point, otherwise its gray value is set to 0; if its magnitude is greater than or equal to H it is a high-threshold local maximum point, otherwise its gray value is set to 0.
Step 34: all low-threshold local maximum points form the low-threshold edge image; all high-threshold local maximum points form the high-threshold edge image.
Step 35: if a breakpoint appears in an edge of the high-threshold edge image, look up the pixel at the breakpoint's coordinate in the low-threshold edge image, search its eight-neighbourhood for a pixel that can connect the breakpoint, and connect that pixel to the breakpoint of the high-threshold edge image.
Step 36: repeat step 35 until the edges of the high-threshold edge image are closed; the high-threshold edge image at this point is the optimized eye region.
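Steps 31–36 amount to a Canny-style edge detector. The double-threshold split of steps 33–34 can be sketched on a one-dimensional row of gradient magnitudes (a simplified illustration; real use operates on the 2-D magnitude map left by non-maximum suppression):

```python
def double_threshold(magnitudes, H):
    """Split non-maximum-suppressed magnitudes into low- and high-threshold
    edge maps, with L = H/2 as in step 33."""
    L = H / 2
    low = [m if m >= L else 0 for m in magnitudes]
    high = [m if m >= H else 0 for m in magnitudes]
    return low, high

# The 40 is a breakpoint in the high map that the low map still covers,
# so the hysteresis linking of step 35 could bridge it.
mags = [10, 80, 40, 90, 5]
low, high = double_threshold(mags, H=60)
print(low)   # [0, 80, 40, 90, 0]
print(high)  # [0, 80, 0, 90, 0]
```

The hysteresis idea is exactly what step 35 describes: strong edges are trusted, and weak edges survive only where they reconnect a strong edge's breakpoint.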
The concrete method for detecting the iris is:
Step 37: using the extreme edge points of the eye region obtained in step 36 in the four directions (up, down, left, right), estimate the centre and radius of the eye region with the minimum-bounding-rectangle method, and thereby obtain the parametric equation of the eye region.
Step 38: apply a Hough transform over the radius to the parametric equation to obtain a transformation space containing several circles of radius R.
Step 39: choose any unmarked circle in the transformation space as the current circle, traverse all circles in the space, count the number of circles whose centre coordinates coincide with those of the current circle (its concentric count), and mark the current circle.
Step 310: repeat step 39 until every circle in the transformation space has been marked as the current circle.
Step 311: find the current circle with the largest concentric count; its centre coordinates are the iris coordinates. When several coordinate points of the transformation space are equal, these points represent the same circle; the coordinate point at which the count peaks gives the parameters of that circle, and the iris is thereby obtained.
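The voting of steps 38–311 is in essence the classical fixed-radius Hough circle transform: each edge point votes for the candidate centres lying at distance R from it, and the centre with the most votes is taken as the iris centre. A minimal sketch on synthetic edge points (the radius and coordinates are invented for illustration):

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, R, n_angles=360):
    """Vote for circle centres: every edge point (x, y) casts votes at
    (x - R cos a, y - R sin a) for sampled angles a, rounded to integers."""
    votes = Counter()
    for x, y in edge_points:
        for i in range(n_angles):
            a = 2 * math.pi * i / n_angles
            cx = round(x - R * math.cos(a))
            cy = round(y - R * math.sin(a))
            votes[(cx, cy)] += 1
    return votes

# Synthetic iris edge: points on a circle of radius 5 centred at (20, 15).
R = 5
pts = [(20 + round(R * math.cos(t)), 15 + round(R * math.sin(t)))
       for t in [2 * math.pi * k / 24 for k in range(24)]]
votes = hough_circle_centers(pts, R)
center, _ = votes.most_common(1)[0]
print(center)  # expected near (20, 15)
```

With the radius already estimated in step 37, the accumulator is only two-dimensional (centre x, centre y), which keeps the search cheap.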
Step 4: compute the spatial position of the iris relative to the camera; using the positional relationship between the camera and the instrument display screen, obtain the spatial position of the eyes relative to the instrument display screen.
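The patent does not spell the triangulation out, but for a rectified stereo pair the standard disparity relations recover the 3-D eye position. A sketch under assumed parameters — focal length f in pixels, baseline B in metres, principal point (cx, cy), all illustrative values:

```python
def triangulate(u_left, u_right, v, f, B, cx, cy):
    """Recover (X, Y, Z) in the left camera's frame from the pixel
    coordinates of the same point in a rectified stereo pair.
    Standard relations: Z = f*B/d with disparity d = u_left - u_right."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    Z = f * B / d
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z

# Illustrative numbers only: f = 800 px, baseline B = 0.12 m,
# principal point (320, 240); the iris appears at u=400 (left), u=380 (right).
X, Y, Z = triangulate(400, 380, 240, f=800, B=0.12, cx=320, cy=240)
print(X, Y, Z)  # approximately (0.48, 0.0, 4.8)
```

A fixed rigid transform (measured once at installation) then maps this camera-frame position into the coordinate frame of the instrument display screen.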
The invention also provides a naked-eye 3D vehicle instrument display device comprising a display panel 1, a lenticular lens 4 and a microprocessor. A hydraulic adjustment device 3 is arranged between the display panel 1 and the lenticular lens 4 and is controlled by the microprocessor to change the distance between them. The distance to be set between the display panel 1 and the lenticular lens 4 is calculated from the eye position, so that the driver can watch a perfect naked-eye 3D display from any position.
As shown in Fig. 10, four hydraulic adjustment devices are provided; they are symmetrically distributed on the four corners of the bottom surface of the display panel 1 and are connected to the lenticular lens 4. To ease the adjustment of the distance between the display panel 1 and the lenticular lens 4, the two are connected by an annular elastic connector 2.
As shown in Fig. 11, the four hydraulic adjustment devices 3 use a dual-loop diagonal arrangement, which effectively guarantees the parallelism between the display panel 1 and the lenticular lens 4 and is highly reliable.
The above embodiments are merely illustrative of the present invention and do not limit it. Those of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the invention; all equivalent technical schemes therefore fall within the scope of the invention, whose patent protection shall be defined by the claims.
Claims (10)
1. A naked-eye 3D automobile instrument display method with human-eye tracking, characterised by comprising the following steps:
Step 1: install a binocular camera at the front of the vehicle cab and calibrate its position; use the binocular camera to capture a left frame and a right frame at the same instant, and rectify the two frames to obtain corrected images;
Step 2: extract the face region from the corrected images;
Step 3: extract the eye region within the face region, optimize the extracted eye region, and detect the iris in the optimized eye region;
Step 4: compute the spatial position of the iris relative to the camera; using the positional relationship between the camera and the instrument display screen together with that iris position, compute the spatial position of the eyes relative to the instrument display screen.
The naked-eye 3D automobile instrument display method with human-eye tracking, characterised in that step 2 comprises the following sub-steps:
Step 21: binarize the corrected image to obtain a binary image;
Step 22: taking the horizontal direction of the binary image as the X axis and the direction perpendicular to it as the Y axis, determine the start and end points of the face region along the X axis;
Step 23: determine the start and end points of the face region along the Y axis;
Step 24: from the X-axis start and end points and the Y-axis start and end points, obtain the rectangle bounding the face region.
The naked-eye 3D automobile instrument display method with human-eye tracking, characterised in that determining the start and end points of the face region along the X axis in step 22 comprises:
Step 221: set X = b, with b = 0;
Step 222: at X = b, count the white points in that column of the binary image, denoted SumTempx, and the total number of points, denoted Sum_p_c; set a threshold pLThresh1; if the ratio SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the start point of the face region along the X axis, i.e. the left-boundary X coordinate of the face, denoted x_L; if the ratio is less than pLThresh1, add the step size x_p to b and repeat step 222 until the left-boundary X coordinate is found;
Step 223: set X = b, with b = 0;
Step 224: at X = b, count SumTempx and Sum_p_c as above; if SumTempx / Sum_p_c is greater than or equal to pLThresh1, then X = b is the end point of the face region along the X axis, i.e. the right-boundary X coordinate of the face, denoted x_R; if the ratio is less than pLThresh1, add the step size −x_p to b and repeat step 224 until the right-boundary X coordinate is found.
A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye, it is characterised in that step
Determination human face region described in 23 is in the starting point of Y direction and concretely comprising the following steps of end point:
Step 231, sets Y=c, c=0;
Step 232, when obtaining Y=c, in binary image the number of white point be designated as SumTempy and number Sum_p_ a little
r;Setting threshold value pLThresh2, if the ratio of SumTempy and Sum_p_r is more than or equal to pLThresh2, then Y=c is as people
Face region, in the starting point of Y direction, is the left margin Y coordinate of face, is designated as y_U;
If the ratio of SumTempy and Sum_p_r is less than pLThresh2, then to c plus step-length y_p, repeat step 232, until
Find the coboundary Y coordinate of face;
Step 233: with x_R, x_L and y_U known, the lower-boundary Y coordinate of the face is obtained from the following formula:
y_D = y_U + 1.36 × (x_R − x_L).
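The row scan of steps 231–233 can be sketched the same way. This is an illustrative sketch: the threshold 0.1 and step length 1 stand in for pLThresh2 and y_p, and it assumes top-left-origin image coordinates (Y grows downward), so the lower boundary lies at y_U + 1.36 (x_R − x_L).

```python
import numpy as np

def find_face_y_bounds(binary, x_l, x_r, thresh=0.1, step=1):
    """Steps 231-233: scan rows top-down inside [x_L, x_R]; the first row
    whose white-pixel ratio reaches `thresh` is the upper boundary y_U.
    The lower boundary y_D then follows from the empirical face aspect
    factor 1.36 applied to the face width, measured downward from y_U."""
    region = binary[:, x_l:x_r + 1]
    y_u = next((c for c in range(0, region.shape[0], step)
                if np.count_nonzero(region[c]) / region.shape[1] >= thresh),
               None)
    if y_u is None:
        return None, None
    y_d = y_u + 1.36 * (x_r - x_l)
    return y_u, y_d
```

The 1.36 factor encodes the typical height-to-width ratio of a face, which is why only three of the four boundaries need to be scanned for.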
A naked-eye 3D automobile instrument display method capable of human eye tracking, characterized in that the method of extracting the human eye region from the face region in step 3 is:
compute the horizontal integral projection of the face region,
M_h(y) = Σ G(x, y) for x from x_L to x_R,
where G(x, y) denotes the gray value at coordinate (x, y) in the binary image, and M_h(y) is the horizontal integral projection curve over the region [x_L, x_R];
find the trough corresponding to the human eyes in the horizontal integral projection curve, and obtain the Y-axis coordinates k_1 and k_2 of the two wave crests adjacent to this trough;
let y_1 = k_2 − (3/5)(k_2 − k_1) and y_2 = k_2 + (3/5)(k_2 − k_1); the band bounded by y_1 and y_2 is the human eye region.
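The projection-and-band computation can be sketched as follows. A minimal illustration: the summation is the standard horizontal integral projection implied by the claim's definitions, and locating the crests k_1, k_2 themselves (a 1-D peak search on the curve) is left out.

```python
import numpy as np

def horizontal_projection(binary, x_l, x_r):
    """M_h(y) = sum over x in [x_L, x_R] of G(x, y), the gray value at
    (x, y): one projection value per image row."""
    return binary[:, x_l:x_r + 1].sum(axis=1)

def eye_band(k_1, k_2):
    """Given the Y coordinates k_1 and k_2 of the two crests flanking
    the eye trough, return the band [y_1, y_2] defined in the claim."""
    y_1 = k_2 - 3.0 / 5.0 * (k_2 - k_1)
    y_2 = k_2 + 3.0 / 5.0 * (k_2 - k_1)
    return y_1, y_2
```

A row of dark eye pixels produces a dip in M_h(y); the crests above and below it (brow and cheek) bracket the band that is cut out for further processing.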
A naked-eye 3D automobile instrument display method capable of human eye tracking, characterized in that optimizing the extracted human eye region in step 3 refers to:
Step 31: process the human eye region with a Gaussian filter to obtain a smoothed image;
Step 32: compute the gradient magnitude and direction of each pixel in the smoothed image, then perform non-maximum suppression to obtain a non-maximum-suppression image; the concrete operations are as follows:
take each pixel of the smoothed image in turn as the current pixel; if the magnitude of the current pixel is greater than the magnitudes of its two neighboring pixels along the gradient direction, the current pixel is a local maximum; otherwise, set its gray value to 0; after all pixels whose gray value is 0 are removed from the smoothed image, the remaining pixels form the non-maximum-suppression image;
Step 33: set two thresholds L and H, where L = H/2; take each pixel of the non-maximum-suppression image in turn as the current pixel; if the magnitude of the current pixel is greater than or equal to L, the current pixel is a low-threshold local maximum point, otherwise its gray value is set to 0; if the magnitude of the current pixel is greater than or equal to H, the current pixel is a high-threshold local maximum point, otherwise its gray value is set to 0;
Step 34: all low-threshold local maximum points form the low-threshold edge image, and all high-threshold local maximum points form the high-threshold edge image;
Step 35: if a breakpoint appears in an edge of the high-threshold edge image, look up the pixel at the breakpoint coordinate in the low-threshold edge image, find among the eight neighboring points of that pixel one that can connect the breakpoint, and join it to the breakpoint of the high-threshold edge image;
Step 36: repeat step 35 until the edges of the high-threshold edge image are closed; the high-threshold edge image obtained at this point is the optimized human eye region.
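Steps 33–36 (double thresholding with L = H/2 and 8-neighborhood edge linking) can be sketched on a gradient-magnitude image. This is a simplified illustration of the hysteresis idea, not the claimed implementation: it grows the high-threshold edge map through low-threshold pixels until no break remains, and np.roll wraps at the borders, which a production version would guard against.

```python
import numpy as np

def hysteresis_edges(magnitude, high):
    """Keep every pixel >= H (the high-threshold edge image), then
    repeatedly absorb pixels >= L = H/2 that touch the edge map in
    their 8-neighborhood, closing breakpoints (claim steps 33-36)."""
    low = high / 2.0                     # the claim fixes L = H / 2
    strong = magnitude >= high           # high-threshold local maxima
    weak = magnitude >= low              # low-threshold candidates
    edges = strong.copy()
    changed = True
    while changed:                       # step 36: repeat until closure
        grown = np.zeros_like(edges)
        for dy in (-1, 0, 1):            # 8-neighborhood dilation
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
        new_edges = edges | (grown & weak)
        changed = bool((new_edges != edges).any())
        edges = new_edges
    return edges
```

A weak pixel survives only if a chain of weak pixels connects it to some strong pixel, which is exactly the breakpoint-bridging behavior of step 35.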
A naked-eye 3D automobile instrument display method and system capable of human eye tracking, characterized in that detecting the iris in the optimized human eye region of step 3 refers to:
Step 37: using the upper, lower, left and right limits of the human eye region edge obtained in step 36, estimate the center and radius of the human eye region by the minimum-bounding-rectangle method, thereby obtaining the parametric equation of this human eye region;
Step 38: perform a Hough transform on the parametric equation at radius R to obtain a transformation space that contains several circles of radius R;
Step 39: arbitrarily select one circle in the transformation space as the current circle, traverse all circles in the transformation space, record the number of circles whose center coordinates are identical to those of the current circle as its concentric count, and mark the current circle;
Step 310: repeat step 39 until every circle in the transformation space has been marked as the current circle;
Step 311: find the current circle with the largest concentric count; the center coordinates of this circle are the iris coordinates.
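The voting of steps 38–311 amounts to a fixed-radius circular Hough transform. A minimal sketch, under assumptions not fixed by the claim: the radius is taken as already known from the bounding-rectangle estimate of step 37, centers are accumulated on the integer pixel grid, and the 64-angle sampling is an illustrative choice.

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=64):
    """Each edge point (y, x) votes for every candidate center lying at
    distance `radius` from it; the center whose bin collects the most
    coincident votes (the largest 'concentric count' of steps 39-311)
    is returned together with its vote count."""
    votes = Counter()
    for (y, x) in edge_points:
        for i in range(n_angles):
            t = 2.0 * math.pi * i / n_angles
            cy = round(y - radius * math.sin(t))
            cx = round(x - radius * math.cos(t))
            votes[(cy, cx)] += 1
    (cy, cx), count = votes.most_common(1)[0]
    return (cy, cx), count
```

Every edge point on the iris rim lies at distance R from the true center, so all of their vote circles intersect there, which is why the densest bin marks the iris.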
8. A naked-eye 3D vehicle instrument display device, the device comprising a display panel (1) and a lenticular lens (4), characterized in that it further comprises a microprocessor, a hydraulic adjusting device (3) is arranged between the display panel (1) and the lenticular lens (4), and the hydraulic adjusting device (3) is connected with the microprocessor.
9. The naked-eye 3D vehicle instrument display device according to claim 8, characterized in that four hydraulic adjusting devices (3) are provided, distributed in a double-loop diagonal arrangement on the four corners of the bottom surface of the display panel (1) and connected with the lenticular lens (4).
10. The naked-eye 3D vehicle instrument display device according to claim 8, wherein the display panel (1) and the lenticular lens (4) are connected by an annular elastic connector (2).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575172.0A CN106218409A (en) | 2016-07-20 | 2016-07-20 | A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610575172.0A CN106218409A (en) | 2016-07-20 | 2016-07-20 | A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106218409A true CN106218409A (en) | 2016-12-14 |
Family
ID=57532056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610575172.0A Pending CN106218409A (en) | 2016-07-20 | 2016-07-20 | A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106218409A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107067013A (en) * | 2017-04-25 | 2017-08-18 | 航天科技控股集团股份有限公司 | One kind is based on many instrument hawkeye fuzzy detection system and methods |
CN107833263A (en) * | 2017-11-01 | 2018-03-23 | 宁波视睿迪光电有限公司 | Feature tracking method and device |
CN109309828A (en) * | 2017-07-28 | 2019-02-05 | 三星电子株式会社 | Image processing method and image processing apparatus |
CN109963140A (en) * | 2017-12-25 | 2019-07-02 | 深圳超多维科技有限公司 | Nakedness-yet stereoscopic display method and device, equipment and computer readable storage medium |
CN109961473A (en) * | 2017-12-25 | 2019-07-02 | 深圳超多维科技有限公司 | Eyes localization method and device, electronic equipment and computer readable storage medium |
CN111985303A (en) * | 2020-07-01 | 2020-11-24 | 江西拓世智能科技有限公司 | Human face recognition and human eye light spot living body detection device and method |
CN112650495A (en) * | 2021-01-05 | 2021-04-13 | 东风汽车股份有限公司 | Method for creating visual area of display plane of combination instrument based on CATIA software |
WO2021110034A1 (en) * | 2019-12-05 | 2021-06-10 | 北京芯海视界三维科技有限公司 | Eye positioning device and method, and 3d display device and method |
CN113534490A (en) * | 2021-07-29 | 2021-10-22 | 深圳市创鑫未来科技有限公司 | Stereoscopic display device and stereoscopic display method based on user eyeball tracking |
CN117092830A (en) * | 2023-10-18 | 2023-11-21 | 世优(北京)科技有限公司 | Naked eye 3D display device and driving method thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104391567A (en) * | 2014-09-30 | 2015-03-04 | 深圳市亿思达科技集团有限公司 | Display control method for three-dimensional holographic virtual object based on human eye tracking |
CN104408462A (en) * | 2014-09-22 | 2015-03-11 | 广东工业大学 | Quick positioning method of facial feature points |
WO2015168464A1 (en) * | 2014-04-30 | 2015-11-05 | Visteon Global Technologies, Inc. | System and method for calibrating alignment of a three-dimensional display within a vehicle |
2016
- 2016-07-20: application CN201610575172.0A filed in China; published as CN106218409A; status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015168464A1 (en) * | 2014-04-30 | 2015-11-05 | Visteon Global Technologies, Inc. | System and method for calibrating alignment of a three-dimensional display within a vehicle |
CN104408462A (en) * | 2014-09-22 | 2015-03-11 | 广东工业大学 | Quick positioning method of facial feature points |
CN104391567A (en) * | 2014-09-30 | 2015-03-04 | 深圳市亿思达科技集团有限公司 | Display control method for three-dimensional holographic virtual object based on human eye tracking |
Non-Patent Citations (4)
Title |
---|
Xiang Yuanping, Wang Guocai, Qiao Huidong: "Face contour extraction based on frontal face images", Microcomputer Information (《微计算机信息》) * |
Zhou Fei, Wang Chensheng: "An improved edge-extraction algorithm based on the Canny algorithm", Proceedings of the Beijing Society of Image and Graphics (《北京图象图形学学会会议论文集》) * |
Zhang Jie, Yang Xiaofei, Zhao Ruilian: "Research on precise human eye localization based on integral projection and Hough-transform circle detection", Chinese Journal of Electron Devices (《电子器件》) * |
Su Jianbo, Xu Bo: Introduction to Applied Pattern Recognition: Face Recognition and Speech Recognition (《应用模式识别技术导论 人脸识别与语音识别》), 31 May 2001 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107067013A (en) * | 2017-04-25 | 2017-08-18 | 航天科技控股集团股份有限公司 | One kind is based on many instrument hawkeye fuzzy detection system and methods |
CN109309828B (en) * | 2017-07-28 | 2022-03-22 | 三星电子株式会社 | Image processing method and image processing apparatus |
CN109309828A (en) * | 2017-07-28 | 2019-02-05 | 三星电子株式会社 | Image processing method and image processing apparatus |
US11634028B2 (en) | 2017-07-28 | 2023-04-25 | Samsung Electronics Co., Ltd. | Image processing method of generating an image based on a user viewpoint and image processing device |
CN107833263A (en) * | 2017-11-01 | 2018-03-23 | 宁波视睿迪光电有限公司 | Feature tracking method and device |
CN109963140A (en) * | 2017-12-25 | 2019-07-02 | 深圳超多维科技有限公司 | Nakedness-yet stereoscopic display method and device, equipment and computer readable storage medium |
CN109961473A (en) * | 2017-12-25 | 2019-07-02 | 深圳超多维科技有限公司 | Eyes localization method and device, electronic equipment and computer readable storage medium |
WO2021110034A1 (en) * | 2019-12-05 | 2021-06-10 | 北京芯海视界三维科技有限公司 | Eye positioning device and method, and 3d display device and method |
CN111985303A (en) * | 2020-07-01 | 2020-11-24 | 江西拓世智能科技有限公司 | Human face recognition and human eye light spot living body detection device and method |
CN112650495A (en) * | 2021-01-05 | 2021-04-13 | 东风汽车股份有限公司 | Method for creating visual area of display plane of combination instrument based on CATIA software |
CN112650495B (en) * | 2021-01-05 | 2022-05-17 | 东风汽车股份有限公司 | Method for creating visual area of display plane of combination instrument based on CATIA software |
CN113534490A (en) * | 2021-07-29 | 2021-10-22 | 深圳市创鑫未来科技有限公司 | Stereoscopic display device and stereoscopic display method based on user eyeball tracking |
CN113534490B (en) * | 2021-07-29 | 2023-07-18 | 深圳市创鑫未来科技有限公司 | Stereoscopic display device and stereoscopic display method based on user eyeball tracking |
CN117092830A (en) * | 2023-10-18 | 2023-11-21 | 世优(北京)科技有限公司 | Naked eye 3D display device and driving method thereof |
CN117092830B (en) * | 2023-10-18 | 2023-12-22 | 世优(北京)科技有限公司 | Naked eye 3D display device and driving method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106218409A (en) | A kind of can the bore hole 3D automobile instrument display packing of tracing of human eye and device | |
CN106782268B (en) | Display system and driving method for display panel | |
US10194135B2 (en) | Three-dimensional depth perception apparatus and method | |
CN105094337B (en) | A kind of three-dimensional gaze estimation method based on iris and pupil | |
CN107527324B (en) | A kind of pattern distortion antidote of HUD | |
CN106875444B (en) | A kind of object localization method and device | |
CN103034330B (en) | A kind of eye interaction method for video conference and system | |
EP3400706B1 (en) | Gaze correction of multi-view images | |
CN107885325A (en) | A kind of bore hole 3D display method and control system based on tracing of human eye | |
US20140063018A1 (en) | Depth estimation device, depth estimation method, depth estimation program, image processing device, image processing method, and image processing program | |
CN105138965A (en) | Near-to-eye sight tracking method and system thereof | |
CN107907048A (en) | A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning | |
US20120069009A1 (en) | Image processing apparatus | |
CN104915656B (en) | A kind of fast human face recognition based on Binocular vision photogrammetry technology | |
CN105930821A (en) | Method for identifying and tracking human eye and apparatus for applying same to naked eye 3D display | |
US9332247B2 (en) | Image processing device, non-transitory computer readable recording medium, and image processing method | |
WO2017152529A1 (en) | Determination method and determination system for reference plane | |
US20120154376A1 (en) | Tracing-type stereo display apparatus and tracing-type stereo display method | |
CN106959759A (en) | A kind of data processing method and device | |
CN104597057B (en) | A kind of column Diode facets defect detecting device based on machine vision | |
CN103558910A (en) | Intelligent display system automatically tracking head posture | |
CN112232310B (en) | Face recognition system and method for expression capture | |
CN106228513A (en) | A kind of Computerized image processing system | |
CN101339606A (en) | Human face critical organ contour characteristic points positioning and tracking method and device | |
CN110096925A (en) | Enhancement Method, acquisition methods and the device of Facial Expression Image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161214 ||