CN112698724B - Implementation method of penetrating screen system based on camera eye movement tracking - Google Patents

Implementation method of penetrating screen system based on camera eye movement tracking

Info

Publication number
CN112698724B
CN112698724B CN202011620899.9A
Authority
CN
China
Prior art keywords
coordinate system
camera
screen
eye
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011620899.9A
Other languages
Chinese (zh)
Other versions
CN112698724A (en)
Inventor
秦学英
卢世逸
姜新波
李佳宸
黄鸿
何弦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202011620899.9A
Publication of CN112698724A
Application granted
Publication of CN112698724B
Active legal status (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for implementing a penetrating (see-through) screen system based on camera eye tracking, belonging to the technical field of augmented reality. The method first determines the eye position in the camera coordinate system, then establishes a standard coordinate system, converts the camera coordinate system into the standard coordinate system and then into the Unity coordinate system, and maps the real screen and the eyes into the Unity coordinate system. After a series of transformations and calibrations, the eyes are aligned with the camera in the virtual scene, so that the user can interact with the system using only the eyes.

Description

Implementation method of penetrating screen system based on camera eye movement tracking
Technical Field
The invention relates to a method for realizing a penetrating screen system based on camera eye movement tracking, and belongs to the technical field of augmented reality.
Background
Eye tracking technology is applied in many fields. In human-computer interaction, controlling a computer with the eyes has become a feasible interaction mode. Eye tracking can also be used to analyse user psychology, helping programmers with tasks such as web page layout planning and personalised advertisement recommendation, so that the most important information is more easily noticed by users. In the medical field, eye tracking helps to examine the condition of a patient's eyes more conveniently. Disabled people, or people whose movement is severely restricted, can use an eye tracker to interact more conveniently. In the field of security, eye tracking helps machines better identify human eye features, achieving higher security [Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Antonio Torralba. Eye Tracking for Everyone. CVPR, 2016], [R. Jacob and K. S. Karn. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. The Mind's Eye, 2003], [C. H. Morimoto and M. R. M. Mimica. Eye gaze tracking techniques for interactive applications. CVIU, 2005]. The invention mainly uses the eyeball positioning function of eye tracking technology.
The penetrating screen in the invention means that the virtual scene observed through the screen has a certain stereoscopic feeling, achieving an effect so convincing that the virtual can pass for the real. There are already several similar works on the market, such as the Chengdu 3D outdoor screen that recently went viral on Twitter; however, the finished product shown in the video has slight edge distortion, and the effect relies on a large folded screen, so although the result is impressive, the manufacturing cost is very high. As another example, Roomality stated in July 2020 that they would develop a large glasses-free 3D immersive virtual window that uses AI to render the scene, and released one video, but since then they have not published any further information, and at present no information about the project is available other than this video. It may be that AI-based scene rendering is not yet fully real-time and the running speed of the algorithm still needs further optimisation. In general, the prior art has the disadvantages of high cost, limited accessibility, and insufficient real-time performance, and much of the related work requires wearing glasses and cannot be done with the naked eye.
Disclosure of Invention
The main problem the invention aims to solve is how to calibrate the camera coordinate system and make it correspond to the screen and to the user's eyes, performing a reasonable coordinate system conversion so that the position of the user's eyes and the position of the camera in the virtual scene correspond one to one. A secondary problem is the implementation of the penetrating screen: how to construct the image so that the screen achieves the see-through effect. A further key problem is to use a suitable algorithm to track the position of the eye; the accuracy of the algorithm must be guaranteed and it must run in real time.
Aiming at the defects of the prior art, the invention provides a method for realizing a penetrating screen system based on camera eye movement tracking.
The technical scheme of the invention is as follows:
the key of the system is to convert the camera coordinate system into the standard coordinate system and then into the Unity coordinate system; this step aligns the real calibration with the virtual one, which is one of the most important problems to be solved in realising the penetrating screen. When the person moves by one centimetre, the virtual scene must move correspondingly by one centimetre; only then can the effect of passing the virtual off as real be achieved.
The system is monocular, and the eye positions mentioned below are based on the left eye.
A realization method of a penetrating screen system based on camera eye movement tracking comprises the following steps:
1) First, the problem of detecting the eye and measuring its distance with a monocular camera needs to be solved. To this end, a calibration experiment is first carried out on the camera to obtain the required data;
according to the camera pinhole model, there is the following formula:
x / w = f / d
wherein f is the known focal length of the camera, d is the distance from the object to the camera, x is the distance between the imaged pupils, and w is the actual interpupillary distance of the user; d is first fixed so that x can be computed, and the pixel distance p between the imaged pupils is obtained at the same time, then
a = x / p
where a is the physical length of one pixel, in mm/pixel;
then the eyeball position in each camera frame is obtained through Hough circle detection. The basic idea of the Hough circle transform is to treat every non-zero pixel on the image as a potential point on a circle, generate an accumulator plane by voting, and set an accumulation threshold to locate the circle; the eyeball appears roughly elliptical after imaging and can therefore be detected. In this way the two-dimensional coordinates (u, v) of the eye on the camera image can be determined; with the image centre at (u0, v0), the coordinates of the eyeball in the camera coordinate system are
Ca = ( (u - u0)·a·d / f,  (v - v0)·a·d / f,  d )
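As a minimal illustrative sketch of step 1 (not the patent's own implementation), the pixel-size calibration, the Hough-circle eye detection, and the back-projection to camera coordinates could be written as follows; the function names, the OpenCV HoughCircles parameters, and the back-projection formula as reconstructed above are assumptions.

```python
import cv2
import numpy as np

def calibrate_pixel_size(f_mm, w_mm, d_mm, p_px):
    """Pixel size a (mm/pixel) from one calibration shot taken at a known
    distance d_mm: x = f*w/d is the pupil distance on the image plane (mm),
    and p_px is the same distance measured in pixels."""
    x_mm = f_mm * w_mm / d_mm
    return x_mm / p_px

def eye_distance(f_mm, w_mm, a, p_px):
    """Per-frame eye-to-camera distance d from the pupil distance in pixels."""
    return f_mm * w_mm / (a * p_px)

def detect_eye(gray):
    """Locate the eye centre (u, v) with a Hough circle transform (the
    parameters are illustrative and would need tuning for a real camera)."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is None:
        return None
    u, v, _r = circles[0][0]
    return float(u), float(v)

def eye_camera_coords(u, v, u0, v0, f_mm, a, d_mm):
    """Back-project the detected pixel position to 3D camera coordinates,
    using the standard pinhole back-projection assumed above."""
    X = (u - u0) * a * d_mm / f_mm
    Y = (v - v0) * a * d_mm / f_mm
    return np.array([X, Y, d_mm])
```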
2) Establishing a standard coordinate system: as shown in fig. 1, the initial position of the monocular eye is defined as the origin of the standard coordinate system; the line connecting the origin and the midpoint of the screen is the z-axis, so the z-axis passes exactly through the centre of the screen; the x-axis is the horizontal straight line, the y-axis is the vertical straight line, and the plane formed by the x-axis and the y-axis is parallel to the screen;
in addition, the figure also indicates the camera coordinate system: its origin is the camera, its z-axis is parallel to the z-axis of the standard coordinate system but points in the opposite direction, and its x-axis and y-axis are respectively parallel to, and point in the same direction as, the x-axis and y-axis of the standard coordinate system; the eye position obtained by eyeball recognition is initially expressed by the system in the camera coordinate system;
3) converting the camera coordinate system into the standard coordinate system, and then into the coordinate system of the virtual scene; since the virtual scene is built with Unity, the virtual scene coordinate system is also called the unity coordinate system;
the conversion formula from the camera coordinate system to the standard coordinate system is: Cs = A(Ca - D) (1);
wherein
A = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0 -1 ]
D represents the coordinate of the origin of a standard coordinate system under the camera coordinate system, Ca is the eye coordinate under the camera coordinate system, and Cs is the eye coordinate under the standard coordinate system;
then, since the unit of the standard coordinate system is mm and the unit of the unity coordinate system is m, the formula for converting the standard coordinate system into the unity coordinate system is: Cu = 0.001 Cs (2);
Cu is the coordinate of the eye in the unity coordinate system;
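A short sketch of formulas (1) and (2), under the assumption, taken from the axis description above, that the conversion matrix A simply flips the z-axis; D must be measured for the actual setup.

```python
import numpy as np

# Per the axis description, the camera and standard coordinate systems share
# the x/y directions while their z-axes point in opposite directions.
A = np.diag([1.0, 1.0, -1.0])

def camera_to_standard(Ca, D):
    """Formula (1): Cs = A (Ca - D). Ca is the eye in camera coordinates (mm),
    D is the standard-coordinate-system origin expressed in camera coordinates (mm)."""
    return A @ (np.asarray(Ca, float) - np.asarray(D, float))

def standard_to_unity(Cs):
    """Formula (2): Cu = 0.001 * Cs (mm -> m)."""
    return 0.001 * np.asarray(Cs, float)

# Example: an eye detected at Ca and a measured origin offset D map to unity as
# Cu = standard_to_unity(camera_to_standard(Ca, D)).
```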
4) Solving the imaging problem of the camera. The invention requires the screen to provide a window-like visual effect. The image seen through a window is essentially light travelling in straight lines: the range outside the window that a person can see from a given position is the range from which light can travel to that position, and the associated image is then projected onto the window. The view frustum of a camera expresses the range it can see and presents the image of that range on its near plane, and this is the starting point here. First, the real screen and eyes are mapped into the unity coordinate system: the eye position is mapped via the conversion formulas of step 3); the perpendicular distance between the origin D and the screen is measured in reality, and the width and height of the screen are measured at the same time, which determines the position and size of the screen in the standard coordinate system; these are then converted into the unity coordinate system with formula (2), completing the mapping operation;
then the view frustum of the eye (i.e. the camera in the camera imaging model) is determined for each position in the virtual scene; the frustum determines the projection area that the camera finally images. By determining the frustum of the camera we can define the virtual scene area the camera should see from its position and thus control the camera image. As shown in fig. 2, let the plane formed by the points A, B, C, D be the virtual screen and the point O be the eye. When the eye reaches a new position, the program computes the distances from O to the four corners and selects the longest segment; using that length as the standard, the coordinates of the other three corners are re-determined in the virtual coordinate system, and the coordinates of the point T are determined by formula (3). Taking the example of fig. 2 where OA is the longest, OB, OC, OD are extended to OB', OC', OD' so that |OA| = |OB'| = |OC'| = |OD'|; the following relationship is then used:
T=(A+B'+C'+D')/4 (3)
where A represents the corner with the longest distance and B', C', D' are the re-determined corner coordinates. The calculated point T is a point on the centre line of the view frustum of the eye-camera, expressed in the virtual coordinate system, and the viewing pose can be determined from it.
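The corner rescaling and formula (3) can be sketched as follows; the function name is illustrative, and in Unity one would then orient the virtual camera at O toward the returned point T (e.g. with a look-at operation).

```python
import numpy as np

def view_target_point(O, corners):
    """Given the eye position O and the four virtual-screen corners A, B, C, D
    in the unity coordinate system, extend the three shorter rays so that all
    four reach the length of the longest one, then average the endpoints
    (formula (3)) to obtain a point T on the centre line of the view frustum."""
    O = np.asarray(O, float)
    pts = [np.asarray(c, float) for c in corners]
    dists = [np.linalg.norm(p - O) for p in pts]
    longest = max(dists)
    # Extend each ray O->corner to the longest length (the farthest corner stays put).
    extended = [O + (p - O) * (longest / d) for p, d in zip(pts, dists)]
    return sum(extended) / 4.0
```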
The image seen in the virtual screen at this time is the visual image we need; a perspective matrix is then used to perform a perspective transformation, which is described in detail below.
Preferably, in step 4), the step of performing perspective transformation using the perspective matrix includes:
by calling the corresponding unity method, the two-dimensional coordinates on the rendered image of the four corner points of the virtual screen mapped into the virtual scene can be obtained; together with (0,0), (0, height), (width, height), (width, 0) (assuming the final image is width × height pixels) they form four point pairs, from which a perspective matrix H, a 3 × 3 matrix, is calculated as follows: for a point pair a(x, y, 1), b(x1, y1, 1), the following transformation equation is used:
b = H·a^T (4);
where a^T is the transpose of a, a is a corner of the image to be perspective-transformed, b is the corresponding corner of the rendered image, and one point pair can be used to construct the following matrix:
[ x  y  1  0  0  0  -x1·x  -x1·y  -x1 ]
[ 0  0  0  x  y  1  -y1·x  -y1·y  -y1 ]
Four point pairs give four such matrices, which are stacked to obtain a matrix U with 8 rows and 9 columns; the matrix U^T U is then constructed, and the eigenvector corresponding to its smallest eigenvalue gives the 9 parameters of the perspective matrix H. After the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation, yielding the new image after perspective correction.
Assuming the pixel coordinate of a pixel in the original image is a and its pixel coordinate in the newly generated image is b, then b = H·a^T.
The perspective transformation is performed with this formula; in addition, this computation is carried out on the GPU to accelerate performance.
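A sketch of the direct-linear-transform construction described above, assuming exactly four point pairs; the eigenvector of U^T U belonging to the smallest eigenvalue supplies the nine entries of H, and mapping a pixel includes the usual division by the homogeneous coordinate. In practice, OpenCV's getPerspectiveTransform would give an equivalent H.

```python
import numpy as np

def perspective_matrix(src_pts, dst_pts):
    """Build the 8x9 matrix U from four point pairs (two rows per pair) and
    take the eigenvector of U^T U with the smallest eigenvalue as the nine
    parameters of the 3x3 perspective matrix H."""
    rows = []
    for (x, y), (x1, y1) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -x1 * x, -x1 * y, -x1])
        rows.append([0, 0, 0, x, y, 1, -y1 * x, -y1 * y, -y1])
    U = np.asarray(rows, float)            # 8 x 9
    _, vecs = np.linalg.eigh(U.T @ U)      # eigenvalues returned in ascending order
    h = vecs[:, 0]                         # eigenvector of the smallest eigenvalue
    return h.reshape(3, 3)

def warp_point(H, x, y):
    """Formula (4), b = H a^T, followed by the homogeneous division."""
    q = H @ np.array([x, y, 1.0])
    return q[0] / q[2], q[1] / q[2]
```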
The mapping problem is handled by the first three steps, which convert the camera coordinate system into the standard coordinate system and then into the unity coordinate system. On the other hand, since the state of the eye does not change during observation, we cannot directly change the parameters of the view frustum, so the problem is simplified to determining the camera pose at each position.
The invention has the beneficial effects that:
1. The invention uses existing eyeball tracking technology to locate the position of the eyeball in three-dimensional space and, through a series of coordinate system conversions, transfers it into the virtual space so that the two correspond, unifying the calibration of the virtual and the real.
2. A camera imaging rule satisfying the window-vision requirement is found, the penetrating screen effect is realised, and an effect in which the virtual essentially passes for the real is achieved.
3. The cost is low; any home computer with a camera can use the invention.
Drawings
FIG. 1 depicts a diagram of an overall real-world system and a standard coordinate system and camera coordinate system defined within the system;
FIG. 2 is a schematic diagram of an algorithm for determining a perspective pose;
FIG. 3 is a schematic diagram of the relationship between the camera near clipping plane and the virtual screen;
FIG. 4 is a schematic view of camera imaging;
fig. 5 is a schematic diagram of the final result after perspective transformation.
Detailed Description
The present invention will be further described by way of examples with reference to the accompanying drawings, but is not limited thereto.
Example 1:
a realization method of a penetrating screen system based on camera eye movement tracking comprises the following steps:
1) First, the problem of detecting the eye and measuring its distance with a monocular camera needs to be solved. To this end, a calibration experiment is first carried out on the camera to obtain the required data;
according to the camera pinhole model, there is the following formula:
x / w = f / d
wherein f is the known focal length of the camera, here 35 mm; d is the distance from the object to the camera; x is the distance between the imaged pupils; w is the actual interpupillary distance of the user, measured as 62 mm for the test user in the experiment. d is first fixed so that x can be computed, and the pixel distance p between the imaged pupils is obtained at the same time, then
a = x / p
where a is the physical length of one pixel, in mm/pixel;
then the eyeball position in each camera frame is obtained through Hough circle detection. The basic idea of the Hough circle transform is to treat every non-zero pixel on the image as a potential point on a circle, generate an accumulator plane by voting, and set an accumulation threshold to locate the circle; the eyeball appears roughly elliptical after imaging and can therefore be detected. In this way the two-dimensional coordinates (u, v) of the eye on the camera image can be determined; assuming the image centre is at (u0, v0), the coordinates of the eyeball in the camera coordinate system are
Ca = ( (u - u0)·a·d / f,  (v - v0)·a·d / f,  d )
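Using the concrete values of this embodiment (f = 35 mm, w = 62 mm), the arithmetic of step 1 can be exercised as below; the calibration distance, the pixel distances and the detected pixel positions are hypothetical placeholders chosen only to show the computation, not measurements from the patent.

```python
f_mm, w_mm = 35.0, 62.0            # focal length and interpupillary distance (from the text)
d_cal_mm, p_cal_px = 600.0, 70.0   # hypothetical calibration distance / pupil pixel distance

x_mm = f_mm * w_mm / d_cal_mm      # imaged pupil distance: about 3.62 mm
a = x_mm / p_cal_px                # pixel size: about 0.0517 mm/pixel

# Per frame: if the pupils are now 80 px apart, the eye is at
d_mm = f_mm * w_mm / (a * 80.0)    # about 525 mm from the camera

# Back-projection of a detected eye centre (u, v) with image centre (u0, v0):
u, v, u0, v0 = 700.0, 420.0, 640.0, 360.0   # hypothetical pixel coordinates
Ca = ((u - u0) * a * d_mm / f_mm,
      (v - v0) * a * d_mm / f_mm,
      d_mm)                        # eye position in the camera coordinate system (mm)
```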
2) Establishing a standard coordinate system: as shown in fig. 1, the initial position of the monocular eye is defined as the origin of the standard coordinate system; the line connecting the origin and the midpoint of the screen is the z-axis, so the z-axis passes exactly through the centre of the screen; the x-axis is the horizontal straight line, the y-axis is the vertical straight line, and the plane formed by the x-axis and the y-axis is parallel to the screen;
in addition, fig. 1 also indicates the camera coordinate system: the camera is located at its origin, its z-axis is parallel to the z-axis of the standard coordinate system but points in the opposite direction, and its x-axis and y-axis are respectively parallel to, and point in the same direction as, the x-axis and y-axis of the standard coordinate system; the eye position obtained by eyeball recognition is initially expressed by the system in the camera coordinate system;
3) converting the camera coordinate system into the standard coordinate system, and then into the coordinate system of the virtual scene; since the virtual scene is built with Unity, the virtual scene coordinate system is also called the unity coordinate system;
the conversion formula from the camera coordinate system to the standard coordinate system is: Cs = A(Ca - D) (1);
wherein
A = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0 -1 ]
D represents the coordinate of the origin of a standard coordinate system under the camera coordinate system, Ca is the eye coordinate under the camera coordinate system, and Cs is the eye coordinate under the standard coordinate system;
then, since the unit of the standard coordinate system is mm and the unit of the unity coordinate system is m, the formula for converting the standard coordinate system into the unity coordinate system is: Cu = 0.001 Cs (2);
Cu is the coordinate of the eye in the unity coordinate system; the unity coordinate system uses metres while the standard coordinate system uses millimetres, so only the units need to be unified. This also reflects an advantage of the standard coordinate system design: it is convenient to convert into the unity coordinate system.
4) Solving the imaging problem of the camera. The invention requires the screen to provide a window-like visual effect. The image seen through a window is essentially light travelling in straight lines: the range outside the window that a person can see from a given position is the range from which light can travel to that position, and the associated image is then projected onto the window. The view frustum of the camera expresses the range it can see and presents the image of that range on its near plane, and this is the starting point here. First, the real screen and eyes are mapped into the unity coordinate system: the eye position is mapped via the conversion formulas of step 3); in reality, the perpendicular distance between the origin D and the screen is measured to be 540 mm, and the width and the height of the screen are measured to be 195 mm, which determines the position and size of the screen in the standard coordinate system; these are then converted into the unity coordinate system with formula (2), completing the mapping operation;
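A sketch of mapping the real screen into unity coordinates for this embodiment: the screen centre lies on the z-axis of the standard coordinate system at the measured 540 mm, and formula (2) converts mm to m. Only the 540 mm distance and the 195 mm figure come from the text; the screen width used below is an assumption.

```python
import numpy as np

def screen_corners_unity(dist_mm, width_mm, height_mm):
    """Corners of the real screen in the unity coordinate system. The z-axis of
    the standard coordinate system passes through the screen centre, so the
    corners sit at (+-width/2, +-height/2, dist) in mm; formula (2) scales to m."""
    hw, hh = width_mm / 2.0, height_mm / 2.0
    corners_std = np.array([[-hw,  hh, dist_mm],   # top-left
                            [ hw,  hh, dist_mm],   # top-right
                            [ hw, -hh, dist_mm],   # bottom-right
                            [-hw, -hh, dist_mm]])  # bottom-left
    return 0.001 * corners_std

# 540 mm distance and 195 mm height from the embodiment; the 345 mm width is a placeholder.
corners = screen_corners_unity(540.0, 345.0, 195.0)
```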
then the view frustum of the eye (i.e. the camera in the camera imaging model) is determined for each position in the virtual scene; the frustum determines the projection area that the camera finally images. By determining the frustum of the camera we can define the virtual scene area the camera should see from its position and thus control the camera image. As shown in fig. 2, let the plane formed by the points A, B, C, D be the virtual screen and the point O be the eye. When the eye reaches a new position, the program computes the distances from O to the four corners and selects the longest segment; using that length as the standard, the coordinates of the other three corners are re-determined in the virtual coordinate system, and the coordinates of the point T are determined by formula (3). Taking the example of fig. 2 where OA is the longest, OB, OC, OD are extended to OB', OC', OD' so that |OA| = |OB'| = |OC'| = |OD'|; the following relationship is then used:
T=(A+B'+C'+D')/4 (3)
where A represents the corner with the longest distance and B', C', D' are the re-determined corner coordinates. The calculated point T is a point on the centre line of the view frustum of the eye-camera, expressed in the virtual coordinate system, and the viewing pose can be determined from it.
The image seen in the virtual screen at this time is the visual image we need; a perspective matrix is then used to perform a perspective transformation, which is described in detail below.
In step 4), the step of performing perspective transformation by using the perspective matrix is as follows:
by calling unity, the two-dimensional coordinates on the rendered image of the four corner points of the virtual screen mapped into the virtual scene can be obtained; together with (0,0), (0, height), (width, height), (width, 0) (i.e. the four corners of fig. 4, assuming the final image is width × height pixels) they form four point pairs, from which a perspective matrix H, a 3 × 3 matrix, is calculated as follows: for a point pair a(x, y, 1), b(x1, y1, 1), the following transformation equation is used:
b = H·a^T (4);
where a^T is the transpose of a, a is a corner of the image to be perspective-transformed (i.e. a corner of the black-line frame in fig. 4), b is the corresponding corner of the rendered image (i.e. of fig. 4 itself), and one point pair can be used to construct the following matrix:
[ x  y  1  0  0  0  -x1·x  -x1·y  -x1 ]
[ 0  0  0  x  y  1  -y1·x  -y1·y  -y1 ]
Four point pairs give four such matrices, which are stacked to obtain a matrix U with 8 rows and 9 columns; the matrix U^T U is then constructed, and the eigenvector corresponding to its smallest eigenvalue gives the 9 parameters of the perspective matrix H. After the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation, yielding the new image after perspective correction (fig. 5).
Fig. 3 shows the relationship between the camera near clipping plane (white frame) and the virtual screen (black frame) calculated by the above algorithm. The camera image at this time is as shown in fig. 4, and the image within the black frame is the image we want. Therefore a perspective transformation is required: the perspective matrix is computed by putting the four points of the black frame in the camera image in correspondence with the four corners of the screen, the perspective transformation is performed, and the displayed result is as shown in fig. 5.
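For reference, the same black-frame-to-screen correspondence (fig. 4 to fig. 5) can be expressed with OpenCV's built-in homography routines; this is an equivalent sketch of the warping step rather than the patent's own implementation, and the corner pixel values below are placeholders.

```python
import cv2
import numpy as np

width, height = 1920, 1080
# Hypothetical pixel positions of the black-frame corners in the camera image of
# fig. 4, listed in the same order as the output corners below.
src = np.float32([[412, 236], [367, 942], [1587, 905], [1534, 198]])
dst = np.float32([[0, 0], [0, height], [width, height], [width, 0]])

H = cv2.getPerspectiveTransform(src, dst)   # 3x3 perspective matrix
# out = cv2.warpPerspective(frame, H, (width, height))  # corrected view, as in fig. 5
```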

Claims (2)

1. A realization method of a penetrating screen system based on camera eye movement tracking is characterized by comprising the following steps:
1) firstly, a calibration experiment needs to be carried out on the camera to obtain required data;
according to the camera pinhole model, the following formula exists:
x / w = f / d
wherein f is the known focal length of the camera, d is the distance from the object to the camera, x is the distance between the imaged pupils, and w is the actual interpupillary distance of the user; d is first fixed so that x can be solved, and the pixel distance p between the imaged pupils is obtained at the same time, then
a = x / p
where a is the physical length of one pixel, in mm/pixel;
then, the eye position of each frame of the camera is obtained through Hough circle detection, two-dimensional coordinates (u, v) of eyes on the camera image are determined, and the coordinates of the image center are (u0, v0), so that the coordinates under the camera coordinate system of the eyes are
Ca = ( (u - u0)·a·d / f,  (v - v0)·a·d / f,  d )
2) Establishing a standard coordinate system, defining a position of a monocular as an origin of the standard coordinate system, wherein a connecting line of the origin and a midpoint of a screen is a z-axis, an x-axis is a straight line in a horizontal direction, a y-axis is a straight line in a vertical direction, and a plane formed by the x-axis and the y-axis is parallel to the screen;
the camera is arranged at the origin of the camera coordinate system, the z axis is parallel to the z axis of the standard coordinate system, but the directions are opposite, and the x axis and the y axis are respectively parallel to the x axis and the y axis under the standard coordinate system and have the same direction;
3) converting a camera coordinate system into a standard coordinate system, and converting into a coordinate system of a virtual scene, wherein the virtual scene is built by using unity, so that the virtual scene coordinate system is also called unity;
the conversion formula from the camera coordinate system to the standard coordinate system is: Cs = A(Ca - D) (1);
wherein
A = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0 -1 ]
D represents the coordinate of the origin of a standard coordinate system under the camera coordinate system, Ca is the eye coordinate under the camera coordinate system, and Cs is the eye coordinate under the standard coordinate system;
then, since the unit of the standard coordinate system is mm and the unit of the unity coordinate system is m, the formula for converting the standard coordinate system into the unity coordinate system is: Cu = 0.001 Cs (2);
Cu is the coordinate of the eye in the unity coordinate system;
4) firstly, mapping a screen and eyes in reality to a unity coordinate system, mapping the positions of the eyes to the unity coordinate system through the conversion formula in the step 3), then measuring the vertical distance between the position of the origin D and the screen in reality, measuring the width and the height of the screen at the same time, determining the position and the size of the screen under a standard coordinate system, and then converting the position and the size of the screen into the unity coordinate system by using a formula (2), thereby completing the mapping operation;
then determining the view frustum of the eye at each position in the virtual scene, wherein the plane formed by the points A, B, C, D is the virtual screen and the point O is the eye; when the eye reaches a new position, the program computes the distances from the point O to the four corners and selects the longest segment, re-determines the coordinates of the other three corners in the virtual coordinate system using that length as the standard, and determines the coordinates of the point T by formula (3),
T=(A+B'+C'+D')/4 (3)
where A represents the corner coordinate with the longest distance, i.e. OA is the longest; OB, OC, OD are then extended to OB', OC', OD' so that |OA| = |OB'| = |OC'| = |OD'|, and B', C', D' are the re-determined corner coordinates,
the calculated point T is a point on the centre line of the view frustum of the eye-camera, expressed in the virtual coordinate system, and the viewing pose can be determined from it;
the image in the virtual screen is the required visual image, and a perspective matrix is then used to perform the perspective transformation.
2. The method for implementing the penetrating screen system based on the eye movement tracking of the camera according to claim 1, wherein in the step 4), the step of performing the perspective transformation by using the perspective matrix comprises:
by calling the unity method, two-dimensional coordinates of four corner points of a virtual screen mapped to a virtual scene on an imaged image can be obtained, and four point pairs can be formed with (0,0), (0, height), (width, height), (width,0), whereby a perspective matrix H can be calculated, which is a matrix of 3 × 3, the calculation steps being as follows: for a point pair a (x, y,1), b (x1, y1,1), the following transformation equation is used:
b=HaT (4);
wherein aT is the transpose of a, a is the corner of the image to be perspective transformed, b is the corner of the imaged image, and a point pair can be constructed into the following matrix:
[ x  y  1  0  0  0  -x1·x  -x1·y  -x1 ]
[ 0  0  0  x  y  1  -y1·x  -y1·y  -y1 ]
four point pairs give four such matrices, which are stacked to obtain a matrix U with 8 rows and 9 columns; the matrix U^T U is then constructed, and the eigenvector corresponding to its smallest eigenvalue gives the 9 parameters of the perspective matrix H; after the perspective matrix is obtained, it is substituted into formula (4) to perform the perspective transformation, obtaining the new image after perspective correction.
CN202011620899.9A 2020-12-30 2020-12-30 Implementation method of penetrating screen system based on camera eye movement tracking Active CN112698724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011620899.9A CN112698724B (en) 2020-12-30 2020-12-30 Implementation method of penetrating screen system based on camera eye movement tracking


Publications (2)

Publication Number Publication Date
CN112698724A CN112698724A (en) 2021-04-23
CN112698724B true CN112698724B (en) 2022-02-11

Family

ID=75512932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011620899.9A Active CN112698724B (en) 2020-12-30 2020-12-30 Implementation method of penetrating screen system based on camera eye movement tracking

Country Status (1)

Country Link
CN (1) CN112698724B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449968B (en) * 2023-06-20 2023-08-22 深圳市联志光电科技有限公司 Computer screen control method and device and computing equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064514A (en) * 2012-12-13 2013-04-24 航天科工仿真技术有限责任公司 Method for achieving space menu in immersive virtual reality system
CN107145226A (en) * 2017-04-20 2017-09-08 中国地质大学(武汉) Eye control man-machine interactive system and method
CN109696954A (en) * 2017-10-20 2019-04-30 中国科学院计算技术研究所 Eye-controlling focus method, apparatus, equipment and storage medium
KR20200067641A (en) * 2018-12-04 2020-06-12 삼성전자주식회사 Calibration method for 3d augmented reality and apparatus thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-feature 3D Object Tracking with Adaptively-Weighted Local Bundles; Jiachen Li et al.; IEEE; 2020-12-16; pp. 229-230 *

Also Published As

Publication number Publication date
CN112698724A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN109040738B (en) Calibration method and non-transitory computer readable medium
US20210075963A1 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
CN105094337B (en) A kind of three-dimensional gaze estimation method based on iris and pupil
US20190121427A1 (en) Iris and pupil-based gaze estimation method for head-mounted device
CN103207664B (en) A kind of image processing method and equipment
JP7015152B2 (en) Processing equipment, methods and programs related to key point data
EP2919093A1 (en) Method, system, and computer for identifying object in augmented reality
CN106791784A (en) Augmented reality display methods and device that a kind of actual situation overlaps
CN107004275A (en) For determining that at least one of 3D in absolute space ratio of material object reconstructs the method and system of the space coordinate of part
CN106959759A (en) A kind of data processing method and device
CN107277495A (en) A kind of intelligent glasses system and its perspective method based on video perspective
KR20160138729A (en) Feature extraction method for motion recognition in image and motion recognition method using skeleton information
KR20040090711A (en) Method, apparatus and program for compositing images, and method, apparatus and program for rendering three-dimensional model
CN115359093A (en) Monocular-based gaze estimation and tracking method
CN112698724B (en) Implementation method of penetrating screen system based on camera eye movement tracking
CN115035004A (en) Image processing method, apparatus, device, readable storage medium and program product
Wibirama et al. 3D gaze tracking on stereoscopic display using optimized geometric method
CN107765840A (en) A kind of Eye-controlling focus method equipment of the general headset equipment based on binocular measurement
CN111164542A (en) Method of modifying an image on a computing device
CN108985291A (en) A kind of eyes tracing system based on single camera
CN110060349A (en) A method of extension augmented reality head-mounted display apparatus field angle
WO2014119555A1 (en) Image processing device, display device and program
Fuhrmann et al. Practical calibration procedures for augmented reality
JPH03296176A (en) High-speed picture generating/displaying method
Yoshimura et al. Appearance-based gaze estimation for digital signage considering head pose

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant