CN115022616B - Image focusing enhancement display device and display method based on human eye tracking - Google Patents

Image focusing enhancement display device and display method based on human eye tracking Download PDF

Info

Publication number
CN115022616B
CN115022616B CN202210944472.7A
Authority
CN
China
Prior art keywords
image
processing
focusing
light field
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210944472.7A
Other languages
Chinese (zh)
Other versions
CN115022616A (en)
Inventor
袁仲云
郭翔燕
程永强
邢志刚
郝润芳
杨琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Speed Electronics Co ltd
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202210944472.7A priority Critical patent/CN115022616B/en
Publication of CN115022616A publication Critical patent/CN115022616A/en
Application granted granted Critical
Publication of CN115022616B publication Critical patent/CN115022616B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image focusing enhancement display device and display method based on human eye tracking, and belongs to the technical field of image focusing enhancement display. The technical problem to be solved is to provide an improved structure and display method for an image focusing enhancement display device based on human eye tracking. The technical scheme is as follows: the image focusing enhancement display device comprises an image display device for displaying a light field image, a visual tracking device for identifying the pupil of the user, determining the direction of the line of sight and judging where the line of sight focuses on the screen, and a camera shooting acquisition device for capturing and processing the image at the focus position and generating the light field image. The visual tracking device comprises a first processing unit electrically connected by wire to a human eye motion capture module; the human eye motion capture module sends an acquisition signal to the first processing unit, and the first processing unit sends a visual focusing coordinate signal to the camera shooting acquisition device. The invention is applied to the field of image focusing enhancement.

Description

Image focusing enhancement display device and display method based on human eye tracking
Technical Field
The invention discloses an image focusing enhancement display device and a display method based on human eye tracking, and belongs to the technical field of image focusing enhancement display.
Background
At present, images of the external environment are mainly collected by using an optical camera mounted in a mobile phone, a camera or a computer as the sensor, finally yielding a single or continuously refreshed pixel (digital) image.
Disclosure of Invention
In order to overcome the defects in the prior art, the technical problem the invention aims to solve is: to provide an improved structure and display method for an image focusing enhancement display device based on human eye tracking.
In order to solve the technical problems, the invention adopts the technical scheme that: an image focusing enhancement display device based on human eye tracking comprises an image display device, a visual tracking device and a camera shooting acquisition device, wherein the image display device is used for displaying a light field image;
the visual tracking equipment comprises a first processing unit, the first processing unit is electrically connected with an eye movement capturing module through a lead, the eye movement capturing module sends a collected eye focusing screen position signal to the first processing unit, and the first processing unit processes the signal and then sends a visual focusing coordinate signal to the camera shooting collecting equipment;
the camera shooting and collecting device comprises a second processing unit, the second processing unit is electrically connected with an image capturing module through a wire, the image capturing module is specifically composed of a multi-lens array, when the vision tracking device determines the position of an eye focusing screen, a focusing position image is captured by the multi-lens array to form an initial light field image, and the second processing unit processes the initial light field image to form a final light field image;
the image display device comprises a third processing unit which is electrically connected with the display unit through a lead;
the visual tracking equipment, the camera shooting acquisition equipment and the image display equipment are internally provided with data communication modules;
the second processing unit is also used for coding the light field image and sending the coded image to the image display equipment through the data communication module.
An image focusing enhancement display method based on human eye tracking comprises the following image focusing enhancement display steps:
Step one: determining the visual focusing coordinates by determining the direction of the line of sight on the screen: when a person watches the screen with both eyes, binocular cameras arranged above the screen track the line of sight of each eye respectively and determine the binocular focusing area, that is, the display area of the screen to be enhanced;
step two: controlling camera shooting collection equipment with a built-in multi-lens to capture an initial light field image, and analyzing and processing the initial light field image by a second processing unit to obtain a focus stack and obtain a full focus image and a depth map;
step three: carrying out contrast enhancement processing and local enhancement processing on the image to finally obtain a processing image with enhanced contrast;
step four: and sending the contrast-enhanced processing image to an image display device for display.
The specific method for determining the coordinates of the binocular focusing area in the first step comprises the following steps:
step S601, determining a central point of each eye through a human eye motion capture module;
step S602, determining an initial vector, and calculating to obtain an initial pupil position;
step S603, determining the angle of two eyes;
step S604, determining the sight line direction;
and step S605, determining the coordinates of the sight focusing screen, and transmitting the position information to the camera shooting and collecting equipment.
The specific method for determining the coordinates of the sight line focusing screen in step S605 is as follows:
defining the relative position of the pupil center and the inner canthus as a vector, serving as a target vector from the pupil center to the inner canthus, mapping the target vector to the 3D direction of the pupil center, and adopting a 3D eyeball model for gaze estimation, wherein the movement of the eyeball pupil is represented by a rotation matrix as follows:
[equation image: rotation matrix of the eyeball pupil movement]
let the gaze angles along the x axis and the y axis be recorded respectively (their symbols are given in the equation images), let the distance from the center of the eyeball to the center of the screen be m, the radius of the eyeball be r, and the angles between the camera position and the x axis and the y axis be h and n respectively; the relationship between the gaze angle and the target vector is then expressed as:
[equation image: relation between the gaze angle and the target vector]
the relationship of the gaze position (X, Y) and gaze angle displayed on the screen is represented as:
[equation image: relation between the on-screen gaze position (X, Y) and the gaze angle]
wherein β = arcsin(sin(h) − [term given in an omitted equation image]) − h, and a further omitted equation image defines the symbols for the gaze angles on the x-axis and the y-axis.
The specific method for processing the image data by the image pickup acquisition equipment in the second step is as follows:
step S701: recognizing visual focusing coordinates through a human eye motion capture module;
step S702, transmitting screen coordinate information to a camera shooting and collecting device through a data communication module of the image display device;
step S703, acquiring a focus screen area through an image capture module to obtain an initial light field image;
step S704, analyzing the collected image by the second processing unit to obtain a focus stack, and performing image enhancement processing to generate a light field image.
The specific method for obtaining the light field image in steps S703 and S704 is as follows:
step S801: processing the original image to obtain a focus stack to obtain a full focus image;
step S802: carrying out gray level processing on the full-focus image to generate a gray level image, wherein an adopted RGB-to-YUV data processing formula is as follows:
[equation image: RGB-to-YUV conversion formula]
step S803, contrast enhancement processing is carried out on the gray level image to obtain a first enhanced image;
step S804, performing color reduction processing on the first enhanced image to obtain a color processed image, wherein the YUV-to-RGB data processing formula adopted is as follows:
[equation image: YUV-to-RGB conversion formula]
step S805, adjusting the local part of the image, comparing the color processing image with the full focus image, and adjusting a mapping curve and the local part of the image to obtain a second enhanced image;
and step 806, performing color reduction processing on the second enhanced image, wherein the YUV-to-RGB data processing formula adopted is the same as the YUV-to-RGB data processing formula adopted in the step 804, and obtaining a final light field image.
The mapping curve adopted in step S805 specifically is:
the calculation formula of the adopted mapping curve is as follows:
[equation image: mapping curve combining T0 and T1 with parameters Thr, gain1 and gain2]
In the formula, T0 is the full-focus image mapping curve, T1 is the first enhanced image mapping curve, and Thr is the point where Yin = Yout, at which the image is unchanged and output as the original full-focus image; the gain1 value is used to adjust over-dark image areas and the gain2 value to adjust over-bright image areas, and the local adjustment combines curve T1 with curve T0 to enhance the expression of the image information.
And when the processing image is sent to the image display device, specifically encoding the generated light field image to generate encoded data, sending the encoded data to the image display device, decoding the encoded information by the third processing unit to obtain a final light field image, and displaying the final light field image on a screen by the image display device.
Compared with the prior art, the invention has the following beneficial effects: the invention provides a method of tracking the screen position focused by both eyes and enhancing and displaying the image based on light field information. It is specifically provided with a visual tracking module for identifying the pupils of the user and determining the exact position of the image the eyes focus on, a camera shooting and collecting module for collecting the image and processing the collected initial image to generate a final light field image, and an image display module for displaying the light field image. The invention mainly detects the pupil position and uses this information to track the line of sight and determine the position on the screen focused by the human eyes, captures an initial light field image, applies full-focus and contrast enhancement processing to the light field image, and provides a data processing algorithm for improving the visual effect of the image. This effectively improves the use value of the image, presents the specific information in the screen image more truly and vividly, and provides a better picture effect for the user.
Drawings
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a schematic diagram of an image focusing and enhancing display device according to the present invention;
FIG. 2 is a diagram illustrating an operation status of an image focusing enhancement display method according to the present invention;
FIG. 3 is a flowchart illustrating the steps for determining the coordinates of a gaze focusing screen in accordance with the present invention;
FIG. 4 is a schematic diagram of the coordinate adaptive thresholding measurement for determining a line of sight focusing screen according to the present invention;
FIG. 5 is a schematic view of the present invention illustrating the measurement of the canthus vector within the coordinates of the defined gaze focusing screen;
FIG. 6 is a schematic diagram of an eye model determining gaze in determining gaze focus screen coordinates in accordance with the present invention;
FIG. 7 is a flowchart illustrating the steps of determining the coordinates of the binocular focusing area according to the present invention;
FIG. 8 is a flowchart of the steps for processing image data by the image capture device of the present invention;
FIG. 9 is a flowchart illustrating the steps of the present invention in processing a light field image;
FIG. 10 is a flowchart illustrating operation of an embodiment of the present invention;
FIG. 11 is a flowchart illustrating the steps of processing data by the image display apparatus according to the present invention;
the numbering in the figures means: 1-a visual tracking device; 11-a first processing unit; 12-a human eye motion capture module; 13-visual focus coordinates; 2-camera shooting and collecting equipment; 21-a second processing unit; 22-an image capture module; 23-a network transmission/reception unit; 3-an image display device; 31-a third processing unit; 32-a display unit; 33-a network reception/transmission unit; 34-Focus screen area.
Detailed Description
As shown in fig. 1, the present invention specifically provides an image focusing enhancement display device based on human eye tracking, the system includes: the visual tracking device 1 is used for identifying pupils of a user, determining the sight direction and judging the specific position of a focusing screen by eye tracking; the camera shooting and collecting device 2 is used for collecting images at a focusing position, processing the collected images and generating light field images; and the image display device 3 is used for displaying the processed light field image.
The vision tracking device 1 comprises a first processing unit and a human eye motion capture module connected to the first processing unit. The human eye motion capture module uses a binocular camera to find the pupils and pupil centers, finds the lower canthi with a feature extractor, and then estimates the gaze from an initial vector and a 3D eyeball model to determine the position on the screen focused by the eyes. The first processing unit determines the specific position information of the screen focus and sends the position information to the camera shooting and collecting device to collect images.
The camera shooting and collecting equipment comprises a second processing unit, an image capturing module and a network transmission/receiving unit, wherein the image capturing module is connected with the second processing unit and consists of a plurality of cameras (multi-lens array), and after the visual tracking equipment determines the position of an eye focusing screen, the multi-lens array captures a focusing position image to form an initial light field image; the second processing unit is used for processing the initial light field image to form a final light field image.
The second processing unit is specifically provided with a light field image full focusing and contrast enhancing method, and the method specifically comprises the following steps: controlling an original light field image to generate a focus stack to obtain a series of images focused on different local areas; carrying out full focusing processing on the focus stack to obtain a full focusing image; carrying out gray level processing on the full-focus image to generate a gray level image; carrying out contrast enhancement processing on the gray level image to obtain a first enhanced image; carrying out color reduction processing on the first enhanced image to obtain a color processing image; comparing the color processing image with the original image, and adjusting a mapping curve and the image part to obtain a second enhanced image; and carrying out color reduction processing on the second enhanced image to obtain a final light field image.
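For illustration, the full-focus step above can be sketched as follows (a minimal example in Python with NumPy and OpenCV; the patent does not specify the focus measure, so a Laplacian-energy criterion is assumed, and the argmax index map doubles as the coarse depth map mentioned in the method):

```python
import cv2
import numpy as np

def all_in_focus(focal_stack):
    """Fuse a focal stack (a list of H x W x 3 uint8 images focused at
    different depths) into one all-in-focus image by picking, per pixel,
    the slice with the highest local sharpness (Laplacian energy)."""
    sharpness = []
    for img in focal_stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))       # per-pixel focus measure
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))  # smooth to avoid speckle
    sharpness = np.stack(sharpness)                         # S x H x W
    best = np.argmax(sharpness, axis=0)                     # sharpest slice index = coarse depth map
    stack = np.stack(focal_stack)                           # S x H x W x 3
    h, w = best.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    fused = stack[best, rows, cols]                         # H x W x 3 all-in-focus image
    return fused, best
```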
The image display apparatus includes a third processing unit and a display unit connected to the third processing unit.
The second processing unit is further configured to encode the light field image, and transmit the encoded image to the image display device through the network transmission unit.
The communication content of the camera shooting and collecting equipment comprises: receiving the specific position information of the eye focusing screen determined by the vision tracking device and capturing an initial light field image after the position information is received; and encoding the final light field image and transmitting the encoded image to the image display device through the network transmission unit.
The communication content of the image display apparatus includes: receiving the encoded image and forwarding it to the third processing unit; and transmitting the received position information of the focusing screen area to the communication module of the camera shooting acquisition equipment.
the third processing unit: and the light field image is output to the display unit for displaying.
Further, the present invention aims to provide an image focusing enhancement display device and a display method based on human eye tracking, which are used for improving the picture representation effect.
As shown in fig. 1, a connection structure of control hardware in a light field image display system is specifically shown in a block diagram form, the display system includes a visual tracking device 1, a camera shooting and collecting device 2, and an image display device 3, and specifically, a visual focusing coordinate 13 is confirmed by a human eye visual tracking system, and an initial light field image is collected by matching with the camera shooting and collecting device 2, and the image is processed to generate and display a final light field image.
As shown in fig. 2, which is a schematic diagram of an operation state of the light field image display system, when the user watches the screen with both eyes, the binocular camera arranged above the screen tracks the line of sight of each eye and determines the binocular focusing area, namely the display area of the screen to be enhanced. A built-in multi-lens light field camera captures an initial light field image; the processing unit of the camera shooting collection equipment processes the initial light field image to obtain a focus stack, from which a full-focus image and a depth map are obtained; contrast enhancement and local enhancement are then applied to the image to obtain the contrast-enhanced processing map.
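The cooperation of the three devices can be summarised in a short orchestration sketch (all object and function names here are hypothetical placeholders, not interfaces defined by the patent):

```python
def refresh_focus_region(eye_tracker, light_field_camera, display):
    """One pass of the pipeline in fig. 2; the three arguments are
    hypothetical stand-ins for devices 1, 2 and 3."""
    # Vision tracking device 1: where on the screen do the eyes focus?
    x, y = eye_tracker.gaze_screen_coordinates()

    # Camera acquisition device 2: capture and process the light field
    focal_stack = light_field_camera.capture(region=(x, y))
    all_focus, depth_map = all_in_focus(focal_stack)   # see the focal-stack sketch above
    enhanced = enhance_locally(all_focus)              # contrast + local enhancement, sketched further below

    # Image display device 3: show the enhanced region
    display.show(enhanced, region=(x, y))
```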
The visual tracking device 1 comprises a first processing unit 11 and a human eye motion capture module 12 connected to the first processing unit.
As shown in fig. 3, which is an operation flow diagram for finding the center of the pupil and extracting the lower corner of the eye, the overall feature-finding process is as follows: first the source image is denoised to obtain a filtered image, then the pupil and pupil center are found, the lower eye corner is found using a feature extractor, and the gaze is then estimated from a vector (the arrow in the figure) and a 3D eyeball model to obtain the result image.
To find the pupil center, the pupil area is first searched for; it is usually the area with the lowest pixel intensity. A sliding-window search is used, i.e. the pixel intensities within a moving window are integrated and thresholded against the mean, producing a binary image in which the pupil area is clearly visible. However, some regions may be noisy because of eyelashes or glints, so morphological opening and closing are applied to the binary image to remove the noise. An example of the resulting binary image is shown in fig. 3, in which the pupil region is clearly found.
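A rough sketch of this pupil-segmentation procedure, assuming Python with OpenCV, is given below; the window size, threshold factor and structuring-element size are illustrative values, not ones specified in the patent:

```python
import cv2
import numpy as np

def find_pupil_center(eye_gray, win=15):
    """Locate the pupil as the darkest region of a grayscale eye image:
    mean-filter (sliding-window integration), threshold at a fraction of
    the mean intensity, then clean eyelash/glint noise with morphological
    opening and closing, and return the blob centroid."""
    smoothed = cv2.blur(eye_gray, (win, win))                   # integrate intensities in a moving window
    _, binary = cv2.threshold(smoothed, 0.6 * smoothed.mean(), 255,
                              cv2.THRESH_BINARY_INV)            # dark pupil -> white blob
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove eyelash speckle
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill glint holes
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])           # pupil centroid (x, y)
```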
To find the inner canthus, an adaptive threshold is applied to the input image, giving the result shown on the right of fig. 4; the binary values are then accumulated along the row direction, with the result shown in the middle of fig. 4. Extreme values appear at the top and bottom of the pupil area, corresponding to the positions of the upper and lower eyelids. Since the inner corner of the eye is usually the starting point of the lower eyelid, the point where the lower eyelid line first meets the diagonal line corresponding to the upper eyelid is found and set as the inner corner of the eye.
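The inner-canthus search can likewise be sketched, again as an assumption-laden illustration rather than the patent's exact procedure (the final "first lit pixel of the lower-eyelid row" step is a simplification of the diagonal-intersection rule described above):

```python
import cv2
import numpy as np

def find_inner_canthus(eye_gray):
    """Rough sketch of the inner-corner search: adaptive threshold,
    accumulate the binary image along rows to locate the upper/lower
    eyelid rows, then approximate the inner corner on the lower-eyelid row."""
    binary = cv2.adaptiveThreshold(eye_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 5)
    row_profile = binary.sum(axis=1)                         # accumulate per row
    half = len(row_profile) // 2
    upper_lid = int(np.argmax(row_profile[:half]))           # extremum in the upper half
    lower_lid = int(np.argmax(row_profile[half:])) + half    # extremum in the lower half
    cols = np.flatnonzero(binary[lower_lid])                 # lit pixels on the lower-eyelid row
    if cols.size == 0:
        return None
    return (int(cols[0]), lower_lid)                         # approximate (x, y) of the inner canthus
```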
The relative position of the pupil center and the inner corner of the eye is defined as a vector, called pupil center to inner corner of the eye (PC-IEC) vector, an example of which is indicated by a diagonal arrow in the upper diagram of fig. 5, and in order to map this vector to the 3D direction of the pupil center, a 3D eyeball model for gaze estimation is used, and the movement of the eyeball pupil can be expressed by a rotation matrix as:
[equation image: rotation matrix of the eyeball pupil movement]
Fig. 6 is a schematic diagram of determining the line of sight with the eyeball model, with parameters defined as shown in fig. 6: let the gaze angles on the x axis and the y axis be recorded respectively (their symbols are given in the equation images), let the distance from the center of the eyeball to the center of the screen be m, the radius of the eyeball be r, and the angles between the camera position and the x axis and the y axis be h and n respectively; the relationship between the gaze angle and the PC-IEC vector can then be expressed as:
[equation image: relation between the gaze angle and the PC-IEC vector]
the relationship of the gaze position (X, Y) and gaze angle displayed on the screen can be expressed as:
[equation image: relation between the on-screen gaze position (X, Y) and the gaze angle]
wherein a further equation image gives β = arcsin(sin(h) − ·) − h together with the symbols for the user's gaze angles on the x-axis and the y-axis.
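Since the exact expressions are only given in the omitted equation images, the following sketch shows a deliberately simplified gaze-to-screen mapping under the assumption that the eyeball centre lies on the screen normal; it ignores the eyeball radius r and the camera angles h and n that the patent's formula accounts for:

```python
import math

def gaze_to_screen(theta_x, theta_y, m, screen_w_mm, screen_h_mm, px_w, px_h):
    """Map gaze angles (radians, relative to the screen normal) to a pixel
    coordinate, assuming the eyeball centre sits on the screen normal at
    distance m (mm) from the screen centre."""
    x_mm = m * math.tan(theta_x)               # horizontal offset from the screen centre
    y_mm = m * math.tan(theta_y)               # vertical offset from the screen centre
    X = px_w / 2 + x_mm * px_w / screen_w_mm   # convert mm -> pixels
    Y = px_h / 2 + y_mm * px_h / screen_h_mm
    return X, Y
```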
Fig. 7 shows a flowchart for determining, by eye tracking, the direction of the line of sight toward the screen and thus the visual focusing coordinates 13; it comprises the following steps:
step S601, determining the center of each eye;
step S602, determining an initial vector so as to obtain an initial pupil position;
step S603, determining the angle of two eyes;
step S604, determining the sight line direction;
step S605, determining the coordinates of the sight focusing screen;
the visual focus coordinates 13 are determined according to the above steps and the position information is transmitted to the camera capturing device 2, the camera capturing device 2 comprising a second processing unit 21, an image capturing module 22 connected to the second processing unit and a network transmission/reception unit 23.
Fig. 8 shows a flowchart of the operation of the image capture apparatus 2, including the steps of:
step S701, recognizing a visual focus coordinate 13 through a human eye motion capture module 12 of the visual tracking equipment 1;
step S702, transmitting the screen coordinate information to the image capture apparatus 2 through the network receiving/transmitting unit 33 of the image display apparatus 3;
step S703, acquiring an initial light field image from the focus screen area by the image capture module 22 of the camera capture device 2;
step S704, processing the acquired image by the second processing unit 21 of the camera shooting and acquiring device 2 to obtain a focal stack, and performing image enhancement processing to generate a light field image;
step S705 of transmitting the light field image to the image display apparatus 3 through the network transmission/reception unit 23 of the image capturing apparatus 2;
in step S706, the image display apparatus 3 displays the light field image.
As shown in fig. 9 and fig. 10, a specific example of the image processing in step S704 includes the following steps:
step S801, processing an original image to obtain a focus stack to obtain a full focus image;
step S802, performing gray scale processing (RGB to YUV) on the full-focus image to generate a gray scale image, wherein the RGB to YUV adopts the following formula:
[equation image: RGB-to-YUV conversion formula]
step S803, the gray level image is subjected to contrast enhancement processing to obtain a first enhanced image;
step S804, performing color reduction processing (YUV to RGB) on the first enhanced image to obtain a color processed image, wherein the YUV to RGB adopts the following formula:
[equation image: YUV-to-RGB conversion formula]
after the color processing image is obtained, as shown in fig. 10, it can be seen that the image contrast is significantly enhanced, and some non-obvious information in the full-focus image is intensively expressed, but a local area is too bright or too dark, so that part of information is lost, and therefore, the image needs to be locally adjusted.
Step S805, comparing the color processed image with the full focus image, and adjusting a mapping curve and a local image to obtain a second enhanced image.
And step 806, performing color reduction processing (converting YUV to RGB) on the second enhanced image, wherein a formula is the same as the YUV to RGB data processing formula adopted in the step 804, and obtaining a final light field image.
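The RGB-to-YUV and YUV-to-RGB formulas themselves are only given as images in the original; the sketch below uses the standard BT.601 coefficients as an assumption for steps S802, S804 and S806 (the Y plane being the grayscale image used for contrast enhancement):

```python
import numpy as np

# Standard BT.601 analog-YUV coefficients, used here as an assumption;
# the patent's own RGB<->YUV formulas appear only in the omitted images.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img_rgb):
    """img_rgb: H x W x 3 float array in [0, 1]; returns YUV (Y is the grayscale plane)."""
    return img_rgb @ RGB2YUV.T

def yuv_to_rgb(img_yuv):
    """Inverse transform used for the colour-restoration steps S804 / S806."""
    return np.clip(img_yuv @ YUV2RGB.T, 0.0, 1.0)
```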
The curve adjustment method is as follows; the mapping curve and its specific parameters are shown in fig. 10:
[equation image: mapping curve combining T0 and T1 with parameters Thr, gain1 and gain2]
Here, T0 is the full-focus image mapping curve, T1 is the first enhanced image mapping curve, and Thr is the point where Yin = Yout, at which the image is unchanged and the original full-focus image is output. The gain1 value is used to adjust over-dark areas of the image, and the gain2 value to adjust over-bright areas. The local adjustment combines curve T1 with curve T0 to enhance the expression of the image information.
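Because the exact curve-combination formula is only given as an image, the following sketch is one plausible reading of it: the enhanced luminance Y1 (curve T1) is blended back into the original luminance Y0 (curve T0) with separate weights below and above Thr:

```python
import numpy as np

def blend_curves(Y0, Y1, thr=0.5, gain1=0.6, gain2=0.6):
    """Hypothetical local-adjustment step in the spirit of S805:
    Y0 is the all-focus luminance (curve T0), Y1 the contrast-enhanced
    luminance (curve T1), both in [0, 1]. thr splits dark from bright
    regions; gain1 / gain2 set how strongly the enhanced curve T1
    replaces the original T0 in each region."""
    w = np.where(Y0 < thr, gain1, gain2)    # per-pixel blend weight
    return (1.0 - w) * Y0 + w * Y1          # mix the enhanced curve into the original
```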
The generated light field image is encoded, encoded data is generated, and the encoded data is transmitted to the image display device 3 through the network transmission/reception unit 23.
The image display apparatus 3 includes a third processing unit 31, and a network receiving/transmitting unit 33 and a display unit 32 connected to the third processing unit. Fig. 11 shows a flowchart of the operation of the image display apparatus, including the following specific steps:
step S101, the network receiving/transmitting unit 33 of the image display apparatus 3 receives the encoded information from the image capture apparatus 2;
step S102, the third processing unit 31 of the image display device 3 decodes the encoded information to obtain a final light field image;
in step S103, the display unit 32 of the image display apparatus 3 displays the final light field image onto the screen.
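Steps S705/S706 and S101 to S103 can be illustrated with a minimal encode/decode sketch (PNG via OpenCV is used here purely as an example codec; the patent does not specify the encoding scheme):

```python
import cv2
import numpy as np

def encode_light_field(img_bgr):
    """Step S705 (sketch): compress the final light field image for transmission."""
    ok, buf = cv2.imencode(".png", img_bgr)
    if not ok:
        raise RuntimeError("encoding failed")
    return buf.tobytes()

def decode_and_show(payload, window="focus region"):
    """Steps S102 / S103 (sketch): the display side decodes the byte stream
    and puts the image on screen."""
    img = cv2.imdecode(np.frombuffer(payload, dtype=np.uint8), cv2.IMREAD_COLOR)
    cv2.imshow(window, img)
    cv2.waitKey(1)
```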
The vision tracking device 1, the camera shooting and collecting device 2 and the image display device 3 form a highly integrated system that is simple and convenient to use.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An image focus enhancement display apparatus based on human eye tracking, comprising an image display device for displaying a light field image, characterized in that: the system also comprises a visual tracking device used for identifying the pupils of the user, determining the direction of the sight line and judging the position of the sight line focus on the screen, and a camera shooting and collecting device used for processing and collecting the image at the sight line focus position and generating a light field image;
the visual tracking equipment comprises a first processing unit, the first processing unit is electrically connected with a human eye motion capture module through a lead, the human eye motion capture module sends a collected eye focusing screen position signal to the first processing unit, and the first processing unit processes the signal and then sends a visual focusing coordinate signal to the camera shooting collection equipment;
the camera shooting and collecting device comprises a second processing unit, the second processing unit is electrically connected with an image capturing module through a lead, the image capturing module is specifically composed of a multi-lens array, when the vision tracking device determines the position of an eye focusing screen, a focusing position image is captured by the multi-lens array to form an initial light field image, the second processing unit analyzes and processes the collected initial light field image to obtain a focus stack, and then image enhancement processing is carried out to generate a final light field image;
the image display device comprises a third processing unit which is electrically connected with the display unit through a lead;
the visual tracking equipment, the camera shooting acquisition equipment and the image display equipment are internally provided with data communication modules;
the second processing unit is also used for encoding the light field image and sending the encoded image to the image display device through the data communication module.
2. An image focusing enhancement display method based on human eye tracking is characterized in that: the method comprises the following image focusing enhancement display steps:
Step one: determining the visual focusing coordinates by determining the direction of the line of sight on the screen: when a person watches the screen with both eyes, binocular cameras arranged above the screen track the line of sight of each eye respectively and determine the binocular focusing area, that is, the display area of the screen to be enhanced;
step two: controlling camera shooting collection equipment with built-in multiple lenses to capture an initial light field image, and analyzing and processing the initial light field image by a second processing unit to obtain a focus stack to obtain a full focus image and a depth map;
the specific method for processing the image data by the camera shooting acquisition equipment comprises the following steps:
step S701: recognizing visual focusing coordinates through a human eye motion capture module;
step S702, transmitting screen coordinate information to camera shooting acquisition equipment through a data communication module of the image display equipment;
step S703, acquiring a focus screen area through an image capture module to obtain an initial light field image;
step S704, analyzing and processing the collected image through a second processing unit to obtain a focus stack, and then performing image enhancement processing to generate a light field image;
step three: carrying out contrast enhancement processing and local enhancement processing on the image to finally obtain a processing image with enhanced contrast;
step four: and sending the contrast-enhanced processing image to an image display device for display.
3. The image focusing and enhancing display method based on human eye tracking as claimed in claim 2, wherein: the specific method for determining the coordinates of the binocular focusing area in the first step comprises the following steps:
step S601, determining a central point of each eye through a human eye motion capture module;
step S602, determining an initial vector, and calculating to obtain an initial pupil position;
step S603, determining the angle of two eyes;
step S604, determining the sight line direction;
and step S605, determining the coordinates of the sight focusing screen, and transmitting the position information to the camera shooting and collecting equipment.
4. The image focusing and enhancing display method based on human eye tracking as claimed in claim 3, wherein: the specific method for determining the coordinates of the sight line focusing screen in step S605 is as follows:
defining the relative position of the pupil center and the inner canthus as a vector, serving as a target vector from the pupil center to the inner canthus, mapping the target vector to the 3D direction of the pupil center, and adopting a 3D eyeball model for gaze estimation, wherein the movement of the eyeball pupil is represented by a rotation matrix as follows:
[equation image: rotation matrix of the eyeball pupil movement]
let the gaze angles along the x axis and the y axis be recorded respectively (their symbols are given in the equation images), let the distance from the center of the eyeball to the center of the screen be m, the radius of the eyeball be r, and the angles between the camera position and the x axis and the y axis be h and n respectively; the relationship between the gaze angle and the target vector is then expressed as:
[equation image: relation between the gaze angle and the target vector]
the relationship between the gaze position (X, Y) and the gaze angle displayed on the screen is expressed as:
[equation image: relation between the on-screen gaze position (X, Y) and the gaze angle]
wherein β = arcsin(sin(h) − [term given in an omitted equation image]) − h, and a further omitted equation image defines the symbols for the gaze angles on the x-axis and the y-axis.
5. The image focusing and enhancing display method based on human eye tracking as claimed in claim 2, characterized in that: the specific method for obtaining the light field image in steps S703 and S704 is as follows:
step S801: processing the original image to obtain a focus stack to obtain a full-focus image;
step S802: carrying out gray level processing on the full-focus image to generate a gray level image, wherein an adopted RGB-YUV data processing formula is as follows:
[equation image: RGB-to-YUV conversion formula]
step S803, contrast enhancement processing is carried out on the gray level image to obtain a first enhanced image;
step S804, performing color reduction processing on the first enhanced image to obtain a color processed image, wherein the YUV-to-RGB data processing formula adopted is as follows:
[equation image: YUV-to-RGB conversion formula]
step S805, adjusting the local part of the image, comparing the color processing image with the full focus image, and adjusting a mapping curve and the local part of the image to obtain a second enhanced image;
and step 806, performing color reduction processing on the second enhanced image, wherein the YUV-to-RGB data processing formula adopted is the same as the YUV-to-RGB data processing formula adopted in the step 804, and obtaining a final light field image.
6. The image focusing enhancement display method based on human eye tracking as claimed in claim 5, wherein: the mapping curve adopted in step S805 is specifically:
the calculation formula of the adopted mapping curve is as follows:
[equation image: mapping curve combining T0 and T1 with parameters Thr, gain1 and gain2]
In the formula, T0 is the full-focus image mapping curve, T1 is the first enhanced image mapping curve, and Thr is the point where Yin = Yout, at which the image is unchanged and output as the original full-focus image; the gain1 value is used to adjust over-dark image areas and the gain2 value to adjust over-bright image areas, and the local adjustment combines curve T1 with curve T0 to enhance the expression of the image information.
7. The image focusing and enhancing display method based on human eye tracking as claimed in claim 2, characterized in that: and when the processing image is sent to the image display device, specifically, the generated light field image is coded to generate coded data, the coded data is sent to the image display device, the coded information is decoded by the third processing unit to obtain a final light field image, and the final light field image is displayed on a screen by the image display device.
CN202210944472.7A 2022-08-08 2022-08-08 Image focusing enhancement display device and display method based on human eye tracking Active CN115022616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210944472.7A CN115022616B (en) 2022-08-08 2022-08-08 Image focusing enhancement display device and display method based on human eye tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210944472.7A CN115022616B (en) 2022-08-08 2022-08-08 Image focusing enhancement display device and display method based on human eye tracking

Publications (2)

Publication Number Publication Date
CN115022616A CN115022616A (en) 2022-09-06
CN115022616B true CN115022616B (en) 2022-12-02

Family

ID=83065694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210944472.7A Active CN115022616B (en) 2022-08-08 2022-08-08 Image focusing enhancement display device and display method based on human eye tracking

Country Status (1)

Country Link
CN (1) CN115022616B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116389923A (en) * 2023-06-05 2023-07-04 太原理工大学 Light field image refocusing measurement detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572217A (en) * 2011-12-29 2012-07-11 华为技术有限公司 Visual-attention-based multimedia processing method and device
CN105607730A (en) * 2014-11-03 2016-05-25 航天信息股份有限公司 Eyeball tracking based enhanced display method and apparatus
CN109683335A (en) * 2017-10-19 2019-04-26 英特尔公司 Use showing without 3D glasses light field for eye position
CN112913231A (en) * 2018-10-22 2021-06-04 艾沃鲁什奥普提克斯有限公司 Light field display, adjusted pixel rendering method for the same, and vision correction system and method using the same

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110273369A1 (en) * 2010-05-10 2011-11-10 Canon Kabushiki Kaisha Adjustment of imaging property in view-dependent rendering
US9690099B2 (en) * 2010-12-17 2017-06-27 Microsoft Technology Licensing, Llc Optimized focal area for augmented reality displays
US10895909B2 (en) * 2013-03-04 2021-01-19 Tobii Ab Gaze and saccade based graphical manipulation
CN104469344B (en) * 2014-12-03 2017-03-01 北京智谷技术服务有限公司 Light field display control method and device, light field display device
EP3099055A1 (en) * 2015-05-29 2016-11-30 Thomson Licensing Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product
US10859830B2 (en) * 2018-01-31 2020-12-08 Sony Interactive Entertainment LLC Image adjustment for an eye tracking system
EP3779892A4 (en) * 2018-04-12 2021-05-05 Toppan Printing Co., Ltd. Light-field image generation system, image display system, shape information acquisition server, image generation server, display device, light-field image generation method and image display method
US10871825B1 (en) * 2019-12-04 2020-12-22 Facebook Technologies, Llc Predictive eye tracking systems and methods for variable focus electronic displays
CA3186253A1 (en) * 2020-07-24 2022-01-27 Raul Mihali Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572217A (en) * 2011-12-29 2012-07-11 华为技术有限公司 Visual-attention-based multimedia processing method and device
CN105607730A (en) * 2014-11-03 2016-05-25 航天信息股份有限公司 Eyeball tracking based enhanced display method and apparatus
CN109683335A (en) * 2017-10-19 2019-04-26 英特尔公司 Use showing without 3D glasses light field for eye position
CN112913231A (en) * 2018-10-22 2021-06-04 艾沃鲁什奥普提克斯有限公司 Light field display, adjusted pixel rendering method for the same, and vision correction system and method using the same

Also Published As

Publication number Publication date
CN115022616A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN108427503B (en) Human eye tracking method and human eye tracking device
CN106598221B (en) 3D direction of visual lines estimation method based on eye critical point detection
WO2021068486A1 (en) Image recognition-based vision detection method and apparatus, and computer device
CN108111749B (en) Image processing method and device
EP0932114B1 (en) A method of and apparatus for detecting a face-like region
JP3673834B2 (en) Gaze input communication method using eye movement
CN107368806B (en) Image rectification method, image rectification device, computer-readable storage medium and computer equipment
CN110032271A (en) Contrast control device and its method, virtual reality device and storage medium
CN107948517A (en) Preview screen virtualization processing method, device and equipment
CN108513668B (en) Picture processing method and device
EP2795905B1 (en) Video processing apparatus and method for detecting a temporal synchronization mismatch
CN112001920B (en) Fundus image recognition method, device and equipment
CN115022616B (en) Image focusing enhancement display device and display method based on human eye tracking
CN115171024A (en) Face multi-feature fusion fatigue detection method and system based on video sequence
CN116665313A (en) Deep learning-based eye movement living body detection method and system
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
WO2024125578A1 (en) Gaze information determination method and apparatus, and eye movement tracking device
CN110781712A (en) Human head space positioning method based on human face detection and recognition
CN107911609B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
JP2004157778A (en) Nose position extraction method, program for operating it on computer, and nose position extraction device
CN114445294A (en) Image processing method, computer storage medium, and near-to-eye display device
JP2014120139A (en) Image process device and image process device control method, imaging device and display device
Dauphin et al. Background suppression with low-resolution camera in the context of medication intake monitoring
JP4636314B2 (en) Image processing apparatus and method, recording medium, and program
CN113923501B (en) LED screen panoramic display method and system based on VR virtual reality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231122

Address after: 518000 The podium building 305A of Luohu Investment Holding Building, No. 112, Qingshuihe 1st Road, Qingshuihe Community, Luohu District, Shenzhen, Guangdong

Patentee after: SHENZHEN SPEED ELECTRONICS CO.,LTD.

Address before: 030024 No. 79 West Main Street, Taiyuan, Shanxi, Yingze

Patentee before: Taiyuan University of Technology