CN111857461A - Image display method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN111857461A
Authority
CN
China
Prior art keywords
position information
screen
user
distance
eye
Prior art date
Legal status
Granted
Application number
CN202010605425.0A
Other languages
Chinese (zh)
Other versions
CN111857461B (en)
Inventor
吴正君
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010605425.0A
Publication of CN111857461A
Application granted
Publication of CN111857461B
Legal status: Active

Classifications

    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06V40/166 Human faces: detection, localisation or normalisation using acquisition arrangements
    • G06V40/19 Eye characteristics, e.g. of the iris: sensors therefor

Abstract

The application discloses an image display method and apparatus, an electronic device, and a readable storage medium, and belongs to the field of communication technology. The image display method includes: acquiring first eye data from a face image of a user captured by a camera and determining a target distance between the user's eyes and the screen; determining, according to the target distance and the first eye data, the screen position information gazed at by the user's eyes; acquiring spatial position information of the user's eyes; and determining a target image according to the screen position information and the spatial position information, and displaying the target image. In this way, different picture contents can be viewed without moving the electronic device, which addresses the problem of inconvenient operation for the user.

Description

Image display method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image display method and device, an electronic device and a readable storage medium.
Background
As electronic devices become ever more widespread, most people use them every day, viewing pictures, text and other content on the device screen.
In the course of implementing the present application, the inventor found at least the following problem in the prior art: because the screen of an electronic device can only display a picture the same size as the screen, a user who needs to see more of the screen picture has to rotate the device or slide the object on the screen (for example, when viewing a large picture, the user has to slide it up, down, left and right to see all of its content).
Disclosure of Invention
Embodiments of the present application provide an image display method, an image display apparatus, an electronic device, and a readable storage medium, which can solve the prior-art problem that operation is inconvenient when a user needs to see more of the on-screen picture.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image display method, where the method includes:
acquiring first eye data in a face image of a user acquired by a camera, and determining a target distance between eyes of the user and the screen;
determining screen position information of the user's eye gaze according to the target distance and the first eye data;
acquiring spatial position information of the eyes of the user;
and determining a target image according to the screen position information and the space position information, and displaying the target image.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
the first acquisition module is used for acquiring first eye data in a face image of a user acquired by a camera and determining a target distance between the eyes of the user and the screen;
a first determining module, configured to determine, according to the target distance and the first eye data, screen position information of the user's eye gaze;
a second obtaining module, configured to obtain spatial position information of the user's eyes;
and the display module is used for determining a target image according to the screen position information and the space position information and displaying the target image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, first eye data is extracted from a face image of the user captured by a camera, a target distance between the user's eyes and the screen is determined, the screen position information gazed at by the user's eyes is determined according to the target distance and the first eye data, spatial position information of the user's eyes is acquired, and a target image is determined according to the screen position information and the spatial position information and displayed. Because the screen position information is determined from the target distance and the first eye data (that is, it depends on the target distance), the target image determined from the screen position information and the spatial position information changes as those quantities change. Therefore, when viewing a picture displayed on the electronic device, a user who needs to see more of the screen picture can simply lower (tilt down) the head: the spatial position of the eyes and the gazed screen position after the head movement are obtained in real time, and the target image changes accordingly. Alternatively, the user can change the distance between the eyes and the screen to see different picture content. Different picture contents can thus be viewed without moving the electronic device, which addresses the problem of inconvenient operation for the user.
Drawings
FIG. 1 is a flow chart of the steps of an image display method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a calibration position provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a projection method provided in an embodiment of the present application;
FIG. 5 is a flowchart illustrating steps of another image display method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image display device provided in an embodiment of the present application;
fig. 7 is a schematic structural view of another image display device provided in the embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The image display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of an image display method provided in an embodiment of the present application, where the method may include the steps of:
Step 101, acquiring first eye data in a face image of a user captured by a camera, and determining a target distance between the user's eyes and the screen.
Referring to fig. 2, fig. 2 is a schematic view of an electronic device provided in an embodiment of the present application. The electronic device is provided with an infrared (IR) camera, an RGB camera and a laser emitter, where RGB denotes the red (R), green (G) and blue (B) color channels. When the user's eyes gaze at an object displayed on the screen of the electronic device, the infrared camera can capture an IR-style face image of the user, and the RGB camera can capture an RGB-style face image of the user.
The user may be any user of the electronic device. When the user views an object displayed on the screen (for example, a two-dimensional picture or a three-dimensional object), a face image of the user captured by a camera is acquired, the face position in the image is identified by a face-position detection model, the eye image area in the face image is located according to a preset proportion and preset position of the eye area within the face area, and the eye-socket (orbit) edge coordinates and pupil edge coordinates in the eye image area are calculated with an edge-extraction operator (for example, a Sobel or Roberts operator). The orbit edge coordinates and pupil edge coordinates are used as the first eye data. For example, referring to fig. 2, when an infrared camera and an RGB camera are mounted on the electronic device, first eye data can be acquired from both the IR-style face image and the RGB-style face image.
It should be noted that, when the electronic device has a laser emitter, the edge coordinates of the laser glint (the bright reflection point of the laser on the eye) in the eye image area can also be obtained with the edge-extraction operator, and the orbit edge coordinates, the pupil edge coordinates and the glint edge coordinates together form the first eye data. The first eye data may consist of the orbit, pupil and glint edge coordinates of the left eye only, of the right eye only, or of both eyes at the same time (that is, the eye data of the left eye and the eye data of the right eye taken together).
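By way of illustration only, the sketch below shows one way the orbit, pupil and glint edge coordinates described above might be extracted from a pre-located eye region with a Sobel operator; the function name, the numeric thresholds and the dark/bright heuristics are assumptions, not part of the patent.

    import cv2
    import numpy as np

    def extract_first_eye_data(face_image, eye_roi):
        # face_image: BGR frame from the IR or RGB camera;
        # eye_roi: (x, y, w, h) eye image area located from the face region.
        x, y, w, h = eye_roi
        eye = cv2.cvtColor(face_image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

        # Sobel edge magnitude (a Roberts operator would work similarly).
        gx = cv2.Sobel(eye, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(eye, cv2.CV_32F, 0, 1)
        edges = cv2.magnitude(gx, gy)

        strong = edges > 80          # illustrative edge threshold
        dark = eye < 60              # pupil region is dark
        bright = eye > 240           # laser glint is a small bright spot

        orbit_edge_coords = np.argwhere(strong)
        pupil_edge_coords = np.argwhere(strong & dark)
        glint_edge_coords = np.argwhere(strong & bright)
        return orbit_edge_coords, pupil_edge_coords, glint_edge_coords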
When the user's eyes are gazing at the screen, the target distance between the eyes and the screen can be determined. As shown in fig. 2, when the electronic device is equipped with the laser emitter 201, the laser emitted by the emitter illuminates the user's face, the light reflected by the face enters the camera, and the time from emission of the laser to reception of the reflected light is recorded. Half of the product of the speed of light and this time is the distance between the user's eyes and the screen, and this distance can be used as the target distance; in other words, the target distance is determined by laser ranging.
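The laser-ranging step reduces to half the product of the speed of light and the measured round-trip time; a minimal sketch, with variable names and units assumed:

    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def laser_range_distance(emit_time_s, receive_time_s):
        # Target distance = (speed of light x round-trip time) / 2.
        round_trip_s = receive_time_s - emit_time_s
        return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2.0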
When several cameras are mounted on the electronic device, the distance between the user's eyes and the screen can instead be determined from the eye position information in the face images captured by those cameras, and this distance can be used as the target distance; that is, the target distance is determined using the binocular disparity principle. For example, when one infrared camera 202 and one RGB camera 203 are mounted, the image captured by the infrared camera is an IR-style face image and the image captured by the RGB camera is an RGB-style face image. From the left-eye position information (A1, B1) in the IR-style face image and the left-eye position information (A2, B2) in the RGB-style face image, the distance between the user's eyes and the screen, namely the slope calculated from the two left-eye positions (A1, B1) and (A2, B2), can be calculated, giving the target distance by the binocular disparity principle. If the electronic device is equipped with more than two cameras, the distance can be obtained by a fitting method (for example, least squares) from the left-eye position information in the face images captured by the several cameras. Without a laser emitter, the more cameras are used, the more accurate the fitted distance between the user's eyes and the screen.
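The paragraph above describes the two-camera estimate only in terms of a slope computed from the two left-eye coordinates; purely as an illustrative stand-in, the textbook stereo-depth relation (depth = focal length x baseline / disparity) is sketched below, with the focal length and camera baseline treated as assumed calibration values:

    def stereo_depth(x_in_ir_image, x_in_rgb_image, focal_length_px, baseline_m):
        # The same eye feature is located in the images of the two cameras;
        # the horizontal offset between the detections (the disparity) is
        # inversely proportional to the eye-to-camera distance.
        disparity = abs(x_in_ir_image - x_in_rgb_image)
        if disparity == 0:
            raise ValueError("zero disparity: feature appears at the same position")
        return focal_length_px * baseline_m / disparity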
When several cameras and a laser emitter are mounted on the electronic device, as shown in fig. 2, the target distance can be calculated by combining the laser-measured distance with the binocular-disparity distance. For example, a weight is set for each measurement, and the target distance equals the laser-measured distance multiplied by its weight plus the binocular-disparity distance multiplied by its weight.
Step 102, determining screen position information of the user's eye gaze according to the target distance and the first eye data.
Wherein determining the screen position information of the user's eye gaze according to the target distance and the first eye data may be implemented as follows:
and determining the screen position information gazed at by the user's eyes according to the target distance, the first eye data and a fitting function corresponding to the camera, where the fitting function describes the relationship between the position of any pixel point on the screen, the eye data corresponding to that pixel position, and any distance between the screen and the eye corresponding to that eye data.
The fitting function corresponding to the camera can be obtained through the following method:
acquiring second eye data in the face image of the user, which is acquired by the camera, under the condition that the distance between the eyes of the user and the screen is a preset distance and the eyes of the user watch on the calibration position on the screen;
determining a first fitting function corresponding to the camera according to the preset distance, the calibration position and second eye data corresponding to the calibration position, wherein the first fitting function is used for describing the relationship among the position of a pixel point on a screen, the eye data corresponding to the position of the pixel point and the preset distance;
and determining a second fitting function according to the corresponding relation between the first distance and the zoom ratio and the first fitting function, and taking the second fitting function as the fitting function corresponding to the camera, wherein the first distance is the distance between the eyes of the user and the screen.
The calibration positions are specific positions preset on the screen. For example, as shown in fig. 3, which is a schematic diagram of calibration positions provided in an embodiment of the present application, eight calibration positions are set: calibration positions 301 through 308. At the preset distance, the user's eyes gaze at calibration position 301 and second eye data 1 is acquired from the face image captured by the camera; second eye data 1 is the eye data obtained when the user gazes at calibration position 301 from the preset distance, that is, calibration position 301 corresponds to the preset distance and second eye data 1. Similarly, at the preset distance the user gazes at calibration position 302 and second eye data 2 is acquired, so calibration position 302 corresponds to the preset distance and second eye data 2. By analogy, calibration position 303 corresponds to second eye data 3, calibration position 304 to second eye data 4, calibration position 305 to second eye data 5, calibration position 306 to second eye data 6, calibration position 307 to second eye data 7, and calibration position 308 to second eye data 8, each at the preset distance. From all calibration positions, their preset distances and their second eye data, the first fitting function can be obtained by least squares or another fitting method such as a Chebyshev polynomial fit.
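A minimal sketch of how the first fitting function could be fitted from the eight calibration samples by ordinary least squares, and how the second fitting function could then rescale the result by the distance ratio. The feature layout, the affine model and the proportional rescale are assumptions made here for illustration; the patent only states that a correspondence between the first distance and a zoom ratio is used.

    import numpy as np

    def fit_first_fitting_function(calib_eye_features, calib_screen_positions):
        # calib_eye_features: (N, d) array, one row of eye data per calibration gaze;
        # calib_screen_positions: (N, 2) array of the calibration positions on screen.
        X = np.hstack([calib_eye_features, np.ones((len(calib_eye_features), 1))])
        coeffs, _, _, _ = np.linalg.lstsq(X, calib_screen_positions, rcond=None)
        return coeffs                 # linear map: eye data -> (x, y) at the preset distance

    def second_fitting_function(coeffs, eye_feature, target_distance, preset_distance):
        # Apply the first fit, then rescale by the distance ratio (assumed form of
        # the distance / zoom-ratio correspondence described in the text).
        pos = np.append(eye_feature, 1.0) @ coeffs
        return pos * (target_distance / preset_distance)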
After the fitting function corresponding to the camera is obtained, as long as the target distance between the eyes of the user and the screen and the first eye data of the user are known, the screen position information watched by the eyes of the user can be obtained through the fitting function corresponding to the camera. The screen position information is, for example, coordinates of a screen position of gaze.
And 103, acquiring the spatial position information of the eyes of the user.
From the camera parameters and the eye position information in the face image captured by the camera, the angle between the screen and the line connecting the user's eyes and the camera can be determined; from this angle and the distance between the user's eyes and the screen, the spatial position information of the user's eyes can be obtained.
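A rough sketch of the geometry in step 103: the eye's offset from the image centre and the camera's field of view give the viewing angle, which together with the target distance yields an approximate spatial position of the eye relative to the camera. The pinhole-style angle model and the parameter names are assumptions.

    import math

    def eye_spatial_position(eye_px, image_size, fov_deg, target_distance):
        # eye_px: (u, v) eye position in the face image;
        # image_size: (width, height) of that image in pixels;
        # fov_deg: (horizontal, vertical) field of view of the camera;
        # target_distance: eye-to-screen distance from step 101.
        u, v = eye_px
        w, h = image_size
        angle_x = math.radians((u / w - 0.5) * fov_deg[0])
        angle_y = math.radians((v / h - 0.5) * fov_deg[1])
        return (target_distance * math.tan(angle_x),
                target_distance * math.tan(angle_y),
                target_distance)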
Step 104, determining a target image according to the screen position information and the spatial position information, and displaying the target image.
Determining a target image according to the screen position information and the spatial position information, and displaying the target image, may be implemented by the following steps:
determining the gazing direction of the eyes of the user according to the screen position information and the spatial position information of the eyes;
determining a projection plane according to the gazing direction and the distance between the eyes of the user and a gazed object on the screen;
determining a display area range corresponding to the projection plane according to the spatial position information, the size of the screen and the projection plane;
and determining the target image according to the display area range, and displaying the target image.
When the determined target image is three-dimensional, the clarity with which each object is rendered can be determined by its distance from the object the user's eyes are gazing at. The human eye is generally sensitive to objects within a certain distance of the gazed object and insensitive to objects beyond that distance, so objects within that range can be rendered in high definition and objects beyond it in low definition, reducing the amount of computation. For example, suppose the user is gazing at a three-dimensional image of a vase on the screen (that is, the gazed object is the vase), the plane of the vase is the projection plane, and an apple sits behind the vase. If the apple falls within the display area range, both the vase and the apple behind it are displayed, that is, the displayed target image is three-dimensional, and the clarity of the apple is determined by its distance from the vase. If the set range is 20 cm and the apple is 30 cm from the vase, the apple can be rendered at a lower resolution, reducing the clarity of the displayed three-dimensional apple; if the apple is 10 cm from the vase, it can be rendered at a higher resolution, preserving its clarity.
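The clarity rule in the previous paragraph amounts to selecting a render resolution by distance from the gazed object; a one-function sketch using the 20 cm example above (the 0.5 low-detail scale is an assumption):

    def render_scale(distance_to_gazed_object_cm, high_detail_range_cm=20.0):
        # Full resolution within the set range of the gazed object (e.g. the vase),
        # reduced resolution for objects beyond it (e.g. the apple at 30 cm).
        return 1.0 if distance_to_gazed_object_cm <= high_detail_range_cm else 0.5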
According to the screen position information and the spatial position information, the target image can be determined by projection and then displayed. Referring to fig. 4, fig. 4 is a schematic diagram of a projection method provided in an embodiment of the present application. One of the planes perpendicular to the line connecting the spatial position 401 of the eye and the screen gaze position 402 is taken as the projection plane 403 (the direction from 401 to 402 is the gaze direction of the user's eye); the distance between the projection plane 403 and the user's eye equals the distance between the eye and the gazed object, and the position information corresponding to the eye's spatial position 401 is the spatial position information. For example, when the user's eyes gaze at a vase on the screen, a straight line is drawn between the eyes' spatial position and the vase, and the projection plane is the plane perpendicular to that line whose distance from the eyes equals the distance between the vase and the eyes. If the user is viewing a two-dimensional object, the display area range 405 on the projection plane can be determined from the projection plane, the spatial position information and the size of the screen 404, and the image within the display area range 405 is displayed as the target image.
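Putting the projection steps of step 104 together, a hedged sketch: the gaze direction is the unit vector from the eye's spatial position to the gazed screen position, the projection plane lies at the eye-to-object distance along that direction, and the display area range follows from the screen size by similar triangles. The coordinate convention (screen in the z = 0 plane) and the scaling rule are assumptions.

    import numpy as np

    def projection_view(eye_position, screen_gaze_position, object_distance, screen_size):
        # eye_position, screen_gaze_position: 3D points, screen assumed in the z = 0 plane;
        # object_distance: distance from the eye to the gazed object;
        # screen_size: (width, height) of the physical screen.
        eye = np.asarray(eye_position, dtype=float)
        gaze = np.asarray(screen_gaze_position, dtype=float)

        direction = gaze - eye
        direction = direction / np.linalg.norm(direction)     # gaze direction of the eye

        plane_center = eye + direction * object_distance      # centre of projection plane 403

        eye_to_screen = abs(eye[2])                           # perpendicular eye-to-screen distance
        scale = object_distance / eye_to_screen               # similar-triangles expansion
        display_area = (screen_size[0] * scale, screen_size[1] * scale)
        return direction, plane_center, display_area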
In the image display method provided by this embodiment, while the user's eyes gaze at the screen, first eye data is acquired from the face image captured by the camera, the target distance between the user's eyes and the screen is determined, the screen position gazed at by the user's eyes is determined according to the target distance, the first eye data and the fitting function corresponding to the camera, the spatial position information of the user's eyes is acquired, and a target image is determined according to the screen position information and the spatial position information and displayed. Because the screen position information depends on the target distance, the target image changes whenever the gazed screen position or the eye's spatial position changes. Therefore, when viewing a picture displayed on the electronic device, a user who needs to see more of the screen picture can simply lower the head: the eye's spatial position and the gazed screen position after the movement are obtained in real time, and the target image changes accordingly. Alternatively, the user can change the distance between the eyes and the screen to see different picture content. Different picture contents can thus be viewed without moving the electronic device, which addresses the problem of inconvenient operation. For example, when viewing a large picture, the user no longer has to slide it up, down, left and right; simply lowering the head or changing the eye-to-screen distance brings different areas of the picture into view, so the whole picture can be browsed.
It should be noted that the closer the user's eyes are to the screen, the larger the target in the picture appears; the farther from the screen, the smaller it appears. The user can therefore change the distance between the eyes and the screen so that the size of the target in the picture meets the viewing requirement.
Referring to fig. 5, fig. 5 is a flowchart illustrating steps of another image display method according to an embodiment of the present application, where the method includes the following steps:
Step 501, acquiring second eye data from the face image of the user captured by the camera, when the distance between the user's eyes and the screen is a preset distance and the user's eyes gaze at a calibration position on the screen.
Step 502, determining a first fitting function corresponding to the camera according to the preset distance, the calibration position and second eye data corresponding to the calibration position, wherein the first fitting function is used for describing the relationship among the position of the pixel point on the screen, the eye data corresponding to the position of the pixel point and the preset distance.
Step 503, determining a second fitting function according to the correspondence between the first distance and the zoom ratio and to the first fitting function, and taking the second fitting function as the fitting function corresponding to the camera.
According to steps 501 to 503, a fitting function corresponding to the camera can be determined. In the case where the number of cameras is at least one, the fitting function corresponding to each camera can be obtained by using steps 501 to 503.
It should be noted that the fitting function corresponding to one camera may be a second fitting function 1 determined according to eye data of a left eye of a user, or a second fitting function 2 determined according to eye data of a right eye of the user; or a second fitting function 1 and a second fitting function 2.
Step 504, under the condition that the eyes of the user watch the screen, acquiring first eye data in the face image of the user, which is acquired by the camera, and determining a target distance between the eyes of the user and the screen.
Acquiring the first eye data from the face image of the user captured by the camera may be implemented by the following steps:
acquiring an eye area in a face image of a user acquired by a camera;
the orbital edge coordinates of the left and/or right eye and the edge coordinates of the pupil in the eye region, and the edge coordinates of the glint spot on the left eye by the laser emitted from the laser emitter are acquired, and the orbital edge coordinates of the left and/or right eye and the edge coordinates of the pupil, and the edge coordinates of the glint spot on the left eye are taken as first eye data.
When the number of cameras is plural and the electronic device includes a laser emitter, determining the target distance between the user's eyes and the screen may be accomplished by the following steps:
acquiring left eye position information and/or right eye position information in face images of users acquired by a plurality of cameras;
determining a second distance between the eyes of the user and the screen according to each left eye position information and/or right eye position information;
under the condition that the eyes of the user watch the screen, acquiring the time difference between the laser emitted by the laser emitter and the face image of the user collected by the camera;
determining a third distance between the eyes of the user and the screen according to the time difference;
and determining the target distance according to the second distance, the weight corresponding to the second distance, the third distance and the weight corresponding to the third distance.
It should be noted that, when both left-eye and right-eye position information is acquired from the face images captured by the several cameras, the second distance between the user's eyes and the screen is determined from both. For example, with one infrared camera and one RGB camera mounted on the electronic device, the distance between the user's eyes and the screen, namely slope 1, can be calculated from the left-eye position information (A1, B1) in the IR-style face image and the left-eye position information (A2, B2) in the RGB-style face image. Meanwhile, from the right-eye position information (A3, B3) in the IR-style face image and the right-eye position information (A4, B4) in the RGB-style face image, slope 2 can be calculated, and the average of slope 1 and slope 2 is taken as the second distance between the user's eyes and the screen.
It should be noted that, in the case of acquiring only left eye position information in the face images of the users acquired by the plurality of cameras, the slope 1 may be used as the second distance; in the case where only the right-eye position information in the face images of the user captured by the plurality of cameras is acquired, the slope 2 may be taken as the second distance.
Because laser ranging is highly precise, the weight assigned to the third distance may be greater than the weight assigned to the second distance. For example, with the weight of the third distance set to 0.7 and the weight of the second distance set to 0.3, if the second distance equals 1.2 and the third distance equals 1, the target distance equals 1.2 × 0.3 + 1 × 0.7 = 1.06.
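The weighted combination reduces to a convex blend of the two estimates; the default weights below reproduce the 0.7 / 0.3 example in the text:

    def fuse_distances(second_distance, third_distance, third_weight=0.7):
        # second_distance: binocular-disparity estimate; third_distance: laser estimate,
        # given the larger weight because laser ranging is more precise.
        return second_distance * (1.0 - third_weight) + third_distance * third_weight

    # With the numbers above: fuse_distances(1.2, 1.0) == 1.2 * 0.3 + 1.0 * 0.7 == 1.06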
Step 505, determining first screen position information corresponding to any one target camera among the cameras according to the fitting function corresponding to that target camera, the target distance, and the first eye data in the face image of the user captured by that target camera, where the first screen position information is the information of the screen position gazed at by the user's eyes.
Referring to fig. 2, the electronic device includes an infrared camera and an RGB camera, and the infrared camera is taken as a target camera as an example. If the second fitting function 1 corresponding to the infrared camera is obtained through steps 501 to 503 (the second fitting function 1 is a fitting function obtained through the eye data of the left eye of the user), in step 504, the eye data of the left eye of the user is obtained, the eye data of the left eye is used as the first eye data, and the first screen position information 1 corresponding to the infrared camera can be determined according to the second fitting function 1, the eye data of the left eye, and the target distance, where the first screen position information is the position information obtained through the second fitting function 1.
If the second fitting function 2 corresponding to the infrared camera is obtained through steps 501 to 503 (the second fitting function 2 is a fitting function obtained through the eye data of the right eye of the user), in step 504, the eye data of the right eye of the user is obtained, the eye data of the right eye is used as the first eye data, and the first screen position information 2 corresponding to the infrared camera can be determined according to the second fitting function 2, the eye data of the right eye, and the target distance, where the first screen position information is the position information obtained through the second fitting function 2.
If both second fitting function 1 and second fitting function 2 corresponding to the infrared camera are obtained through steps 501 to 503, then in step 504 the eye data of the left eye and of the right eye may both be acquired, the left-eye data being used as first eye data 1 and the right-eye data as first eye data 2. From second fitting function 1, the left-eye data and the target distance, first screen position information 1 corresponding to the infrared camera can be determined; meanwhile, from second fitting function 2, the right-eye data and the target distance, first screen position information 2 corresponding to the infrared camera can be determined. In this case, the first screen position information corresponding to the infrared camera includes first screen position information 1 and first screen position information 2.
Taking the RGB camera as the target camera, the first screen position information corresponding to the RGB camera can be determined in the same way as described above for the infrared camera.
Step 506, determining the screen position information watched by the eyes of the user according to the first screen position information corresponding to the at least one target camera.
Optionally, determining the screen position information watched by the eyes of the user according to the first screen position information corresponding to the at least one target camera may be implemented as follows:
when unstable second screen position information exists among all the first screen position information, determining the screen position information gazed at by the user's eyes from the remaining first screen position information other than the second screen position information, where the second screen position information is the first screen position information corresponding to the infrared camera and is considered unstable when the fluctuation range of its coordinates exceeds a preset range;
and under the condition that unstable second screen position information does not exist in all the first screen position information, determining the screen position information watched by the eyes of the user according to all the first screen position information.
Because the pupil edge coordinates and orbit edge coordinates can be obtained more accurately from the IR-style face image captured by the infrared camera, interference from the iris is avoided, the first eye data obtained for the user is more accurate, and the screen position information derived from it is more accurate. However, the infrared camera is affected by spectacle coatings: if the user wears glasses, the first eye data obtained in practice may be unclear, and the first screen gaze position information corresponding to the infrared camera then becomes unstable. For example, suppose the first screen gaze position information is considered stable only when the fluctuation of its abscissa and ordinate is greater than 0 and less than 1, and otherwise unstable. If the screen gaze position corresponding to the infrared camera jumps quickly from (0, 1) to (3, 2) (a fluctuation of 3 in the abscissa and 1 in the ordinate) and then from (3, 2) to (1, 0) (a fluctuation of 2 in each coordinate), the first screen gaze position information fluctuates sharply in a short time, so unstable second screen position information exists among the first screen position information. Compared with the infrared camera, the RGB camera is not affected by spectacle coatings, so the screen position information corresponding to it generally does not fluctuate sharply. In summary, by combining the characteristics of the two cameras, the screen position information corresponding to the infrared camera and that corresponding to the RGB camera can be combined when the first screen position information is stable, so that the final screen position gazed at by the user's eyes is determined with accuracy ensured to a certain extent.
For example, when the first screen position information corresponding to the infrared camera is stable, the screen position gazed at by the user's eyes may be determined from the first screen position information of the infrared camera together with that of the RGB camera. If the infrared camera yields a single position, first screen position information 1, and the RGB camera yields a single position, first screen position information A, then the average of their ordinates may be taken as the ordinate, and the average of their abscissas as the abscissa, of the screen position gazed at by the user's eyes.
Alternatively, if the first screen position information corresponding to the infrared camera is unstable (that is, unstable second screen position information exists among all the first screen position information), the first screen position information corresponding to the RGB camera is used directly as the screen position gazed at by the user's eyes, without considering that of the infrared camera, which again ensures the accuracy of the result to a certain extent.
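A compact sketch of the stability test and the IR/RGB combination just described; the history window is an assumption, and the fluctuation bound of 1 follows the example above.

    def is_unstable(recent_ir_positions, max_fluctuation=1.0):
        # recent_ir_positions: recent (x, y) gaze estimates from the infrared camera.
        xs = [p[0] for p in recent_ir_positions]
        ys = [p[1] for p in recent_ir_positions]
        return (max(xs) - min(xs) > max_fluctuation) or (max(ys) - min(ys) > max_fluctuation)

    def combine_gaze(ir_position, rgb_position, recent_ir_positions):
        # Average the IR and RGB estimates when the IR estimate is stable,
        # otherwise fall back to the RGB estimate alone.
        if is_unstable(recent_ir_positions):
            return rgb_position
        return ((ir_position[0] + rgb_position[0]) / 2.0,
                (ir_position[1] + rgb_position[1]) / 2.0)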
Step 507, acquiring the spatial position information of the user's eyes.
Step 508, determining a target image according to the screen position information and the spatial position information, and displaying the target image.
The image display method provided by this embodiment achieves the effect of expanding the displayed picture merely by adding a camera to a conventional electronic device, and can be applied directly to electronic devices that already have a camera, so the implementation cost and difficulty are low.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image display device provided in an embodiment of the present application, where the image display device 600 includes:
a first obtaining module 610, configured to obtain first eye data in the face image of the user acquired by a camera, and determine a target distance between the eyes of the user and the screen;
a first determining module 620, configured to determine, according to the target distance and the first eye data, screen position information of the user's eye gaze;
a second obtaining module 630, configured to obtain spatial position information of the user's eye;
and the display module 640 is configured to determine a target image according to the screen position information and the spatial position information, and display the target image.
In the embodiments of the application, while the user's eyes gaze at the screen, first eye data is acquired from the face image captured by the camera, the target distance between the user's eyes and the screen is determined, the screen position gazed at by the user's eyes is determined according to the target distance and the first eye data, the spatial position information of the user's eyes is acquired, and a target image is determined according to the screen position information and the spatial position information and displayed. Because the screen position information depends on the target distance, the target image changes whenever the gazed screen position or the eye's spatial position changes. Therefore, a user who needs to see more of the picture displayed on the electronic device can simply lower the head, so that the eye's spatial position and the gazed screen position after the movement are obtained in real time and the target image changes; alternatively, the user can change the distance between the eyes and the screen to see different picture content. Different picture contents can thus be viewed without moving the electronic device, which addresses the problem of inconvenient operation for the user.
Optionally, the first determining module 620 is specifically configured to determine the screen position information gazed at by the user's eyes according to the target distance, the first eye data and a fitting function corresponding to the camera, where the fitting function describes the relationship between the position of a pixel point on the screen, the eye data corresponding to that pixel position, and any distance between the screen and the eye corresponding to that eye data.
Alternatively, referring to fig. 7, fig. 7 is a schematic structural diagram of another image display device provided in an embodiment of the present application, where the image display device 700 includes:
a third obtaining module 710, configured to obtain second eye data in the face image of the user, which is acquired by the camera, when a distance between the eyes of the user and the screen is a preset distance and the eyes of the user gaze at the calibrated position on the screen;
a second determining module 720, configured to determine a first fitting function corresponding to the camera according to the preset distance, the calibration position, and the second eye data corresponding to the calibration position, where the first fitting function is used to describe a relationship among the position of the pixel point on the screen, the eye data corresponding to the position of the pixel point, and the preset distance;
A third determining module 730, configured to determine a second fitting function according to the corresponding relationship between the first distance and the zoom ratio and the first fitting function, and use the second fitting function as the fitting function corresponding to the camera, where the first distance is a distance between an eye of the user and the screen.
Optionally, the number of the cameras is at least one; the first determining module 620 includes:
a first determining unit 6201, configured to determine, according to a fitting function corresponding to any one target camera in each camera, the target distance, and first eye data in a face image of a user acquired by the target camera, first screen position information corresponding to the target camera, where the first screen position information is information of a screen position gazed by eyes of the user;
a second determining unit 6202, configured to determine, according to the first screen position information corresponding to the at least one target camera, screen position information watched by the eyes of the user.
Optionally, in the case where the cameras include an infrared camera:
the second determining unit 6202 is specifically configured to determine, when unstable second screen position information exists in all the first screen position information, screen position information watched by eyes of the user according to other screen position information except the second screen position information in all the first screen position information, where the second screen position information is first screen position information corresponding to the infrared camera;
And under the condition that unstable second screen position information does not exist in all the first screen position information, determining the screen position information watched by the eyes of the user according to all the first screen position information.
Optionally, the display module 640 is specifically configured to determine a gazing direction of the eyes of the user according to the screen position information and the spatial position information of the eyes;
determining a projection plane according to the gazing direction and the distance between the eyes of the user and a gazed object on the screen;
determining a display area range corresponding to the projection plane according to the spatial position information, the size of the screen and the projection plane;
and determining the target image according to the display area range, and displaying the target image.
Optionally, the number of the cameras is multiple, and the electronic device includes a laser emitter;
the first obtaining module 610 is specifically configured to obtain left eye position information and/or right eye position information in the face image of the user, which is acquired by a plurality of cameras;
determining a second distance between the eyes of the user and the screen according to each left eye position information and/or right eye position information;
Under the condition that the eyes of the user watch the screen, acquiring the time difference between the laser emitted by the laser emitter and the face image of the user collected by the camera;
determining a third distance between the user's eye and the screen according to the time difference;
and determining the target distance according to the second distance, the weight corresponding to the second distance, the third distance and the weight corresponding to the third distance.
Optionally, the first obtaining module 610 is specifically configured to obtain an eye region in the face image of the user, which is acquired by the camera; acquiring orbital edge coordinates and pupil edge coordinates of a left eye and/or a right eye in the eye area and edge coordinates of a reflex bright point on the left eye by laser emitted by the laser emitter, and taking the orbital edge coordinates and the pupil edge coordinates of the left eye and/or the right eye and the edge coordinates of the reflex bright point on the left eye as the first eye data.
The image display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a kiosk, and the like, and the embodiments of the present application are not particularly limited.
The image display device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited in this respect.
The image display device provided in the embodiment of the present application can implement each process implemented by the image display device in the method embodiments of fig. 1 and fig. 5, and is not described herein again to avoid repetition.
Optionally, an electronic device is further provided in an embodiment of the present application, as shown in fig. 8, fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application. The electronic device 800 includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction implements the processes of the above-described embodiment of the image display method when executed by the processor 801, and can achieve the same technical effects, and therefore, for avoiding repetition, the description thereof is omitted here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so that charging, discharging, and power consumption management functions are handled by the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, or combine some components, or arrange components differently, and the description is not repeated here.
The processor 910 is configured to, when a user's eyes watch a screen, acquire first eye data in an image of a face of the user acquired by a camera, and determine a target distance between the user's eyes and the screen;
determining screen position information watched by the eyes of the user according to the target distance, the first eye data and a fitting function corresponding to the camera, wherein the fitting function is used for describing the relationship among the position of any pixel point on the screen, the eye data corresponding to the position of the pixel point, and any distance between the eyes corresponding to the eye data and the screen;
Acquiring spatial position information of the eyes of the user;
according to the screen position information and the spatial position information, a target image is determined and displayed through the display unit 906.
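The four processor steps above form a simple pipeline. The following sketch shows only the order and data flow; every callable passed in is a hypothetical placeholder for the corresponding step, not the embodiment's actual implementation.

```python
from typing import Callable

def display_for_gaze(
    frame,
    extract_eye_data: Callable,       # step 1a: first eye data from the face image
    estimate_distance: Callable,      # step 1b: target distance between the eyes and the screen
    fitting_function: Callable,       # step 2: maps (eye data, distance) -> gazed screen position
    eye_spatial_position: Callable,   # step 3: spatial position of the user's eyes
    render_target_image: Callable,    # step 4: determine the target image to display
):
    """Sketch of the processing order only; each callable stands in for one step."""
    eye_data = extract_eye_data(frame)
    distance = estimate_distance(frame)
    screen_xy = fitting_function(eye_data, distance)
    eye_pos = eye_spatial_position(frame)
    return render_target_image(screen_xy, eye_pos)

# Toy usage with placeholder callables, just to show the data flow.
image = "captured face image"
print(display_for_gaze(
    image,
    extract_eye_data=lambda f: {"pupil": (150, 92)},
    estimate_distance=lambda f: 0.45,                      # metres
    fitting_function=lambda d, dist: (960, 540),           # screen pixel gazed at
    eye_spatial_position=lambda f: (0.0, 0.1, 0.45),       # metres, relative to screen centre
    render_target_image=lambda xy, pos: f"render around {xy} for eye at {pos}",
))
```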
The processor 910 is further configured to obtain second eye data in the face image of the user, which is acquired by the camera, when a distance between the eyes of the user and the screen is a preset distance and the eyes of the user gaze at the calibrated position on the screen;
determining a first fitting function corresponding to the camera according to the preset distance, the calibration position and the second eye data corresponding to the calibration position, wherein the first fitting function is used for describing the relationship among the position of the pixel point on the screen, the eye data corresponding to the position of the pixel point and the preset distance;
and determining a second fitting function according to the corresponding relation between the first distance and the zoom ratio and the first fitting function, and taking the second fitting function as the fitting function corresponding to the camera, wherein the first distance is the distance between the eyes of the user and the screen.
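A minimal sketch of this calibration is given below. It assumes the eye data reduces to a single pupil offset, the first fitting function is a least-squares linear map fitted at the preset distance, and the zoom ratio scales the eye offset in proportion to the ratio of the current distance to the preset distance; the embodiment may use richer eye data and a different functional form.

```python
import numpy as np

# Sketch only: the helper names, the linear model, and the zoom-ratio rule are assumptions.
PRESET_DISTANCE = 0.40  # metres, the calibration distance

def fit_first_function(eye_offsets: np.ndarray, screen_points: np.ndarray) -> np.ndarray:
    """Fit screen = [ex, ey, 1] @ A from calibration samples taken at the preset distance."""
    features = np.column_stack([eye_offsets, np.ones(len(eye_offsets))])   # (N, 3)
    coeffs, *_ = np.linalg.lstsq(features, screen_points, rcond=None)      # (3, 2)
    return coeffs

def second_fitting_function(coeffs: np.ndarray, eye_offset, distance: float):
    """Apply the first function after rescaling the eye offset by the zoom ratio."""
    zoom = distance / PRESET_DISTANCE
    ex, ey = eye_offset[0] * zoom, eye_offset[1] * zoom
    return np.array([ex, ey, 1.0]) @ coeffs

# Toy calibration: four gaze targets shown at the preset distance.
offsets = np.array([[-10, -6], [10, -6], [-10, 6], [10, 6]], dtype=float)
targets = np.array([[100, 100], [1820, 100], [100, 980], [1820, 980]], dtype=float)
A = fit_first_function(offsets, targets)
print(second_fitting_function(A, (4.0, 2.0), distance=0.50))
```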
The number of the cameras is at least one; the processor 910 is further configured to determine, for any target camera among the cameras, first screen position information corresponding to the target camera according to the fitting function corresponding to the target camera, the target distance, and first eye data in a face image of the user acquired by the target camera, where the first screen position information is information of a screen position gazed at by the eyes of the user;
and determining the screen position information watched by the eyes of the user according to the first screen position information corresponding to at least one target camera.
The processor 910 is further configured to, when unstable second screen position information exists in all the first screen position information, determine, according to other screen position information except the second screen position information in all the first screen position information, screen position information watched by eyes of the user, where the second screen position information is first screen position information corresponding to the infrared camera, and a fluctuation range of a coordinate corresponding to the unstable second screen position information is greater than a preset range;
and under the condition that unstable second screen position information does not exist in all the first screen position information, determining the screen position information watched by the eyes of the user according to all the first screen position information.
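The selection logic above can be sketched as follows, assuming each camera keeps a short history of recent gaze-point estimates and that the "fluctuation range" is the peak-to-peak spread of those estimates; the threshold and the averaging rule are illustrative, and the stability test is shown applied to every camera's estimate rather than only to the infrared camera's.

```python
import numpy as np

FLUCTUATION_LIMIT_PX = 40.0  # assumed preset range for declaring a result unstable

def is_unstable(history: np.ndarray) -> bool:
    """history: (N, 2) recent (x, y) screen-position estimates from one camera."""
    spread = history.max(axis=0) - history.min(axis=0)
    return bool((spread > FLUCTUATION_LIMIT_PX).any())

def fuse_screen_positions(per_camera_history: dict) -> np.ndarray:
    """Average the latest estimate of every camera whose recent history is stable."""
    stable = {name: h for name, h in per_camera_history.items() if not is_unstable(np.asarray(h))}
    chosen = stable if stable else per_camera_history          # no unstable results -> use everything
    latest = np.array([np.asarray(h)[-1] for h in chosen.values()], dtype=float)
    return latest.mean(axis=0)

histories = {
    "rgb_front": [(900, 500), (905, 498), (902, 503)],
    "infrared": [(880, 480), (990, 610), (860, 470)],  # large jitter -> discarded
}
print(fuse_screen_positions(histories))
```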
A processor 910, further configured to determine a gazing direction of the eyes of the user according to the screen position information and the spatial position information of the eyes;
determining a projection plane according to the gazing direction and the distance between the eyes of the user and a gazed object on the screen;
determining a display area range corresponding to the projection plane according to the spatial position information, the size of the screen and the projection plane;
according to the display area range, the target image is determined and displayed through the display unit 906.
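A simplified geometric sketch of these steps is given below. It assumes the gazed screen point and the eye position are expressed in the same metric, screen-centred frame, that the projection plane is placed at the gazed object's distance along the gaze direction, and that the display-area extent follows a similar-triangles scaling of the physical screen size; the embodiment's exact construction may differ.

```python
import numpy as np

def gaze_direction(eye_pos_m: np.ndarray, gaze_point_on_screen_m: np.ndarray) -> np.ndarray:
    """Unit vector from the eye to the gazed point on the screen."""
    v = gaze_point_on_screen_m - eye_pos_m
    return v / np.linalg.norm(v)

def display_area_half_extent(screen_size_m, eye_to_screen_m: float, eye_to_object_m: float):
    """Half-width/half-height of the projection-plane region covered by the screen (similar triangles)."""
    w, h = screen_size_m
    scale = eye_to_object_m / eye_to_screen_m
    return (w * scale / 2.0, h * scale / 2.0)

eye = np.array([0.0, 0.05, 0.45])          # metres, relative to the screen centre
gaze_point = np.array([0.10, -0.02, 0.0])  # gazed point on the screen plane
print(gaze_direction(eye, gaze_point))
print(display_area_half_extent((0.15, 0.07), eye_to_screen_m=0.45, eye_to_object_m=0.60))
```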
The number of cameras is plural, and the processor 910 is further configured to obtain left eye position information and/or right eye position information in the face image of the user acquired by the plurality of cameras;
determining a second distance between the eyes of the user and the screen according to each left eye position information and/or right eye position information;
under the condition that the eyes of the user watch the screen, acquiring the time difference between the moment at which the laser emitter emits the laser and the moment at which the camera collects the face image of the user;
determining a third distance between the user's eye and the screen according to the time difference;
And determining the target distance according to the second distance, the weight corresponding to the second distance, the third distance and the weight corresponding to the third distance.
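The weighted combination can be sketched as follows; the weight values in the example are illustrative, since the embodiment only states that each distance estimate has a corresponding weight.

```python
# Minimal sketch of the weighted combination described above.
def fuse_target_distance(second_distance: float, w2: float,
                         third_distance: float, w3: float) -> float:
    """Weighted average of the multi-camera estimate and the time-of-flight estimate."""
    return (second_distance * w2 + third_distance * w3) / (w2 + w3)

# Example: trust the time-of-flight estimate slightly more than the multi-camera estimate.
print(fuse_target_distance(0.47, 0.4, 0.45, 0.6))
```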
A processor 910, further configured to obtain an eye region in the face image of the user acquired by the camera;
acquiring orbital edge coordinates and pupil edge coordinates of the left eye and/or the right eye in the eye region, and edge coordinates of the reflective bright spot formed on the left eye by the laser emitted by the laser emitter, and taking the orbital edge coordinates and the pupil edge coordinates of the left eye and/or the right eye and the edge coordinates of the reflective bright spot on the left eye as the first eye data.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the graphics processing unit 9041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 can be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. An image display method, comprising:
acquiring first eye data in a face image of a user acquired by a camera, and determining a target distance between eyes of the user and the screen;
determining screen position information of the user's eye gaze according to the target distance and the first eye data;
acquiring spatial position information of the eyes of the user;
and determining a target image according to the screen position information and the space position information, and displaying the target image.
2. The method of claim 1, wherein determining screen location information of the user's eye gaze based on the target distance and the first eye data comprises:
and determining screen position information watched by the eyes of the user according to the target distance, the first eye data and a fitting function corresponding to the camera, wherein the fitting function is used for describing the relationship among the position of any pixel point on the screen, the eye data corresponding to the position of the pixel point, and any distance between the eyes corresponding to the eye data and the screen.
3. The method of claim 2, wherein prior to acquiring first eye data in an image of a user's face captured by a camera and determining a target distance between the user's eyes and the screen, further comprising:
acquiring second eye data in the face image of the user, which is acquired by the camera, under the condition that the distance between the eyes of the user and the screen is a preset distance and the eyes of the user watch on the calibrated position on the screen;
determining a first fitting function corresponding to the camera according to the preset distance, the calibration position and the second eye data corresponding to the calibration position, wherein the first fitting function is used for describing the relationship among the position of the pixel point on the screen, the eye data corresponding to the position of the pixel point and the preset distance;
and determining a second fitting function according to the corresponding relation between the first distance and the zoom ratio and the first fitting function, and taking the second fitting function as the fitting function corresponding to the camera, wherein the first distance is the distance between the eyes of the user and the screen.
4. The method of claim 2, wherein the number of cameras is at least one; the determining, according to the target distance, the first eye data, and the fitting function corresponding to the camera, screen position information of the user's eye gaze includes:
determining first screen position information corresponding to a target camera according to a fitting function corresponding to any one target camera in each camera, the target distance and first eye data in a face image of a user, wherein the first eye data are acquired by the target camera, and the first screen position information is information of a screen position watched by eyes of the user;
and determining the screen position information watched by the eyes of the user according to the first screen position information corresponding to at least one target camera.
5. The method of claim 4, wherein, in the case where an infrared camera is included among all of the cameras, the determining, according to the first screen position information corresponding to at least one of the target cameras, the screen position information watched by the eyes of the user includes:
under the condition that unstable second screen position information exists in all the first screen position information, determining screen position information watched by eyes of the user according to other screen position information except the second screen position information in all the first screen position information, wherein the second screen position information is first screen position information corresponding to the infrared camera, and the fluctuation range of coordinates corresponding to the unstable second screen position information is larger than a preset range;
And under the condition that unstable second screen position information does not exist in all the first screen position information, determining the screen position information watched by the eyes of the user according to all the first screen position information.
6. The method according to any one of claims 1-5, wherein the determining a target image according to the screen position information and the spatial position information and displaying the target image comprises:
determining the gazing direction of the eyes of the user according to the screen position information and the spatial position information of the eyes;
determining a projection plane according to the gazing direction and the distance between the eyes of the user and a gazed object on the screen;
determining a display area range corresponding to the projection plane according to the spatial position information, the size of the screen and the projection plane;
and determining the target image according to the display area range, and displaying the target image.
7. The method of claim 1, wherein the number of cameras is plural, the electronic device comprises a laser emitter; the determining a target distance between the user's eye and the screen includes:
Acquiring left eye position information and/or right eye position information in the face image of the user acquired by a plurality of cameras;
determining a second distance between the eyes of the user and the screen according to each left eye position information and/or right eye position information;
under the condition that the eyes of the user watch the screen, acquiring the time difference between the moment at which the laser emitter emits the laser and the moment at which the camera collects the face image of the user;
determining a third distance between the user's eye and the screen according to the time difference;
and determining the target distance according to the second distance, the weight corresponding to the second distance, the third distance and the weight corresponding to the third distance.
8. The method of claim 1, wherein, in the case where the electronic device includes a laser emitter, the acquiring first eye data in the image of the user's face captured by the camera comprises:
acquiring an eye area in the face image of the user acquired by the camera;
acquiring orbital edge coordinates and pupil edge coordinates of the left eye and/or the right eye in the eye region, and edge coordinates of the reflective bright spot formed on the left eye by the laser emitted by the laser emitter, and taking the orbital edge coordinates and the pupil edge coordinates of the left eye and/or the right eye and the edge coordinates of the reflective bright spot on the left eye as the first eye data.
9. An image display apparatus, comprising:
the first acquisition module is used for acquiring first eye data in a face image of a user acquired by a camera and determining a target distance between the eyes of the user and the screen;
a first determining module, configured to determine, according to the target distance and the first eye data, screen position information of the user's eye gaze;
a second obtaining module, configured to obtain spatial position information of the user's eyes;
and the display module is used for determining a target image according to the screen position information and the space position information and displaying the target image.
10. The apparatus according to claim 9, wherein the first determining module is specifically configured to determine the screen position information of the user's eye gaze according to the target distance, the first eye data, and a fitting function corresponding to the camera, where the fitting function is used to describe the relationship among the position of any pixel point on the screen, the eye data corresponding to the position of the pixel point, and any distance between the eyes corresponding to the eye data and the screen.
11. The apparatus of claim 10, further comprising:
the third acquisition module is used for acquiring second eye data in the face image of the user, which is acquired by the camera, under the condition that the distance between the eyes of the user and the screen is a preset distance and the eyes of the user watch on the calibrated position on the screen;
a second determining module, configured to determine a first fitting function corresponding to the camera according to the preset distance, the calibration position, and the second eye data corresponding to the calibration position, where the first fitting function is used to describe a relationship among a position of a pixel point on the screen, eye data corresponding to the position of the pixel point, and the preset distance;
and the third determining module is used for determining a second fitting function according to the corresponding relation between the first distance and the zoom ratio and the first fitting function, and taking the second fitting function as the fitting function corresponding to the camera, wherein the first distance is the distance between the eyes of the user and the screen.
12. The apparatus of claim 10, wherein the number of cameras is at least one; the first determining module includes:
The first determining unit is used for determining first screen position information corresponding to a target camera according to a fitting function corresponding to any one target camera in each camera, the target distance and first eye data in a face image of a user, wherein the first eye data is acquired by the target camera, and the first screen position information is information of a screen position watched by eyes of the user;
and the second determining unit is used for determining the screen position information watched by the eyes of the user according to the first screen position information corresponding to at least one target camera.
13. The apparatus of claim 12, wherein, in the case where an infrared camera is included among all of the cameras,
the second determining unit is specifically configured to determine, when unstable second screen position information exists in all the first screen position information, screen position information watched by eyes of the user according to other screen position information except the second screen position information in all the first screen position information, where the second screen position information is first screen position information corresponding to the infrared camera, and a fluctuation range of a coordinate corresponding to the unstable second screen position information is greater than a preset range;
And under the condition that unstable second screen position information does not exist in all the first screen position information, determining the screen position information watched by the eyes of the user according to all the first screen position information.
14. The apparatus according to any of claims 9 to 13, wherein the display module is specifically configured to determine a gazing direction of the eyes of the user according to the screen position information and the spatial position information of the eyes;
determining a projection plane according to the gazing direction and the distance between the eyes of the user and a gazed object on the screen;
determining a display area range corresponding to the projection plane according to the spatial position information, the size of the screen and the projection plane;
and determining the target image according to the display area range, and displaying the target image.
15. The apparatus of claim 9, wherein the number of cameras is plural, and the electronic device comprises a laser emitter;
the first acquisition module is specifically used for acquiring left eye position information and/or right eye position information in the face image of the user acquired by the multiple cameras;
Determining a second distance between the eyes of the user and the screen according to each left eye position information and/or right eye position information;
under the condition that the eyes of the user watch the screen, acquiring the time difference between the moment at which the laser emitter emits the laser and the moment at which the camera collects the face image of the user;
determining a third distance between the user's eye and the screen according to the time difference;
and determining the target distance according to the second distance, the weight corresponding to the second distance, the third distance and the weight corresponding to the third distance.
16. The apparatus according to claim 9, wherein the first obtaining module is specifically configured to: obtain an eye region in the face image of the user captured by the camera; acquire orbital edge coordinates and pupil edge coordinates of the left eye and/or the right eye in the eye region, and edge coordinates of the reflective bright spot formed on the left eye by the laser emitted by the laser emitter; and take the orbital edge coordinates and the pupil edge coordinates of the left eye and/or the right eye and the edge coordinates of the reflective bright spot on the left eye as the first eye data.
17. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the image display method according to any one of claims 1 to 8.
18. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image display method according to any one of claims 1 to 8.
CN202010605425.0A 2020-06-29 2020-06-29 Image display method and device, electronic equipment and readable storage medium Active CN111857461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605425.0A CN111857461B (en) 2020-06-29 2020-06-29 Image display method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605425.0A CN111857461B (en) 2020-06-29 2020-06-29 Image display method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111857461A true CN111857461A (en) 2020-10-30
CN111857461B CN111857461B (en) 2021-12-24

Family

ID=72989163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605425.0A Active CN111857461B (en) 2020-06-29 2020-06-29 Image display method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111857461B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327346A (en) * 2021-12-27 2022-04-12 北京百度网讯科技有限公司 Display method, display device, electronic apparatus, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103809737A (en) * 2012-11-13 2014-05-21 华为技术有限公司 Method and device for human-computer interaction
CN108595011A (en) * 2018-05-03 2018-09-28 北京京东金融科技控股有限公司 Information displaying method, device, storage medium and electronic equipment
CN109271914A (en) * 2018-09-07 2019-01-25 百度在线网络技术(北京)有限公司 Detect method, apparatus, storage medium and the terminal device of sight drop point
CN110706283A (en) * 2019-11-14 2020-01-17 Oppo广东移动通信有限公司 Calibration method and device for sight tracking, mobile terminal and storage medium
US20200050280A1 (en) * 2018-08-10 2020-02-13 Beijing 7Invensun Technology Co., Ltd. Operation instruction execution method and apparatus, user terminal and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103809737A (en) * 2012-11-13 2014-05-21 华为技术有限公司 Method and device for human-computer interaction
CN108595011A (en) * 2018-05-03 2018-09-28 北京京东金融科技控股有限公司 Information displaying method, device, storage medium and electronic equipment
US20200050280A1 (en) * 2018-08-10 2020-02-13 Beijing 7Invensun Technology Co., Ltd. Operation instruction execution method and apparatus, user terminal and storage medium
CN109271914A (en) * 2018-09-07 2019-01-25 百度在线网络技术(北京)有限公司 Detect method, apparatus, storage medium and the terminal device of sight drop point
CN110706283A (en) * 2019-11-14 2020-01-17 Oppo广东移动通信有限公司 Calibration method and device for sight tracking, mobile terminal and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327346A (en) * 2021-12-27 2022-04-12 北京百度网讯科技有限公司 Display method, display device, electronic apparatus, and storage medium
CN114327346B (en) * 2021-12-27 2023-09-29 北京百度网讯科技有限公司 Display method, display device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN111857461B (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US11803055B2 (en) Sedentary virtual reality method and systems
CN106662930B (en) Techniques for adjusting a perspective of a captured image for display
US10182720B2 (en) System and method for interacting with and analyzing media on a display using eye gaze tracking
US20170195664A1 (en) Three-dimensional viewing angle selecting method and apparatus
US11663689B2 (en) Foveated rendering using eye motion
EP3671408B1 (en) Virtual reality device and content adjusting method therefor
EP3286601B1 (en) A method and apparatus for displaying a virtual object in three-dimensional (3d) space
CN109496293B (en) Extended content display method, device, system and storage medium
EP3683656A1 (en) Virtual reality (vr) interface generation method and apparatus
CN111970456B (en) Shooting control method, device, equipment and storage medium
TW202025719A (en) Method, apparatus and electronic device for image processing and storage medium thereof
US20200402321A1 (en) Method, electronic device and storage medium for image generation
CN111857461B (en) Image display method and device, electronic equipment and readable storage medium
TWI603225B (en) Viewing angle adjusting method and apparatus of liquid crystal display
US10345595B2 (en) Head mounted device with eye tracking and control method thereof
CN111782053B (en) Model editing method, device, equipment and storage medium
JP6686319B2 (en) Image projection device and image display system
US20200166997A1 (en) Method for transmission of eye tracking information, head mounted display and computer device
CN117478931A (en) Information display method, information display device, electronic equipment and storage medium
CN116193246A (en) Prompt method and device for shooting video, electronic equipment and storage medium
CN117372475A (en) Eyeball tracking method and electronic equipment
CN115599331A (en) Rendering picture updating method and device, augmented reality equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant