CN118330888A - Picture display method and device, wearable device and storage medium

Picture display method and device, wearable device and storage medium

Info

Publication number
CN118330888A
CN118330888A
Authority
CN
China
Prior art keywords
display
area
parameter
user
display parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410541308.0A
Other languages
Chinese (zh)
Inventor
张吉松 (Zhang Jisong)
夏勇峰 (Xia Yongfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beehive Century Technology Co ltd
Original Assignee
Beijing Beehive Century Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Beehive Century Technology Co ltd
Priority to CN202410541308.0A
Publication of CN118330888A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a picture display method and device, a wearable device, and a storage medium, belonging to the field of electronic equipment. The method comprises: determining a first display parameter of a first area based on an eye state of a first user, where the first user is a user wearing a first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user; determining a second display parameter of a second area, where the second area belongs to the target display area and is different from the first area, and the second display parameter is different from the first display parameter; and displaying the first area with the first display parameter and the second area with the second display parameter. The picture display method and device, wearable device, and storage medium can reduce the amount of computation of the wearable device.

Description

Picture display method and device, wearable device and storage medium
Technical Field
The disclosure belongs to the technical field of electronic equipment, and in particular relates to a picture display method and device, a wearable device, and a storage medium.
Background
Head wearable devices (including smart glasses, smart helmets, and the like) can superimpose computer-generated images, text, or other information onto the user's field of view, creating an augmented view of the real world. The user can interact with the virtual information in real time by voice, gestures, and other means; for example, the user may manipulate a 3D model directly with both hands, and the model changes accordingly. In short, a head wearable device can provide the user with a richer and more convenient experience. However, rendering the virtual image involves a large amount of computation, which calls for improvement.
Disclosure of Invention
The disclosure aims to provide a picture display method and device, a wearable device, and a storage medium, so as to reduce the amount of computation of the wearable device.
In a first aspect of an embodiment of the present disclosure, there is provided a picture display method, including:
determining a first display parameter of a first area based on an eye state of a first user;
wherein the first user is a user wearing a first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user;
determining a second display parameter of a second area, where the second area belongs to the target display area and is different from the first area, and the second display parameter is different from the first display parameter;
and displaying the first area with the first display parameter, and displaying the second area with the second display parameter.
In a second aspect of the embodiments of the present disclosure, there is provided a picture display device including:
a first processing module, configured to determine a first display parameter of a first area based on an eye state of a first user;
wherein the first user is a user wearing a first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user;
a second processing module, configured to determine a second display parameter of a second area, where the second area belongs to the target display area and is different from the first area, and the second display parameter is different from the first display parameter;
and a picture display module, configured to display the first area with the first display parameter and display the second area with the second display parameter.
In a third aspect of the disclosed embodiments, a wearable device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the above picture display method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described picture display method.
The picture display method and device, wearable device, and storage medium provided by the present disclosure have the following beneficial effects:
In this embodiment, it is considered that the eye state may be affected by picture quality: for example, a blurred picture is more likely to cause eye fatigue, and the pupil may contract when the picture contrast is too high. Therefore, the first display parameter of the eye gaze area (the first area) is first determined according to the eye state; for example, when eye fatigue is detected, the picture quality of the first area is improved by adjusting the first display parameter, alleviating the fatigue and improving the user experience. Then, since the second area outside the first area is not where the human eye is gazing, its picture quality can be appropriately reduced to cut the computation required for image rendering; a second display parameter different from the first display parameter is therefore set, and the second area is displayed with it. In this way, the amount of computation is reduced while the picture quality of the eye gaze area is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a picture display method provided by an embodiment of the disclosure;
Fig. 2 is a flowchart of another picture display method provided by an embodiment of the disclosure;
Fig. 3 is a structural block diagram of a picture display device provided by an embodiment of the disclosure;
Fig. 4 is a schematic block diagram of a wearable device provided by an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
First, some terms involved in the embodiments of the present disclosure will be explained.
Eye movement tracking: eye tracking is a technology that monitors and records the movement of a person's line of sight. Its basic principle is to capture eye movements, including pupil position and corneal curvature, through various sensors; this information is converted into electrical signals and then analyzed by an algorithm to derive the direction of the user's line of sight.
Image recognition: image recognition is a method of automatically recognizing and classifying objects in an image using computer vision techniques. The basic process of image recognition comprises information acquisition, preprocessing, feature extraction and selection, classifier design, and classification decision.
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings.
Referring to Fig. 1, Fig. 1 is a flowchart of a picture display method provided by an embodiment of the disclosure, where the method includes:
s101: a first display parameter of the first region is determined based on an eye state of the first user.
The first user is a user wearing the first device, the first area is an area corresponding to the eye gaze direction of the first user, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user.
In this embodiment, the first device includes, but is not limited to, head wearable devices such as AR glasses, VR glasses, XR glasses, and smart helmets. Taking AR glasses as an example, when the first user uses the first device, the display area the first user can see is the target display area; meanwhile, through eye tracking, the first device can detect the first user's line-of-sight direction in real time and determine the eye gaze area (the first area) from that direction.
There are various methods for determining the human eye gaze area from the line-of-sight direction. For example, the first user's gaze point in the target display area may be determined from the line-of-sight direction, a circular area may then be determined with the gaze point as the center and a constant R as the radius, and that circular area is used as the first area.
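As an illustration only, the circle test above can be written as a small helper; the mapping from gaze direction to a gaze point in display-pixel coordinates, and the constant R, are assumed inputs here rather than details fixed by the disclosure:

```python
import math

def first_area_test(gaze_x, gaze_y, radius_r):
    """Return a predicate deciding whether a display pixel lies inside the
    circular first area centered on the gaze point (gaze_x, gaze_y)."""
    def inside(px, py):
        return math.hypot(px - gaze_x, py - gaze_y) <= radius_r
    return inside

# Gaze point at the center of a 1920x1080 target display area, R = 200 px.
inside = first_area_test(960, 540, radius_r=200)
print(inside(1000, 560))  # True  -> pixel belongs to the first area
print(inside(10, 10))     # False -> pixel belongs to the second area
```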
The first display parameter may include resolution, sampling rate, texture mapping parameters, and the like; the picture quality of the first area can be improved by adjusting the first display parameter, though this also increases the computation required for image rendering.
In this embodiment, considering that the eye state may be affected by picture quality (for example, a blurred picture is more likely to cause eye fatigue, and the pupil may contract when the picture is too bright), the first display parameter of the eye gaze area (the first area) is determined according to the eye state. For example, when eye fatigue is detected, the picture quality of the first area is improved by adjusting the first display parameter, so as to relieve the fatigue.
S102: a second display parameter of the second area is determined; the second area belongs to the target display area, is different from the first area, and the second display parameter is different from the first display parameter.
In this embodiment, the second area is all or part of the target display area other than the first area, and the second display parameter may likewise include resolution, sampling rate, texture mapping parameters, and the like. Since the second area is not where the human eye gazes, its picture quality can be appropriately reduced by adjusting the second display parameter, thereby reducing the computation needed for image rendering.
S103: the first area is displayed with the first display parameter, and the second area is displayed with the second display parameter.
In this embodiment, the target display area is divided into the first area and the second area, and each area displays its picture with different display parameters, balancing the user's requirement for picture quality against the amount of computation; a sketch of this composition step follows.
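A minimal sketch of this composition, assuming the frame is already available as an RGB array; nearest-neighbour subsampling stands in for rendering the second area at lower quality (a real renderer would render the second area at low resolution directly, which is where the computation is actually saved):

```python
import numpy as np

def composite_foveated(full_res_frame, gaze_xy, radius_r, downscale=4):
    """Show the first area at full quality and the second area at a
    reduced resolution (nearest-neighbour upscale of a subsampled copy)."""
    h, w, _ = full_res_frame.shape
    # Cheap second area: subsample, then blow back up to full size.
    low = full_res_frame[::downscale, ::downscale]
    cheap = np.repeat(np.repeat(low, downscale, axis=0), downscale, axis=1)[:h, :w]
    # Restore full quality inside the circular first area.
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_r ** 2
    out = cheap.copy()
    out[mask] = full_res_frame[mask]
    return out

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
print(composite_foveated(frame, (960, 540), 200).shape)  # (1080, 1920, 3)
```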
As can be seen from the above, in this embodiment the eye state is considered to be affected by picture quality (for example, a blurred picture is more likely to cause eye fatigue), so the first display parameter of the eye gaze area (the first area) is first determined according to the eye state; for example, when eye fatigue is detected, the picture quality of the first area is improved by adjusting the first display parameter to relieve the fatigue. Then, since the second area outside the first area is not where the human eye is gazing, its picture quality can be appropriately reduced to cut the computation required for image rendering; a second display parameter different from the first display parameter is therefore set, and the second area is displayed with it. In this way, the amount of computation is reduced while the picture quality of the eye gaze area is ensured.
In one embodiment of the present disclosure, the picture display method further includes:
performing image recognition on an eye image of the first user to obtain the eye state of the first user.
In this embodiment, the eye state of the first user may be obtained by image recognition. Taking the degree of eye fatigue as an example, analyzing the human eye image can yield data such as blink frequency, degree of redness (bloodshot eyes), eye bags, and dark circles, from which an eye fatigue value can be determined.
Specifically, the eye image may be collected first and then input into a pre-trained neural network model to obtain the eye state. The pre-trained model may be implemented with a convolutional neural network, a recurrent neural network, a YOLO detection algorithm, or the like, without limitation.
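As a toy stand-in for such a model's output (the disclosure does not fix the network architecture), the cues listed above can be combined into a single fatigue value in [0, 1]; every weight and threshold below is an illustrative assumption:

```python
def fatigue_score(blink_rate_hz, redness, eye_bags, dark_circles):
    """Combine image-derived eye cues (each already normalized to 0..1,
    except the blink rate in Hz) into a fatigue value in 0..1."""
    # A relaxed blink rate is roughly 0.2-0.3 Hz; faster suggests fatigue.
    blink_term = min(max((blink_rate_hz - 0.3) / 0.5, 0.0), 1.0)
    score = 0.4 * blink_term + 0.3 * redness + 0.15 * eye_bags + 0.15 * dark_circles
    return min(score, 1.0)

print(fatigue_score(0.6, redness=0.5, eye_bags=0.2, dark_circles=0.1))  # ≈ 0.435
```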
In one embodiment of the present disclosure, determining the first display parameter of the first area based on the eye state of the first user includes:
determining the first display parameter corresponding to the eye state of the first user based on the correspondence between the eye state and the first display parameter.
In this embodiment, the correspondence between the different values of different eye states and the first display parameter may be constructed in advance, so that an appropriate first display parameter can be quickly determined from the eye state. For example, when a high eye-fatigue value is detected, a higher resolution may be set to improve the picture quality.
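A minimal table-driven sketch of such a correspondence; the parameter names, levels, and values are hypothetical, chosen purely for illustration:

```python
# Hypothetical correspondence table: discretized fatigue level -> first
# display parameters of the first area.
FIRST_PARAM_TABLE = {
    "low":    {"resolution_scale": 1.0, "sampling_rate_hz": 60, "texture_quality": "high"},
    "medium": {"resolution_scale": 1.0, "sampling_rate_hz": 90, "texture_quality": "high"},
    "high":   {"resolution_scale": 1.2, "sampling_rate_hz": 90, "texture_quality": "max"},
}

def first_display_params(fatigue):
    """Map a fatigue value in 0..1 to the first display parameter set."""
    level = "low" if fatigue < 0.33 else "medium" if fatigue < 0.66 else "high"
    return FIRST_PARAM_TABLE[level]

print(first_display_params(0.7))  # higher fatigue -> higher picture quality
```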
In one embodiment of the present disclosure, determining the second display parameter of the second area includes:
determining a first step size based on a first frequency, where the first frequency is the frequency at which the first user's eye gaze direction changes within a first time period;
and taking the first display parameter as a first parameter, and adjusting the first parameter based on the first step size to obtain the second display parameter.
In this embodiment, the second area is not where the first user is gazing, so its picture quality can be appropriately reduced. Specifically, the second display parameter may be obtained by increasing or decreasing the first display parameter by the first step size. For example, the resolution in the first display parameter may be reduced by the first step size to obtain the resolution of the second area; the sampling rate may likewise be reduced by a corresponding step size; or the compression rate of the texture map in the first display parameter may be increased to obtain the texture-map compression rate of the second area.
When setting the first step size, the frequency at which the eye gaze direction changes (the first frequency) is considered. If the first frequency is high, the first user frequently moves their line of sight while using the first device, and the position of the first area changes frequently; the first step size should then be set small, so that the picture quality of the second area is not too low. Otherwise, when the first user shifts their gaze into the second area, the picture would take long to render and the interface would stutter. Conversely, if the first frequency is low, the first user's line of sight is relatively fixed and the position of the first area rarely changes, so the first step size can be set larger to further reduce the rendering computation for the second area.
In one embodiment of the present disclosure, determining the first step size based on the first frequency includes:
determining the first step size corresponding to the first frequency based on the negative correlation between the first frequency and the first step size.
In this embodiment, the first step size corresponding to the first frequency may be calculated based on the negative correlation between the first frequency and the first step size. Specifically, the following calculation formula may be adopted:

$$ s = s_{\max} - \frac{f - f_{\min}}{f_{\max} - f_{\min}}\,(s_{\max} - s_{\min}) $$

where $s$ represents the first step size and $f$ represents the first frequency; $f_{\max}$ represents the maximum value of the first frequency, which corresponds to the minimum step size $s_{\min}$; $f_{\min}$ represents the minimum value of the first frequency, which corresponds to the maximum step size $s_{\max}$; $s$, $s_{\min}$, and $s_{\max}$ are all expressed as percentages.
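The formula translates directly into code; clamping f into [f_min, f_max] is a safeguard added here, not something stated in the text:

```python
def first_step_size(f, f_min, f_max, s_min, s_max):
    """Linear negative correlation between the first frequency f and the
    first step size s; s, s_min, s_max are percentages."""
    f = min(max(f, f_min), f_max)        # keep f inside the calibrated range
    t = (f - f_min) / (f_max - f_min)    # 0 at f_min, 1 at f_max
    return s_max - t * (s_max - s_min)   # f_min -> s_max, f_max -> s_min

# A frequently moving gaze yields a small step, so the second area's
# picture quality is not reduced too far.
print(first_step_size(f=0.9, f_min=0.1, f_max=1.0, s_min=5.0, s_max=40.0))  # ≈ 8.9
```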
In one embodiment of the present disclosure, adjusting the first parameter based on the first step size to obtain the second display parameter includes:
taking the first step size as the adjustment step size of the first parameter, and adjusting the first parameter to obtain the second display parameter.
In this embodiment, the first parameter may be increased or decreased by the first step size to obtain the second display parameter. For example, for the first parameter corresponding to resolution, the first step size is subtracted from the first parameter to obtain the resolution of the second area.
The second area may be further divided into a plurality of sub-areas whose display parameters form a gradient: the first parameter is increased or decreased by one first step size to obtain the display parameter of the first sub-area, by two first step sizes to obtain the display parameter of the second sub-area, and so on, as sketched below.
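A one-line sketch of this gradient, assuming a multiplicative percentage step applied to a scalar first parameter:

```python
def subarea_params(first_param, step_pct, num_subareas):
    """Sub-area k (counted outward from the first area) has its display
    parameter reduced by k multiples of the first step size."""
    return [first_param * (1 - k * step_pct / 100.0)
            for k in range(1, num_subareas + 1)]

# E.g. resolution scale 1.0 with a 10% step: three rings at ~0.9, 0.8, 0.7.
print(subarea_params(1.0, step_pct=10, num_subareas=3))
```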
In one embodiment of the present disclosure, the second area includes a plurality of second points, and adjusting the first parameter based on the first step size to obtain the second display parameter includes:
taking the first step size as a unit adjustment step size, and adjusting the first parameter based on the distance between the first point and each second point and the unit adjustment step size, to obtain the second display parameter corresponding to each second point;
where the first point is any point in the first area.
In this embodiment, the display parameter of each second point may also be calculated from the distance between that second point and the first area. Take any second point D in the second area: first select any point in the first area as the first point (the center of the first area may be chosen), and calculate the distance between D and the first point to obtain the distance L between D and the first area. Then, taking the first step size as the unit adjustment step size, multiply L by the first step size to obtain a third step size for point D, and increase or decrease the first parameter by this third step size to obtain the display parameter of point D.
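A sketch of this per-point rule; normalizing the distance L by the first-area radius R is an assumption added here so that the third step size stays in a sensible range for pixel coordinates:

```python
import math

def second_param_at(point, first_center, radius_r, first_param, step_pct):
    """Distance L from the first area (normalized by R), times the unit
    step, gives the third step used to reduce the parameter at this point."""
    L = math.dist(point, first_center) / radius_r
    third_step = L * step_pct / 100.0
    return max(first_param * (1.0 - third_step), 0.0)  # quality falls off with distance

# A point 2R from the gaze center with a 10% unit step -> 20% reduction.
print(second_param_at((1360, 540), (960, 540), radius_r=200,
                      first_param=1.0, step_pct=10))  # 0.8
```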
In one embodiment of the present disclosure, when the first device is in the picture display state, the method further includes:
acquiring the eye state of the first user at a first time node, where the first time node comprises a plurality of designated time points;
and in response to a change in the eye state, adjusting the first display parameter according to the eye state.
In this embodiment, the first time node may be a plurality of preset time points, or time points determined by a time period. By detecting the eye state at successive time points, changes in the eye state can be discovered in time and the first display parameter adjusted accordingly.
For example, when the first user has just started using the first device, eye fatigue is light; after a period of use the fatigue increases, and the picture quality can then be improved by adjusting the first display parameter, thereby reducing the fatigue.
The first time node may also be adapted to each first user's usage. Taking user A as an example, the eye state is detected once per hour at preset time points; if two consecutive detections show little change, the schedule may be relaxed to once every two hours. Conversely, if two consecutive detections differ greatly, the schedule may be tightened to once every half hour, so that the picture quality is improved in time and user A's eye discomfort is relieved.
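A minimal sketch of this adaptive schedule, treating the eye state as a scalar fatigue value; the doubling/halving policy and all bounds are illustrative assumptions:

```python
def next_check_interval(prev_state, curr_state, interval_s,
                        min_s=1800, max_s=7200, tol=0.05):
    """Widen the detection interval while the eye state is stable,
    tighten it when the state changes noticeably."""
    if abs(curr_state - prev_state) <= tol:
        return min(interval_s * 2, max_s)  # stable -> check half as often
    return max(interval_s // 2, min_s)     # changing -> check twice as often

print(next_check_interval(0.30, 0.31, 3600))  # stable: 3600 s -> 7200 s
print(next_check_interval(0.30, 0.60, 3600))  # changed: 3600 s -> 1800 s
```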
In one embodiment of the present disclosure, the picture display method further includes:
in response to the first frequency being below a frequency threshold, adjusting the first display parameter based on a second step size.
In this embodiment, if the first frequency is lower than the frequency threshold, the first user's line of sight rarely leaves the first area while using the first device. The first display parameter may then be adjusted to improve the picture quality of the first area, relieving the eye fatigue caused by the first user staring at the first area for a long time.
Fig. 2 is a flowchart of another picture display method provided by an embodiment of the disclosure. The method of the Fig. 2 embodiment may be performed by an application in the first device and comprises the following steps:
S201: image recognition is performed on an eye image of the first user to obtain the eye state of the first user.
S202: the first display parameter corresponding to the eye state of the first user is determined based on the correspondence between eye states and first display parameters.
S203: a first step size is determined based on the first frequency, where the first frequency is the frequency at which the first user's eye gaze direction changes within the first time period; the first display parameter is taken as a first parameter, and the first parameter is adjusted based on the first step size to obtain the second display parameter.
S204: the first area is displayed with the first display parameter, and the second area is displayed with the second display parameter.
In this embodiment, image recognition is first performed on an eye image of the first user to obtain the eye state, and the first display parameter is determined from the correspondence between eye states and first display parameters. A first step size is then determined from the frequency at which the eye gaze direction changes within the first time period; the first display parameter is taken as the first parameter and adjusted by the first step size to obtain the second display parameter. Finally, the first area (the eye gaze area) is displayed with the first display parameter and the second area (the remaining area) with the second display parameter, reducing the amount of computation while ensuring the picture quality of the eye gaze area.
Corresponding to the above embodiments of the picture display method, Fig. 3 is a structural block diagram of a picture display device provided by an embodiment of the present disclosure. For ease of illustration, only the portions relevant to the embodiments of the present disclosure are shown. Referring to Fig. 3, the picture display device 30 includes: a first processing module 31, a second processing module 32, and a picture display module 33.
The first processing module 31 is configured to determine a first display parameter of the first area based on an eye state of the first user.
The first user is a user wearing the first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to the display area provided by the first device for the first user.
The second processing module 32 is configured to determine a second display parameter of the second area; the second area belongs to the target display area, is different from the first area, and the second display parameter is different from the first display parameter.
The picture display module 33 is configured to display the first area with the first display parameter and display the second area with the second display parameter.
In one embodiment of the present disclosure, the first processing module 31 is specifically configured to:
perform image recognition on an eye image of the first user to obtain the eye state of the first user.
In one embodiment of the present disclosure, the first processing module 31 is further configured to:
determine the first display parameter corresponding to the eye state of the first user based on the correspondence between the eye state and the first display parameter.
In one embodiment of the present disclosure, the second processing module 32 is specifically configured to:
determine a first step size based on the first frequency, where the first frequency is the frequency at which the first user's eye gaze direction changes within the first time period;
and take the first display parameter as a first parameter, and adjust the first parameter based on the first step size to obtain the second display parameter.
In one embodiment of the present disclosure, the second processing module 32 is further configured to:
determine the first step size corresponding to the first frequency based on the negative correlation between the first frequency and the first step size.
In one embodiment of the present disclosure, the second processing module 32 is further configured to:
take the first step size as the adjustment step size of the first parameter, and adjust the first parameter to obtain the second display parameter.
In one embodiment of the present disclosure, the second processing module 32 is further configured to:
take the first step size as the unit adjustment step size, and adjust the first parameter based on the distance between the first point and each second point and the unit adjustment step size, to obtain the second display parameter corresponding to each second point, where the first point is any point in the first area.
In one embodiment of the present disclosure, the first processing module 31 is further configured to:
acquire the eye state of the first user at a first time node, where the first time node comprises a plurality of designated time points;
and in response to a change in the eye state, adjust the first display parameter according to the eye state.
In one embodiment of the present disclosure, the first processing module 31 is further configured to:
in response to the first frequency being below the frequency threshold, adjust the first display parameter based on the second step size.
Referring to fig. 4, fig. 4 is a schematic block diagram of a wearable device provided by an embodiment of the present disclosure. The wearable device 400 in the present embodiment as shown in fig. 4 may include: one or more processors 401, one or more input devices 402, one or more output devices 403, and one or more memories 404. The processor 401, the input device 402, the output device 403, and the memory 404 communicate with each other via a communication bus 405. The memory 404 is used to store a computer program comprising program instructions. The processor 401 is arranged to execute program instructions stored in the memory 404. Wherein the processor 401 is configured to invoke program instructions to perform the functions of the modules/units of the various device embodiments described above, such as the functions of the modules 31-33 shown in fig. 3.
It should be appreciated that in the disclosed embodiments, the processor 401 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 402 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 403 may include a display (LCD, etc.), a speaker, etc.
The memory 404 may include read-only memory and random access memory, and provides instructions and data to the processor 401. A portion of the memory 404 may also include non-volatile random access memory; for example, the memory 404 may also store device-type information.
In a specific implementation, the processor 401, the input device 402, and the output device 403 described in the embodiments of the present disclosure may perform the implementation described in the first embodiment and the second embodiment of the method for displaying a picture provided in the embodiments of the present disclosure, and may also perform the implementation of the wearable device described in the embodiments of the present disclosure, which is not described herein again.
In another embodiment of the disclosure, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, implement all or part of the procedures of the method embodiments described above. Those procedures may likewise be completed by a computer program instructing related hardware: the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The computer-readable storage medium may be an internal storage unit of the wearable device of any of the foregoing embodiments, such as a hard disk or memory of the wearable device. It may also be an external storage device of the wearable device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the wearable device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the wearable device. The computer-readable storage medium is used to store the computer program and other programs and data required by the wearable device, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be regarded as beyond the scope of the present disclosure.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the wearable device and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided by the present disclosure, it should be understood that the disclosed wearable device and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is merely a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces or units, or electrical, mechanical, or other forms of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present disclosure.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing are merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any modification or substitution readily conceivable by those skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A picture display method, comprising:
determining a first display parameter of a first area based on an eye state of a first user;
wherein the first user is a user wearing a first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user;
determining a second display parameter of a second area, wherein the second area belongs to the target display area and is different from the first area, and the second display parameter is different from the first display parameter;
and displaying the first area with the first display parameter, and displaying the second area with the second display parameter.
2. The picture display method according to claim 1, further comprising:
performing image recognition on an eye image of the first user to obtain the eye state of the first user.
3. The picture display method as claimed in claim 1, wherein the determining the first display parameter of the first area based on the eye state of the first user comprises:
determining the first display parameter corresponding to the eye state of the first user based on the correspondence between the eye state and the first display parameter.
4. The picture display method as claimed in claim 1, wherein determining the second display parameter of the second area comprises:
determining a first step size based on a first frequency, wherein the first frequency is the frequency at which the first user's eye gaze direction changes within a first time period;
and taking the first display parameter as a first parameter, and adjusting the first parameter based on the first step size to obtain the second display parameter.
5. The picture display method as claimed in claim 4, wherein the determining a first step size based on the first frequency comprises:
determining the first step size corresponding to the first frequency based on the negative correlation between the first frequency and the first step size.
6. The method for displaying a picture according to claim 4, wherein said adjusting the first parameter based on the first step size to obtain the second display parameter comprises:
taking the first step size as the adjustment step size of the first parameter, and adjusting the first parameter to obtain the second display parameter.
7. The picture display method as claimed in claim 4, wherein the second area includes a plurality of second points, and the adjusting the first parameter based on the first step size to obtain the second display parameter comprises:
taking the first step size as a unit adjustment step size, and adjusting the first parameter based on the distance between a first point and each second point and the unit adjustment step size, to obtain the second display parameter corresponding to each second point;
wherein the first point is any point in the first area.
8. The picture display method as claimed in claim 4, wherein when the first device is in a picture display state, further comprising:
acquiring the eye state of the first user at a first time node, wherein the first time node comprises a plurality of designated time points;
and in response to a change in the eye state, adjusting the first display parameter according to the eye state.
9. The picture display method as claimed in claim 4, further comprising:
and in response to the first frequency being below a frequency threshold, adjusting the first display parameter based on a second step size.
10. A picture display device, comprising:
a first processing module, configured to determine a first display parameter of a first area based on an eye state of a first user;
wherein the first user is a user wearing a first device, the first area is the area corresponding to the first user's eye gaze direction, the first area belongs to a target display area, and the target display area belongs to a display area provided by the first device for the first user;
a second processing module, configured to determine a second display parameter of a second area, wherein the second area belongs to the target display area and is different from the first area, and the second display parameter is different from the first display parameter;
and a picture display module, configured to display the first area with the first display parameter and display the second area with the second display parameter.
11. A wearable device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
12. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 9.
CN202410541308.0A 2024-04-30 2024-04-30 Picture display method and device, wearable device and storage medium Pending CN118330888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410541308.0A CN118330888A (en) 2024-04-30 2024-04-30 Picture display method and device, wearable device and storage medium

Publications (1)

Publication Number Publication Date
CN118330888A 2024-07-12

Family

ID=91781719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410541308.0A Pending CN118330888A (en) 2024-04-30 2024-04-30 Picture display method and device, wearable device and storage medium

Country Status (1)

Country Link
CN (1) CN118330888A (en)

Legal Events

Date Code Title Description
PB01 Publication