WO2021004138A1 - Screen display method, terminal device, and storage medium - Google Patents

Screen display method, terminal device, and storage medium Download PDF

Info

Publication number
WO2021004138A1
Authority
WO
WIPO (PCT)
Prior art keywords: user, myopia, screen, level, image
Prior art date
Application number
PCT/CN2020/087136
Other languages: French (fr), Chinese (zh)
Inventor
梅锦振华
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2021004138A1 publication Critical patent/WO2021004138A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028 Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing visual acuity; for determination of refraction, e.g. phoropters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters

Definitions

  • This application belongs to the field of computer vision technology in artificial intelligence, and in particular relates to screen display methods, terminal devices and storage media.
  • Moreover, the greater the number and amplitude of the adjusted parameters, the greater the impact on the visual effect of the user viewing the screen, which makes viewing the screen extremely uncomfortable and in turn reduces the efficiency of human-computer interaction between the smart device and the user.
  • The inventor realized that existing screen display solutions are not very adaptable, and therefore cannot meet the actual eye-fatigue relief needs of different users or guarantee the efficiency of human-computer interaction between users and smart devices.
  • In view of this, the embodiments of the present application provide a screen display method, terminal device, and storage medium to solve the problem that the screen display solutions in the prior art have poor adaptability, cannot meet the actual eye-fatigue relief needs of different users, and cannot guarantee the efficiency of human-computer interaction between users and smart devices.
  • the first aspect of the embodiments of the present application provides a screen display method, including:
  • A second aspect of the embodiments of the present application provides a terminal device. The terminal device includes a memory and a processor, the memory stores a computer program that can run on the processor, and the processor executes the following steps: obtain the user's identity information; search for the user's myopia level based on the identity information; if the myopia level search fails, obtain the user's face image and identify the user's myopia level based on the face image; match the corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
  • A third aspect of the embodiments of the present application provides a computer-readable storage medium which, when executed by one or more processors, causes the one or more processors to perform the following steps: obtain the user's identity information; search for the user's myopia level based on the identity information; if the myopia level search fails, obtain the user's face image and identify the user's myopia level based on the face image; match the corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
  • Compared with the prior art, the embodiments of this application have the following beneficial effect: the user's myopia level is recognized in real time from the actually collected face images, and different display parameter schemes are adaptively selected for different myopia conditions to adjust the screen display, so that the screen display can specifically meet the actual eye-fatigue relief needs of different users. Because the display parameters are adjusted according to the user's actual situation, the actual display can further satisfy the user's viewing comfort and ensure the efficiency of human-computer interaction between users and smart devices.
  • FIG. 1 is a schematic diagram of the implementation process of the screen display method provided by Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of the implementation process of the screen display method provided in the second embodiment of the present application.
  • FIG. 3 is a schematic diagram of the implementation process of the screen display method provided by the third embodiment of the present application.
  • FIG. 4 is a schematic diagram of the implementation process of the screen display method provided by the fourth embodiment of the present application.
  • FIG. 5 is a schematic diagram of the implementation process of the screen display method provided by Embodiment 5 of the present application.
  • FIG. 6 is a schematic diagram of the implementation process of the screen display method provided by the sixth embodiment of the present application.
  • FIG. 7 is a schematic diagram of the implementation process of the screen display method provided in the seventh embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a screen display device provided in Embodiment 8 of the present application.
  • FIG. 9 is a schematic diagram of a terminal device provided in Embodiment 9 of the present application.
  • Therefore, in the embodiments of this application, the user's myopia is divided into multiple different myopia levels according to the specific degree of myopia, and an appropriate display parameter set is set for each myopia level to ensure the corresponding visual effect and fatigue-relief effect. The user's myopia level is recognized intelligently while the user is watching the screen, a preset suitable display parameter set is selected according to the user's actual myopia level, and finally the screen's actual display color scheme and other indicators are controlled according to the selected display parameter set, so that the screen displays the most suitable effect. The display parameters are thus adjusted according to the user's actual situation, so that the actual display scheme can further satisfy the user's viewing comfort and ensure the efficiency of human-computer interaction between the user and the smart device.
  • the details are as follows:
  • Fig. 1 shows the implementation flowchart of the screen display method provided in the first embodiment of the present application, and the details are as follows:
  • It should be noted that the screen display method in the embodiments of this application is applied to a terminal device. To ensure the acquisition of the user's face image, the terminal device should have an image acquisition module/device, such as a camera, or a communication connection with a third-party image acquisition device.
  • Specifically, in the embodiments of the present application, the terminal device may be a personal computer, a tablet computer, a mobile phone, or another device, which is determined by the actual application scenario.
  • In the embodiments of this application, each user corresponds to a piece of identity information.
  • The identity information can be stored in the form of a unique identifier, in the form of an image, etc., as set by technicians according to actual needs.
  • The embodiments pre-store the identity information of some users and associate that identity information with the myopia level of the corresponding user, so that the screen display can be adjusted immediately according to the user's identity.
  • The storage of user identity information and myopia levels can be implemented in any one or more of the following ways, including but not limited to: entry by technicians based on acquired user information, entry by users based on their actual myopia situation, and recording by the terminal device, which stores the user's identity information in association with the identified myopia level each time it identifies the user's myopia level.
  • The classification rules for myopia levels can also be defined by technicians; for example, myopia can be simply divided into three levels (severe, minor, and normal) or divided more finely, which is not limited here.
  • The user identity information and myopia levels can be stored locally on the terminal device or in a third device such as a server, as set by technicians according to the actual situation.
  • After pre-associating and storing the identity information and myopia levels of some users, the embodiment of the present application obtains the identity information of the current user to determine whether the current user has a corresponding myopia level. Corresponding to the different ways of storing user identity information, the embodiment of the present application can also obtain user identity information in many ways, including but not limited to directly identifying it from the personal information of the currently logged-in user, or performing biometric identification of the current user to determine the identity information. Technicians select the corresponding identity information acquisition method according to how the user identity information is actually stored, which is not limited here.
  • S102 Search the user's myopia level based on the identity information.
  • After the identity information of the current user is determined, according to how the identity information and myopia levels are stored, the embodiment of the present application searches for the corresponding myopia level either locally on the terminal device or in the third device through a communication connection.
  • If the current user's myopia level has been pre-stored, the corresponding myopia level can be found directly; if it has not been pre-stored, the search fails.
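  • The sketch below is a minimal illustration of this lookup step. The in-memory dictionary stands in for local storage or a third-party server, and every identifier and value in it is an assumption for illustration, not something specified by the patent.

```python
# Minimal sketch of S101/S102: look up a stored myopia level by identity.
from typing import Optional

# Pre-associated identity -> myopia level records (e.g. entered by a
# technician, entered by the user, or saved after a previous recognition).
_myopia_store = {
    "user-0001": "severe",
    "user-0002": "normal",
}

def find_myopia_level(identity: str) -> Optional[str]:
    """Return the stored myopia level, or None when the search fails."""
    return _myopia_store.get(identity)

level = find_myopia_level("user-0003")
if level is None:
    # Search failed: fall back to face-image recognition (S103).
    print("myopia level not stored, falling back to face-image recognition")
```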
  • S103 If the search for myopia level fails, obtain a face image of the user, and identify the myopia level of the user based on the face image.
  • If the myopia level search fails, the user's identity information cannot be used directly to adjust the screen; in this case, the embodiment of the present application directly obtains the user's face image and recognizes the user's myopia level based on the face image.
  • The specific myopia level recognition method is not limited here and can be selected by technicians according to the actual situation, including but not limited to: identifying the myopia level according to whether the user wears glasses, or according to the user's degree of squinting (i.e., the degree of eye closure); the methods of other related embodiments of this application can also be used for myopia level identification.
  • It should be noted that the embodiments of this application only need to roughly identify the degree of the user's myopia and map it to the required myopia levels; precise data such as the exact myopia degree and astigmatism do not necessarily need to be identified. Therefore, the myopia level recognition method selected in the embodiments of this application can be a relatively accurate myopia recognition algorithm or a relatively low-accuracy one, so as to meet the actual requirements of terminal devices with different hardware configurations and improve the adaptability of the embodiments to different scenarios.
  • S104 Match the corresponding display parameter set based on the user's myopia level, and adjust the display parameter of the screen based on the obtained display parameter set.
  • In the embodiments of the present application, the display parameter set contains at least one type of display parameter data such as color scheme, sharpness, saturation, contrast, and brightness.
  • The specific data contained in the set can be configured by technicians according to actual user needs and the parameters that the terminal device supports adjusting.
  • As described above, the embodiments of the present application pre-divide the user's degree of myopia into multiple myopia levels and set a corresponding suitable display parameter set for each level. After the user's myopia level is obtained, matching is performed based on that level to determine the display parameter set suitable for the user, and finally the screen is adjusted according to the specific display parameter data in the set, for example by reducing the brightness.
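  • As a hedged sketch of this matching step, the mapping below pairs an assumed three-level split with assumed parameter values; both the level names and the numbers are illustrative only, since the patent leaves the concrete division and parameter data to the implementer.

```python
# Hypothetical myopia level -> display parameter set mapping (illustrative).
DISPLAY_PARAMETER_SETS = {
    "normal": {"brightness": 0.8, "contrast": 0.9, "saturation": 1.0},
    "minor":  {"brightness": 0.6, "contrast": 0.8, "saturation": 0.9},
    "severe": {"brightness": 0.5, "contrast": 0.7, "saturation": 0.8},
}

def apply_display_parameters(level: str, set_screen_param) -> None:
    """Match the set for `level` and push each value to the screen.

    `set_screen_param(name, value)` stands in for whatever platform API
    actually drives the display; it is not defined by the patent.
    """
    params = DISPLAY_PARAMETER_SETS.get(level, DISPLAY_PARAMETER_SETS["normal"])
    for name, value in params.items():
        set_screen_param(name, value)

# Example: print the adjustments instead of driving real hardware.
apply_display_parameters("severe", lambda n, v: print(f"set {n} -> {v}"))
```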
  • For example, since blue light and red light are both considered harmful to the eyes, color schemes containing less blue and red, or using neither blue nor red as the background color, are adopted, and multiple corresponding display schemes are set: the higher the myopia level, the less red and blue the background color contains.
  • For users with a lower myopia level, the reduction of these lights in the corresponding display scheme is smaller, to ensure a better display effect; for users with a higher myopia level, the reduction is larger, to ensure a better eye-fatigue relief effect, thereby achieving a balance between eye-fatigue relief and display effect.
  • Meanwhile, while obtaining the display parameter set, the embodiment of the present application also measures the light intensity of the environment where the terminal device is located, for example by using a light sensor to detect the light intensity; it then sets the screen display parameters according to the display parameter set and adjusts the display brightness and contrast in real time according to the ambient light intensity.
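  • A small sketch of combining the matched parameter set with a real-time ambient light reading follows; the lux-to-brightness scaling rule is an assumption, since the description only states that brightness and contrast are adjusted according to the ambient light intensity.

```python
# Scale the brightness from the matched parameter set by ambient light (sketch).
def adjust_for_ambient_light(params: dict, lux: float) -> dict:
    adjusted = dict(params)
    # Map roughly 0..1000 lux onto a 0.5..1.0 brightness multiplier (assumed).
    factor = 0.5 + min(max(lux, 0.0), 1000.0) / 2000.0
    adjusted["brightness"] = round(params.get("brightness", 1.0) * factor, 3)
    return adjusted

print(adjust_for_ambient_light({"brightness": 0.6, "contrast": 0.8}, lux=350.0))
```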
  • In the embodiments of this application, the user's myopia is divided into multiple different myopia levels according to the specific degree of myopia, and an appropriate display parameter set is set for each level to ensure the corresponding visual effect and fatigue-relief effect.
  • The user's myopia level is intelligently recognized while the user is watching the screen, a preset suitable display parameter set is selected according to the user's actual myopia level, and the screen's actual display color scheme and other indicators are controlled according to the selected display parameter set, so that the screen displays the most suitable effect and the display parameters are adjusted according to the user's actual situation. The actual display scheme can thus further satisfy the user's viewing comfort and ensure the efficiency of human-computer interaction between the user and the smart device.
  • The advantage of pre-storing the identity information and myopia level is that there is more time to measure the user's myopia accurately.
  • The user can even directly enter their own myopia degree and astigmatism into the terminal device, which the terminal device then converts into the corresponding myopia level, thereby improving the accuracy of matching users.
  • S201 If the face image contains glasses, obtain size information of the screen and detect the distance between the user's eyes and the screen.
  • In the embodiments of the present application, human eye recognition is first performed on the acquired face image, and then glasses are searched for based on the identified eye positions.
  • the details are as follows:
  • Step 1: Perform eye recognition and positioning on the face image, and determine the positions of the eyes in the face image.
  • Step 2: Detect the closed curves contained in the face image and, based on the positions of the eyes, identify which of these closed curves contain an eye.
  • Step 3: If there are closed curves that contain an eye, those closed curves are recognized as glasses and it is determined that the face image contains glasses; if no closed curve contains an eye, it is determined that the face image does not contain glasses.
  • After the eye positions are determined, because glasses are generally located around the eyes, and because after grayscale binarization the frames of the glasses form closed curves surrounding the eyes, it is only necessary to search around the eyes in the face image for closed curves that meet the requirements; if such a curve exists, it is determined that the user wears glasses. Since the shapes of glasses are generally known, some possible glasses shapes can also be preset, the closed curves can be shape-matched before checking whether they contain an eye, and only closed curves with a high shape-matching degree, for example more than 60%, are used in the search around the eye positions.
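  • The following sketch approximates the closed-curve check with OpenCV contours: the face image is binarized and any sufficiently large closed contour that encloses an eye position is treated as a glasses frame. The Otsu binarization, the area filter, and the assumption that eye positions are already available are illustrative choices, not requirements of the patent.

```python
# Sketch of the glasses check: look for a closed contour around an eye centre.
import cv2
import numpy as np

def contains_glasses(gray_face: np.ndarray, eye_points) -> bool:
    """gray_face: 8-bit grayscale face image; eye_points: [(x, y), ...]."""
    _, binary = cv2.threshold(gray_face, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 100.0:   # skip tiny noise contours
            continue
        for (x, y) in eye_points:
            # > 0 means the eye centre lies strictly inside the closed contour.
            if cv2.pointPolygonTest(contour, (float(x), float(y)), False) > 0:
                return True
    return False
```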
  • The size information can be read directly from the hardware parameters of the screen, while the eye-to-screen distance needs to be measured with a depth sensor based on the recognition and positioning of the eyes. When quantifying the eye distance, the average of the distances between the two eyes and the screen can be used as the final eye distance.
  • S202 Identify the user's myopia level based on the size information and the human eye distance.
  • To do this, this embodiment of the present application collects in advance the viewing-habit data of users with different degrees of myopia for different screen sizes, and obtains the corresponding thresholds through data analysis.
  • That is, different threshold arrays are set for screens of different sizes; when judging the degree of myopia, the corresponding threshold array is first read according to the device's screen size, and then the user's myopia level is determined by looking up the eye distance in that array.
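  • A toy version of such a threshold array is sketched below; the screen sizes, distances, and level boundaries are invented for illustration and stand in for the statistics the description says would be collected in advance.

```python
# Illustrative threshold arrays keyed by screen size (inches): each entry holds
# the eye-distance boundaries (cm) separating "severe" / "minor" / "normal".
THRESHOLDS_BY_SCREEN = {
    5.5:  (20.0, 30.0),
    13.3: (35.0, 50.0),
}

def myopia_level_from_distance(screen_size: float, eye_distance_cm: float) -> str:
    severe_bound, minor_bound = THRESHOLDS_BY_SCREEN[screen_size]
    if eye_distance_cm < severe_bound:
        return "severe"
    if eye_distance_cm < minor_bound:
        return "minor"
    return "normal"

print(myopia_level_from_distance(13.3, 40.0))  # -> "minor"
```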
  • S301 If the face image does not include glasses, detect the distance between the user's eyes and the screen, the degree of closure of the user's eyes, and the degree of frowning of the user based on the face image.
  • S302 Identify the myopia level of the user based on the distance of the human eye, the degree of eye closure, and the degree of frowning.
  • In the embodiments of the present application, considering that users who do not wear glasses are, to a large extent, likely to be non-myopic (such users can be classified as the lowest myopia level in the embodiments of this application), the screen size is not used directly as the quantification index of the myopia level when performing myopia level recognition. Instead, whether the user is nearsighted is first determined based on the eye distance, the degree of eye closure, and the degree of frowning. If the recognition result is that the user is not nearsighted, for example if all three indicators are lower than their corresponding thresholds, the embodiment of the present application directly determines that the user has the lowest myopia level.
  • If the recognition result is that the user is nearsighted, this embodiment of the application further reads the size information of the screen and further quantifies the degree of the user's nearsightedness based on the size information, the eye distance, the degree of eye closure, and the degree of frowning, to obtain the corresponding myopia level.
  • Specifically, the embodiment of the present application collects statistics in advance on the screen-viewing habits of users with different degrees of myopia (including non-myopic users) for different screen sizes.
  • The eye distance, the degree of eye closure, and the degree of frowning are analyzed to obtain the corresponding thresholds that distinguish myopic from non-myopic users.
  • The eye distance, degree of eye closure, and degree of frowning are also analyzed for screens of different sizes to obtain different threshold arrays corresponding to different myopia levels; when judging the degree of myopia, the corresponding threshold array is first read according to the device's screen size, and then the user's myopia level is determined by looking up the eye distance in that array.
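  • The sketch below mirrors this two-stage decision for users without glasses. The indicators are assumed to be normalised so that larger values point more strongly towards myopia, and every threshold, weight, and level boundary is an illustrative assumption in place of the statistically derived threshold arrays described above.

```python
# Sketch of S301/S302: non-myopic check first, then a rough quantification.
def myopia_level_no_glasses(indicators: dict, thresholds: dict, level_bounds) -> str:
    """indicators/thresholds share keys such as 'closeness', 'eye_closure', 'frown';
    larger indicator values are taken to be stronger signs of myopia."""
    # Per the description: all indicators below threshold -> treated as non-myopic.
    if all(indicators[k] < thresholds[k] for k in thresholds):
        return "normal"
    # Otherwise combine the indicators into a score (assumed weighting) and
    # look it up in level bounds that would, in practice, depend on screen size.
    score = sum(indicators[k] / thresholds[k] for k in thresholds)
    for bound, level in level_bounds:
        if score <= bound:
            return level
    return "severe"

thresholds = {"closeness": 1.0, "eye_closure": 0.25, "frown": 1.2}
bounds = [(5.0, "minor")]
print(myopia_level_no_glasses(
    {"closeness": 1.4, "eye_closure": 0.30, "frown": 1.0}, thresholds, bounds))
```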
  • Through the comprehensive analysis of multiple indicators of the user, the embodiment of this application achieves effective identification and quantification of the degree of nearsightedness of users who do not wear glasses.
  • Adaptive recognition of different application scenarios and matching of adaptive identification methods are thereby realized, so that the embodiments of the present application can greatly improve adaptive and accurate identification for different scenes and users.
  • As shown in FIG. 4, the fourth embodiment of the present application includes:
  • S401 Perform grayscale processing on an image of a human eye region in a face image.
  • S402 Calculate a grayscale threshold t based on formula (1), and perform binarization processing on the grayscale image based on the grayscale threshold t.
  • where N is the total number of pixels in the grayscale image;
  • n_i represents the number of pixels with grayscale value i;
  • k is the maximum grayscale value of the pixels in the grayscale image; and
  • σ²(t) is the variance corresponding to the threshold t.
  • In the embodiments of the present application, the grayscale threshold t is calculated adaptively based on the actual grayscale values of the image pixels in the eye region.
  • Here t is the optimal threshold, which ensures the effect of grayscale binarization, so that the calculated grayscale threshold better meets the needs of actual image binarization and yields a better binarization result for the grayscale image.
  • S403 Calculate the average value of the human eye height-to-width ratio in the gray-scale binarized image to obtain the degree of human eye closure.
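  • Formula (1) is not reproduced in this text; the quantities N, n_i, k, and σ²(t) match the setup of Otsu's method, so the sketch below assumes an Otsu-style threshold that maximises the between-class variance and then derives the eye-closure degree from the height-to-width ratio of the binarized eye region, as in S403. Both the threshold rule and the ratio measurement are hedged reconstructions, not a verbatim implementation of the patent's formula.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Assumed formula-(1) stand-in: maximise the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                        # p_i = n_i / N
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # sigma^2(t)
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def eye_closure_degree(eye_gray: np.ndarray) -> float:
    """Binarize the eye region and return its height-to-width ratio."""
    t = otsu_threshold(eye_gray)
    binary = eye_gray < t                        # dark pixels taken as the eye
    rows = np.flatnonzero(binary.any(axis=1))
    cols = np.flatnonzero(binary.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return 0.0
    return float(rows[-1] - rows[0] + 1) / float(cols[-1] - cols[0] + 1)
```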
  • S501 Obtain an image of the eyebrow region in the face image, and detect the pupil distance between the pupil of the eye in the face image and the screen.
  • In the embodiments of the present application, the positions of the user's eyes are determined first, and the eyebrow region is then further located based on the eye positions.
  • the embodiment of the present application will further locate the pupil of the human eye, and use a depth sensor to detect the distance between the pupil and the screen for subsequent comparison.
  • the pupil positioning method can be set by the technician, including but not limited to positioning using a cluster model.
  • S502: Draw the depth-space image corresponding to the image of the eyebrow region, and, based on the depth-space image, find the distance between the eyebrow center and the screen and the shortest distance between the eyebrow region and the screen.
  • S503: Calculate the difference between the eyebrow-center distance and the pupil distance and the difference between the shortest distance and the pupil distance, respectively, and calculate the ratio of the two differences.
  • the degree of frowning can be quantified to a certain extent by comparing the distance between the eyebrow center and the screen and the distance between the eyebrow peak and the screen.
  • this application uses the distance between the pupil and the screen as a reference value to quantify the distance between the center of the eyebrow and the peak of the eyebrow and the screen, and finally the ratio of the two calculated differences is used to achieve a certain degree of quantification of frowning behavior.
  • Specifically, the embodiment of the application first uses the depth sensor to draw the depth-space image of the entire eyebrow region relative to the screen, locates the eyebrow center at the same time, and then finds the eyebrow-center distance and the eyebrow-peak distance (that is, the shortest distance from the eyebrow region to the screen). Finally, the difference between the eyebrow-center distance and the pupil distance and the difference between the eyebrow-peak distance and the pupil distance are calculated, and the ratio of the two differences is taken.
  • The eyebrow center can be located directly as the midpoint between the two eyebrows. The greater the ratio, the higher the degree of frowning.
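  • A minimal sketch of this ratio is shown below; the depth readings are placeholders for values produced by a depth sensor, and the example numbers are invented purely to show the calculation.

```python
# S502/S503 sketch: frown degree from three depth readings.
def frown_degree(eyebrow_center_dist: float,
                 eyebrow_peak_dist: float,
                 pupil_dist: float) -> float:
    """Ratio of (centre - pupil) to (peak - pupil); larger means a deeper frown."""
    center_diff = eyebrow_center_dist - pupil_dist
    peak_diff = eyebrow_peak_dist - pupil_dist
    if peak_diff == 0.0:
        return float("inf")
    return center_diff / peak_diff

print(frown_degree(41.2, 40.5, 40.0))  # -> 2.4
```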
  • Further, the embodiment of the present application also identifies whether there are wrinkles in the user's brow area, to assist in determining whether the user is frowning and the degree of frowning.
  • The specific wrinkle recognition method can be selected by technicians, including but not limited to facial skin wrinkle recognition using a Gabor filter and a BP neural network; wrinkle recognition based on skin luster (i.e., light reflection) can also be used.
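  • As a hedged illustration of the Gabor-filter option mentioned above, the snippet below filters the brow region and returns the mean response magnitude as a crude wrinkle score; the kernel parameters and the idea of thresholding this score are assumptions, and the BP-neural-network stage named in the description is omitted.

```python
import cv2
import numpy as np

def wrinkle_score(brow_gray: np.ndarray) -> float:
    """Mean Gabor response magnitude over the brow region (illustrative)."""
    kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=0.0,
                                lambd=8.0, gamma=0.5, psi=0.0)
    response = cv2.filter2D(brow_gray.astype(np.float32), cv2.CV_32F, kernel)
    return float(np.mean(np.abs(response)))
```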
  • If wrinkles are present, the embodiment of the present application determines that the user is frowning and quantifies the degree of the user's frown based on the ratio and one or more preset thresholds.
  • If the myopia level is found successfully, this embodiment of the application directly uses the display parameter set corresponding to that myopia level to adjust the screen.
  • For the specific adjustment principle and implementation of the display parameter set corresponding to the myopia level, please refer to the related description of the first embodiment of this application, which will not be repeated here.
  • In order to realize the pre-associated storage of user identity information and the corresponding myopia level, the embodiment of the present application performs a myopia level test on the user in advance, which, as shown in FIG. 7, includes:
  • S701: Control the screen to simultaneously display multiple preset patterns that share the same attributes except for color, where each preset pattern uniquely corresponds to one color and the colors corresponding to the multiple preset patterns include at least red, green, and yellow.
  • S702: Output a pattern sharpness adjustment prompt to the user, so that the user inputs corresponding sharpness adjustment instructions according to the perceived sharpness of the preset patterns, until the multiple patterns appear equally sharp to the user.
  • S703: Receive the sharpness adjustment instructions for the preset patterns input by the user, identify the user's myopia level based on the instructions for the red, green, and yellow preset patterns among the sharpness adjustment instructions, and store the identified myopia level in association with the user's identity information.
  • White light is composite light made up of seven colors: red, orange, yellow, green, cyan, blue, and purple. The wavelength and refractive index of each color of light differ, so white light disperses after passing through a denser medium. Among visible light, red light has the longest wavelength, the smallest refractive index, and the fastest speed, while purple light has the shortest wavelength, the largest refractive index, and the slowest speed. In an emmetropic eye the focus of yellow light falls on the retina, the focus of red light falls behind the retina, and the focus of green light falls in front of the retina; the distances of the red and green foci from the retina are basically equal, so the circles of confusion that red and green light form on the retina have basically the same diameter, and optotypes on red and green backgrounds therefore look equally clear.
  • In that case, the subject will feel that the red and green optotypes have the same clarity.
  • If the focus of yellow light falls in front of the retina, the focus of red light lies behind the yellow focus and is therefore closer to the retina, while the focus of green light lies in front of the yellow focus and is therefore farther from the retina. The circle of confusion that red light forms on the retina is then smaller than that of green light, so the subject will feel that the optotype on the red background is clearer. Therefore, during the red-green test, if the user feels that the red optotype is clearer, the myopic correction needs to be increased or the hyperopic correction reduced to make the red and green optotypes equally clear.
  • If the focus of yellow light falls behind the retina, the focus of red light lies behind the yellow focus and is therefore farther from the retina, while the focus of green light lies in front of the yellow focus and is therefore closer to the retina. The circle of confusion that green light forms on the retina is then smaller than that of red light, so the subject will feel that the optotype on the green background is clearer. Therefore, during the red-green test, if the user feels that the green optotype is clearer, the myopic correction needs to be reduced or the hyperopic correction increased to make the red and green optotypes equally clear.
  • Based on this principle, in the embodiments of the present application multiple color patterns with the same sharpness are displayed at the same time, and the user adjusts the sharpness according to what he or she actually sees, until all the color patterns appear equally sharp to the user. The user's myopia level is then quantified according to the actual degree of adjustment applied to the red and green patterns and multiple preset thresholds.
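  • The sketch below turns the user's red and green sharpness adjustments into a level. Following the red-green principle above, a red target that initially looks clearer means the green pattern ends up needing more sharpening, so a positive green-minus-red imbalance is read here as a sign of myopia; the sign convention, step counts, and level boundaries are all assumptions for illustration.

```python
# S703 sketch: quantify the myopia level from the red/green adjustments.
def level_from_red_green(red_adjust: int, green_adjust: int) -> str:
    imbalance = green_adjust - red_adjust   # extra sharpening the green needed
    if imbalance <= 0:
        return "normal"
    if imbalance <= 2:
        return "minor"
    return "severe"

print(level_from_red_green(red_adjust=0, green_adjust=1))  # -> "minor"
print(level_from_red_green(red_adjust=0, green_adjust=4))  # -> "severe"
```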
  • the test method for myopia level can be integrated into the system setting function of the terminal device. After the user completes the user registration or the user logs in, the user is prompted to perform the test to improve the effectiveness of the test.
  • In order to ensure the viewing effect for the current user in real time and effectively relieve eye fatigue, the embodiment of the present application updates the user's identity information at a preset period and, after discovering that the current user has changed, adjusts the screen display parameters again, including:
  • if the identity information has changed, returning, based on the updated identity information, to the operation of finding the user's myopia level based on the identity information.
  • The specific value of the preset period can be set by the technician; preferably, it may be set to 5 minutes.
  • For the method of reacquiring the identity information, please refer to the method of obtaining identity information in the first embodiment of this application, which will not be repeated here. If the updated identity information differs from the identity information before the update, it is determined that the identity information has changed.
  • When the identity information has changed, the embodiment of this application returns directly to S102 of the first embodiment to process the new identity information, so as to adjust the screen display for the new user in time.
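  • A minimal sketch of this periodic re-check is given below; the callback-based structure and the polling loop are illustrative, with the 5-minute default taken from the preferred setting mentioned above.

```python
import time

def watch_identity(get_identity, on_identity_changed, period: float = 300.0):
    """Re-acquire the identity every `period` seconds; on change, rerun S102."""
    current = get_identity()
    while True:
        time.sleep(period)
        latest = get_identity()
        if latest != current:
            current = latest
            on_identity_changed(latest)   # return to the myopia-level lookup
```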
  • FIG. 8 shows a structural block diagram of a screen display device provided in an embodiment of the present application.
  • The screen display device illustrated in FIG. 8 may be the execution subject of the screen display method provided in the first embodiment.
  • the screen display device includes:
  • the information obtaining module 81 is used to obtain the user's identity information.
  • the level search module 82 is configured to search for the myopia level of the user based on the identity information.
  • the level recognition module 83 is configured to obtain the face image of the user if the search for the myopia level fails, and identify the myopia level of the user based on the face image.
  • the display adjustment module 84 is configured to match a corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
  • the level identification module 83 includes:
  • a module configured to, if the face image includes glasses, obtain size information of the screen and detect the distance between the user's eyes and the screen; and
  • a module configured to identify the user's myopia level based on the size information and the eye distance.
  • The level identification module 83 includes:
  • the data detection module is configured to detect, based on the face image, the distance between the user’s eyes and the screen, the degree of closure of the user’s eyes, and the user’s Degree of frown.
  • the myopia recognition module is configured to recognize the myopia level of the user based on the distance of the human eyes, the degree of closure of the human eyes, and the degree of frowning.
  • the myopia recognition module includes:
  • where N is the total number of pixels in the grayscale image;
  • n_i represents the number of pixels with grayscale value i;
  • k is the maximum grayscale value of the pixels in the grayscale image; and
  • σ²(t) is the variance corresponding to the threshold t.
  • the data detection module includes:
  • the screen display device further includes:
  • the corresponding display parameter set is matched according to the found myopia level, and the display parameters of the screen are adjusted based on the obtained display parameter set.
  • the screen display device further includes:
  • a pattern sharpness adjustment prompt is output to the user, so that the user inputs corresponding sharpness adjustment instructions according to the perceived sharpness of the preset patterns, until the multiple patterns viewed by the user appear equally sharp.
  • the screen display device further includes:
  • Although the terms "first", "second", etc. are used in the text of some embodiments of the present application to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • For example, the first table may be named the second table, and similarly, the second table may be named the first table, without departing from the scope of the various described embodiments.
  • The first table and the second table are both tables, but they are not the same table.
  • FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 9 of this embodiment includes: a processor 90 and a memory 91.
  • the memory 91 stores a computer program 92 that can run on the processor 90.
  • When the processor 90 executes the computer program 92, the steps in the foregoing embodiments of the screen display method are implemented, for example steps S101 to S104 shown in FIG. 1.
  • Alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the foregoing device embodiments, for example the functions of the modules 81 to 84 shown in FIG. 8, are realized.
  • the terminal device 9 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 90 and a memory 91.
  • FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9, which may include more or fewer components than shown in the figure, a combination of certain components, or different components.
  • For example, the terminal device may also include input and output devices, a network access device, a bus, and the like.
  • The so-called processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9.
  • the memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk equipped on the terminal device 9, a smart memory card (Smart Media Card, SMC), and a Secure Digital (SD) Card, Flash Card, etc. Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
  • the memory 91 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 91 can also be used to temporarily store data that has been sent or will be sent.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium.
  • An embodiment of this application also provides a computer-readable storage medium, which may be a volatile storage medium or a non-volatile storage medium and which, when executed by one or more processors, causes the one or more processors to perform the following steps: obtain the user's identity information; search for the user's myopia level based on the identity information; if the myopia level search fails, obtain the user's face image and recognize the user's myopia level based on the face image; match the corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.

Abstract

The present application provides a screen display method, a terminal device, and a storage medium, applicable to the technical field of computer vision in artificial intelligence. The method comprises: obtaining identity information of a user; searching for the degree of myopia of the user on the basis of the identity information; if the search for the degree of myopia fails, obtaining a face image of the user, and identifying the degree of myopia of the user on the basis of the face image; and obtaining a corresponding display parameter set by matching on the basis of the degree of myopia of the user, and performing display parameter adjustment on the screen on the basis of the obtained display parameter set. According to embodiments of the present application, a display parameter condition can be adjusted according to the actual situation of the user, so that the actual display situation can further satisfy the viewing comfort of the user, and the human-computer interaction efficiency between the user and the intelligent device is ensured.

Description

Screen display method, terminal device and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 5, 2019, with application number 201910603946.X and entitled "Screen display method, apparatus and terminal device", the entire content of which is incorporated herein by reference.
Technical Field
This application belongs to the field of computer vision technology in artificial intelligence, and in particular relates to a screen display method, a terminal device, and a storage medium.
Background
With the development of technology, smart devices such as mobile phones, tablets, and computers have become an indispensable part of modern life. At the same time, due to living habits and work needs, it is often hard for people to avoid facing the screens of smart devices for long periods, so that more and more people have become nearsighted.
In order to relieve users' eye strain when viewing the screen and protect their eyesight, the prior art has introduced screen display solutions such as an "eye protection mode" and a "warm color mode". The principle of these solutions is basically to adjust the screen's color saturation, brightness, and similar parameters directly to a fixed display parameter scheme, so that the user's eye fatigue is somewhat reduced when viewing the screen. In practice, although a display scheme with fixed display parameters can relieve eye fatigue to a certain extent, on the one hand different users' circumstances differ, so the suitable display scheme also differs, making it difficult for fixed display parameters to meet the actual needs of different users; on the other hand, the greater the number and amplitude of the adjusted parameters, the greater the impact on the visual effect of viewing the screen, which makes viewing the screen extremely uncomfortable and in turn reduces the efficiency of human-computer interaction between the smart device and the user. The inventor realized that existing screen display solutions are not very adaptable and therefore cannot meet the actual eye-fatigue relief needs of different users or guarantee the efficiency of human-computer interaction between users and smart devices.
Technical Problem
In view of this, the embodiments of the present application provide a screen display method, terminal device, and storage medium, to solve the problem that the screen display solutions in the prior art have poor adaptability, cannot meet the actual eye-fatigue relief needs of different users, and cannot guarantee the efficiency of human-computer interaction between users and smart devices.
Technical Solution
A first aspect of the embodiments of the present application provides a screen display method, including:
obtaining the user's identity information;
searching for the user's myopia level based on the identity information;
if the search for the myopia level fails, acquiring a face image of the user, and identifying the user's myopia level based on the face image; and
matching the corresponding display parameter set based on the user's myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.
A second aspect of the embodiments of the present application provides a terminal device. The terminal device includes a memory and a processor, the memory stores a computer program that can run on the processor, and the processor executes the following steps: obtain the user's identity information; search for the user's myopia level based on the identity information; if the myopia level search fails, obtain the user's face image and identify the user's myopia level based on the face image; match the corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
A third aspect of the embodiments of the present application provides a computer-readable storage medium which, when executed by one or more processors, causes the one or more processors to perform the following steps: obtain the user's identity information; search for the user's myopia level based on the identity information; if the myopia level search fails, obtain the user's face image and identify the user's myopia level based on the face image; match the corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
Beneficial Effects
Compared with the prior art, the embodiments of this application have the following beneficial effect: the user's myopia level is recognized in real time from the actually collected face images, and different display parameter schemes are adaptively selected for different myopia conditions to adjust the screen display, so that the screen display can specifically meet the actual eye-fatigue relief needs of different users. Because the display parameters are adjusted according to the user's actual situation, the actual display can further satisfy the user's viewing comfort and ensure the efficiency of human-computer interaction between users and smart devices.
Description of the Drawings
FIG. 1 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 1 of the present application;
FIG. 2 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 2 of the present application;
FIG. 3 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 3 of the present application;
FIG. 4 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 4 of the present application;
FIG. 5 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 5 of the present application;
FIG. 6 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 6 of the present application;
FIG. 7 is a schematic diagram of the implementation process of the screen display method provided in Embodiment 7 of the present application;
FIG. 8 is a schematic structural diagram of the screen display device provided in Embodiment 8 of the present application;
FIG. 9 is a schematic diagram of the terminal device provided in Embodiment 9 of the present application.
Embodiments of the Invention
To facilitate understanding of the application, the embodiments are briefly explained here. In the prior art, methods for relieving eye fatigue all display the screen directly with a single fixed display parameter scheme, which neither meets the actual needs of different users nor avoids affecting the user's screen-viewing comfort, thereby reducing the efficiency of human-computer interaction.
Considering the actual situation, on the one hand, for users with different degrees of myopia, because different colors of light have different wavelengths and refractive indices, light of different colors emitted by the same source ultimately focuses at different positions in the eye; for example, red light tends to focus behind the retina and green light tends to focus in front of it. Therefore, even when facing the same screen, users with different degrees of myopia see somewhat different display content, and in particular the perceived sharpness of the content is strongly affected. For the same display scheme, the higher the user's degree of myopia, the worse and less clear the visual effect appears, so viewing is more strenuous and eye fatigue occurs more easily. On the other hand, users with different degrees of myopia also have different tolerance for eye fatigue; in general, severely myopic users become fatigued more easily. Based on these practical needs, in the embodiments of this application the user's myopia is divided into multiple different myopia levels according to the specific degree of myopia, and an appropriate display parameter set is set for each myopia level to ensure the corresponding visual effect and fatigue-relief effect. The user's myopia level is intelligently recognized while the user is watching the screen, a preset suitable display parameter set is selected according to the user's actual myopia level, and finally the screen's actual display color scheme and other indicators are controlled according to the selected display parameter set, so that the screen displays the most suitable effect. The display parameters are thus adjusted according to the user's actual situation, so that the actual display scheme can further satisfy the user's viewing comfort and ensure the efficiency of human-computer interaction between the user and the smart device. The details are as follows:
FIG. 1 shows the implementation flowchart of the screen display method provided in Embodiment 1 of the present application, detailed as follows:
S101: Obtain the user's identity information.
It should be noted that the screen display method in the embodiments of this application is applied to a terminal device. To ensure the acquisition of the user's face image, the terminal device should have an image acquisition module/device, such as a camera, or a communication connection with a third-party image acquisition device. Specifically, in the embodiments of the present application, the terminal device may be a personal computer, a tablet computer, a mobile phone, or another device, which is determined by the actual application scenario.
In the embodiments of this application, each user corresponds to a piece of identity information. The identity information can be stored in the form of a unique identifier, in the form of an image, etc., as set by technicians according to actual needs. The embodiments pre-store the identity information of some users and associate that identity information with the myopia level of the corresponding user, so that the screen display can be adjusted immediately according to the user's identity. The storage of user identity information and myopia levels can be implemented in any one or more of the following ways, including but not limited to: entry by technicians based on acquired user information, entry by users based on their actual myopia situation, and recording by the terminal device, which stores the user's identity information in association with the identified myopia level each time it identifies the user's myopia level. The classification rules for myopia levels can also be defined by technicians; for example, myopia can be simply divided into three levels (severe, minor, and normal) or divided more finely, which is not limited here. The user identity information and myopia levels can be stored locally on the terminal device or in a third device such as a server, as set by technicians according to the actual situation.
After pre-associating and storing the identity information and myopia levels of some users, the embodiment of the present application obtains the identity information of the current user to determine whether the current user has a corresponding myopia level. Corresponding to the different ways of storing user identity information, the embodiment of the present application can also obtain user identity information in many ways, including but not limited to directly identifying it from the personal information of the currently logged-in user, or performing biometric identification of the current user to determine the identity information. Technicians select the corresponding identity information acquisition method according to how the user identity information is actually stored, which is not limited here.
S102: Search for the user's myopia level based on the identity information.
After the identity information of the current user is determined, according to how the identity information and myopia levels are stored, the embodiments of the present application search for the corresponding myopia level either locally on the terminal device or, via a communication connection, on the third device. If the myopia level corresponding to the current user has been stored in advance, it can be found directly; if it has not been stored in advance, the search fails.
S103: If the search for the myopia level fails, obtain a face image of the user, and identify the user's myopia level based on the face image.
If the myopia level search fails, the user's identity information cannot be used directly to adjust the screen. In this case, the embodiments of the present application obtain the user's face image directly and identify the user's myopia level based on the face image. The specific myopia level identification method is not limited here and may be selected by technicians according to the actual situation, including but not limited to identifying the myopia level according to whether the user wears glasses, or according to the degree to which the user squints (i.e., the degree of eye closure); the methods of other related embodiments of the present application may also be used for myopia level identification. It should be noted that the embodiments of the present application only need to identify the user's degree of myopia roughly and map it to the myopia levels required by the embodiments; that is, it is not necessary to determine precise data such as the user's exact myopia degree and astigmatism during recognition. Therefore, the myopia level identification method selected in the embodiments of the present application may be a relatively precise myopia data recognition algorithm or a relatively low-accuracy myopia level recognition algorithm, so as to meet the actual needs of terminal devices with different hardware configurations and improve the adaptability of the embodiments to different scenarios.
S104: Match a corresponding display parameter set based on the user's myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
In the embodiments of the present application, the display parameter set contains at least one kind of display parameter data among color scheme, sharpness, saturation, contrast, brightness, and the like; the specific data content may be set by technicians according to actual user needs and the parameters that the terminal device supports adjusting.
As can be seen from the above description, the embodiments of the present application divide the user's degree of myopia into multiple myopia levels in advance and set an appropriate display parameter set for each myopia level. After the user's myopia level is obtained, a display parameter set suitable for the user is determined by matching against the myopia level, and the screen is finally adjusted according to the specific display parameter data in the display parameter set, for example by reducing the brightness.
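As an illustration of the matching step in S104, the following is a minimal sketch in Python. The level names, parameter values, and the screen.set() interface are assumptions introduced for illustration only; they are not values or APIs taken from the present application.
    # Minimal sketch: look up a display parameter set by myopia level and apply it.
    # Level names, parameter values, and the screen.set() call are illustrative assumptions.
    DISPLAY_PARAMETER_SETS = {
        "normal": {"brightness": 0.80, "contrast": 0.70, "blue_ratio": 1.0, "red_ratio": 1.0},
        "mild":   {"brightness": 0.65, "contrast": 0.60, "blue_ratio": 0.8, "red_ratio": 0.9},
        "severe": {"brightness": 0.50, "contrast": 0.55, "blue_ratio": 0.6, "red_ratio": 0.8},
    }

    def match_display_parameters(myopia_level: str) -> dict:
        """Return the display parameter set for a myopia level, falling back to 'normal'."""
        return DISPLAY_PARAMETER_SETS.get(myopia_level, DISPLAY_PARAMETER_SETS["normal"])

    def adjust_screen(myopia_level: str, screen) -> None:
        """Apply every parameter in the matched set to a screen object exposing set(name, value)."""
        for name, value in match_display_parameters(myopia_level).items():
            screen.set(name, value)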
As an optional embodiment of the present application, it is considered that, in practice, light of different colors affects the human eye to different degrees. The retina of the human eye has an absorption peak in the blue-light region; blue light excites the retina and causes it to release free-radical ions, which easily leads to atrophy of the retinal pigment epithelium and thus to the death of light-sensitive cells. In general, therefore, blue light is very harmful to the human eye, and red light is also relatively harmful. When setting the display parameter sets, the proportions of blue and red light can therefore be reduced appropriately to lessen harm to the eyes and relieve eye fatigue. On the other hand, reducing these colors directly affects the screen display effect. In this solution, display schemes are therefore designed for users of different myopia levels: for users with a low myopia degree, the corresponding display scheme reduces these colors only slightly, to keep the display effect good; for users with a high myopia degree, the reduction is larger, to give better relief of eye fatigue, thereby achieving a balance between eye-fatigue relief and display effect. Specifically, since both blue and red are relatively harmful to the eyes, in the embodiments of the present application, when setting the color scheme of a display parameter set, a color scheme containing little blue and red, or no blue and red at all, is used as the background color, and multiple corresponding display schemes are set, where the higher the myopia degree, the less red and blue the background color contains.
As a preferred embodiment of the present application, it is considered that, in practice, the display effect and eye-protection effect of the same screen brightness differ under different ambient light intensities. Therefore, while obtaining the display parameter set, the embodiments of the present application also measure the light intensity of the environment in which the terminal device is located, for example by detecting it with a light sensor, then set the screen display parameters according to the display parameter set, and adjust the display brightness and contrast in real time according to the ambient light intensity.
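The following sketch illustrates how a matched parameter set might be combined with a real-time ambient light reading. The lux scale, the clamping bounds, and the screen.set() interface are assumptions for illustration; the text above does not specify a particular mapping from ambient light to brightness.
    # Minimal sketch: scale brightness and contrast by ambient light before applying
    # the matched parameter set. The 500-lux reference and the clamp range are assumptions.
    def apply_with_ambient_light(params: dict, ambient_lux: float, screen) -> None:
        scale = min(max(ambient_lux / 500.0, 0.3), 1.2)   # clamp to a sensible range
        adjusted = dict(params)
        adjusted["brightness"] = params["brightness"] * scale
        adjusted["contrast"] = params["contrast"] * min(scale, 1.0)
        for name, value in adjusted.items():
            screen.set(name, value)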
In the embodiments of the present application, the user's myopia is divided into multiple different myopia levels according to the specific degree of myopia, and an appropriate display parameter set is set for users of each myopia level to guarantee the corresponding visual effect and fatigue-relief effect. When the user views the screen, the user's myopia level is recognized intelligently, a suitable pre-set display parameter set is selected according to the user's actual myopia level, and indicators such as the actual display color scheme of the screen are finally controlled according to the selected display parameter set, so that the screen shows the most suitable effect. Display parameters are thus adjusted according to the user's actual situation, so that the actual display scheme further satisfies the user's viewing comfort and the efficiency of human-computer interaction between the user and the smart device is guaranteed. An advantage of pre-storing is that more time is available to measure the user's myopia accurately; the user can even enter information such as their myopia degree and astigmatism directly into the terminal device, which then converts it into the corresponding myopia level, thereby improving the accuracy of matching users.
As a specific implementation of myopia level identification in Embodiment 1 of the present application, it is considered that, in practice, whether glasses are worn is an extremely effective indicator of whether a user is nearsighted. In this embodiment, the user's myopia level is therefore quantified by identifying whether the user wears glasses and, if glasses are worn, the distance between the eyes and the screen. As shown in Figure 2, the method includes:
S201: If the face image contains glasses, obtain the size information of the screen and detect the eye distance between the user's eyes and the screen.
To identify whether the user wears glasses, the embodiments of the present application first perform eye recognition on the acquired face image and then search for glasses based on the identified eye positions, as detailed below:
Step 1: Perform eye recognition and localization on the face image to determine the position of the eyes in the face image.
There are many specific eye localization methods, which may be selected by technicians and are not limited here. They include, but are not limited to, shape matching: since the contour shape of an eye is fixed, shapes that satisfy the requirements can be matched, and the final eye positions are then determined from the facts that the two eyes are symmetric and that their gray values differ noticeably from those of the surrounding skin. Alternatively, the acquired face image can first be divided from top to bottom into three parts a, b, and c, with proportions a = 30%, b = 40%, and c = 30%, and only part b is analyzed: OpenCV is used to perform face detection on each frame first; once a face is detected, region b is computed and grayscale binarization is applied to it to obtain the corresponding binarized image, and eye localization is then performed on the gray-scale binarized image.
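The region-based variant described above can be sketched with OpenCV as follows. This is a minimal sketch only; the Haar cascade used for face detection and the use of Otsu binarization for region b are assumptions, since the text does not name a particular detector or thresholding routine.
    # Minimal OpenCV sketch: detect the face, keep the middle 40% band (region b),
    # and binarize it so that eye localization can run on the result.
    import cv2

    def locate_eye_band(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        # Region b: skip the top 30% (region a) and the bottom 30% (region c).
        band = gray[y + int(0.3 * h): y + int(0.7 * h), x: x + w]
        # Grayscale binarization of region b (Otsu used here as one possible choice).
        _, binary = cv2.threshold(band, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary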
Step 2: Based on the eye positions, detect the closed curves contained in the face image, and identify those closed curves whose interior contains an eye.
Step 3: If closed curves whose interior contains an eye exist, identify these closed curves as glasses and determine that the face image contains glasses; if no such closed curve exists, determine that the face image does not contain glasses.
After the eye positions are determined, since glasses are generally located around the eyes and, after grayscale binarization, the frame of the glasses forms a closed curve surrounding the eye, it is only necessary to search around the eyes in the face image for closed curves that satisfy the requirements; if such curves exist, it is determined that the user wears glasses. Since the shapes of glasses are generally known, some possible glasses shapes can also be preset, shape matching can be applied to the closed curves before checking whether they contain an eye, and only closed curves with a high degree of match to one of those shapes, for example a matching degree above 60%, are used for the search and determination around the eye positions.
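A minimal contour-based sketch of this check is given below. The contour-area cutoff and the way the eye centers are supplied are assumptions; the shape-matching pre-filter mentioned above is omitted for brevity.
    # Minimal sketch: the user is judged to wear glasses if a sufficiently large
    # closed contour in the binarized image encloses a detected eye center.
    import cv2

    def contains_glasses(binary_image, eye_centers, min_area=500.0):
        contours, _ = cv2.findContours(binary_image, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) < min_area:
                continue  # too small to be a spectacle frame
            for (ex, ey) in eye_centers:
                # pointPolygonTest > 0 means the eye center lies inside the closed curve.
                if cv2.pointPolygonTest(contour, (float(ex), float(ey)), False) > 0:
                    return True
        return False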
It is also considered that, in practice, a nearsighted user viewing screen content generally and subconsciously moves closer to the screen than a user with normal vision; for example, when looking at a mobile phone screen, such a user is usually relatively close to it. At the same time, the typical viewing distance also differs noticeably for screens of different sizes: the distance at which a user watches a large screen such as a computer or television is clearly greater than for a small screen such as a mobile phone or tablet. The embodiments of the present application therefore obtain both the size information of the screen and the eye distance between the user's eyes and the screen, to provide data for subsequently quantifying the user's myopia level. The size information can be read directly from the hardware parameters of the screen, whereas the eye distance must be measured with a depth sensor on the basis of the recognized and localized eyes. Since the user has a pair of eyes, the average of the two eyes' distances to the screen may be used as the final eye distance when quantifying it.
S202: Identify the user's myopia level based on the size information and the eye distance.
To quantify and identify the user's myopia level based on the screen size and the eye distance, the embodiments of the present application collect in advance, for screens of different sizes, statistics on the screen-viewing habits of users with different degrees of myopia, and obtain multiple corresponding thresholds from analysis of the data; that is, different threshold arrays are set for screens of different sizes. When judging the degree of myopia, the corresponding threshold array is first read according to the screen size of the device, and the user's myopia level is then looked up according to the eye distance.
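A minimal lookup sketch is shown below. The screen-size buckets, the eye-distance thresholds (in centimeters), and the level names are illustrative assumptions rather than values derived from the statistics described above.
    # Minimal sketch: pick the threshold array for the screen size, then map the
    # measured eye distance to a myopia level (closer viewing -> higher level).
    THRESHOLDS_BY_SCREEN = {
        "phone":  [20.0, 30.0],   # below 20 cm -> severe, below 30 cm -> mild
        "tablet": [25.0, 40.0],
        "pc":     [35.0, 55.0],
    }
    LEVELS = ["severe", "mild", "normal"]

    def myopia_level_from_distance(screen_bucket: str, eye_distance_cm: float) -> str:
        thresholds = THRESHOLDS_BY_SCREEN[screen_bucket]
        for level, upper in zip(LEVELS, thresholds):
            if eye_distance_cm < upper:
                return level
        return LEVELS[-1]   # far enough away: lowest (normal) level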
As another specific implementation of myopia level identification in Embodiment 1 of the present application, it is considered that, in practice, a nearsighted user may very well not wear glasses, so even on the basis of Embodiment 2 it is difficult to accurately identify the myopia level of users who do not wear glasses. To identify the myopia level of such users, the method, as shown in Figure 3, includes:
S301: If the face image does not contain glasses, detect, based on the face image, the eye distance between the user's eyes and the screen, the degree of closure of the user's eyes, and the user's degree of frowning.
Likewise, in this embodiment of the present application the user must be checked for glasses; for the specific identification method, reference may be made to the description in Embodiment 2 of the present application, which is not repeated here, or the method may be set by technicians themselves.
It is considered that, in practice, viewing screen content is more difficult for a nearsighted user who is not wearing glasses. Such users therefore habitually squint, which restricts the incident light and lets them see a little more clearly; their eye muscles easily become fatigued and tense, so some frowning appears; and in order to see more clearly with less effort, they also tend to move relatively close to the screen. Based precisely on these symptoms that nearsighted users without glasses usually exhibit, after recognizing that the user is not wearing glasses, the embodiments of the present application detect and assess the user from three aspects. Specifically:
For detection of the eye distance, reference may be made to the related description of Embodiment 2 of the present application, which is not repeated here.
For detection of the degree of eye closure, some existing eye-closure algorithms may be used directly, for example detection based on the PERCLOS method; the method may also be designed by technicians themselves, or reference may be made to the related description of Embodiment 4 of the present application, which is not limited here.
For recognition of the degree of frowning, some existing expression recognition algorithms may be used directly, or the method may be designed by technicians themselves; reference may also be made to the related description of Embodiment 5 of the present application, which is not limited here.
S302: Identify the user's myopia level based on the eye distance, the degree of eye closure, and the degree of frowning.
It should be noted that a user who is not wearing glasses may very well not be nearsighted at all (in the embodiments of the present application, such users may be classified at the lowest myopia level). When identifying the myopia level, the embodiments of the present application therefore do not use the screen size directly as a quantification indicator; instead, whether the user is nearsighted is first judged from the eye distance, the degree of eye closure, and the degree of frowning. If the result is that the user is not nearsighted, for example because all three indicators are below their corresponding thresholds, the user is directly determined to be at the lowest myopia level. Conversely, if the result is that the user is nearsighted, the embodiments of the present application further read the size information of the screen and quantify the user's degree of myopia based on the size information, the eye distance, the degree of eye closure, and the degree of frowning, obtaining the corresponding myopia level.
To identify whether the user is nearsighted from the eye distance, the degree of eye closure, and the degree of frowning, and to quantify the degree of myopia from the size information, the eye distance, the degree of eye closure, and the degree of frowning, the embodiments of the present application collect in advance, for screens of different sizes, statistics on the screen-viewing habits of users with different degrees of myopia (including non-myopic users). On the one hand, the eye distance, degree of eye closure, and degree of frowning are analyzed for myopic and non-myopic users to obtain the thresholds that distinguish myopia from non-myopia; on the other hand, the eye distance, degree of eye closure, and degree of frowning are analyzed for screens of different sizes to obtain the different threshold arrays corresponding to the different myopia levels. When judging the degree of myopia, the corresponding threshold array is first read according to the screen size of the device, and the user's myopia level is then looked up according to the eye distance.
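A two-stage decision of this kind can be sketched as follows. All cutoffs, the level names, and the form of the per-screen threshold table are assumptions for illustration; the actual thresholds come from the statistics described above.
    # Minimal sketch: stage 1 decides myopic vs. non-myopic from the three indicators;
    # stage 2 quantifies the level against the threshold array for the screen size.
    # Note that the distance check is inverted relative to the closure/frown checks:
    # a larger viewing distance indicates less myopia.
    def classify_without_glasses(eye_distance_cm, eye_closure, frown_degree,
                                 screen_bucket, thresholds_by_screen,
                                 distance_cutoff=35.0, closure_cutoff=0.4,
                                 frown_cutoff=0.5):
        if (eye_distance_cm >= distance_cutoff
                and eye_closure < closure_cutoff
                and frown_degree < frown_cutoff):
            return "normal"                       # treated as the lowest myopia level
        levels, distance_thresholds = thresholds_by_screen[screen_bucket]
        for level, upper in zip(levels, distance_thresholds):
            if eye_distance_cm < upper:
                return level
        return levels[-1]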
Through a comprehensive analysis of multiple indicators of the user, the embodiments of the present application achieve effective identification and quantification of the degree of myopia of users who are not wearing glasses. At the same time, building on Embodiment 2, adaptive recognition of different application scenarios and matching of the corresponding adaptive identification methods can be realized, so that the embodiments of the present application greatly improve adaptive and accurate identification of different scenarios and users.
As a specific implementation of the detection of the degree of eye closure in Embodiment 3 of the present application, as shown in Figure 4, Embodiment 4 of the present application includes:
S401: Perform grayscale processing on the eye region image in the face image.
S402: Calculate a grayscale threshold t based on formula (1), and binarize the grayscale image based on the grayscale threshold t.
δ(t)² = n₁(1 − n₁)(r₁ − r₂)²    (1)
[Corrected according to Rule 26, 13.05.2020: accompanying formula image (Figure 1) not reproduced in this text.]
where N is the total number of pixels in the grayscale image, nᵢ denotes the number of pixels with gray value i, k is the maximum gray value of the pixels in the grayscale image, t ∈ [0, k], and δ(t)² denotes the variance as a function of t.
In the embodiments of the present application, the grayscale threshold t is calculated adaptively from the actual gray values of the pixels in the eye region image; t is optimal when the variance takes its maximum value, which guarantees the quality of the grayscale binarization, so that the calculated grayscale threshold better meets the actual binarization needs of the image and the binarization of the grayscale image is better.
S403: Calculate the average eye height-to-width ratio in the gray-scale binarized image to obtain the degree of eye closure.
Here a traversal method may be used to traverse the upper, lower, left, and right edges of the eye and calculate the eye height-to-width ratio r = Eh/Ew; a method designed by technicians themselves may also be used to calculate the average eye height-to-width ratio, which is not limited here.
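The following NumPy sketch implements S401 to S403 for a single eye region: it searches for the threshold t that maximizes δ(t)² = n₁(1 − n₁)(r₁ − r₂)², binarizes at that threshold, and computes the height-to-width ratio of the dark (eye) pixels. Interpreting n₁ as the proportion of pixels with gray value not exceeding t and r₁, r₂ as the mean gray values of the two resulting classes is an assumption consistent with formula (1); averaging the ratio over both eyes, as described above, is left to the caller.
    # Minimal sketch of the adaptive threshold and eye height-to-width ratio.
    import numpy as np

    def adaptive_threshold(gray: np.ndarray) -> int:
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        total = hist.sum()
        levels = np.arange(256, dtype=float)
        best_t, best_var = 0, -1.0
        for t in range(256):
            n1 = hist[: t + 1].sum() / total              # proportion of pixels <= t
            if n1 == 0.0 or n1 == 1.0:
                continue
            r1 = (levels[: t + 1] * hist[: t + 1]).sum() / hist[: t + 1].sum()
            r2 = (levels[t + 1:] * hist[t + 1:]).sum() / hist[t + 1:].sum()
            var = n1 * (1.0 - n1) * (r1 - r2) ** 2        # δ(t)² from formula (1)
            if var > best_var:
                best_t, best_var = t, var
        return best_t

    def eye_height_width_ratio(gray_eye_region: np.ndarray) -> float:
        t = adaptive_threshold(gray_eye_region)
        binary = gray_eye_region <= t                     # dark pixels taken as the eye
        rows = np.where(binary.any(axis=1))[0]
        cols = np.where(binary.any(axis=0))[0]
        if rows.size == 0 or cols.size == 0:
            return 0.0
        return (rows[-1] - rows[0] + 1) / (cols[-1] - cols[0] + 1)   # r = Eh / Ew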
As a specific implementation of the detection of the degree of frowning in Embodiment 3 of the present application, as shown in Figure 5, Embodiment 5 of the present application includes:
S501: Obtain the eyebrow region image from the face image, and detect the pupil distance between the pupils of the eyes in the face image and the screen.
To recognize the degree of frowning, the eyebrow region in the face image must first be located. In the embodiments of the present application, the position of the user's eyes is determined by eye localization, and the eyebrow region between the eyes is then further located based on the eye positions. Specifically, the forehead region is located according to the eye positions, and the n% of the forehead region closest to the eyes is taken as the eyebrow region, where n may be 50 or another value.
At the same time, the embodiments of the present application further locate the pupils of the eyes and use a depth sensor to detect the distance between the pupils and the screen for later comparison. The pupil localization method may be set by technicians and includes, but is not limited to, localization using a star-cluster model.
S502: Draw the depth-space image corresponding to the eyebrow region image, and based on the depth-space image find the brow-center distance between the brow center and the screen, as well as the shortest distance to the screen in the depth-space image.
S503: Calculate the distance differences between the brow-center distance and the pupil distance and between the shortest distance and the pupil distance, respectively, and calculate the ratio of the two differences. It is considered that, in practice, when a user frowns, the tightening of the brows produces a fairly obvious height difference between the brow center and the eyebrows; in theory, therefore, comparing the distance from the brow center to the screen with the distance from the eyebrow peak to the screen can quantify the degree of frowning to some extent. In practice, however, because different people frown differently, it is difficult to set an effective threshold directly on the height difference between the brow center and the eyebrow peak that guarantees recognition accuracy, whereas the pupil distance does not change when frowning. The embodiments of the present application therefore use the distance between the pupils and the screen as a reference value to quantify the distances from the brow center and the eyebrow peak to the screen separately, and finally take the ratio of the two calculated differences to quantify the frowning behavior to a certain degree.
Specifically, the embodiments of the present application first use a depth sensor to draw the depth-space image of the entire eyebrow region with respect to the screen, locate the brow center in it, and then find the brow-center distance and the eyebrow-peak distance (i.e., the shortest distance to the screen). Finally, the difference between the brow-center distance and the pupil distance and the difference between the eyebrow-peak distance and the pupil distance are calculated, and the ratio of the two differences is computed. For locating the brow center, the midpoint between the two eyebrows may be used directly; the larger the ratio, the greater the degree of frowning.
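A minimal sketch of the ratio in S503 is given below. The direction of the division (brow-center difference over eyebrow-peak difference) and the degenerate-case handling are assumptions; the text above only states that both differences are taken against the pupil distance and that a larger ratio indicates a stronger frown.
    # Minimal sketch: quantify frowning from depth-sensor distances to the screen.
    def frown_ratio(brow_center_distance: float,
                    eyebrow_peak_distance: float,
                    pupil_distance: float) -> float:
        center_diff = brow_center_distance - pupil_distance
        peak_diff = eyebrow_peak_distance - pupil_distance
        if peak_diff == 0.0:
            return 0.0                  # avoid division by zero in the degenerate case
        return center_diff / peak_diff  # larger ratio -> stronger frown (assumed ordering)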
S504: Identify whether wrinkles are present in the eyebrow region image.
S505: Calculate the degree of frowning based on the ratio and the wrinkle recognition result.
It is considered that, in practice, some people naturally have a rather large height difference between the brow center and the eyebrow peak, so using the ratio alone to calculate the degree of frowning could lead to misrecognition of frowning. On the other hand, in practice a user's frown is usually accompanied by wrinkles. The embodiments of the present application therefore also identify whether wrinkles are present on the user's brow, to assist in judging whether the user is actually frowning and to what degree. The specific wrinkle recognition method may be selected by technicians and includes, but is not limited to, facial skin wrinkle recognition using Gabor filters and a BP neural network, or wrinkle recognition based on skin luster (i.e., light reflection).
If wrinkles are detected in the user's eyebrow region, the embodiments of the present application determine that the user is frowning and quantify the user's degree of frowning according to the magnitude of the ratio and one or more preset thresholds.
Embodiment 6 of the present application, as shown in Figure 6, includes:
S601: If the myopia level search is successful, match the corresponding display parameter set according to the found myopia level, and adjust the display parameters of the screen based on the obtained display parameter set.
If the myopia level corresponding to the user's identity information is found, the embodiments of the present application directly use the display parameter set corresponding to that myopia level to adjust the screen. For the specific adjustment and implementation principles, reference may be made to the related description of Embodiment 1 of the present application, which is not repeated here.
As Embodiment 7 of the present application, in order to pre-associate and store user identity information with the corresponding myopia level, the embodiments of the present application test the user's myopia level in advance. As shown in Figure 7, the method includes:
S701: Control the screen to simultaneously display multiple preset patterns that are identical in all attributes except color, where each preset pattern uniquely corresponds to one color and the colors of the multiple preset patterns include at least red, green, and yellow.
S702: Output a pattern sharpness adjustment prompt to the user, so that the user inputs corresponding sharpness adjustment instructions according to how sharp the preset patterns appear, until all of the viewed patterns appear equally sharp to the user.
S703: Receive the sharpness adjustment instructions input by the user for the preset patterns, identify the user's myopia level based on the instructions for the red, green, and yellow preset patterns among the sharpness adjustment instructions, and store the identified myopia level in association with the user's identity information.
White light is composite light formed by mixing light of seven colors: red, orange, yellow, green, cyan, blue, and violet. Light of each color has a different wavelength and refractive index, so white light disperses after passing through a denser medium. Among visible light, red light has the longest wavelength, the smallest refractive index, and the highest speed, while violet light has the shortest wavelength, the largest refractive index, and the lowest speed. An emmetropic eye focuses yellow light exactly on the retina; the focus of red light then falls behind the retina and the focus of green light in front of it. Red and green light are at essentially equal distances from the retina, so the circles of confusion they form on the retina have essentially equal diameters, and an emmetropic eye therefore perceives optotypes on red and green backgrounds with essentially the same clarity.
Thus, after correction of a refractive error, if the spherical power is appropriate, the patient perceives the red and green optotypes as equally clear. If the subject is in a myopic state (i.e., myopia is undercorrected or hyperopia is overcorrected), the focus of yellow light falls in front of the retina; the focus of red light lies behind that of yellow light and is therefore closer to the retina, while the focus of green light lies further forward and is therefore farther from the retina. The circle of confusion formed by red light on the retina is thus smaller in diameter than that formed by green light, so the subject perceives the optotype on the red background as clearer. In the red-green test, therefore, if the user finds the red optotype clearer, the myopic correction needs to be increased or the hyperopic correction decreased until the red and green optotypes are equally clear.
If the subject is in a hyperopic state (i.e., myopia is overcorrected or hyperopia is undercorrected), the focus of yellow light falls behind the retina; the focus of red light lies behind that of yellow light and is therefore farther from the retina, while the focus of green light lies further forward and is therefore closer to the retina. The circle of confusion formed by green light on the retina is thus smaller in diameter than that formed by red light, so the subject perceives the optotype on the green background as clearer. In the red-green test, therefore, if the user finds the green optotype clearer, the myopic correction needs to be decreased or the hyperopic correction increased until the red and green optotypes are equally clear.
Based precisely on the above principle, the embodiments of the present application display patterns of multiple colors with the same sharpness simultaneously, and the user adjusts the sharpness according to the actual sharpness they see. Once the user has finished adjusting and perceives all of the color patterns as equally sharp, the user's myopia level is quantified according to the actual degree of adjustment applied to the red and green patterns and multiple preset degree thresholds.
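One way to turn the recorded red and green adjustments into a level is sketched below. The sign convention, the use of the red-green imbalance as the thresholded quantity, and the threshold values are assumptions for illustration; the application only states that the level is quantified from the adjustments to the red and green patterns against preset degree thresholds.
    # Minimal sketch: map the imbalance between the red and green sharpness
    # adjustments to a myopia level via preset degree thresholds.
    LEVEL_THRESHOLDS = [(0.2, "normal"), (0.5, "mild"), (float("inf"), "severe")]

    def level_from_red_green_adjustment(red_adjustment: float,
                                        green_adjustment: float) -> str:
        imbalance = abs(red_adjustment - green_adjustment)
        for upper, level in LEVEL_THRESHOLDS:
            if imbalance < upper:
                return level
        return LEVEL_THRESHOLDS[-1][1]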
In the embodiments of the present application, the myopia level test method may be integrated into the system settings function of the terminal device, and the user is prompted to perform the test after completing user registration or logging in, so as to improve the effectiveness of the test.
As an embodiment of the present application, in order to guarantee the viewing experience for the user currently in front of the screen and to relieve eye fatigue effectively, the embodiments of the present application update the user's identity information at a preset interval and, upon finding that the current user has changed, readjust the display parameters of the screen. The method includes:
updating the identity information at a preset interval, and judging whether the identity information has changed before and after the update;
if the identity information has changed, returning, based on the updated identity information, to the operation of searching for the user's myopia level based on the identity information.
The specific value of the preset interval may be set by technicians; preferably, it may be set to 5 minutes. For the way the identity information is reacquired, reference may be made to the way identity information is obtained in Embodiment 1 of the present application, which is not repeated here. If the updated identity information differs from the identity information before the update, it is determined that the identity information has changed.
After the identity information changes, the embodiments of the present application return directly to S102 of Embodiment 1 and process the new identity information, so as to adjust the screen display for the new user in time.
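A minimal loop implementing this periodic check is sketched below. The get_identity() and lookup_and_apply() callables and the blocking time.sleep() loop are assumptions standing in for the identity acquisition of Embodiment 1 and the lookup starting at S102.
    # Minimal sketch: every preset interval, re-read the identity and re-run the
    # myopia level lookup if the current user has changed.
    import time

    def watch_identity(get_identity, lookup_and_apply, interval_seconds=300):
        current = get_identity()
        lookup_and_apply(current)              # initial adjustment (from S102 onwards)
        while True:
            time.sleep(interval_seconds)       # preset update period, e.g. 5 minutes
            latest = get_identity()
            if latest != current:              # identity changed: re-run the lookup
                current = latest
                lookup_and_apply(current)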
Corresponding to the methods of the above embodiments, Figure 8 shows a structural block diagram of the screen display apparatus provided in an embodiment of the present application; for ease of description, only the parts related to the embodiment of the present application are shown. The screen display apparatus illustrated in Figure 8 may be the execution subject of the screen display method provided in Embodiment 1 above.
Referring to Figure 8, the screen display apparatus includes:
an information acquisition module 81, configured to obtain a user's identity information;
a level search module 82, configured to search for the user's myopia level based on the identity information;
a level recognition module 83, configured to obtain a face image of the user if the myopia level search fails, and to identify the user's myopia level based on the face image;
a display adjustment module 84, configured to match a corresponding display parameter set based on the user's myopia level, and to adjust the display parameters of the screen based on the obtained display parameter set.
Further, the level recognition module 83 includes:
if the face image contains glasses, obtaining size information of the screen and detecting the eye distance between the user's eyes and the screen;
identifying the user's myopia level based on the size information and the eye distance.
Further, the level recognition module 83 further includes:
a data detection module, configured to detect, based on the face image, the eye distance between the user's eyes and the screen, the degree of closure of the user's eyes, and the user's degree of frowning, if the face image does not contain glasses;
a myopia recognition module, configured to identify the user's myopia level based on the eye distance, the degree of eye closure, and the degree of frowning.
Further, the myopia recognition module includes:
performing grayscale processing on the eye region image in the face image;
calculating a grayscale threshold t based on the following formula, and binarizing the grayscale image based on the grayscale threshold t:
δ(t)² = n₁(1 − n₁)(r₁ − r₂)²
[Corrected according to Rule 26, 13.05.2020: accompanying formula image (Figure 2) not reproduced in this text.]
where N is the total number of pixels in the grayscale image, nᵢ denotes the number of pixels with gray value i, k is the maximum gray value of the pixels in the grayscale image, t ∈ [0, k], and δ(t)² denotes the variance as a function of t;
calculating the average eye height-to-width ratio in the gray-scale binarized image to obtain the degree of eye closure.
Further, the data detection module includes:
obtaining the eyebrow region image from the face image, and detecting the pupil distance between the pupils of the eyes in the face image and the screen;
drawing the depth-space image corresponding to the eyebrow region image, and based on the depth-space image finding the brow-center distance between the brow center and the screen, as well as the shortest distance to the screen in the depth-space image;
calculating the distance differences between the brow-center distance and the pupil distance and between the shortest distance and the pupil distance, respectively, and calculating the ratio of the two differences;
identifying whether wrinkles are present in the eyebrow region image;
calculating the degree of frowning based on the ratio and the wrinkle recognition result.
Further, the screen display apparatus further includes:
if the myopia level search is successful, matching the corresponding display parameter set according to the found myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.
Further, the screen display apparatus further includes:
controlling the screen to simultaneously display multiple preset patterns that are identical in all attributes except color, where each preset pattern uniquely corresponds to one color and the colors of the multiple preset patterns include at least red, green, and yellow;
outputting a pattern sharpness adjustment prompt to the user, so that the user inputs corresponding sharpness adjustment instructions according to how sharp the preset patterns appear, until all of the viewed patterns appear equally sharp to the user;
receiving the sharpness adjustment instructions input by the user for the preset patterns, identifying the user's myopia level based on the instructions for the red, green, and yellow preset patterns among the sharpness adjustment instructions, and storing the identified myopia level in association with the user's identity information.
Further, the screen display apparatus further includes:
updating the identity information at a preset interval, and judging whether the identity information has changed before and after the update;
if the identity information has changed, returning, based on the updated identity information, to the operation of searching for the user's myopia level based on the identity information.
For the process by which each module of the screen display apparatus provided in this embodiment of the present application implements its function, reference may be made to the description of Embodiment 1 shown in Figure 1, which is not repeated here.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments of the present application.
It should also be understood that although the terms "first", "second", and so on are used in some embodiments of the present application to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element from another. For example, a first table could be named a second table and, similarly, a second table could be named a first table without departing from the scope of the various described embodiments; the first table and the second table are both tables, but they are not the same table.
Figure 9 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in Figure 9, the terminal device 9 of this embodiment includes a processor 90 and a memory 91, where the memory 91 stores a computer program 92 that can run on the processor 90. When the processor 90 executes the computer program 92, the steps in the above embodiments of the screen display method are implemented, for example steps 101 to 108 shown in Figure 1; alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the above apparatus embodiments are implemented, for example the functions of modules 81 to 84 shown in Figure 8.
The terminal device 9 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will understand that Figure 9 is only an example of the terminal device 9 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, a combination of certain components, or different components. For example, the terminal device may also include input and sending devices, a network access device, a bus, and the like.
The processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit of the terminal device 9 and an external storage device. The memory 91 is used to store the computer program and the other programs and data required by the terminal device; it may also be used to temporarily store data that has been sent or is to be sent.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may physically exist separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor it can implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
In one embodiment, the present application further provides a computer-readable storage medium, which is a volatile or non-volatile storage medium and which, when executed by one or more processors, causes the one or more processors to perform the following steps: obtaining a user's identity information; searching for the user's myopia level based on the identity information; if the myopia level search fails, obtaining a face image of the user and identifying the user's myopia level based on the face image; and matching a corresponding display parameter set based on the user's myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.

Claims (22)

  1. A screen display method, comprising:
    obtaining a user's identity information;
    searching for the user's myopia level based on the identity information;
    if the myopia level search fails, obtaining a face image of the user, and identifying the user's myopia level based on the face image; and
    matching a corresponding display parameter set based on the user's myopia level, and adjusting display parameters of a screen based on the obtained display parameter set.
  2. The screen display method according to claim 1, wherein the identifying the user's myopia level based on the face image comprises:
    if the face image contains glasses, obtaining size information of the screen and detecting an eye distance between the user's eyes and the screen; and
    identifying the user's myopia level based on the size information and the eye distance.
  3. The screen display method according to claim 1, wherein the identifying the user's myopia level based on the face image further comprises:
    if the face image does not contain glasses, detecting, based on the face image, an eye distance between the user's eyes and the screen, a degree of closure of the user's eyes, and a degree of frowning of the user; and
    identifying the user's myopia level based on the eye distance, the degree of eye closure, and the degree of frowning.
  4. [Corrected according to Rule 26, 13.05.2020] The screen display method according to claim 3, wherein the calculating the degree of closure of the user's eyes based on the face image comprises:
    performing grayscale processing on an eye region image in the face image;
    calculating a grayscale threshold t based on the following formula, and binarizing the grayscale image based on the grayscale threshold t:
    δ(t)² = n₁(1 − n₁)(r₁ − r₂)²
    [accompanying formula image (Figure c1) not reproduced in this text]
    where N is the total number of pixels in the grayscale image, nᵢ denotes the number of pixels with gray value i, k is the maximum gray value of the pixels in the grayscale image, t ∈ [0, k], and δ(t)² denotes the variance as a function of t; and
    calculating an average eye height-to-width ratio in the gray-scale binarized image to obtain the degree of eye closure.
  5. The screen display method of claim 3, wherein the calculating the degree of frowning of the user based on the face image comprises:
    acquiring an eyebrow-region image from the face image, and detecting a pupil-to-screen distance between the pupils in the face image and the screen;
    constructing a depth-space image corresponding to the eyebrow-region image, and finding, based on the depth-space image, a brow-center-to-screen distance as well as the shortest distance to the screen in the depth-space image;
    calculating the differences of the brow-center distance and of the shortest distance from the pupil distance, respectively, and calculating the ratio of the two differences;
    identifying whether wrinkles are present in the eyebrow-region image;
    calculating the degree of frowning based on the ratio and the wrinkle recognition result.
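A sketch built from the quantities named in claim 5; how the ratio and the wrinkle result are combined into one score is not specified in the application, so the combination rule below is an assumption.

```python
# Illustrative frown-degree score from the distances named in claim 5.
# The combination rule (and the 0.5 wrinkle bonus) is an assumption.
def frown_degree(pupil_to_screen: float,
                 brow_center_to_screen: float,
                 shortest_brow_to_screen: float,
                 has_wrinkles: bool) -> float:
    center_diff = brow_center_to_screen - pupil_to_screen      # brow-centre depth offset
    shortest_diff = shortest_brow_to_screen - pupil_to_screen  # nearest-brow-point offset
    if center_diff == 0:
        ratio = 0.0
    else:
        ratio = shortest_diff / center_diff   # ratio of the two differences
    return ratio + (0.5 if has_wrinkles else 0.0)
```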
  6. The screen display method of claim 1, further comprising:
    if the search for the myopia level succeeds, matching a corresponding display parameter set according to the found myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.
  7. The screen display method of any one of claims 1 to 6, wherein before the acquiring the identity information of the user, the method further comprises:
    controlling the screen to simultaneously display a plurality of preset patterns that are identical in every attribute except color, wherein each preset pattern corresponds to exactly one color, and the colors of the plurality of preset patterns include at least red, green, and yellow;
    outputting a pattern-sharpness adjustment prompt to the user, so that the user enters corresponding sharpness adjustment instructions according to the sharpness at which the preset patterns are perceived, until all of the patterns viewed by the user appear equally sharp;
    receiving the sharpness adjustment instructions entered by the user for the preset patterns, identifying the myopia level of the user based on the instructions for the red, green, and yellow preset patterns, and storing the identified myopia level in association with the identity information of the user.
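This calibration resembles the red/green duochrome test used in optometry. The scoring rule below is a guess at how the red, green and yellow adjustments might be turned into a level; it is not taken from the application.

```python
# Illustrative interpretation of the colour-pattern calibration in claim 7.
# `adjustments` maps a colour name to the net sharpness adjustment the user
# entered for that pattern; the scoring rule is an assumption.
def level_from_adjustments(adjustments: dict) -> int:
    red = adjustments.get("red", 0.0)
    green = adjustments.get("green", 0.0)
    yellow = adjustments.get("yellow", 0.0)
    # A myopic eye tends to see red sharper than green, so needing more
    # correction on green than on red is read as a higher myopia level.
    imbalance = (green - red) + 0.5 * yellow
    if imbalance <= 0:
        return 0
    return 1 if imbalance < 3 else 2

def calibrate(identity, adjustments, profiles: dict) -> int:
    level = level_from_adjustments(adjustments)
    profiles[identity] = level   # store the level keyed by the identity information
    return level
```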
  8. The screen display method of any one of claims 1 to 6, wherein after the adjusting the display parameters of the screen based on the obtained display parameter set, the method further comprises:
    updating the identity information at a preset period, and determining whether the identity information has changed between updates;
    if the identity information has changed, returning, based on the updated identity information, to the operation of searching for the myopia level of the user based on the identity information.
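A minimal polling loop for this periodic re-check; the period, the bounded round count and the callables are placeholders, not values from the application.

```python
# Minimal sketch of the periodic identity re-check in claim 8.
import time

def watch_identity(get_identity, on_change, period_s: float = 5.0, rounds: int = 10):
    current = get_identity()
    for _ in range(rounds):          # bounded loop just for illustration
        time.sleep(period_s)
        latest = get_identity()      # refresh the identity information
        if latest != current:        # a different user is now in front of the screen
            current = latest
            on_change(latest)        # re-run the myopia-level lookup for the new user
```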
  9. A terminal device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements a screen display method comprising the following steps:
    acquiring identity information of a user; searching for a myopia level of the user based on the identity information; if the search for the myopia level fails, acquiring a face image of the user and identifying the myopia level of the user based on the face image; and matching a corresponding display parameter set based on the myopia level of the user, and adjusting display parameters of the screen based on the obtained display parameter set.
  10. The terminal device of claim 9, wherein the identifying the myopia level of the user based on the face image comprises: if the face image contains glasses, acquiring size information of the screen and detecting the eye-to-screen distance between the user's eyes and the screen; and identifying the myopia level of the user based on the size information and the eye-to-screen distance.
  11. The terminal device of claim 9, wherein the identifying the myopia level of the user based on the face image further comprises: if the face image does not contain glasses, detecting, based on the face image, the eye-to-screen distance between the user's eyes and the screen, the degree of eye closure of the user, and the degree of frowning of the user; and identifying the myopia level of the user based on the eye-to-screen distance, the degree of eye closure, and the degree of frowning.
  12. [Corrected under Rule 26, 13.05.2020]
    The terminal device of claim 11, wherein the calculating the degree of eye closure of the user based on the face image comprises:
    performing grayscale processing on the eye-region image in the face image;
    calculating a grayscale threshold t based on the following formula, and binarizing the grayscale image based on the threshold t:
    δ(t)² = n₁(1 − n₁)(r₁ − r₂)²
    [formula image: Figure c2]
    where N is the total number of pixels in the grayscale image, nᵢ is the number of pixels with grayscale value i, k is the maximum grayscale value of the pixels in the grayscale image, t ∈ [0, k], and δ(t)² denotes the variance taken with respect to t; and
    calculating the mean eye height-to-width ratio in the binarized image to obtain the degree of eye closure.
  13. The terminal device of claim 11, wherein the calculating the degree of frowning of the user based on the face image comprises:
    acquiring the eyebrow-region image from the face image, and detecting the pupil-to-screen distance between the pupils in the face image and the screen; constructing the depth-space image corresponding to the eyebrow-region image, and finding, based on the depth-space image, the brow-center-to-screen distance as well as the shortest distance to the screen in the depth-space image; calculating the differences of the brow-center distance and of the shortest distance from the pupil distance, respectively, and calculating the ratio of the two differences; identifying whether wrinkles are present in the eyebrow-region image; and calculating the degree of frowning based on the ratio and the wrinkle recognition result.
  14. The terminal device of claim 9, wherein the screen display method further comprises:
    if the search for the myopia level succeeds, matching a corresponding display parameter set according to the found myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.
  15. The terminal device of any one of claims 9 to 14, wherein before the acquiring the identity information of the user, the method further comprises: controlling the screen to simultaneously display a plurality of preset patterns that are identical in every attribute except color, wherein each preset pattern corresponds to exactly one color, and the colors of the plurality of preset patterns include at least red, green, and yellow; outputting a pattern-sharpness adjustment prompt to the user, so that the user enters corresponding sharpness adjustment instructions according to the sharpness at which the preset patterns are perceived, until all of the patterns viewed by the user appear equally sharp; and receiving the sharpness adjustment instructions entered by the user for the preset patterns, identifying the myopia level of the user based on the instructions for the red, green, and yellow preset patterns, and storing the identified myopia level in association with the identity information of the user;
    wherein after the adjusting the display parameters of the screen based on the obtained display parameter set, the method further comprises: updating the identity information at a preset period, and determining whether the identity information has changed between updates; and if the identity information has changed, returning, based on the updated identity information, to the operation of searching for the myopia level of the user based on the identity information.
  16. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements a screen display method comprising the following steps: acquiring identity information of a user; searching for a myopia level of the user based on the identity information; if the search for the myopia level fails, acquiring a face image of the user and identifying the myopia level of the user based on the face image; and matching a corresponding display parameter set based on the myopia level of the user, and adjusting display parameters of the screen based on the obtained display parameter set.
  17. The computer-readable storage medium of claim 16, wherein the identifying the myopia level of the user based on the face image comprises: if the face image contains glasses, acquiring size information of the screen and detecting the eye-to-screen distance between the user's eyes and the screen; and identifying the myopia level of the user based on the size information and the eye-to-screen distance.
  18. The computer-readable storage medium of claim 16, wherein the identifying the myopia level of the user based on the face image further comprises: if the face image does not contain glasses, detecting, based on the face image, the eye-to-screen distance between the user's eyes and the screen, the degree of eye closure of the user, and the degree of frowning of the user; and identifying the myopia level of the user based on the eye-to-screen distance, the degree of eye closure, and the degree of frowning.
  19. [Corrected under Rule 26, 13.05.2020]
    The computer-readable storage medium of claim 18, wherein the calculating the degree of eye closure of the user based on the face image comprises:
    performing grayscale processing on the eye-region image in the face image;
    calculating a grayscale threshold t based on the following formula, and binarizing the grayscale image based on the threshold t:
    δ(t)² = n₁(1 − n₁)(r₁ − r₂)²
    [formula image: Figure c3]
    where N is the total number of pixels in the grayscale image, nᵢ is the number of pixels with grayscale value i, k is the maximum grayscale value of the pixels in the grayscale image, t ∈ [0, k], and δ(t)² denotes the variance taken with respect to t; and
    calculating the mean eye height-to-width ratio in the binarized image to obtain the degree of eye closure.
  20. The computer-readable storage medium of claim 18, wherein the calculating the degree of frowning of the user based on the face image comprises:
    acquiring the eyebrow-region image from the face image, and detecting the pupil-to-screen distance between the pupils in the face image and the screen; constructing the depth-space image corresponding to the eyebrow-region image, and finding, based on the depth-space image, the brow-center-to-screen distance as well as the shortest distance to the screen in the depth-space image; calculating the differences of the brow-center distance and of the shortest distance from the pupil distance, respectively, and calculating the ratio of the two differences; identifying whether wrinkles are present in the eyebrow-region image; and calculating the degree of frowning based on the ratio and the wrinkle recognition result.
  21. The computer-readable storage medium of claim 16, wherein the screen display method further comprises:
    if the search for the myopia level succeeds, matching a corresponding display parameter set according to the found myopia level, and adjusting the display parameters of the screen based on the obtained display parameter set.
  22. The computer-readable storage medium of any one of claims 16 to 21, wherein before the acquiring the identity information of the user, the method further comprises: controlling the screen to simultaneously display a plurality of preset patterns that are identical in every attribute except color, wherein each preset pattern corresponds to exactly one color, and the colors of the plurality of preset patterns include at least red, green, and yellow; outputting a pattern-sharpness adjustment prompt to the user, so that the user enters corresponding sharpness adjustment instructions according to the sharpness at which the preset patterns are perceived, until all of the patterns viewed by the user appear equally sharp; and receiving the sharpness adjustment instructions entered by the user for the preset patterns, identifying the myopia level of the user based on the instructions for the red, green, and yellow preset patterns, and storing the identified myopia level in association with the identity information of the user;
    wherein after the adjusting the display parameters of the screen based on the obtained display parameter set, the method further comprises: updating the identity information at a preset period, and determining whether the identity information has changed between updates; and if the identity information has changed, returning, based on the updated identity information, to the operation of searching for the myopia level of the user based on the identity information.
PCT/CN2020/087136 2019-07-05 2020-04-27 Screen display method, terminal device, and storage medium WO2021004138A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910603946.XA CN110377385B (en) 2019-07-05 2019-07-05 Screen display method and device and terminal equipment
CN201910603946.X 2019-07-05

Publications (1)

Publication Number Publication Date
WO2021004138A1 (en)

Family

ID=68252180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087136 WO2021004138A1 (en) 2019-07-05 2020-04-27 Screen display method, terminal device, and storage medium

Country Status (2)

Country Link
CN (1) CN110377385B (en)
WO (1) WO2021004138A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110377385B (en) * 2019-07-05 2022-06-21 深圳壹账通智能科技有限公司 Screen display method and device and terminal equipment
CN111259743B (en) * 2020-01-09 2023-11-24 中山大学中山眼科中心 Training method and system for myopia image deep learning recognition model
CN111259125B (en) * 2020-01-14 2023-08-29 百度在线网络技术(北京)有限公司 Voice broadcasting method and device, intelligent sound box, electronic equipment and storage medium
JP7283462B2 (en) * 2020-12-07 2023-05-30 横河電機株式会社 Apparatus, method and program
CN114625456B (en) * 2020-12-11 2023-08-18 腾讯科技(深圳)有限公司 Target image display method, device and equipment
CN113766708B (en) * 2021-04-30 2023-05-26 北京字节跳动网络技术有限公司 Lighting device brightness adjusting method and device, electronic equipment and storage medium
CN115097978A (en) * 2022-06-22 2022-09-23 重庆长安新能源汽车科技有限公司 Adjusting method, device, equipment and medium of vehicle-mounted display system
CN116052616A (en) * 2023-03-01 2023-05-02 深圳市领天智杰科技有限公司 Automatic screen light adjusting method and system


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745576B (en) * 2014-01-13 2016-01-06 济南大学 The method for supervising of the display screen monitoring alarm device of child protection eyesight
KR101614468B1 (en) * 2014-11-03 2016-04-21 백석대학교산학협력단 Eye Detection and Its Opening and Closing State Recognition Method Using Block Contrast in Mobile Device
CN107221303A (en) * 2016-03-22 2017-09-29 中兴通讯股份有限公司 A kind of method, device and intelligent terminal for adjusting screen intensity
CN106200925B (en) * 2016-06-28 2019-03-19 Oppo广东移动通信有限公司 Control method, device and the mobile terminal of mobile terminal
CN106203285A (en) * 2016-06-28 2016-12-07 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN106200939B (en) * 2016-06-29 2019-04-19 Oppo广东移动通信有限公司 Sight protectio method, apparatus and terminal device based on terminal device
CN106598519A (en) * 2016-12-12 2017-04-26 合肥联宝信息技术有限公司 Screen display method and apparatus, and computer
CN108008815B (en) * 2017-11-30 2021-05-25 永目堂股份有限公司 Human-computer interaction method based on eye state recognition technology
CN108600555A (en) * 2018-06-15 2018-09-28 努比亚技术有限公司 A kind of screen color method of adjustment, mobile terminal and computer readable storage medium
CN109254809B (en) * 2018-08-01 2020-08-21 Oppo广东移动通信有限公司 Differentiated application loading method and device based on face recognition and terminal equipment
CN109271875B (en) * 2018-08-24 2019-06-14 中国人民解放军火箭军工程大学 A kind of fatigue detection method based on supercilium and eye key point information
CN109710371A (en) * 2019-02-20 2019-05-03 北京旷视科技有限公司 Font adjusting method, apparatus and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170177166A1 (en) * 2015-12-19 2017-06-22 Dell Products, L.P. User aware digital vision correction
CN105808190A (en) * 2016-03-14 2016-07-27 广东欧珀移动通信有限公司 Display screen display method and terminal equipment
CN107273071A (en) * 2016-04-06 2017-10-20 富泰华工业(深圳)有限公司 Electronic installation, screen adjustment system and method
CN106980448A (en) * 2017-02-20 2017-07-25 上海与德通讯技术有限公司 A kind of display methods and mobile terminal
CN107800868A (en) * 2017-09-21 2018-03-13 维沃移动通信有限公司 A kind of method for displaying image and mobile terminal
CN110377385A (en) * 2019-07-05 2019-10-25 深圳壹账通智能科技有限公司 A kind of screen display method, device and terminal device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012574A (en) * 2021-02-19 2021-06-22 深圳创维-Rgb电子有限公司 Screen curvature adjusting method and device, curved surface display and storage medium
CN113132642A (en) * 2021-04-26 2021-07-16 维沃移动通信有限公司 Image display method and device and electronic equipment
CN113132642B (en) * 2021-04-26 2023-09-26 维沃移动通信有限公司 Image display method and device and electronic equipment
CN113946390A (en) * 2021-08-31 2022-01-18 广东艾檬电子科技有限公司 Screen display control method and device, electronic equipment and storage medium
CN114420010A (en) * 2021-12-30 2022-04-29 联想(北京)有限公司 Control method and device and electronic equipment

Also Published As

Publication number Publication date
CN110377385B (en) 2022-06-21
CN110377385A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
WO2021004138A1 (en) Screen display method, terminal device, and storage medium
US11556741B2 (en) Devices, systems and methods for predicting gaze-related parameters using a neural network
US11194161B2 (en) Devices, systems and methods for predicting gaze-related parameters
CN103190883B (en) Head-mounted display device and image adjusting method
US9727946B2 (en) Method of customizing an electronic image display device
KR20200063173A (en) Digital therapeutic corrective glasses
US11393251B2 (en) Devices, systems and methods for predicting gaze-related parameters
CN101542521A (en) Pupil color correction device and program
US11670261B2 (en) Systems and methods for switching vision correction graphical outputs on a display of an electronic device
WO2019237838A1 (en) Parameter adjustment method and apparatus for wearable device, wearable device and storage medium
JP6956986B1 (en) Judgment method, judgment device, and judgment program
TW201737237A (en) Electronic device, system and method for adjusting display device
CN115171024A (en) Face multi-feature fusion fatigue detection method and system based on video sequence
CN105380590B (en) A kind of equipment and its implementation with eye position detection function
CN117082665B (en) LED eye-protection desk lamp illumination control method and system
US20230020160A1 (en) Method for determining a value of at least one geometrico-morphological parameter of a subject wearing an eyewear
WO2021049059A1 (en) Image processing method, image processing device, and image processing program
CN108573204A (en) A kind of measurement method of pupil of human distance
US11257468B2 (en) User-mountable extended reality (XR) device
CN108703738A (en) A kind of measuring system and method for hyperopic refractive degree
TW202139918A (en) Methods and apparatus for detecting a presence and severity of a cataract in ambient lighting
CN108567406A (en) A kind of analytical measurement system and method for human eye diopter
CN108567407A (en) A kind of measuring system and method for astigmatism diopter
WO2021095278A1 (en) Image processing method, image processing device, and image processing program
Wong et al. Automatic pupillary light reflex detection in eyewear computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20836235

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 17/05/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20836235

Country of ref document: EP

Kind code of ref document: A1