WO2020232855A1 - Method and device for adjusting screen display based on micro-expressions - Google Patents

Method and device for adjusting screen display based on micro-expressions

Info

Publication number
WO2020232855A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
micro
expression
screen display
user
Application number
PCT/CN2019/101947
Other languages
English (en)
French (fr)
Inventor
张起
郑如刚
徐志成
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2020232855A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 — Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 — Feature extraction; face representation
    • G06V 40/174 — Facial expression recognition
    • G06V 40/178 — Estimating age from face image; using age information for improving recognition

Definitions

  • This application relates to the field of image recognition, and in particular to a method and device for adjusting screen display based on micro-expression.
  • the embodiments of the present application provide a method and device for adjusting screen display based on micro-expression. Different screen display adjustments can be made for users of different age groups with the same micro-expression variation, which improves the accuracy of screen display adjustments and enhances user satisfaction.
  • an embodiment of the present application provides a method for adjusting screen display based on micro expressions, the method including:
  • the target display list is determined from multiple display lists according to the above-mentioned target user age group, where each display list corresponds to one user age group and includes multiple micro-expression changes of that age group together with the screen display information corresponding to each micro-expression change;
  • an embodiment of the present application provides a device for adjusting screen display based on micro-expression, and the device includes:
  • the micro-expression information acquisition module is used to acquire the first micro-expression information in the user's face image collected at the first moment when the target user uses the terminal, and to determine the target user age group corresponding to the first micro-expression information according to the age detection model;
  • the target display list determination module is used to determine the target display list from a plurality of display lists according to the target user age range determined by the micro-expression information acquisition module, wherein each display list corresponds to a user age range and includes multiple micro-expression changes of the corresponding age range and the screen display information corresponding to each micro-expression change;
  • the micro-expression variation determination module is used to obtain the second micro-expression information in the user's face image collected at the second moment when the target user uses the terminal, and to determine the first micro-expression change amount from the first micro-expression information determined by the micro-expression information acquisition module and the second micro-expression information, wherein the second moment is a moment after the first moment;
  • the screen display information determination module is configured to determine, from the target display list determined by the target display list determination module, the first screen display information corresponding to the first micro-expression change amount determined by the micro-expression variation determination module;
  • the screen display information adjustment module is configured to adjust the current screen display configuration of the terminal according to the first screen display information determined by the screen display information determination module.
  • an embodiment of the present application provides a terminal.
  • the terminal includes a processor and a memory, and the processor and the memory are connected to each other.
  • the memory is used to store a computer program that supports the terminal in executing the method provided in the first aspect and/or any one of the possible implementations of the first aspect; the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method provided in the first aspect and/or any possible implementation of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium that stores a computer program, and the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method provided by the foregoing first aspect and/or any possible implementation manner of the first aspect.
  • the embodiment of the application sets different display lists for the micro-expression variations of different user age groups, so that different screen display adjustments can be made for users of different age groups with the same micro-expression variation, thereby improving the accuracy of the screen display adjustment and enhancing user satisfaction.
  • FIG. 1 is a schematic flowchart of a method for adjusting screen display based on micro-expression provided by an embodiment of the present application
  • FIG. 2 is another schematic flowchart of a method for adjusting screen display based on micro-expressions provided by an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a device for adjusting a screen display based on micro-expression provided by an embodiment of the present application
  • Figure 4 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the method for adjusting the screen display based on micro-expressions provided in the embodiments of this application can be widely applied to various terminals with display screens, such as smart phones, desktop computers, notebook computers, tablet computers, self-service terminals, and smart marketing equipment; for convenience of description, these are collectively referred to as the terminal.
  • the user's face image corresponding to the first moment can be obtained by collecting the user's face image at the first moment, and the target user age group corresponding to the first micro-expression information in that image can be determined according to the age detection model; the target display list can then be determined from multiple display lists according to the target user age group.
  • the first micro-expression change amount can then be determined, the first screen display information corresponding to the first micro-expression change amount can be determined from the target display list according to that change amount, and the screen display information of the current terminal can be adjusted according to the first screen display information.
  • the embodiment of the application sets different display lists for the micro-expression variations of different user age groups, so that different screen display adjustments can be made for users of different age groups with the same micro-expression variation, thereby improving the accuracy of the screen display adjustment and enhancing user satisfaction.
  • the methods and related devices provided by the embodiments of the present application will be described in detail below with reference to FIGS. 1 to 4 respectively.
  • the methods provided in the embodiments of the present application may include data processing stages such as obtaining micro-expression information, determining the age group of the target user, determining the target display list, determining the micro-expression change amount and the corresponding screen display information, and adjusting the current screen display of the terminal based on the screen display information.
  • the implementations of the above-mentioned data processing stages can refer to the implementations shown in Figures 1 and 2 below.
  • FIG. 1 is a schematic flowchart of a method for adjusting a screen display based on a micro-expression provided by an embodiment of the application.
  • the method provided in the embodiment of the present application may include the following steps 101 to 105:
  • the changes in facial features with age mainly include changes in facial shape and facial texture, such as the growth of facial bones, changes in facial muscle elasticity, and increases in wrinkles.
  • before adulthood, changes in facial features are mainly reflected in changes in facial shape, while in adulthood the influence of age on human faces is concentrated in changes in facial texture.
  • the user's face image during the user's use of the terminal can be collected through the built-in camera on the terminal or an external camera connected to the terminal, and the user's face image includes at least micro-expression information.
  • the micro-expression information includes facial texture information, eyebrow spacing information, eye opening distance information, lip corner curvature information, face shape information, and pupil color information, etc., which are specifically determined according to actual application scenarios and are not limited here.
  • eye occluders include myopia glasses, reading glasses, sunglasses, etc.
  • the user's face image corresponding to the first moment can be obtained by using the built-in camera on the terminal, or an external camera connected to the terminal, to collect the user's face image when the target user uses the terminal at the first moment.
  • the user's face image at the first moment includes at least first micro-expression information, and the first micro-expression information includes first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip corner curvature information, face shape information, pupil color information, etc.
  • the construction of the age detection model may include data processing stages such as the modeling data collection of the age detection model, the training of the age detection model, and the testing of the age detection model.
  • the modeling data of the age detection model can be derived from facial feature information of the same person at different ages in a face image database, such as facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information.
  • the modeling data of the age detection model can also be derived from a large amount of facial feature information of different people of different ages in the face image database, such as facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information.
  • an information feature pair can be composed of a user age group and the facial feature information corresponding to that age group, such as facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information.
  • the information feature pairs are input to the initial network model of the age detection model, which learns the mapping between each age group and its corresponding facial feature information, so as to construct an age detection model that can output the corresponding user age group when any facial feature information is input.
  • the user age groups used for training, and the facial feature information corresponding to each age group, may come from the same person at different age stages or from different people across different age groups; in either case the features include facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information.
  • to test the age detection model, facial feature information (facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information) of any group of users whose ages are known can be collected as test data. Each set of test data is input into the trained age detection model, and the user age group output by the model is compared with the user's actual age group. If the age error between the output age group and the actual age group is less than a preset accuracy threshold, the constructed age detection model meets the construction requirements; otherwise it does not, and training of the age detection model continues until the requirements are met.
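The train/test cycle described above can be sketched in a few lines. The patent does not fix a concrete model, so the sketch below assumes a toy nearest-centroid classifier over numeric facial-feature vectors; all function names and the feature encoding are illustrative, not taken from the text.

```python
# Illustrative sketch only: the patent leaves the model unspecified, so a
# nearest-centroid classifier over numeric feature vectors stands in here.

def train_age_model(samples):
    """samples: list of (age_group, feature_vector). Returns per-group centroids."""
    sums, counts = {}, {}
    for group, vec in samples:
        acc = sums.setdefault(group, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[group] = counts.get(group, 0) + 1
    return {g: [x / counts[g] for x in acc] for g, acc in sums.items()}

def predict_age_group(model, vec):
    """Return the age group whose centroid is closest to vec."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vec))
    return min(model, key=lambda g: dist(model[g]))

def passes_test(model, test_samples, max_errors=0):
    """Model meets the construction requirement if misclassifications stay within bound."""
    errors = sum(1 for g, vec in test_samples if predict_age_group(model, vec) != g)
    return errors <= max_errors
```

If `passes_test` fails, more training data is gathered and `train_age_model` is rerun, mirroring the "continue training until it meets the requirements" loop above.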
  • a display list corresponding to each of the multiple age groups can be constructed, wherein each display list includes multiple micro-expression changes of the corresponding user age group and the screen display information corresponding to each micro-expression change; screen display information includes one or more of screen display font size, screen display brightness, and screen display font spacing. Therefore, the target display list can be determined from the multiple display lists according to the determined age range of the target user.
  • the user age group includes a first age group, a second age group, and a third age group, where users in the first age group are 10 to 35 years old, users in the second age group are 36 to 59 years old, and users in the third age group are 60 to 85 years old.
  • the display list corresponding to the first age stage is the first display list
  • the display list corresponding to the second age stage is the second display list
  • the display list corresponding to the third age stage is the third display list.
  • the first display list, the second display list, and the third display list all include multiple micro expression changes and the corresponding screen display information for each micro expression change.
  • the screen display information corresponding to the same degree of micro-expression change also differs among the first display list, the second display list, and the third display list.
  • for example, the screen display information corresponding to a 10% micro-expression change in the first display list is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 3%; in the second display list, they are all increased by 5%; and in the third display list, they are all increased by 8%.
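The three display lists in this example can be pictured as a mapping from age group to {micro-expression change → adjustment}; the structure and names below are illustrative, not taken from the patent.

```python
# Sketch of the example display lists: one list per age group, keyed by
# micro-expression change amount. The value is the uniform percentage
# increase applied to font size, brightness and font spacing.
DISPLAY_LISTS = {
    "10-35": {0.10: 0.03},   # first display list: 10% change -> +3%
    "36-59": {0.10: 0.05},   # second display list: 10% change -> +5%
    "60-85": {0.10: 0.08},   # third display list: 10% change -> +8%
}

def target_display_list(age_group):
    """Step: pick the target display list by the detected age group."""
    return DISPLAY_LISTS[age_group]

def screen_display_info(age_group, change):
    """Step: look up the screen display adjustment for a micro-expression change."""
    return target_display_list(age_group).get(change)
```

With this shape, the same 10% micro-expression change maps to a different adjustment for each age group, which is the point of keeping per-age-group lists.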
  • myopia refers to a myopic refractive error in which the ability of the eyes to recognize distant targets is reduced and the near vision is normal.
  • Presbyopia is a physiological phenomenon, neither a pathological state nor a refractive error. It is a visual problem that inevitably occurs after people enter middle-aged and old age, and is one of the signs that the body is beginning to age.
  • the people who wear myopia glasses are generally teenagers or middle-aged people, while the people who wear presbyopia glasses are generally the elderly. Although both glasses are used to correct vision, the internal factors are completely different. Therefore, by analyzing the collected face images of the target user, it can be determined whether there are eye occluders on the face image of the target user.
  • the eye occluders include myopia glasses, reading glasses, and sunglasses (generally speaking, what distinguishes sunglasses from other glasses is the colour of the lenses). If it is detected that there are glasses on the target user's face image and the lenses are not coloured, the glasses type can be determined from the determined target user age group, and the eye health status of the target user can then be inferred. That is, if the target user age group is the first age group and the user wears glasses, the glasses type is myopia glasses, indicating that the target user is myopic; if the target user age group is the third age group and the user wears glasses, the glasses type is reading glasses, indicating that the target user is presbyopic.
  • although myopia and presbyopia can be corrected by wearing the corresponding glasses, within the same user age group there is in fact a vision gap between myopic users wearing myopia glasses and non-myopic users, and between presbyopic users wearing reading glasses and non-presbyopic users, and the sensitivity of their eyes to the external environment also differs. Therefore, in the display list corresponding to each user age group, different screen display information corresponding to the same micro-expression change can also be set under the filter conditions of wearing glasses and not wearing glasses.
  • the user age group includes a first age group, a second age group, and a third age group, where users in the first age group are 10 to 35 years old, users in the second age group are 36 to 59 years old, and users in the third age group are 60 to 85 years old.
  • the display list corresponding to the first age stage is the first display list
  • the display list corresponding to the second age stage is the second display list
  • the display list corresponding to the third age stage is the third display list.
  • the first display list, the second display list, and the third display list all include multiple micro expression changes and the corresponding screen display information for each micro expression change.
  • the screen display information corresponding to the same degree of micro-expression change differs among the first display list, the second display list, and the third display list.
  • each display list can also include different screen display information corresponding to the same degree of micro-expression change depending on whether glasses are worn. For example, assuming the micro-expression change is 10% in every case: without glasses, the screen display information corresponding to a 10% micro-expression change in the first display list is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 3%; in the second display list they are all increased by 5%; and in the third display list they are all increased by 8%.
  • with glasses, the screen display information corresponding to a 10% micro-expression change in the first display list is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 4%; in the second display list they are all increased by 7%; and in the third display list they are all increased by 11%.
  • in addition, the brightness of the user's surrounding environment and the distance between the user's eyes and the terminal display screen will also affect the user's experience of the current screen display font size, screen display brightness, and screen display font spacing. Therefore, the brightness of the surrounding environment, and/or the ratio of that brightness to the screen display brightness of the terminal display screen, and/or the detected distance between the user's eyes and the terminal display screen can be included in each display list as filter conditions, and different screen display information corresponding to the same micro-expression change can be set under different conditions.
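One way to picture entries carrying such filter conditions (glasses worn, ambient-brightness ratio, eye-to-screen distance) is as a list of records matched field by field. The field names, ranges, and values below are assumptions for illustration; only the glasses/no-glasses adjustments (+3% vs +4%) come from the example above.

```python
# Hypothetical record layout: each entry pairs a micro-expression change
# with filter conditions and the adjustment to apply when all of them match.

def match_entry(entries, change, glasses, brightness_ratio, distance_cm):
    """Return the adjustment whose change amount and filter conditions all match."""
    for e in entries:
        if (e["change"] == change
                and e["glasses"] == glasses
                and e["brightness"][0] <= brightness_ratio < e["brightness"][1]
                and e["distance"][0] <= distance_cm < e["distance"][1]):
            return e["adjust"]
    return None  # no entry matched; leave the screen unchanged

# First display list with both glasses variants of the 10% entry
# (brightness/distance ranges are placeholders):
FIRST_LIST = [
    {"change": 0.10, "glasses": False, "brightness": (0.0, 2.0),
     "distance": (0, 100), "adjust": 0.03},
    {"change": 0.10, "glasses": True, "brightness": (0.0, 2.0),
     "distance": (0, 100), "adjust": 0.04},
]
```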
  • the user's face image corresponding to the second moment can be obtained by using the camera built into the terminal, or an external camera connected to the terminal, to collect the user's face image when the target user uses the terminal at the second moment.
  • the second moment is a moment after a preset time has elapsed since the first moment, or any moment after the first moment; the user's face image at the second moment includes at least second micro-expression information, which includes second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip corner curvature information, face shape information, pupil color information, and so on.
  • by comparing the first micro-expression information with the second micro-expression information, the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value can be obtained.
  • the first micro-expression change amount can then be determined according to the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and/or the first lip corner curvature information change value; that is, the first micro-expression change amount may be determined from one or more of these four information change values.
  • if the first micro-expression change amount is determined from a single value, it can be taken as the maximum or minimum of the four information change values. If it is determined from the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value at the same time, then these four change values can each be multiplied by a corresponding weight value and summed, with the resulting value determined as the first micro-expression change amount; alternatively, the four information change values can be summed directly and the resulting value determined as the first micro-expression change amount.
  • for example, assume the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value are 5%, 3%, 3%, and 1%, respectively.
  • if the first micro-expression change amount is taken as the maximum of the four change values, the first micro-expression change amount is 5%.
  • if the first micro-expression change amount is determined by weighting, and the weight values of the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value are 5/12, 3/12, 3/12, and 1/12 respectively, then the value obtained after weighting and summing is approximately 3.7%, that is, the first micro-expression change amount is 3.7%.
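The worked example above (change values of 5%, 3%, 3% and 1% for facial texture, eyebrow spacing, eye opening distance and lip-corner curvature, with weights 5/12, 3/12, 3/12 and 1/12) can be reproduced directly:

```python
def micro_expression_change(change_values, weights=None):
    """Weighted sum of the individual information change values;
    with no weights, a plain direct sum as the text also allows."""
    if weights is None:
        return sum(change_values)
    return sum(v * w for v, w in zip(change_values, weights))

# texture, eyebrow spacing, eye opening, lip-corner curvature
changes = [0.05, 0.03, 0.03, 0.01]
weights = [5 / 12, 3 / 12, 3 / 12, 1 / 12]
# weighted: (5*5 + 3*3 + 3*3 + 1*1) / 12 = 44/12, i.e. about 3.7%
```

Taking `max(changes)` instead gives the 5% single-value variant mentioned above.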
  • the screen display information corresponding to the micro-expression variation that is successfully matched can be determined as the first screen display information.
  • if filter conditions such as whether glasses are worn, and/or the brightness of the user's surrounding environment, and/or the ratio of that brightness to the screen display brightness of the terminal display screen, and/or the distance between the user's eyes and the terminal display screen have been added to the target display list, then after comparing the first micro-expression change amount with the multiple micro-expression change amounts in the target display list, the corresponding filter conditions are matched one by one, and finally the screen display information for which both the micro-expression change amount and every filter condition are successfully matched is determined as the first screen display information.
  • the first micro-expression change is compared with the multiple micro-expression changes in the target display list.
  • the screen display information corresponding to the micro-expression variation that is successfully matched can be adjusted up or down by a certain range and then determined as the first screen display information.
  • for example, assume the target display list is the first display list, the screen display information corresponding to a 10% micro-expression change in the first display list is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 3%, and the screen display information corresponding to a 15% micro-expression change is that they are all increased by 5%.
  • assume the target user's first micro-expression change amount is 10% and the user wears glasses. The matched screen display information is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 3%. Since it is detected that the target user is wearing glasses, this matched information can be adjusted up and then determined as the first screen display information; that is, the first screen display information is that the screen display font size, screen display brightness, and screen display font spacing are all increased by 4%.
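The up-adjustment in this example can be sketched as follows; the one-percentage-point margin for glasses wearers is inferred from the 3% → 4% figures above, not fixed anywhere in the text.

```python
# Assumed margin: the example moves 3% -> 4% when glasses are detected,
# so a flat +1 percentage point is used here for illustration.
GLASSES_MARGIN = 0.01

def first_screen_display_info(matched_adjust, wears_glasses):
    """Bump the matched adjustment when the target user wears glasses."""
    return matched_adjust + GLASSES_MARGIN if wears_glasses else matched_adjust
```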
  • the screen display font size of the current terminal may be adjusted according to the first screen display font size in the determined first screen display information; the screen display brightness of the current terminal can be adjusted according to the first screen display brightness in the first screen display information; and the current terminal's screen display font spacing can be adjusted according to the first screen display font spacing in the first screen display information.
  • the user's face image corresponding to the first moment can be obtained by collecting the user's face image when the target user uses the terminal at the first moment through the built-in camera on the terminal or an external camera connected to the terminal.
  • the user's face image at the first moment includes at least first micro-expression information, which includes first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip corner curvature information, face shape information, pupil color information, etc.
  • the age detection model may output the target user age range corresponding to the target user when the first facial texture information and/or the first eyebrow spacing information and/or the face shape information and/or the pupil color information in the target user's first micro-expression information is input.
  • the target display list can be determined from multiple display lists according to the age range of the target user.
  • the user's face image corresponding to the second moment can be obtained by using the camera built in the terminal or an external camera connected to the terminal to collect the user's face image at the second moment when the target user uses the terminal.
  • the user's face image at the second moment includes at least second micro-expression information, which includes second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip corner curvature information, face shape information, pupil color information, etc.
  • after obtaining the first facial texture information change value and/or the first eyebrow spacing information change value and/or the first eye opening distance information change value and/or the first lip corner curvature information change value, the first micro-expression change amount can be determined from these information change values; the first screen display information corresponding to the first micro-expression change amount is then determined in the target display list, and the current terminal screen display information can be adjusted according to the first screen display information.
  • the micro-expression changes of different user age groups correspond to different display lists, so that different screen display adjustments can be made for users of different age groups with the same micro-expression change; moreover, the micro-expression change amount is measured comprehensively from multiple information change values, which improves the accuracy of the screen display adjustment, enhances user satisfaction, and provides high flexibility.
  • FIG. 2 is another schematic flowchart of a method for adjusting a screen display based on a micro expression provided by an embodiment of the present application.
  • the method for adjusting the screen display based on the micro-expression provided by the embodiment of the present application can be described by the implementation manner provided in the following steps 201 to 206:
  • steps 201-204 can refer to steps 101-104 in the corresponding embodiment of FIG. 1 above, and details are not described herein again.
  • the screen display font size of the current terminal may be adjusted according to the determined first screen display font size in the first screen display information.
  • the screen display brightness of the current terminal can be adjusted according to the first screen display brightness in the first screen display information, and the screen display font spacing of the current terminal can be adjusted according to the first screen display font spacing in the first screen display information.
  • the built-in camera on the terminal, or an external camera connected to the terminal, collects the user's face image corresponding to a third moment while the target user uses the terminal. The screen display information is then adjusted a second time, or multiple times, according to the specific implementation of steps 201-205, and each time the screen display information is adjusted, the number of adjustments made to the terminal's screen display during the target user's use of the terminal is recorded.
  • the number of adjustments made to the terminal's screen display while the target user uses the terminal is obtained, and this number is compared with a preset number.
  • if the number of adjustments is greater than or equal to the preset number, it means that even after multiple adjustments of the screen display information, the target user is still not satisfied with the adjusted screen display.
  • in that case, the historical adjustment records of the terminal's screen display information can be obtained, and after analyzing those records, the screen display information corresponding to each micro-expression change amount in the target display list can be adjusted, optimized, or updated.
  • the user's face image corresponding to the first moment can be obtained by collecting the user's face image when the target user uses the terminal at the first moment through the built-in camera on the terminal or the external camera connected to the terminal, where:
  • the user's face image at the first moment includes at least first micro-expression information, which includes first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip corner curvature information, face shape information, pupil color information, and so on.
  • the age detection model may output the target user age range corresponding to the target user when the first facial texture information and/or first eyebrow spacing information and/or face shape information and/or pupil color information in the target user's first micro-expression information is input into the model.
  • the target display list can be determined from multiple display lists according to the age range of the target user.
  • the user's face image corresponding to the second moment can be obtained by using the camera built in the terminal or an external camera connected to the terminal to collect the user's face image at the second moment when the target user uses the terminal.
  • the user's face image at the second moment includes at least second micro-expression information.
  • the second micro-expression information includes second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip corner curvature information, face shape information, pupil color information, and so on.
  • by comparing the items in the first and second micro-expression information, the first facial texture information change value and/or first eyebrow spacing information change value and/or first eye opening distance information change value and/or first lip corner curvature information change value can be obtained; the first micro-expression change amount can then be determined from these change values, and the first screen display information corresponding to that change amount can be determined in the target display list.
  • the screen display of the current terminal can be adjusted according to the first screen display information.
  • the number of adjustments made to the terminal's screen display during the target user's use of the terminal is recorded. The number of adjustments is obtained and compared with a preset number; if it is greater than or equal to the preset number, the historical adjustment records are obtained and analyzed in order to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list.
  • the micro-expression changes of different user age groups correspond to different display lists, and different screen display adjustments can be made for users of different age groups with the same micro-expression variation, which improves the accuracy of the screen display adjustment.
  • the micro-expression change amount is measured comprehensively from a variety of information change values, which improves the accuracy of the micro-expression change amount and of the screen display adjustment, thereby enhancing user satisfaction; optimizing the display list also makes the scheme more flexible and more widely applicable.
  • FIG. 3 is a schematic structural diagram of an apparatus for adjusting a screen display based on micro-expression provided by an embodiment of the present application.
  • the device for adjusting screen display based on micro-expression provided by the embodiment of the present application includes:
  • the micro-expression information acquisition module 31 is configured to acquire the first micro-expression information in the user's face image collected at the first moment when the target user uses the terminal, and determine the age of the target user corresponding to the first micro-expression information according to the age detection model segment;
  • the target display list determining module 32 is configured to determine the target display list from a plurality of display lists according to the target user age range determined by the micro expression information acquisition module 31, wherein one display list corresponds to one user age range, and one display list It includes multiple micro-expression changes and screen display information corresponding to each micro-expression change;
  • the micro-expression variation determining module 33 is configured to obtain the second micro-expression information in the user's face image collected at the second moment when the target user uses the terminal, and according to the first micro-expression information determined by the micro-expression information acquisition module 31 The expression information and the second micro-expression information determine the first micro-expression change amount, wherein the second moment is a moment after the first moment;
  • the screen display information determining module 34 is configured to determine the first micro-expression variation from the target display list determined by the target display list determining module 32 according to the first micro-expression variation determined by the micro-expression variation determining module 33 The corresponding first screen display information;
  • the screen display information adjustment module 35 is configured to adjust the current screen display configuration of the terminal according to the first screen display information determined by the screen display information determination module 34.
  • micro-expression information acquisition module 31 is used to:
  • the aforementioned micro-expression variation determination module 33 includes:
  • the facial texture information change determination unit 331 is configured to obtain the second facial texture information in the second micro-expression information, and compare the second facial texture information with the first facial texture information in the first micro-expression information Compare to obtain the first facial texture information change value; and/or
  • the eyebrow spacing information change determination unit 332 is configured to obtain the second eyebrow spacing information in the second micro-expression information, and compare the second eyebrow spacing information with the first eyebrow spacing information in the first micro-expression information to obtain Change value of the first eyebrow spacing information; and/or
  • the eye opening distance information change determination unit 333 is configured to obtain the second eye opening distance information in the second micro-expression information, and to compare the second eye opening distance information with the first eye in the first micro-expression information The opening distance information is compared to obtain the change value of the first eye opening distance information; and/or
  • the lip corner curvature information change determination unit 334 is configured to acquire the second lip corner curvature information in the second micro-expression information, and compare the second lip corner curvature information with the first lip corner curvature information in the first micro-expression information Compare to obtain the change value of the first lip angle curvature information;
  • the micro-expression change determination unit 335 is configured to determine the first micro-expression change amount according to the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and/or the first lip corner curvature information change value.
  • micro-expression variation determining unit 335 is used to:
  • the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value are respectively multiplied by their corresponding weight values and summed, and the resulting sum is determined as the first micro-expression change amount.
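The weighted-sum computation described above can be sketched as follows; the specific weight values and change values here are illustrative assumptions, since the application does not fix concrete numbers at this point:

```python
def micro_expression_change(changes, weights):
    """Combine per-feature change values (fractions, e.g. 0.05 for 5%)
    into a single micro-expression change amount via a weighted sum."""
    if len(changes) != len(weights):
        raise ValueError("changes and weights must align")
    return sum(c * w for c, w in zip(changes, weights))

# Hypothetical values: facial texture, eyebrow spacing, eye opening, lip curvature
changes = [0.05, 0.03, 0.03, 0.01]
weights = [5 / 12, 3 / 12, 3 / 12, 1 / 12]
amount = micro_expression_change(changes, weights)  # ~0.0367, i.e. about 3.7%
```

The weights sum to 1 here so the result stays on the same percentage scale as the individual change values.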
  • the above-mentioned screen display information determining module 34 is used to:
  • the first micro-expression change amount is matched against the multiple micro-expression change amounts in the target display list, and the screen display information corresponding to the successfully matched change amount is determined as the first screen display information.
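The matching step can be sketched as a nearest-entry lookup. The tolerance, field names, and list values below are assumptions; the application only states that a successful match selects that entry's screen display information:

```python
def match_display_info(change_amount, display_list, tolerance=0.01):
    """Return the screen display info of the list entry whose change
    amount is closest to the observed change, within a tolerance."""
    best = min(display_list, key=lambda e: abs(e["change"] - change_amount))
    if abs(best["change"] - change_amount) <= tolerance:
        return best["display"]
    return None  # no entry matched successfully

# Hypothetical target display list for one age range
target_list = [
    {"change": 0.10, "display": {"font_scale": 1.03, "brightness_scale": 1.03, "spacing_scale": 1.03}},
    {"change": 0.15, "display": {"font_scale": 1.05, "brightness_scale": 1.05, "spacing_scale": 1.05}},
]
info = match_display_info(0.10, target_list)  # first entry's display settings
```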
  • the foregoing device for adjusting screen display based on micro-expression further includes:
  • the display list update module 36 is configured to obtain the adjustment times of the screen display of the terminal when the target user uses the terminal;
  • the screen display information corresponding to each micro-expression variation in the target display list is adjusted.
  • the aforementioned screen display information adjustment module 35 is used to:
  • the foregoing device further includes an age detection model training module 37, and the foregoing age detection model training module 37 includes:
  • the training sample obtaining unit 371 is configured to obtain multiple training samples, and one of the training samples includes facial texture information of a sample user and the age of the user;
  • the age detection model training unit 372 is used to train the initial network model based on the facial texture information included in each training sample and the age of the user to obtain the above-mentioned age detection model, and the above-mentioned age detection model is used to output the user according to the input facial texture information generation.
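A minimal stand-in for unit 372's training step, assuming each training sample is a numeric facial-feature vector paired with an age-range label; a nearest-centroid classifier is used here in place of the unspecified initial network model, so this is a sketch of the idea rather than the application's actual model:

```python
from collections import defaultdict

def train_age_model(samples):
    """samples: list of (feature_vector, age_range_label) pairs.
    Returns a predict function mapping a feature vector to the label
    of the nearest per-class centroid (stand-in for the trained model)."""
    sums, counts = {}, defaultdict(int)
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    centroids = {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

    def predict(features):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))

    return predict
```

A testing phase like the one the application describes would then compare `predict` outputs against known age ranges and retrain until the error falls below a preset precision.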
  • the aforementioned training sample acquisition unit 371 is specifically configured to:
  • the above-mentioned device for adjusting the screen display based on the micro-expression can execute the implementation manners provided in the above-mentioned steps in FIGS. 1 to 2 through various built-in functional modules.
  • the aforementioned micro-expression information acquisition module 31 can be used to perform the aforementioned steps to collect the user's face image at the first moment, acquire the first micro-expression information in the user's face image at the first moment, and determine the target user's age range.
  • the target display list determining module 32 can be used to execute the implementation manners described in the relevant steps of determining the target display list in the foregoing steps.
  • the aforementioned micro-expression variation determination module 33 may be used to perform the above steps to collect the user's face image at the second moment, obtain the second micro-expression information in the user's face image at the second moment, and determine the first micro-expression variation, etc.
  • the above-mentioned screen display information determining module 34 can be used to perform implementations such as determining the first screen display information corresponding to the first micro-expression variation in the above-mentioned steps.
  • for details, refer to the implementation methods provided in the above-mentioned steps, which will not be repeated here.
  • the above-mentioned screen display information adjustment module 35 can be used to perform implementation methods such as adjusting the current screen display information according to the first screen display information in the above-mentioned steps. For details, please refer to the implementation methods provided in the above-mentioned steps and will not be repeated here.
  • the above-mentioned display list update module 36 can be used to implement implementations such as adjusting the screen display information corresponding to each micro-expression variation in the display list in the above-mentioned steps. For details, please refer to the implementation methods provided in the above-mentioned steps, which will not be repeated here.
  • the above-mentioned age detection model training module 37 can be used to perform implementation methods such as obtaining training samples in the above steps and training an age detection model based on the training samples. For details, please refer to the implementation methods provided in the above steps, which will not be repeated here.
  • based on the target user's face image collected at the first moment, the device for adjusting screen display based on micro-expressions can input the first facial texture information and/or first eyebrow spacing information and/or face shape information and/or pupil color information from the first micro-expression information in the face image at the first moment into the age detection model to obtain the target user age range corresponding to the target user.
  • the target display list can be determined from multiple display lists.
  • the user's face image corresponding to the second moment can be obtained by acquiring the user's face image collected at the second moment, wherein the user's face image at the second moment includes at least the second micro-expression information.
  • the corresponding first facial texture information change value, first eyebrow spacing information change value, first eye opening distance information change value, and/or first lip corner curvature information change value can be obtained.
  • the first micro-expression variation can be determined according to the obtained above-mentioned information change values, and the first screen display information corresponding to the first micro-expression variation can be determined in the target display list.
  • the screen display information of the current display screen can be adjusted according to the first screen display information.
  • the number of adjustments of the screen display during the use of the target user is recorded.
  • the micro-expression changes of different user age groups correspond to different display lists, and different screen display adjustments can be made for users of different age groups with the same micro-expression variation, which improves the accuracy of the screen display adjustment.
  • the micro-expression change amount is measured comprehensively from a variety of information change values, which improves the accuracy of the micro-expression change amount and of the screen display adjustment, thereby enhancing user satisfaction; optimizing the display list also makes the scheme more flexible and more widely applicable.
  • FIG. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal in this embodiment may include: one or more processors 401 and a memory 402.
  • the aforementioned processor 401 and memory 402 are connected through a bus 403.
  • the memory 402 is configured to store a computer program, and the computer program includes program instructions.
  • the processor 401 is configured to execute the program instructions stored in the memory 402, and perform the following operations:
  • the target display list is determined from multiple display lists according to the above-mentioned target user age range, where each display list corresponds to one user age range, and each display list includes multiple micro-expression change amounts of the corresponding user age range and the screen display information corresponding to each micro-expression change amount;
  • the aforementioned processor 401 is configured to:
  • the aforementioned processor 401 is configured to:
  • the first micro-expression change amount is determined according to the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and/or the first lip corner curvature information change value.
  • the aforementioned processor 401 is configured to:
  • the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and the first lip corner curvature information change value are respectively multiplied by their corresponding weight values and summed, and the resulting sum is determined as the first micro-expression change amount.
  • the aforementioned processor 401 is configured to:
  • the first micro-expression change amount is matched against the multiple micro-expression change amounts in the target display list, and the screen display information corresponding to the successfully matched change amount is determined as the first screen display information.
  • the aforementioned processor 401 is configured to:
  • the screen display information corresponding to each micro-expression variation in the target display list is adjusted.
  • the aforementioned processor 401 is configured to:
  • the aforementioned processor 401 is configured to:
  • the initial network model is trained based on the facial texture information included in each training sample and the user's age range to obtain an age detection model.
  • the above-mentioned age detection model is used to output the user's age range according to the input facial texture information.
  • the aforementioned processor 401 is configured to:
  • the above-mentioned processor 401 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 402 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A part of the memory 402 may also include a non-volatile random access memory. For example, the memory 402 may also store device type information.
  • the foregoing terminal can execute the implementation manners provided in the steps in Figures 1 to 2 through its built-in functional modules.
  • for details, refer to the implementation manners provided in the foregoing steps, which will not be repeated here.
  • based on the target user's face image collected at the first moment, the terminal may input the first facial texture information and/or first eyebrow spacing information and/or face shape information and/or pupil color information in the first micro-expression information of that image into the age detection model to obtain the target user age range corresponding to the target user. According to the determined target user age range, the target display list can be determined from multiple display lists. The user's face image corresponding to the second moment can be obtained by acquiring the user's face image collected at the second moment, where the user's face image at the second moment includes at least the second micro-expression information.
  • the corresponding first facial texture information change value, first eyebrow spacing information change value, first eye opening distance information change value, and/or first lip corner curvature information change value can be obtained.
  • the first micro-expression variation can be determined according to the obtained above-mentioned information change values, and the first screen display information corresponding to the first micro-expression variation can be determined in the target display list.
  • the screen display information of the current terminal can be adjusted according to the first screen display information.
  • the number of adjustments to the screen display during the target user's use of the terminal is recorded.
  • the micro-expression changes of different user age groups correspond to different display lists, and different screen display adjustments can be made for users of different age groups with the same micro-expression variation, which improves the accuracy of the screen display adjustment.
  • the micro-expression change amount is measured comprehensively from a variety of information change values, which improves the accuracy of the micro-expression change amount and of the screen display adjustment, thereby enhancing user satisfaction; optimizing the display list also makes the scheme more flexible and more widely applicable.
  • the embodiment of the present application also provides a computer-readable storage medium that stores a computer program; the computer program includes program instructions that, when executed by a processor, implement the method for adjusting screen display based on micro-expressions provided in the steps shown in FIGS. 1 to 2. For details, refer to the implementation manners provided in the above steps, which will not be repeated here.
  • the foregoing computer-readable storage medium may be an apparatus for adjusting a screen display based on micro-expression provided in any of the foregoing embodiments or an internal storage unit of the foregoing terminal, such as a hard disk or memory of an electronic device.
  • the computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device.
  • the computer-readable storage medium may also include both an internal storage unit of the electronic device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.

Abstract

The embodiments of the present application disclose a method and apparatus for adjusting screen display based on micro-expressions, applicable to emotion recognition. The method includes: obtaining first micro-expression information corresponding to a first moment at which a target user uses a terminal, and determining a target user age range corresponding to the first micro-expression information; determining a target display list from multiple display lists according to the target user age range; obtaining second micro-expression information corresponding to a second moment at which the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information; determining first screen display information from the target display list according to the first micro-expression change amount; and adjusting the current screen display configuration of the terminal according to the first screen display information. With the embodiments of the present application, different screen display adjustments can be made for users of different age ranges who exhibit the same micro-expression change amount, improving the accuracy of screen display adjustment and enhancing user satisfaction.

Description

Method and apparatus for adjusting screen display based on micro-expressions
This application claims priority to the Chinese patent application filed with the China Patent Office on May 21, 2019, with application number 2019104219472 and entitled "Method and apparatus for adjusting screen display based on micro-expressions", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image recognition, and in particular to a method and apparatus for adjusting screen display based on micro-expressions.
Background
With the development of communication technology, more and more terminals have entered people's lives, and more and more information is presented through terminal display screens. Today, people are accustomed to obtaining information by browsing pictures, reading text, and watching videos on terminal displays. However, prolonged use of electronic products easily causes dizziness, fatigue, and glare, and may even damage eyesight. At present, the display font and screen brightness of a terminal display are fixed, or must be adjusted manually by the user, which clearly falls short of people's current pursuit of humanized, intelligent living and degrades the user experience.
Summary
The embodiments of the present application provide a method and apparatus for adjusting screen display based on micro-expressions, so that different screen display adjustments can be made for users of different age ranges who exhibit the same micro-expression change amount, improving the accuracy of screen display adjustment and enhancing user satisfaction.
In a first aspect, an embodiment of the present application provides a method for adjusting screen display based on micro-expressions, the method including:
obtaining first micro-expression information in a user face image collected at a first moment at which a target user uses a terminal, and determining, according to an age detection model, a target user age range corresponding to the first micro-expression information;
determining a target display list from multiple display lists according to the target user age range, where each display list corresponds to one user age range, and each display list includes multiple micro-expression change amounts of the corresponding user age range and the screen display information corresponding to each micro-expression change amount;
obtaining second micro-expression information in a user face image collected at a second moment at which the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information, where the second moment is a moment after the first moment;
determining, from the target display list according to the first micro-expression change amount, first screen display information corresponding to the first micro-expression change amount;
adjusting the current screen display configuration of the terminal according to the first screen display information.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting screen display based on micro-expressions, the apparatus including:
a micro-expression information acquisition module, configured to obtain first micro-expression information in a user face image collected at a first moment at which a target user uses a terminal, and to determine, according to an age detection model, a target user age range corresponding to the first micro-expression information;
a target display list determination module, configured to determine a target display list from multiple display lists according to the target user age range determined by the micro-expression information acquisition module, where each display list corresponds to one user age range, and each display list includes multiple micro-expression change amounts of the corresponding user age range and the screen display information corresponding to each micro-expression change amount;
a micro-expression change amount determination module, configured to obtain second micro-expression information in a user face image collected at a second moment at which the target user uses the terminal, and to determine a first micro-expression change amount according to the first micro-expression information determined by the micro-expression information acquisition module and the second micro-expression information, where the second moment is a moment after the first moment;
a screen display information determination module, configured to determine, from the target display list determined by the target display list determination module and according to the first micro-expression change amount determined by the micro-expression change amount determination module, first screen display information corresponding to the first micro-expression change amount;
a screen display information adjustment module, configured to adjust the current screen display configuration of the terminal according to the first screen display information determined by the screen display information determination module.
In a third aspect, an embodiment of the present application provides a terminal including a processor and a memory connected to each other. The memory is configured to store a computer program that supports the terminal in executing the method provided in the first aspect and/or any possible implementation thereof; the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method provided in the first aspect and/or any possible implementation thereof.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method provided in the first aspect and/or any possible implementation thereof.
The embodiments of the present application set different display lists for the micro-expression change amounts of different user age ranges, so that different screen display adjustments can be made for users of different age ranges who exhibit the same micro-expression change amount, improving the accuracy of screen display adjustment and enhancing user satisfaction.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for adjusting screen display based on micro-expressions provided by an embodiment of the present application;
FIG. 2 is another schematic flowchart of a method for adjusting screen display based on micro-expressions provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an apparatus for adjusting screen display based on micro-expressions provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The method for adjusting screen display based on micro-expressions provided by the embodiments of the present application is widely applicable to various terminals with display screens, such as smartphones, desktop computers, laptops, tablets, self-service kiosks, and smart marketing devices; for convenience, these are uniformly referred to as terminals. By collecting the target user's face image at a first moment while the user uses the terminal, through the terminal's built-in camera or an external camera connected to the terminal, the user face image corresponding to the first moment is obtained. By obtaining the first micro-expression information in that image, the target user age range corresponding to the first micro-expression information can be determined according to an age detection model, and the target display list can be determined from multiple display lists according to that age range. By obtaining the second micro-expression information in the user face image at a second moment during the target user's use of the terminal, and combining it with the first micro-expression information, the first micro-expression change amount can be determined; the first screen display information corresponding to that change amount can then be determined from the target display list, and the terminal's current screen display information can be adjusted accordingly. The embodiments of the present application set different display lists for the micro-expression change amounts of different user age ranges, so that different screen display adjustments can be made for users of different age ranges who exhibit the same micro-expression change amount, improving the accuracy of screen display adjustment and enhancing user satisfaction.
The method and related apparatus provided by the embodiments of the present application are described in detail below in conjunction with FIGS. 1 to 4. The method provided by the embodiments of the present application may include data processing stages such as obtaining micro-expression information, determining the target user age range, determining the target display list, determining the micro-expression change amount and the corresponding screen display information, and adjusting the terminal's current screen display based on the screen display information. The implementation of each of these data processing stages may be as shown in FIGS. 1 to 2 below.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a method for adjusting screen display based on micro-expressions provided by an embodiment of the present application. The method provided by the embodiment of the present application may include the following steps 101 to 105:
101. Obtain first micro-expression information in a user face image collected at a first moment at which a target user uses a terminal, and determine, according to an age detection model, a target user age range corresponding to the first micro-expression information.
In some feasible implementations, as a user ages, facial features and pupil color change to varying degrees. Changes in facial features mainly comprise changes in face shape and changes in facial texture, for example growth of facial bones, changes in facial muscle elasticity, and an increase in wrinkles. In adolescence, changes in facial features are mainly reflected in changes in face shape; in adulthood, the influence of age on the face is concentrated in changes in facial texture. The terminal's built-in camera, or an external camera connected to the terminal, can collect the user's face image while the user uses the terminal; the face image includes at least micro-expression information. The micro-expression information includes facial texture information, eyebrow spacing information, eye opening distance information, lip corner curvature information, face shape information, pupil color information, and so on, determined according to the actual application scenario and not limited here. Optionally, by analyzing the collected face image, it can also be determined whether an eye covering is present in the image, where eye coverings include myopia glasses, reading glasses, sunglasses, and the like.
In some feasible implementations, the user face image corresponding to the first moment is obtained by using the terminal's built-in camera or an external camera connected to the terminal to collect the target user's face image at the first moment while the user uses the terminal. The face image at the first moment includes at least first micro-expression information, which includes first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip corner curvature information, face shape information, pupil color information, and so on. By obtaining the first facial texture information and/or first eyebrow spacing information and/or face shape information and/or pupil color information in the collected first micro-expression information of the target user and inputting them into the age detection model, the model can output the target user age range corresponding to the target user. The construction of the age detection model may include data processing stages such as collecting modeling data, training the model, and testing the model. It is not difficult to understand that the modeling data may come from facial feature information — facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information — of the same person at different age ranges in a face image database. Optionally, to improve the prediction accuracy of the age detection model, the modeling data may also come from such facial feature information of a large number of different people of different age ranges in a face image database. During training, information feature pairs composed of a user age range and the corresponding facial feature information are input into the initial network model of the age detection model; the initial network model learns the age ranges and corresponding facial feature information in the input pairs so as to construct an age detection model that outputs the corresponding user age range for any input facial feature information. The age ranges and corresponding facial feature information used for training may come from the same person at different age ranges, or from a large number of different people of different age ranges. After the age detection model is constructed, several groups of facial feature information of users with known age ranges can be collected as test data. Each group of test data is input into the constructed model, and the user age range output by the model is compared with the user's actual age range. If the age error between the two is less than a preset precision, the constructed age detection model meets the construction requirements; otherwise, it does not, and training continues until the requirements are met.
102. Determine a target display list from multiple display lists according to the target user age range.
In some feasible implementations, as a person ages, the muscles in the eyelids weaken over time, so different age ranges have different screen display requirements. Moreover, as a person ages, facial muscle elasticity changes; in other words, when people of different age ranges reflect the same emotion in their facial muscles, the degree of change in those muscles differs, i.e. the micro-expression change amount differs. Therefore, one display list can be constructed for each of multiple age ranges, where each display list includes multiple micro-expression change amounts of the corresponding user age range and the screen display information corresponding to each change amount; the screen display information includes one or more of screen display font size, screen display brightness, and screen display font spacing. The target display list can then be determined from the multiple display lists according to the determined target user age range.
For example, assume the user age ranges include a first age range, a second age range, and a third age range, where users in the first age range are 10 to 35 years old, users in the second age range are 36 to 59 years old, and users in the third age range are 60 to 85 years old. The display list corresponding to the first age range is the first display list, that corresponding to the second age range is the second display list, and that corresponding to the third age range is the third display list. Each of the three lists includes multiple micro-expression change amounts and the screen display information corresponding to each change amount. It is not difficult to understand that, because the micro-expression change amounts of people of different age ranges differ when the same emotion is reflected in their facial muscles, the screen display information corresponding to the same degree of micro-expression change differs across the three lists. For example, for a micro-expression change amount of 10%, the corresponding screen display information in the first display list is that the screen display font size, screen display brightness, and screen display font spacing all increase by 3%; in the second display list, all increase by 5%; and in the third display list, all increase by 8%.
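The age-range example above can be sketched as a small lookup table; the age-range keys and adjustment percentages below are the illustrative figures from the example, not values fixed by the application:

```python
# Hypothetical display lists: the same 10% micro-expression change amount
# maps to different adjustment percentages for different age ranges.
DISPLAY_LISTS = {
    "10-35": {0.10: 0.03},   # font size, brightness, spacing all +3%
    "36-59": {0.10: 0.05},   # all +5%
    "60-85": {0.10: 0.08},   # all +8%
}

def select_target_list(age_range):
    """Pick the display list for the age range output by the age model."""
    return DISPLAY_LISTS[age_range]

adjustment = select_target_list("36-59")[0.10]  # 0.05 for a 36-59-year-old user
```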
Optionally, in some feasible implementations, myopia is a refractive eye condition in which the ability to discern distant targets is reduced while near vision remains normal. Presbyopia, by contrast, is a physiological phenomenon rather than a pathological state or refractive error: it is a visual problem that inevitably appears as people enter middle and old age, and one of the signals that the body is beginning to age. Generally, myopia glasses are worn by adolescents or middle-aged people, while reading glasses are worn by the elderly; although both correct vision, their underlying causes are completely different. Thus, by analyzing the collected face image of the target user, it can also be determined whether an eye covering is present, where eye coverings include myopia glasses, reading glasses, and sunglasses (generally, sunglasses differ from other glasses in that their lenses are tinted). If glasses with untinted lenses are detected in the target user's face image, the type of glasses can be determined from the determined target user age range, and the target user's eye health can thereby be inferred: if the target user's age range is the first age range and the user wears glasses, the glasses are myopia glasses and the user is nearsighted; if the age range is the third age range and the user wears glasses, the glasses are reading glasses and the user is presbyopic. Although myopia and/or presbyopia can be corrected by wearing the corresponding glasses, in practice, for users of the same age range, the vision gap and the sensitivity of the eyes to the external environment (here, screen display information such as font size, brightness, and font spacing) differ between glasses-corrected nearsighted users and users without myopia, and between glasses-corrected presbyopic users and users without presbyopia. Therefore, each display list corresponding to a user age range may further set different screen display information for the same micro-expression change amount under with-glasses and without-glasses filter conditions.
For example, again assume the three age ranges and the first, second, and third display lists described above, each including multiple micro-expression change amounts and the corresponding screen display information. For users of the same age range with the same degree of micro-expression change, the screen display information for users wearing glasses should also differ from that for users not wearing glasses, so the display lists may additionally include different screen display information for the same degree of micro-expression change with and without glasses. For example, for a micro-expression change amount of 10% without glasses, the corresponding screen display information is that screen display font size, screen display brightness, and screen display font spacing all increase by 3% in the first display list, 5% in the second, and 8% in the third; with glasses, the corresponding increases are 4% in the first display list, 7% in the second, and 11% in the third.
Optionally, in some feasible implementations, the brightness of the user's surroundings and the distance between the user's eyes and the terminal display screen also affect the user's experience of the current screen display font size, brightness, and font spacing. Therefore, the ambient brightness, and/or the ratio of the ambient brightness to the screen display brightness, and/or the detected distance between the user's eyes and the display screen may also be incorporated into each display list as filter conditions, with different screen display information set for the same micro-expression change amount under different conditions.
103. Obtain second micro-expression information in a user face image collected at a second moment at which the target user uses the terminal, and determine a first micro-expression change amount according to the first micro-expression information and the second micro-expression information.
In some feasible implementations, the user face image corresponding to the second moment is obtained by using the terminal's built-in camera or an external camera connected to the terminal to collect the target user's face image at the second moment, where the second moment is another moment after a preset duration has elapsed since the first moment, or any moment after the first moment. The face image at the second moment includes at least second micro-expression information, which includes second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip corner curvature information, face shape information, pupil color information, and so on. By obtaining the second facial texture information and comparing it with the first facial texture information in the first micro-expression information, the first facial texture information change value is obtained. Likewise, comparing the second eyebrow spacing information with the first yields the first eyebrow spacing information change value; comparing the second eye opening distance information with the first yields the first eye opening distance information change value; and comparing the second lip corner curvature information with the first yields the first lip corner curvature information change value. The first micro-expression change amount can be determined from the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value, and/or the first lip corner curvature information change value; it may be based on one or more of these values. For example, if only one of the four change values is used as the first micro-expression change amount, it may be the maximum or the minimum of the four. If all four change values are used together, each may be multiplied by its corresponding weight value and the products summed, with the sum determined as the first micro-expression change amount; alternatively, the four values may be summed directly and that sum determined as the first micro-expression change amount.
For example, assume the first facial-texture change value, first inter-brow-distance change value, first eye-opening-distance change value, and first lip-corner-curvature change value are 5%, 3%, 3%, and 1%, respectively. If only the largest of the four change values is used as the first micro-expression change amount, the change amount is 5%. If all four change values are used together with weights of 5/12, 3/12, 3/12, and 1/12 respectively, the weighted sum is 3.7%, i.e. the first micro-expression change amount is 3.7%.
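The weighted combination above can be written out as a short sketch. The four change values and the 5/12, 3/12, 3/12, 1/12 weights come from the example; the function name is an illustrative assumption.

```python
def micro_expression_change(changes, weights):
    """Weighted sum of the individual information change values."""
    if len(changes) != len(weights):
        raise ValueError("each change value needs a corresponding weight")
    return sum(c * w for c, w in zip(changes, weights))

# texture, inter-brow distance, eye-opening distance, lip-corner curvature
changes = [0.05, 0.03, 0.03, 0.01]
weights = [5/12, 3/12, 3/12, 1/12]

combined = micro_expression_change(changes, weights)
print(round(combined, 3))  # 0.037, i.e. the 3.7% of the example
```

Taking the maximum instead, `max(changes)` gives `0.05`, matching the 5% single-value variant described above.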
104. Determine, from the target display list, first screen display information corresponding to the first micro-expression change amount.
In some feasible implementations, the first micro-expression change amount is matched against the multiple micro-expression change amounts in the target display list, and the screen display information corresponding to the successfully matched change amount is determined as the first screen display information.
Optionally, in some feasible implementations, if screen display information has been added to the target display list for each micro-expression change amount under filter conditions such as the presence of glasses, and/or the ambient brightness, and/or the ratio of the ambient brightness to the screen's display brightness, and/or the distance between the user's eyes and the display screen, then after matching the first micro-expression change amount against the change amounts in the target display list, the corresponding filter conditions are matched one by one, and the screen display information for which both the change amount and all filter conditions in the target display list match successfully is determined as the first screen display information.
Optionally, in some feasible implementations, if no such screen display information under filter conditions (presence of glasses, and/or ambient brightness, and/or the ratio of ambient brightness to the screen's display brightness, and/or eye-to-screen distance) has been added for the change amounts in the target display list, then the first micro-expression change amount is matched against the change amounts in the target display list, and the screen display information corresponding to the successfully matched change amount, adjusted up or down by a certain margin, is determined as the first screen display information.
For example, assume the display list contains only screen display information keyed by micro-expression change amount, with no other filter conditions. If the target user's age bracket is the first age bracket, the target display list is the first display list, in which a micro-expression change amount of 10% corresponds to increasing display font size, display brightness, and display character spacing by 3% each, and a change amount of 15% corresponds to increasing all three by 5% each. If the target user's first micro-expression change amount is 10% and the user wears glasses, querying the target display list yields screen display information of a 3% increase in all three; since the target user is detected to be wearing glasses, the screen display information obtained by raising this by a further 1% is determined as the first screen display information, i.e. the first screen display information increases display font size, display brightness, and display character spacing by 4% each.
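The fallback described above can be sketched as follows. The text only says the change amount is "matched" against list entries; closest-entry matching is one hedged reading of that, and the 1% glasses bump and list values come from the example, while all names are illustrative assumptions.

```python
# Example first display list with no filter conditions: change amount -> increase.
FIRST_DISPLAY_LIST = {0.10: 0.03, 0.15: 0.05}

def first_screen_increase(change: float, wears_glasses: bool,
                          display_list=FIRST_DISPLAY_LIST,
                          glasses_bump: float = 0.01) -> float:
    """Match the change amount to the closest listed entry, then adjust it."""
    matched = min(display_list, key=lambda entry: abs(entry - change))
    increase = display_list[matched]
    # The list has no per-condition entries, so nudge the matched value instead.
    return increase + glasses_bump if wears_glasses else increase
```

Under these assumptions, `first_screen_increase(0.10, True)` yields a 4% increase, as in the example.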
105. Adjust the screen display configuration of the current terminal according to the first screen display information.
In some feasible implementations, the terminal's current display font size can be adjusted according to the first display font size in the determined first screen display information, the terminal's current display brightness according to the first display brightness in the first screen display information, and the terminal's current display character spacing according to the first display character spacing in the first screen display information.
In this embodiment of the present application, a facial image corresponding to a first moment is obtained by capturing, at the first moment and using the terminal's built-in camera or an external camera connected to the terminal, an image of the target user's face while the target user is using the terminal. The facial image at the first moment contains at least first micro-expression information, which includes first facial-texture information, first inter-brow-distance information, first eye-opening-distance information, first lip-corner-curvature information, facial-shape information, pupil-color information, and so on. By inputting the first facial-texture information and/or the first inter-brow-distance information and/or the facial-shape information and/or the pupil-color information from the acquired first micro-expression information into an age detection model, the age bracket of the target user is output based on the model, and the target display list is determined from multiple display lists according to that age bracket. A facial image corresponding to a second moment is obtained in the same way at the second moment; it contains at least second micro-expression information, which includes second facial-texture information, second inter-brow-distance information, second eye-opening-distance information, second lip-corner-curvature information, facial-shape information, pupil-color information, and so on. By comparing each item included in the first micro-expression information with the corresponding item included in the second micro-expression information, the first facial-texture change value, first inter-brow-distance change value, first eye-opening-distance change value, and/or first lip-corner-curvature change value are obtained; from these change values the first micro-expression change amount is determined, the first screen display information corresponding to it is then determined in the target display list, and the current terminal's screen display is adjusted according to the first screen display information. In this embodiment, the micro-expression change amounts of different age brackets correspond to different display lists, so different screen display adjustments can be made for users of different age brackets who exhibit the same micro-expression change amount; and because the change amount is a composite measure of multiple information change values, the accuracy of the screen display adjustment is improved, user satisfaction is enhanced, and flexibility is high.
Referring to FIG. 2, FIG. 2 is another schematic flowchart of the method for adjusting screen display based on micro-expressions provided by an embodiment of the present application. The method can be described through the implementations provided by the following steps 201 to 206:
201. Acquire first micro-expression information from a facial image of the target user captured at a first moment while the target user is using the terminal, and determine, according to an age detection model, the age bracket of the target user corresponding to the first micro-expression information.
202. Determine a target display list from multiple display lists according to the age bracket of the target user.
203. Acquire second micro-expression information from a facial image of the target user captured at a second moment while the target user is using the terminal, and determine a first micro-expression change amount from the first micro-expression information and the second micro-expression information.
204. Determine, from the target display list, first screen display information corresponding to the first micro-expression change amount.
For the specific implementation of steps 201-204, refer to steps 101-104 in the embodiment corresponding to FIG. 1 above; details are not repeated here.
205. Adjust the screen display configuration of the current terminal according to the first screen display information.
In some feasible implementations, the terminal's current display font size can be adjusted according to the first display font size in the determined first screen display information, the terminal's current display brightness according to the first display brightness in the first screen display information, and the terminal's current display character spacing according to the first display character spacing in the first screen display information.
Optionally, in some feasible implementations, after one adjustment of the screen display information, in order to continue observing whether the user is satisfied with the adjusted screen display, a third facial image of the target user corresponding to a third moment may be captured using the terminal's built-in camera or an external camera connected to the terminal. A second or further adjustment of the screen display information is then performed following the implementation of steps 201-205 above, and each time the screen display information is adjusted, the number of adjustments made to the terminal's screen display during the target user's use of the terminal is recorded.
206. Obtain the number of adjustments made to the terminal's screen display during the target user's use of the terminal, and if the number of adjustments is greater than or equal to a preset number, adjust the screen display information corresponding to each micro-expression change amount in the target display list.
In some feasible implementations, the number of screen-display adjustments made during the target user's use of the terminal is obtained and compared with the preset number. When the number of adjustments is greater than or equal to the preset number, it indicates that, after multiple adjustments of the screen display information, the target user remains unsatisfied with the adjusted screen display. In this case, the historical adjustment records of the terminal's screen display information can be obtained and analyzed, and the screen display information corresponding to each micro-expression change amount in the target display list can then be adjusted, optimized, or updated.
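Step 206 can be sketched as a per-session counter plus a recalibration pass over the list. The text only requires adjusting or optimizing the entries from the history; averaging the historically applied increases per change amount is an illustrative assumption, as are all names.

```python
class AdjustmentTracker:
    """Counts screen-display adjustments and recalibrates a display list."""

    def __init__(self, preset_count: int):
        self.preset_count = preset_count
        self.history = []  # (change amount, increase applied) this session

    def record(self, change: float, increase: float) -> None:
        self.history.append((change, increase))

    def needs_recalibration(self) -> bool:
        # Trigger once the preset number of adjustments is reached.
        return len(self.history) >= self.preset_count

    def recalibrate(self, display_list: dict) -> dict:
        """Replace each entry with the mean of its historical increases."""
        updated = dict(display_list)
        for change in display_list:
            applied = [inc for ch, inc in self.history if ch == change]
            if applied:
                updated[change] = sum(applied) / len(applied)
        return updated
```

With a preset count of 2, two recorded adjustments for a 10% change (3% then 5%) would move that entry to 4% while leaving untouched entries as they were.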
In this embodiment of the present application, a facial image corresponding to a first moment is obtained by capturing, at the first moment and using the terminal's built-in camera or an external camera connected to the terminal, an image of the target user's face while the target user is using the terminal. The facial image at the first moment contains at least first micro-expression information, which includes first facial-texture information, first inter-brow-distance information, first eye-opening-distance information, first lip-corner-curvature information, facial-shape information, pupil-color information, and so on. By inputting the first facial-texture information and/or the first inter-brow-distance information and/or the facial-shape information and/or the pupil-color information from the acquired first micro-expression information into an age detection model, the age bracket of the target user is output based on the model, and the target display list is determined from multiple display lists according to that age bracket. A facial image corresponding to a second moment is obtained in the same way at the second moment; it contains at least second micro-expression information, which includes second facial-texture information, second inter-brow-distance information, second eye-opening-distance information, second lip-corner-curvature information, facial-shape information, pupil-color information, and so on. By comparing each item included in the first micro-expression information with the corresponding item included in the second micro-expression information, the first facial-texture change value, first inter-brow-distance change value, first eye-opening-distance change value, and/or first lip-corner-curvature change value are obtained; from these change values the first micro-expression change amount is determined, and the first screen display information corresponding to it is determined in the target display list. The current terminal's screen display is adjusted according to the first screen display information, and each time the display is adjusted, the number of adjustments made to the terminal's screen display during the target user's use of the terminal is recorded. The number of adjustments is obtained and compared with a preset number; if it is greater than or equal to the preset number, the historical adjustment records are obtained and analyzed so as to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. In this embodiment, the micro-expression change amounts of different age brackets correspond to different display lists, so different screen display adjustments can be made for users of different age brackets who exhibit the same micro-expression change amount, improving adjustment accuracy; measuring the change amount as a composite of multiple information change values improves both the precision of the change amount and the accuracy of the adjustment, thereby enhancing user satisfaction, while optimizing the display lists makes the scheme more flexible and more widely applicable.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of the apparatus for adjusting screen display based on micro-expressions provided by an embodiment of the present application. The apparatus includes:
a micro-expression information acquisition module 31, configured to acquire first micro-expression information from a facial image of a target user captured at a first moment while the target user is using a terminal, and determine, according to an age detection model, the age bracket of the target user corresponding to the first micro-expression information;
a target display list determination module 32, configured to determine a target display list from multiple display lists according to the age bracket determined by the micro-expression information acquisition module 31, where each display list corresponds to one age bracket and contains multiple micro-expression change amounts and the screen display information corresponding to each change amount;
a micro-expression change amount determination module 33, configured to acquire second micro-expression information from a facial image of the target user captured at a second moment while the target user is using the terminal, and determine a first micro-expression change amount from the first micro-expression information determined by the micro-expression information acquisition module 31 and the second micro-expression information, where the second moment is a moment after the first moment;
a screen display information determination module 34, configured to determine, from the target display list determined by the target display list determination module 32, first screen display information corresponding to the first micro-expression change amount determined by the micro-expression change amount determination module 33; and
a screen display information adjustment module 35, configured to adjust the screen display configuration of the current terminal according to the first screen display information determined by the screen display information determination module 34.
In some feasible implementations, the micro-expression information acquisition module 31 is configured to:
acquire first facial-texture information from the first micro-expression information, input the first facial-texture information into the age detection model, and output, based on the age detection model, the age bracket of the target user corresponding to the first facial-texture information.
In some feasible implementations, the micro-expression change amount determination module 33 includes:
a facial-texture change determination unit 331, configured to acquire second facial-texture information from the second micro-expression information and compare it with the first facial-texture information in the first micro-expression information to obtain a first facial-texture change value; and/or
an inter-brow-distance change determination unit 332, configured to acquire second inter-brow-distance information from the second micro-expression information and compare it with the first inter-brow-distance information in the first micro-expression information to obtain a first inter-brow-distance change value; and/or
an eye-opening-distance change determination unit 333, configured to acquire second eye-opening-distance information from the second micro-expression information and compare it with the first eye-opening-distance information in the first micro-expression information to obtain a first eye-opening-distance change value; and/or
a lip-corner-curvature change determination unit 334, configured to acquire second lip-corner-curvature information from the second micro-expression information and compare it with the first lip-corner-curvature information in the first micro-expression information to obtain a first lip-corner-curvature change value; and
a micro-expression change amount determination unit 335, configured to determine the first micro-expression change amount from the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and/or the first lip-corner-curvature change value.
In some feasible implementations, the micro-expression change amount determination unit 335 is configured to:
multiply the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and the first lip-corner-curvature change value by their corresponding weights, sum the products, and determine the resulting sum as the first micro-expression change amount.
In some feasible implementations, the screen display information determination module 34 is configured to:
match the first micro-expression change amount against the multiple micro-expression change amounts in the target display list, and determine the screen display information corresponding to the successfully matched change amount as the first screen display information.
In some feasible implementations, the apparatus for adjusting screen display based on micro-expressions further includes:
a display list update module 36, configured to obtain the number of adjustments made to the terminal's screen display during the target user's use of the terminal; and,
if the number of adjustments is greater than or equal to a preset number, adjust the screen display information corresponding to each micro-expression change amount in the target display list.
In some feasible implementations, the screen display information adjustment module 35 is configured to:
adjust the terminal's current display font size according to the first display font size in the first screen display information; and/or
adjust the terminal's current display brightness according to the first display brightness in the first screen display information; and/or
adjust the terminal's current display character spacing according to the first display character spacing in the first screen display information.
In some feasible implementations, the apparatus further includes an age detection model training module 37, the age detection model training module 37 including:
a training sample acquisition unit 371, configured to acquire multiple training samples, where each training sample includes the facial-texture information and the age bracket of one sample user; and
an age detection model training unit 372, configured to train an initial network model based on the facial-texture information and age brackets included in the training samples to obtain the age detection model, where the age detection model is used to output an age bracket from input facial-texture information.
In some feasible implementations, the training sample acquisition unit 371 is specifically configured to:
acquire, from a facial image database, the facial-texture information of multiple different people in different age brackets as training samples.
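The training flow above can be illustrated with a toy sketch in which samples pair a facial-texture feature vector with an age-bracket label. A nearest-centroid rule stands in for the "initial network model", since the patent does not fix an architecture; the feature values below are fabricated purely for illustration.

```python
from collections import defaultdict
import math

def train_age_model(samples):
    """Train a toy age-bracket classifier.

    samples: iterable of (texture_vector, age_bracket_label) pairs, standing
    in for facial-texture information drawn from a facial image database.
    """
    sums, counts = {}, defaultdict(int)
    for vec, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    # One centroid per age bracket: the mean texture vector of its samples.
    centroids = {lab: [s / counts[lab] for s in vec_sum]
                 for lab, vec_sum in sums.items()}

    def predict(vec):
        """Output the age bracket whose centroid is closest to the input."""
        return min(centroids, key=lambda lab: math.dist(vec, centroids[lab]))
    return predict
```

In use, `train_age_model` returns a `predict` function mapping an input texture vector to a bracket label, mirroring the model's stated input-output contract.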
In a specific implementation, the apparatus for adjusting screen display based on micro-expressions can execute the implementations provided by the steps in FIG. 1 to FIG. 2 above through its built-in functional modules. For example, the micro-expression information acquisition module 31 can be used to execute implementations such as capturing the facial image at the first moment, acquiring the first micro-expression information from that image, and determining the target user's age bracket; the target display list determination module 32, the implementations described in the related steps of determining the target display list; the micro-expression change amount determination module 33, the implementations of capturing the facial image at the second moment, acquiring the second micro-expression information from that image, and determining the first micro-expression change amount; the screen display information determination module 34, the implementations of determining the first screen display information corresponding to the first micro-expression change amount; the screen display information adjustment module 35, the implementations of adjusting the current screen display information according to the first screen display information; the display list update module 36, the implementations of adjusting the screen display information corresponding to each micro-expression change amount in the display lists; and the age detection model training module 37, the implementations of acquiring training samples and training the age detection model based on them. For the details of each, refer to the implementations provided by the respective steps above; they are not repeated here.
In this embodiment of the present application, based on the facial image of the target user captured at the first moment, the apparatus for adjusting screen display based on micro-expressions can input the first facial-texture information and/or first inter-brow-distance information and/or facial-shape information and/or pupil-color information from the first micro-expression information in that image into the age detection model to obtain the target user's age bracket. According to the determined age bracket, a target display list is determined from multiple display lists. The facial image corresponding to the second moment is obtained by capturing it at the second moment; it contains at least second micro-expression information. By comparing the first facial-texture information, first inter-brow-distance information, first eye-opening-distance information, and/or first lip-corner-curvature information included in the first micro-expression information with the second facial-texture information, second inter-brow-distance information, second eye-opening-distance information, and/or second lip-corner-curvature information included in the second micro-expression information, the corresponding first facial-texture change value, first inter-brow-distance change value, first eye-opening-distance change value, and/or first lip-corner-curvature change value are obtained. From these change values the first micro-expression change amount is determined, and the first screen display information corresponding to it is determined in the target display list. The current display's screen display information is adjusted according to the first screen display information, and each time the display is adjusted, the number of screen-display adjustments during the target user's use is recorded. The number of adjustments is obtained and compared with a preset number; if it is greater than or equal to the preset number, historical adjustment records are obtained and analyzed to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. In this embodiment, the micro-expression change amounts of different age brackets correspond to different display lists, so different screen display adjustments can be made for users of different age brackets with the same change amount, improving adjustment accuracy; the composite measurement of the change amount from multiple change values improves both its precision and the accuracy of the adjustment, thereby enhancing user satisfaction, and optimizing the display lists makes the scheme more flexible and more widely applicable.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of the terminal provided by an embodiment of the present application. As shown in FIG. 4, the terminal in this embodiment may include one or more processors 401 and a memory 402, the processor 401 and the memory 402 being connected by a bus 403. The memory 402 is used to store a computer program comprising program instructions, and the processor 401 is used to execute the program instructions stored in the memory 402 to perform the following operations:
acquiring first micro-expression information from a facial image of a target user captured at a first moment while the target user is using the terminal, and determining, according to an age detection model, the age bracket of the target user corresponding to the first micro-expression information;
determining a target display list from multiple display lists according to the age bracket of the target user, where each display list corresponds to one age bracket and contains multiple micro-expression change amounts for that age bracket and the screen display information corresponding to each change amount;
acquiring second micro-expression information from a facial image of the target user captured at a second moment while the target user is using the terminal, and determining a first micro-expression change amount from the first micro-expression information and the second micro-expression information, where the second moment is a moment after the first moment;
determining, from the target display list, first screen display information corresponding to the first micro-expression change amount; and
adjusting the screen display configuration of the current terminal according to the first screen display information.
In some feasible implementations, the processor 401 is configured to:
acquire first facial-texture information from the first micro-expression information, input the first facial-texture information into the age detection model, and output, based on the age detection model, the age bracket of the target user corresponding to the first facial-texture information.
In some feasible implementations, the processor 401 is configured to:
acquire second facial-texture information from the second micro-expression information, and compare it with the first facial-texture information in the first micro-expression information to obtain a first facial-texture change value; and/or
acquire second inter-brow-distance information from the second micro-expression information, and compare it with the first inter-brow-distance information in the first micro-expression information to obtain a first inter-brow-distance change value; and/or
acquire second eye-opening-distance information from the second micro-expression information, and compare it with the first eye-opening-distance information in the first micro-expression information to obtain a first eye-opening-distance change value; and/or
acquire second lip-corner-curvature information from the second micro-expression information, and compare it with the first lip-corner-curvature information in the first micro-expression information to obtain a first lip-corner-curvature change value; and
determine the first micro-expression change amount from the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and/or the first lip-corner-curvature change value.
In some feasible implementations, the processor 401 is configured to:
multiply the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and the first lip-corner-curvature change value by their corresponding weights, sum the products, and determine the resulting sum as the first micro-expression change amount.
In some feasible implementations, the processor 401 is configured to:
match the first micro-expression change amount against the multiple micro-expression change amounts in the target display list, and determine the screen display information corresponding to the successfully matched change amount as the first screen display information.
In some feasible implementations, the processor 401 is configured to:
obtain the number of adjustments made to the terminal's screen display during the target user's use of the terminal; and,
if the number of adjustments is greater than or equal to a preset number, adjust the screen display information corresponding to each micro-expression change amount in the target display list.
In some feasible implementations, the processor 401 is configured to:
adjust the terminal's current display font size according to the first display font size in the first screen display information; and/or
adjust the terminal's current display brightness according to the first display brightness in the first screen display information; and/or
adjust the terminal's current display character spacing according to the first display character spacing in the first screen display information.
In some feasible implementations, the processor 401 is configured to:
acquire multiple training samples, where each training sample includes the facial-texture information and the age bracket of one sample user; and
train an initial network model based on the facial-texture information and age brackets included in the training samples to obtain an age detection model, where the age detection model is used to output an age bracket from input facial-texture information.
In some feasible implementations, the processor 401 is configured to:
acquire, from a facial image database, the facial-texture information of multiple different people in different age brackets as training samples.
It should be understood that, in some feasible implementations, the processor 401 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or any conventional processor. The memory 402 may include read-only memory and random access memory, and provides instructions and data to the processor 401. Part of the memory 402 may also include non-volatile random access memory; for example, the memory 402 may also store information about the device type.
In a specific implementation, the terminal can execute the implementations provided by the steps in FIG. 1 to FIG. 2 above through its built-in functional modules; for details, refer to the implementations provided by those steps, which are not repeated here.
In this embodiment of the present application, based on the facial image of the target user captured at the first moment, the terminal can input the first facial-texture information and/or first inter-brow-distance information and/or facial-shape information and/or pupil-color information from the first micro-expression information in that image into the age detection model to obtain the target user's age bracket. According to the determined age bracket, a target display list is determined from multiple display lists. The facial image corresponding to the second moment is obtained by capturing it at the second moment; it contains at least second micro-expression information. By comparing the first facial-texture information, first inter-brow-distance information, first eye-opening-distance information, and/or first lip-corner-curvature information included in the first micro-expression information with the second facial-texture information, second inter-brow-distance information, second eye-opening-distance information, and/or second lip-corner-curvature information included in the second micro-expression information, the corresponding first facial-texture change value, first inter-brow-distance change value, first eye-opening-distance change value, and/or first lip-corner-curvature change value are obtained. From these change values the first micro-expression change amount is determined, and the first screen display information corresponding to it is determined in the target display list. The current terminal's screen display information is adjusted according to the first screen display information, and each time the display is adjusted, the number of screen-display adjustments during the target user's use of the terminal is recorded. The number of adjustments is obtained and compared with a preset number; if it is greater than or equal to the preset number, historical adjustment records are obtained and analyzed to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. In this embodiment, the micro-expression change amounts of different age brackets correspond to different display lists, so different screen display adjustments can be made for users of different age brackets with the same change amount, improving adjustment accuracy; the composite measurement of the change amount from multiple change values improves both its precision and the accuracy of the adjustment, thereby enhancing user satisfaction, and optimizing the display lists makes the scheme more flexible and more widely applicable.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, implement the method for adjusting screen display based on micro-expressions provided by the steps in FIG. 1 to FIG. 2; for details, refer to the implementations provided by those steps, which are not repeated here.
The computer-readable storage medium may be an internal storage unit of the apparatus for adjusting screen display based on micro-expressions provided by any of the foregoing embodiments or of the terminal, such as the hard disk or memory of an electronic device. The computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device. Further, the computer-readable storage medium may also include both the internal storage unit and the external storage device of the electronic device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
The foregoing is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and all such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. A method for adjusting screen display based on micro-expressions, characterized in that the method comprises:
    acquiring first micro-expression information from a facial image of a target user captured at a first moment while the target user is using a terminal, and determining, according to an age detection model, an age bracket of the target user corresponding to the first micro-expression information;
    determining a target display list from multiple display lists according to the age bracket of the target user, wherein each display list corresponds to one age bracket and contains multiple micro-expression change amounts for the corresponding age bracket and the screen display information corresponding to each micro-expression change amount;
    acquiring second micro-expression information from a facial image of the target user captured at a second moment while the target user is using the terminal, and determining a first micro-expression change amount from the first micro-expression information and the second micro-expression information, wherein the second moment is a moment after the first moment;
    determining, from the target display list, first screen display information corresponding to the first micro-expression change amount; and
    adjusting the screen display configuration of the current terminal according to the first screen display information.
  2. The method according to claim 1, characterized in that determining, according to the age detection model, the age bracket of the target user corresponding to the first micro-expression information comprises:
    acquiring first facial-texture information from the first micro-expression information, inputting the first facial-texture information into the age detection model, and outputting, based on the age detection model, the age bracket of the target user corresponding to the first facial-texture information.
  3. The method according to claim 1 or 2, characterized in that determining the first micro-expression change amount from the first micro-expression information and the second micro-expression information comprises:
    acquiring second facial-texture information from the second micro-expression information, and comparing the second facial-texture information with the first facial-texture information in the first micro-expression information to obtain a first facial-texture change value; and/or
    acquiring second inter-brow-distance information from the second micro-expression information, and comparing the second inter-brow-distance information with the first inter-brow-distance information in the first micro-expression information to obtain a first inter-brow-distance change value; and/or
    acquiring second eye-opening-distance information from the second micro-expression information, and comparing the second eye-opening-distance information with the first eye-opening-distance information in the first micro-expression information to obtain a first eye-opening-distance change value; and/or
    acquiring second lip-corner-curvature information from the second micro-expression information, and comparing the second lip-corner-curvature information with the first lip-corner-curvature information in the first micro-expression information to obtain a first lip-corner-curvature change value; and
    determining the first micro-expression change amount from the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and/or the first lip-corner-curvature change value.
  4. The method according to claim 3, characterized in that determining the first micro-expression change amount from the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and the first lip-corner-curvature change value comprises:
    multiplying the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and the first lip-corner-curvature change value by their corresponding weights, summing the products, and determining the resulting sum as the first micro-expression change amount.
  5. The method according to any one of claims 1-4, characterized in that determining, from the target display list, the first screen display information corresponding to the first micro-expression change amount comprises:
    matching the first micro-expression change amount against the multiple micro-expression change amounts in the target display list, and determining the screen display information corresponding to the successfully matched micro-expression change amount as the first screen display information.
  6. The method according to claim 1, characterized in that, after adjusting the screen display configuration of the current terminal according to the first screen display information, the method further comprises:
    obtaining the number of adjustments made to the screen display of the terminal during the target user's use of the terminal; and,
    if the number of adjustments is greater than or equal to a preset number, adjusting the screen display information corresponding to each micro-expression change amount in the target display list.
  7. The method according to claim 1, characterized in that adjusting the screen display configuration of the current terminal according to the first screen display information comprises:
    adjusting the current display font size of the terminal according to the first display font size in the first screen display information; and/or
    adjusting the current display brightness of the terminal according to the first display brightness in the first screen display information; and/or
    adjusting the current display character spacing of the terminal according to the first display character spacing in the first screen display information.
  8. The method according to any one of claims 1-7, characterized in that the method further comprises:
    acquiring multiple training samples, wherein each training sample includes the facial-texture information and the age bracket of one sample user; and
    training an initial network model based on the facial-texture information and age brackets included in the training samples to obtain an age detection model, wherein the age detection model is used to output an age bracket from input facial-texture information.
  9. The method according to claim 8, characterized in that acquiring the multiple training samples comprises:
    acquiring, from a facial image database, the facial-texture information of multiple different people in different age brackets as training samples.
  10. An apparatus for adjusting screen display based on micro-expressions, characterized in that the apparatus comprises:
    a micro-expression information acquisition module, configured to acquire first micro-expression information from a facial image of a target user captured at a first moment while the target user is using a terminal, and determine, according to an age detection model, an age bracket of the target user corresponding to the first micro-expression information;
    a target display list determination module, configured to determine a target display list from multiple display lists according to the age bracket determined by the micro-expression information acquisition module, wherein each display list corresponds to one age bracket and contains multiple micro-expression change amounts for the corresponding age bracket and the screen display information corresponding to each micro-expression change amount;
    a micro-expression change amount determination module, configured to acquire second micro-expression information from a facial image of the target user captured at a second moment while the target user is using the terminal, and determine a first micro-expression change amount from the first micro-expression information determined by the micro-expression information acquisition module and the second micro-expression information, wherein the second moment is a moment after the first moment;
    a screen display information determination module, configured to determine, from the target display list determined by the target display list determination module, first screen display information corresponding to the first micro-expression change amount determined by the micro-expression change amount determination module; and
    a screen display information adjustment module, configured to adjust the screen display configuration of the current terminal according to the first screen display information determined by the screen display information determination module.
  11. The apparatus according to claim 10, characterized in that the micro-expression information acquisition module is configured to:
    acquire first facial-texture information from the first micro-expression information, input the first facial-texture information into the age detection model, and output, based on the age detection model, the age bracket of the target user corresponding to the first facial-texture information.
  12. The apparatus according to claim 10 or 11, characterized in that the micro-expression change amount determination module comprises:
    a facial-texture change determination unit, configured to acquire second facial-texture information from the second micro-expression information and compare the second facial-texture information with the first facial-texture information in the first micro-expression information to obtain a first facial-texture change value; and/or
    an inter-brow-distance change determination unit, configured to acquire second inter-brow-distance information from the second micro-expression information and compare the second inter-brow-distance information with the first inter-brow-distance information in the first micro-expression information to obtain a first inter-brow-distance change value; and/or
    an eye-opening-distance change determination unit, configured to acquire second eye-opening-distance information from the second micro-expression information and compare the second eye-opening-distance information with the first eye-opening-distance information in the first micro-expression information to obtain a first eye-opening-distance change value; and/or
    a lip-corner-curvature change determination unit, configured to acquire second lip-corner-curvature information from the second micro-expression information and compare the second lip-corner-curvature information with the first lip-corner-curvature information in the first micro-expression information to obtain a first lip-corner-curvature change value; and
    a micro-expression change amount determination unit, configured to determine the first micro-expression change amount from the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and/or the first lip-corner-curvature change value.
  13. The apparatus according to claim 12, characterized in that the micro-expression change amount determination unit is specifically configured to:
    multiply the first facial-texture change value, the first inter-brow-distance change value, the first eye-opening-distance change value, and the first lip-corner-curvature change value by their corresponding weights, sum the products, and determine the resulting sum as the first micro-expression change amount.
  14. The apparatus according to any one of claims 10-13, characterized in that the screen display information determination module is configured to:
    match the first micro-expression change amount against the multiple micro-expression change amounts in the target display list, and determine the screen display information corresponding to the successfully matched micro-expression change amount as the first screen display information.
  15. The apparatus according to claim 10, characterized in that the apparatus further comprises a display list update module, the display list update module being configured to:
    obtain the number of adjustments made to the screen display of the terminal during the target user's use of the terminal; and,
    if the number of adjustments is greater than or equal to a preset number, adjust the screen display information corresponding to each micro-expression change amount in the target display list.
  16. The apparatus according to claim 10, characterized in that the screen display information adjustment module is specifically configured to:
    adjust the current display font size of the terminal according to the first display font size in the first screen display information; and/or
    adjust the current display brightness of the terminal according to the first display brightness in the first screen display information; and/or
    adjust the current display character spacing of the terminal according to the first display character spacing in the first screen display information.
  17. The apparatus according to any one of claims 10-16, characterized in that the apparatus further comprises an age detection model training module, the age detection model training module comprising:
    a training sample acquisition unit, configured to acquire multiple training samples, wherein each training sample includes the facial-texture information and the age bracket of one sample user; and
    an age detection model training unit, configured to train an initial network model based on the facial-texture information and age brackets included in the training samples to obtain an age detection model, wherein the age detection model is used to output an age bracket from input facial-texture information.
  18. The apparatus according to claim 17, characterized in that the training sample acquisition unit is specifically configured to:
    acquire, from a facial image database, the facial-texture information of multiple different people in different age brackets as training samples.
  19. A terminal, characterized by comprising a processor and a memory, the processor and the memory being connected to each other;
    wherein the memory is used to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-9.
  20. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-9.
PCT/CN2019/101947 2019-05-21 2019-08-22 基于微表情调节屏幕显示的方法及装置 WO2020232855A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910421947.2A CN110222597B (zh) 2019-05-21 2019-05-21 基于微表情调节屏幕显示的方法及装置
CN201910421947.2 2019-05-21

Publications (1)

Publication Number Publication Date
WO2020232855A1 true WO2020232855A1 (zh) 2020-11-26

Family

ID=67821445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101947 WO2020232855A1 (zh) 2019-05-21 2019-08-22 基于微表情调节屏幕显示的方法及装置

Country Status (2)

Country Link
CN (1) CN110222597B (zh)
WO (1) WO2020232855A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527106A (zh) * 2020-11-30 2021-03-19 崔刚 基于全视觉的控制系统
CN112766238A (zh) * 2021-03-15 2021-05-07 电子科技大学中山学院 年龄预测方法及装置
CN115499538A (zh) * 2022-08-23 2022-12-20 广东以诺通讯有限公司 屏幕显示字体调节方法、装置、存储介质和计算机设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459587A (zh) * 2020-03-27 2020-07-28 北京三快在线科技有限公司 信息显示方法、装置、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000305746A (ja) * 1999-04-16 2000-11-02 Mitsubishi Electric Corp 画面制御方式
CN106778623A (zh) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 一种终端屏幕控制方法、装置及电子设备
CN107292778A (zh) * 2017-05-19 2017-10-24 华中师范大学 一种基于认知情感感知的云课堂学习评价方法及其装置
CN108345874A (zh) * 2018-04-03 2018-07-31 苏州欧孚网络科技股份有限公司 一种根据视频图像识别人格特征的方法

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101133438B (zh) * 2005-03-01 2010-05-19 松下电器产业株式会社 电子显示介质和用于电子显示介质的屏幕控制方法
CN105607733B (zh) * 2015-08-25 2018-12-25 宇龙计算机通信科技(深圳)有限公司 调节方法、调节装置和终端
US20170092150A1 (en) * 2015-09-30 2017-03-30 Sultan Hamadi Aljahdali System and method for intelligently interacting with users by identifying their gender and age details
US10049263B2 (en) * 2016-06-15 2018-08-14 Stephan Hau Computer-based micro-expression analysis
US10515393B2 (en) * 2016-06-30 2019-12-24 Paypal, Inc. Image data detection for micro-expression analysis and targeted data services
CN106057171B (zh) * 2016-07-21 2019-05-24 Oppo广东移动通信有限公司 控制方法及控制装置
CN108960022B (zh) * 2017-09-19 2021-09-07 炬大科技有限公司 一种情绪识别方法及其装置
CN107507602A (zh) * 2017-09-22 2017-12-22 深圳天珑无线科技有限公司 屏幕亮度自动调节方法、终端及存储介质
CN107895146B (zh) * 2017-11-01 2020-05-26 深圳市科迈爱康科技有限公司 微表情识别方法、装置、系统及计算机可读存储介质
CN108256469A (zh) * 2018-01-16 2018-07-06 华中师范大学 脸部表情识别方法及装置
CN108989571B (zh) * 2018-08-15 2020-06-19 浙江大学滨海产业技术研究院 一种针对手机文字阅读的自适应字体调整方法及装置
CN109063679A (zh) * 2018-08-24 2018-12-21 广州多益网络股份有限公司 一种人脸表情检测方法、装置、设备、系统及介质
CN109543603B (zh) * 2018-11-21 2021-05-11 山东大学 一种基于宏表情知识迁移的微表情识别方法
CN109523852A (zh) * 2018-11-21 2019-03-26 合肥虹慧达科技有限公司 基于视觉监控的学习交互系统及其交互方法
CN109697421A (zh) * 2018-12-18 2019-04-30 深圳壹账通智能科技有限公司 基于微表情的评价方法、装置、计算机设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000305746A (ja) * 1999-04-16 2000-11-02 Mitsubishi Electric Corp 画面制御方式
CN106778623A (zh) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 一种终端屏幕控制方法、装置及电子设备
CN107292778A (zh) * 2017-05-19 2017-10-24 华中师范大学 一种基于认知情感感知的云课堂学习评价方法及其装置
CN108345874A (zh) * 2018-04-03 2018-07-31 苏州欧孚网络科技股份有限公司 一种根据视频图像识别人格特征的方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527106A (zh) * 2020-11-30 2021-03-19 崔刚 基于全视觉的控制系统
CN112766238A (zh) * 2021-03-15 2021-05-07 电子科技大学中山学院 年龄预测方法及装置
CN112766238B (zh) * 2021-03-15 2023-09-26 电子科技大学中山学院 年龄预测方法及装置
CN115499538A (zh) * 2022-08-23 2022-12-20 广东以诺通讯有限公司 屏幕显示字体调节方法、装置、存储介质和计算机设备
CN115499538B (zh) * 2022-08-23 2023-08-22 广东以诺通讯有限公司 屏幕显示字体调节方法、装置、存储介质和计算机设备

Also Published As

Publication number Publication date
CN110222597A (zh) 2019-09-10
CN110222597B (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
WO2020232855A1 (zh) 基于微表情调节屏幕显示的方法及装置
CN108427503B (zh) 人眼追踪方法及人眼追踪装置
US9291834B2 (en) System for the measurement of the interpupillary distance using a device equipped with a display and a camera
US20200110440A1 (en) Wearable device having a display, lens, illuminator, and image sensor
WO2021004138A1 (zh) 一种屏幕显示方法、终端设备及存储介质
Tonsen et al. A high-level description and performance evaluation of pupil invisible
KR20200004841A (ko) 셀피를 촬영하도록 사용자를 안내하기 위한 시스템 및 방법
JP2016515242A (ja) 校正不要な注視点推定の方法と装置
US20150092983A1 (en) Method for calibration free gaze tracking using low cost camera
US11178389B2 (en) Self-calibrating display device
US20180055717A1 (en) Method and Device for Improving Visual Performance
CN107205635A (zh) 鉴别观察者的眼疾病的方法和执行该方法的装置
EP3699808B1 (en) Facial image detection method and terminal device
CN106526857B (zh) 调焦方法和装置
KR102271063B1 (ko) 가상 피팅 서비스 제공 방법, 장치 및 그 시스템
US20240112329A1 (en) Distinguishing a Disease State from a Non-Disease State in an Image
WO2018219290A1 (zh) 一种信息终端
EP3364371A1 (en) User device, server, and computer program stored in computer-readable medium for determining vision information
WO2024060418A1 (zh) 基于眼部异常姿态的异常屈光状态识别方法及装置
CN111588345A (zh) 眼部疾病检测方法、ar眼镜及可读存储介质
WO2022232414A9 (en) Methods, systems, and related aspects for determining a cognitive load of a sensorized device user
WO2021139446A1 (zh) 一种抗血管内皮生长因子vegf疗效预测装置及方法
US20230346276A1 (en) System and method for detecting a health condition using eye images
US20240013431A1 (en) Image capture devices, systems, and methods
TWI729338B (zh) 皮膚檢測方法及影像處理裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929342

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929342

Country of ref document: EP

Kind code of ref document: A1