CN110222597B - Method and device for adjusting screen display based on micro-expressions - Google Patents

Info

Publication number
CN110222597B
Authority
CN
China
Prior art keywords
information; micro-expression; screen display; change amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910421947.2A
Other languages
Chinese (zh)
Other versions
CN110222597A (en)
Inventor
张起
郑如刚
徐志成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910421947.2A priority Critical patent/CN110222597B/en
Priority to PCT/CN2019/101947 priority patent/WO2020232855A1/en
Publication of CN110222597A publication Critical patent/CN110222597A/en
Application granted granted Critical
Publication of CN110222597B publication Critical patent/CN110222597B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/174: Facial expression recognition
    • G06V 40/178: Estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose a method and device for adjusting screen display based on micro-expressions, applicable to emotion recognition. The method comprises the following steps: acquiring first micro-expression information corresponding to a first moment when a target user uses a terminal, and determining the target user age bracket corresponding to the first micro-expression information; determining a target display list from a plurality of display lists according to the target user age bracket; acquiring second micro-expression information corresponding to a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first and second micro-expression information; determining first screen display information from the target display list according to the first micro-expression change amount; and adjusting the screen display configuration of the terminal according to the first screen display information. With the embodiments of the application, different screen display adjustments can be made for users in different age groups who exhibit the same micro-expression change amount, which improves the accuracy of screen display adjustment and enhances user satisfaction.

Description

Method and device for adjusting screen display based on micro-expressions
Technical Field
The application relates to the field of image recognition, in particular to a method and a device for adjusting screen display based on micro-expressions.
Background
With the development of communication technology, terminals have entered more and more areas of people's lives, and an increasing amount of information is presented through terminal display screens. Today, people are accustomed to obtaining information by browsing pictures, reading text, and watching video on a terminal display. However, prolonged use of electronic products tends to cause dizziness, fatigue, and glare, and can even impair eyesight. At present, the display fonts and screen brightness of a terminal display screen are either fixed or must be adjusted manually by the user, which falls short of people's current pursuit of humanized, intelligent living and degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a method and device for adjusting screen display based on micro-expressions, so that different screen display adjustments can be made for users in different age groups who exhibit the same micro-expression change amount, improving the accuracy of screen display adjustment and enhancing user satisfaction.
In a first aspect, an embodiment of the present application provides a method for adjusting a screen display based on a micro-expression, the method including:
Acquiring first micro-expression information from a user face image collected at a first moment when a target user uses a terminal, and determining, according to an age detection model, the target user age bracket corresponding to the first micro-expression information;
determining a target display list from a plurality of display lists according to the target user age bracket, wherein each display list corresponds to one user age bracket, and each display list comprises a plurality of micro-expression change amounts for the corresponding user age bracket and screen display information corresponding to each micro-expression change amount;
acquiring second micro-expression information from a user face image collected at a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information, wherein the second moment is later than the first moment;
determining first screen display information corresponding to the first micro-expression change amount from the target display list according to the first micro-expression change amount;
and adjusting the screen display configuration of the terminal according to the first screen display information.
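As an illustrative outline only, the five steps of the first aspect can be sketched end to end as follows. Every function name and data structure here is an assumption made for demonstration, and the change amount computation is deliberately simplified to a mean absolute change; the patent does not prescribe concrete APIs or feature encodings.

```python
def predict_age_bracket(info):
    # Stand-in for the age detection model; the patent drives the model
    # mainly from facial texture information.  Thresholds are invented.
    return "60-85" if info["texture"] > 0.5 else "10-35"

def change_amount(info_1, info_2):
    # Simplified change amount: mean absolute change across the features
    # present in both measurements (the claims describe a weighted sum).
    keys = info_1.keys()
    return sum(abs(info_2[k] - info_1[k]) for k in keys) / len(keys)

def choose_display_info(info_1, info_2, display_lists):
    bracket = predict_age_bracket(info_1)                  # step 101
    target_list = display_lists[bracket]                   # step 102
    change = change_amount(info_1, info_2)                 # step 103
    nearest = min(target_list, key=lambda a: abs(a - change))
    return target_list[nearest]                            # step 104; step 105 applies it
```

In this sketch the display lists map change amounts to opaque display settings; a real terminal would pass the returned settings to its display configuration layer in step 105.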
With reference to the first aspect, in a possible implementation manner, the determining, according to the age detection model, the target user age bracket corresponding to the first micro-expression information includes:
And acquiring first facial texture information in the first micro-expression information, inputting the first facial texture information into an age detection model, and outputting a target user age bracket corresponding to the first facial texture information based on the age detection model.
With reference to the first aspect, in one possible implementation manner, the determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information includes:
acquiring second facial texture information in the second micro-expression information, and comparing the second facial texture information with first facial texture information in the first micro-expression information to obtain a first facial texture information change value; and/or
Acquiring second eyebrow spacing information in the second micro-expression information, and comparing the second eyebrow spacing information with first eyebrow spacing information in the first micro-expression information to obtain a first eyebrow spacing information change value; and/or
Acquiring second eye opening distance information in the second micro-expression information, and comparing the second eye opening distance information with first eye opening distance information in the first micro-expression information to obtain a first eye opening distance information change value; and/or
Acquiring second lip angle bending information in the second micro-expression information, and comparing the second lip angle bending information with first lip angle bending information in the first micro-expression information to obtain a first lip angle bending information change value;
and determining the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and/or the first lip angle bending information change value.
With reference to the first aspect, in one possible implementation manner, the determining the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value, and the first lip angle bending information change value includes:
and multiplying the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and the first lip angle bending information change value by corresponding weight values respectively, and then summing, and determining the sum value as the first micro expression change amount.
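As a concrete illustration, the weighted-sum computation described above could look like the following sketch. The weight values are invented for illustration; the patent does not specify them.

```python
def micro_expression_change(texture_delta, eyebrow_delta, eye_delta, lip_delta,
                            weights=(0.3, 0.2, 0.3, 0.2)):
    # Weighted sum of the four per-feature change values.  Each delta is a
    # fractional change (e.g. 0.10 for a 10% change); the weight values
    # here are assumed, not taken from the patent.
    deltas = (texture_delta, eyebrow_delta, eye_delta, lip_delta)
    return sum(w * d for w, d in zip(weights, deltas))
```

With weights summing to 1, equal 10% changes in all four features yield a 10% overall change amount.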
With reference to the first aspect, in a possible implementation manner, the determining, from the target display list, first screen display information corresponding to the first micro-expression change amount according to the first micro-expression change amount includes:
And matching the first micro-expression change amount with a plurality of micro-expression change amounts in the target display list, and determining screen display information corresponding to the successfully matched micro-expression change amounts as the first screen display information.
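The matching step could be sketched as below. Since measured change amounts are continuous values, a nearest-entry lookup is shown as one practical reading of "successfully matched"; the display list contents are hypothetical.

```python
def match_display_info(change, display_list):
    # display_list maps micro-expression change amounts to screen display
    # information.  Nearest-entry lookup is an assumed interpretation of
    # the matching described in the claims.
    nearest = min(display_list, key=lambda amount: abs(amount - change))
    return display_list[nearest]

# Hypothetical display list for one age bracket
first_list = {
    0.10: {"font_scale": 1.03, "brightness_scale": 1.03, "spacing_scale": 1.03},
    0.20: {"font_scale": 1.06, "brightness_scale": 1.06, "spacing_scale": 1.06},
}
```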
With reference to the first aspect, in a possible implementation manner, after the screen display configuration of the terminal is adjusted according to the first screen display information, the method further includes:
acquiring the adjustment times of the screen display of the terminal in the process of using the terminal by the target user;
and if the adjustment times are greater than or equal to the preset times, adjusting screen display information corresponding to each micro-expression change amount in the target display list.
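The count-triggered list update above could be sketched as follows. The patent leaves the concrete update rule open; uniformly increasing each scale factor, and the preset count of 3, are illustrative choices only.

```python
def maybe_update_display_list(display_list, adjustment_count,
                              preset_count=3, step=0.01):
    # If the screen display was adjusted at least `preset_count` times in
    # one session, adjust every entry of the target display list.  The
    # uniform `step` increase is an assumed rule, not the claimed method.
    if adjustment_count >= preset_count:
        for info in display_list.values():
            for key in info:
                info[key] += step
    return display_list
```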
With reference to the first aspect, in a possible implementation manner, the adjusting the current screen display configuration of the terminal according to the first screen display information includes:
adjusting the current screen display font size of the terminal according to the first screen display font size in the first screen display information; and/or
Adjusting the screen display brightness of the terminal according to the first screen display brightness in the first screen display information; and/or
And adjusting the current screen display font spacing of the terminal according to the first screen display font spacing in the first screen display information.
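Applying the three kinds of screen display information could be sketched as below. The key names and scale-factor encoding are assumptions for illustration; brightness is clamped to its valid range.

```python
def apply_display_config(current, info):
    # Apply first screen display information (font size, brightness, font
    # spacing) to the terminal's current configuration.  Any setting absent
    # from `info` is left unchanged via a neutral scale of 1.0.
    return {
        "font_size": current["font_size"] * info.get("font_scale", 1.0),
        "brightness": min(1.0, current["brightness"] * info.get("brightness_scale", 1.0)),
        "font_spacing": current["font_spacing"] * info.get("spacing_scale", 1.0),
    }
```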
In a second aspect, an embodiment of the present application provides an apparatus for adjusting a screen display based on a micro-expression, the apparatus including:
the micro-expression information acquisition module is used for acquiring first micro-expression information in a user face image acquired at a first moment when a target user uses the terminal, and determining a target user age bracket corresponding to the first micro-expression information according to an age detection model;
the target display list determining module is used for determining a target display list from a plurality of display lists according to the target user age bracket determined by the micro-expression information acquiring module, wherein each display list corresponds to one user age bracket, and each display list comprises a plurality of micro-expression change amounts for the corresponding user age bracket and screen display information corresponding to each micro-expression change amount;
the micro-expression change amount determining module is used for acquiring second micro-expression information from a user face image collected at a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information determined by the micro-expression information acquiring module and the second micro-expression information, wherein the second moment is later than the first moment;
A screen display information determining module, configured to determine, from the target display list determined by the target display list determining module, first screen display information corresponding to the first micro-expression change amount according to the first micro-expression change amount determined by the micro-expression change amount determining module;
and the screen display information adjusting module is used for adjusting the screen display configuration of the terminal according to the first screen display information determined by the screen display information determining module.
With reference to the second aspect, in one possible implementation manner, the foregoing micro-expression information obtaining module is configured to:
and acquiring first facial texture information in the first micro-expression information, inputting the first facial texture information into an age detection model, and outputting a target user age bracket corresponding to the first facial texture information based on the age detection model.
With reference to the second aspect, in one possible implementation manner, the foregoing micro-expression change amount determining module includes:
a facial texture information change determining unit, configured to obtain second facial texture information in the second micro-expression information, and compare the second facial texture information with first facial texture information in the first micro-expression information to obtain a first facial texture information change value; and/or
The eyebrow interval information change determining unit is used for acquiring second eyebrow interval information in the second micro-expression information, and comparing the second eyebrow interval information with first eyebrow interval information in the first micro-expression information to obtain a first eyebrow interval information change value; and/or
An eye opening distance information change determining unit configured to acquire second eye opening distance information in the second microexpressive information, and compare the second eye opening distance information with first eye opening distance information in the first microexpressive information to obtain a first eye opening distance information change value; and/or
The lip angle bending information change determining unit is used for acquiring second lip angle bending information in the second micro-expression information, and comparing the second lip angle bending information with first lip angle bending information in the first micro-expression information to obtain a first lip angle bending information change value;
and the micro-expression change amount determining unit is used for determining the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and/or the first lip angle bending information change value.
With reference to the second aspect, in one possible implementation manner, the foregoing micro-expression change amount determining unit is configured to:
and multiplying the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and the first lip angle bending information change value by corresponding weight values respectively, and then summing, and determining the sum value as the first micro expression change amount.
With reference to the second aspect, in one possible implementation manner, the above screen display information determining module is configured to:
and matching the first micro-expression change amount with a plurality of micro-expression change amounts in the target display list, and determining screen display information corresponding to the successfully matched micro-expression change amounts as the first screen display information.
With reference to the second aspect, in a possible implementation manner, the apparatus for adjusting a screen display based on a micro-expression further includes:
a display list updating module, configured to obtain the adjustment times of the screen display of the terminal in the process that the target user uses the terminal;
and if the adjustment times are greater than or equal to the preset times, adjusting screen display information corresponding to each micro-expression change amount in the target display list.
With reference to the second aspect, in one possible implementation manner, the above screen display information adjusting module is configured to:
adjusting the current screen display font size of the terminal according to the first screen display font size in the first screen display information; and/or
Adjusting the screen display brightness of the terminal according to the first screen display brightness in the first screen display information; and/or
And adjusting the current screen display font spacing of the terminal according to the first screen display font spacing in the first screen display information.
In a third aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, and the processor and the memory are connected to each other. The memory is configured to store a computer program supporting the terminal to perform the method provided by the first aspect and/or any of the possible implementation manners of the first aspect, the computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method provided by the first aspect and/or any of the possible implementation manners of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method provided by the first aspect and/or any of the possible implementations of the first aspect.
The embodiment of the application has the following beneficial effects:
according to the embodiment of the application, the corresponding age bracket of the target user can be determined according to the first micro-expression information by acquiring the first micro-expression information corresponding to the first moment in the process of using the terminal by the target user, and the target display list can be determined from a plurality of display lists according to the age bracket of the target user. The first micro-expression change amount can be determined by acquiring second micro-expression information corresponding to a second moment in the process that the target user uses the terminal and combining the first micro-expression information, the first screen display information corresponding to the first micro-expression change amount can be determined from the target display list according to the determined first micro-expression change amount, and finally the screen display information of the current terminal can be adjusted according to the first screen display information. According to the embodiment of the application, different display lists are set for the micro-expression change amounts of different age groups of the user, so that different screen display adjustment can be performed for the users with the same micro-expression change amount and different age groups, the accuracy of the screen display adjustment is improved, and the user satisfaction is enhanced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a method for adjusting a screen display based on a micro-expression according to an embodiment of the present application;
fig. 2 is another flow chart of a method for adjusting a screen display based on a micro-expression according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a micro-expression based adjustment screen display device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The method for adjusting screen display based on micro-expressions provided by the embodiments of the application can be widely applied to all kinds of terminals with display screens, such as smartphones, desktop computers, notebook computers, tablet computers, self-service terminals, and intelligent marketing devices, which for convenience are uniformly referred to as terminals below. A user face image corresponding to a first moment can be collected by a camera on the terminal, or by an external camera connected to the terminal, while the target user is using the terminal. From the first micro-expression information in this face image, the target user age bracket can be determined according to an age detection model, and a target display list can be determined from a plurality of display lists according to that age bracket. By acquiring second micro-expression information from a user face image at a second moment and combining it with the first micro-expression information, a first micro-expression change amount can be determined; first screen display information corresponding to this change amount can be determined from the target display list; and the terminal's current screen display can be adjusted according to the first screen display information. Because different display lists are set for the micro-expression change amounts of different user age groups, different screen display adjustments can be made for users in different age groups who exhibit the same micro-expression change amount, which improves the accuracy of screen display adjustment and enhances user satisfaction.
The method and the related device according to the embodiments of the present application will be described in detail below with reference to fig. 1 to 4, respectively. The method provided by the embodiment of the application can comprise the data processing stages of acquiring the micro-expression information, determining the age of the target user, determining the target display list, determining the micro-expression change amount and corresponding screen display information, adjusting the current screen display of the terminal based on the screen display information, and the like. The implementation of the respective data processing stages described above can be seen in the implementation shown in fig. 1 to 2.
Referring to fig. 1, fig. 1 is a flow chart illustrating a method for adjusting a screen display based on a micro-expression according to an embodiment of the present application. The method provided by the embodiment of the application can comprise the following steps 101 to 105:
101. Acquiring first micro-expression information from a user face image collected at a first moment when the target user uses the terminal, and determining the target user age bracket corresponding to the first micro-expression information according to the age detection model.
In some possible embodiments, facial features and pupil color change to different extents as the user ages; the facial feature changes mainly comprise face shape changes and facial texture changes, such as growth of the facial bones, changes in facial muscle elasticity, and wrinkles. During adolescence, facial feature changes are dominated by changes in face shape, whereas in adulthood the influence of age on the face is concentrated in facial texture changes. A user face image captured while the user is using the terminal can be collected by a camera on the terminal or an external camera connected to the terminal, and the face image includes at least micro-expression information. The micro-expression information includes facial texture information, eyebrow spacing information, eye opening distance information, lip angle bending information, face shape information, pupil color information, and the like, determined according to the actual application scenario and not limited here. Optionally, by analyzing the collected user face image, it can also be determined whether an eye-covering object, such as myopia glasses, presbyopic glasses, or sunglasses, is present on the user's face.
In some possible embodiments, a user face image corresponding to a first moment can be obtained by using a camera on the terminal, or an external camera connected to the terminal, to capture the target user's face at the first moment while the terminal is in use. The face image at the first moment includes at least first micro-expression information, which comprises first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip angle bending information, face shape information, pupil color information, and the like. The first facial texture information and/or first eyebrow spacing information and/or face shape information and/or pupil color information in the target user's first micro-expression information is acquired and input into the age detection model, which then outputs the target user age bracket. The construction of the age detection model can comprise data processing stages such as collecting modeling data, training the model, and testing the model. It is to be understood that the modeling data of the age detection model may be derived from facial feature information, such as facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information, of the same person in different age brackets in a face image database.
Optionally, to improve the prediction accuracy of the age detection model, the modeling data may also be derived from the facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information of a large number of different people in different age brackets in the face image database. During training, an initial network model learns from input feature pairs, each pairing a user age with the corresponding facial feature information (facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information), and the age detection model is constructed from the associations learned between ages and facial features. These feature pairs may come from the same person at different ages or from a large number of different people. After the age detection model is built, the facial texture information and/or eyebrow spacing information and/or face shape information and/or pupil color information of any group of users with known ages can be collected as test data for the model.
Each group of test data is input into the constructed age detection model, and the user age output by the model is compared with the actual age of the user. If the error between the output age and the actual age is smaller than a preset precision, the constructed age detection model meets the construction requirements; otherwise it does not, and training of the age detection model continues until the requirements are met.
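The build-then-test pipeline described above can be sketched as follows. This is a minimal illustrative stand-in (a nearest-neighbour lookup over hypothetical numeric feature vectors), not the patent's actual model; the feature names, values, bracket boundaries, and the `max_error_rate` threshold are all assumptions for illustration.

```python
# Minimal sketch of the age-bracket detection and acceptance-test stages.
# All feature vectors and bracket boundaries below are hypothetical.

AGE_BRACKETS = ["10-35", "36-59", "60-85"]

# Hypothetical modeling data: (facial feature vector, age bracket).
# Features: [texture_density, eyebrow_spacing_mm, face_shape_ratio]
TRAINING_SET = [
    ([0.2, 22.0, 1.30], "10-35"),
    ([0.5, 20.5, 1.28], "36-59"),
    ([0.9, 19.0, 1.25], "60-85"),
]

def predict_age_bracket(features):
    """Nearest-neighbour stand-in for the trained age detection model."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING_SET, key=lambda row: dist(row[0], features))[1]

def passes_acceptance_test(test_pairs, max_error_rate=0.2):
    """Mirror of the testing stage: compare predictions against known
    brackets and accept the model only if the error rate is small."""
    errors = sum(1 for feats, bracket in test_pairs
                 if predict_age_bracket(feats) != bracket)
    return errors / len(test_pairs) <= max_error_rate
```

If `passes_acceptance_test` returns `False`, the model would be retrained with more data, mirroring the "continue training until the requirements are met" loop above.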
102. And determining a target display list from the display lists according to the age range of the target user.
In some possible embodiments, as people age, the muscles around the human eye weaken over time, so the requirements for screen display differ across age groups. The elasticity of facial muscles also changes with age; in other words, when people of different ages express the same emotion through their facial muscles, the degree of muscle movement differs, i.e., the micro-expression change amount differs. Therefore, a display list can be constructed for each age group, wherein each display list comprises a plurality of micro-expression change amounts of the corresponding user age group and the screen display information corresponding to each micro-expression change amount, and the screen display information comprises one or more of screen display font size, screen display brightness and screen display font spacing. Thus, a target display list may be determined from the plurality of display lists based on the determined age bracket of the target user.
For example, assume that the user age groups include a first age group, a second age group and a third age group, wherein users of the first age group are aged 10 to 35 years, users of the second age group are aged 36 to 59 years, and users of the third age group are aged 60 to 85 years. The display list corresponding to the first age group is a first display list, the display list corresponding to the second age group is a second display list, and the display list corresponding to the third age group is a third display list. The first, second and third display lists all comprise a plurality of micro-expression change amounts and the screen display information corresponding to each micro-expression change amount. It is easy to understand that, because people of different age groups express the same emotion with different degrees of facial muscle movement, the screen display information corresponding to the same micro-expression change amount differs between the first, second and third display lists. For example, for a micro-expression change amount of 10%, the corresponding screen display information in the first display list is that the screen display font size, screen display brightness and screen display font spacing are all increased by 3%; in the second display list, all increased by 5%; and in the third display list, all increased by 8%.
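The per-age-group lookup above can be sketched as a small table. The percentage values mirror the example; the dictionary layout is an illustrative assumption, not the patent's data structure.

```python
# Illustrative per-age-bracket display lists: a micro-expression change
# amount maps to the increase applied to font size, brightness and
# font spacing.  Values mirror the example in the text.

DISPLAY_LISTS = {
    "10-35": {0.10: 0.03},  # 10% change -> all settings increased by 3%
    "36-59": {0.10: 0.05},
    "60-85": {0.10: 0.08},
}

def screen_adjustment(age_bracket, change_amount):
    """Select the target display list by age bracket, then look up the
    screen display adjustment for the given change amount."""
    target_list = DISPLAY_LISTS[age_bracket]
    return target_list[change_amount]
```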
Alternatively, in some possible embodiments, myopia refers to a refractive error in which the eye's ability to recognize a distant target is reduced while near vision remains normal. Presbyopia is a physiological phenomenon rather than a pathological state or a refractive error; it is a visual problem that inevitably appears as people enter middle and old age, and is one of the signals of bodily aging. In general, myopia glasses are worn by teenagers or middle-aged people, while presbyopic glasses are worn by the elderly; although both are used to correct eyesight, their underlying causes are completely different. Thus, by analyzing the collected user face image of the target user, it is also possible to determine whether eyewear is present on the face image, wherein the eyewear includes myopia glasses, presbyopic glasses, sunglasses (sunglasses are generally distinguished from other glasses by their colored lenses) and the like. If glasses are present on the face image of the target user and the lenses are not colored, the type of glasses can be inferred from the determined age bracket of the target user, thereby indicating the eye health state of the target user. If the age bracket of the target user is the first age bracket and the user wears glasses, the glasses type is myopia glasses, indicating that the target user is myopic; if the age bracket of the target user is the third age bracket and the user wears glasses, the glasses type is presbyopic glasses, indicating that the target user is presbyopic.
Although myopia and presbyopia can be corrected by wearing the respective glasses, in practice a vision gap remains between myopic users wearing myopia glasses and non-myopic users of the same age group, and between presbyopic users wearing presbyopic glasses and non-presbyopic users of the same age group; the sensitivity of their eyes to the external environment (here, screen display information such as screen display font size, screen display brightness and screen display font spacing) also differs. Thus, in the display list corresponding to each user age group, different screen display information can be set for the same micro-expression change amount under the two screening conditions of wearing glasses and not wearing glasses.
For example, assume again that the user age groups include a first age group (users aged 10 to 35 years), a second age group (users aged 36 to 59 years) and a third age group (users aged 60 to 85 years), with corresponding first, second and third display lists. The first, second and third display lists all comprise a plurality of micro-expression change amounts and the screen display information corresponding to each micro-expression change amount, and as noted above, the screen display information corresponding to the same micro-expression change amount differs between the three lists. For users of the same age group, the screen display information for the same micro-expression change amount should further differ between users wearing glasses and users not wearing glasses, so each display list can also include different screen display information for the same micro-expression change amount under the with-glasses and without-glasses conditions.
For example, for a micro-expression change amount of 10% without glasses, the corresponding screen display information is that the screen display font size, screen display brightness and screen display font spacing are all increased by 3% in the first display list, by 5% in the second display list, and by 8% in the third display list. With glasses, the corresponding screen display information for the same 10% change amount is an increase of 4% in the first display list, 7% in the second display list, and 11% in the third display list.
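The with-glasses/without-glasses screening condition can be folded into the lookup key. The percentages mirror the example above; the `(glasses, change)` tuple key is an illustrative assumption.

```python
# Display lists keyed by (wears_glasses, change_amount), using the
# example percentages from the text.

DISPLAY_LISTS_WITH_GLASSES = {
    "10-35": {(False, 0.10): 0.03, (True, 0.10): 0.04},
    "36-59": {(False, 0.10): 0.05, (True, 0.10): 0.07},
    "60-85": {(False, 0.10): 0.08, (True, 0.10): 0.11},
}

def screen_adjustment_for_glasses(age_bracket, wears_glasses, change_amount):
    """Look up the adjustment under the glasses screening condition."""
    return DISPLAY_LISTS_WITH_GLASSES[age_bracket][(wears_glasses, change_amount)]
```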
Optionally, in some possible embodiments, the brightness of the surrounding environment and the distance between the user's eyes and the terminal display screen also affect how the user experiences the current screen display font size, screen display brightness and screen display font spacing. Therefore, the ambient brightness, and/or the ratio of the ambient brightness to the screen display brightness of the terminal display screen, and/or the detected distance between the user's eyes and the terminal display screen, can be incorporated into each display list as screening conditions, with different screen display information set for the same micro-expression change amount under different conditions.
103. And acquiring second micro-expression information in a user face image acquired at a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information.
In some possible embodiments, the user face image corresponding to the second moment can be obtained by using a camera on the terminal or an external camera connected with the terminal to collect the user face image of the target user when the terminal is used at the second moment, wherein the second moment is another moment after the first moment passes through the preset duration or any moment after the first moment, the user face image at the second moment at least comprises second micro-expression information, and the second micro-expression information comprises second face texture information, second eyebrow interval information, second eye opening distance information, second lip angle bending information, face shape information, pupil color information and the like. The first facial texture information change value can be obtained by acquiring second facial texture information in the second micro-expression information and comparing the second facial texture information with the first facial texture information in the first micro-expression information. And obtaining a first eyebrow interval information change value by obtaining second eyebrow interval information in the second micro-expression information and comparing the second eyebrow interval information with the first eyebrow interval information in the first micro-expression information. And obtaining a first eye opening distance information change value by acquiring second eye opening distance information in the second micro-expression information and comparing the second eye opening distance information with first eye opening distance information in the first micro-expression information. And obtaining a first lip angle bending information change value by acquiring second lip angle bending information in the second micro-expression information and comparing the second lip angle bending information with the first lip angle bending information in the first micro-expression information. 
And determining a first micro-expression change amount according to the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value and/or the first lip angle bending information change value. The first micro-expression change amount may be based on one or more of these four information change values. For example, if only one of the four information change values is used as the first micro-expression change amount, it may be the maximum or the minimum of the four. If the first micro-expression change amount is determined from all four information change values, each change value is multiplied by a corresponding weight value and the products are summed, the sum being determined as the first micro-expression change amount; alternatively, the four change values are summed directly and the sum is determined as the first micro-expression change amount.
For example, assume that the first facial texture information change value, the first eyebrow spacing information change value, the first eye opening distance information change value and the first lip angle bending information change value are 5%, 3%, 3% and 1%, respectively. If only the maximum of the four information change values is taken as the first micro-expression change amount, the first micro-expression change amount is 5%. If the first micro-expression change amount is determined from all four change values, with weight values of 5/12, 3/12, 3/12 and 1/12 respectively, the weighted sum is 3.7%, i.e. the first micro-expression change amount is 3.7%.
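The two combination rules above (take the maximum, or weight and sum) can be sketched directly; the function name and argument order are illustrative.

```python
def micro_expression_change(changes, weights=None):
    """Combine per-feature change values into one micro-expression
    change amount.

    changes: change values for [facial texture, eyebrow spacing,
             eye opening distance, lip angle bending], as fractions.
    With no weights, the maximum change value is used (the single-value
    variant); otherwise the weighted sum is returned.
    """
    if weights is None:
        return max(changes)
    return sum(c * w for c, w in zip(changes, weights))
```

With the example values 5%, 3%, 3%, 1% and weights 5/12, 3/12, 3/12, 1/12, the weighted sum is (25 + 9 + 9 + 1)/12 % ≈ 3.7%, matching the text.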
104. And determining first screen display information corresponding to the first micro-expression change amount from the target display list according to the first micro-expression change amount.
In some possible embodiments, the first micro-expression change amount is matched with a plurality of micro-expression change amounts in the target display list, so that the screen display information corresponding to the successfully matched micro-expression change amount can be determined as the first screen display information.
Optionally, in some possible embodiments, if the target display list includes screen display information corresponding to each micro-expression change amount under screening conditions such as glasses and/or the ambient brightness and/or the ratio of the ambient brightness to the screen display brightness of the terminal display screen and/or the distance between the user's eyes and the terminal display screen, then after the first micro-expression change amount is matched against the plurality of micro-expression change amounts in the target display list, the screening conditions corresponding to the first micro-expression change amount are matched one by one, and the screen display information for which both the micro-expression change amount and every screening condition match successfully is determined to be the first screen display information.
Optionally, in some possible embodiments, if the target display list does not include screen display information under those screening conditions (glasses and/or ambient brightness and/or the ratio of ambient brightness to screen display brightness and/or the eye-to-screen distance), the first micro-expression change amount is matched against the plurality of micro-expression change amounts in the target display list, and the screen display information corresponding to the successfully matched change amount, adjusted up or down by a certain margin, is determined to be the first screen display information.
For example, assume the display list contains only the screen display information corresponding to each micro-expression change amount, with no other screening condition. If the age bracket of the target user is the first age bracket, the target display list is determined to be the first display list, in which a micro-expression change amount of 10% corresponds to increasing the screen display font size, screen display brightness and screen display font spacing by 3%, and a change amount of 15% corresponds to increasing them by 5%. If the first micro-expression change amount of the target user is 10% and the user wears glasses, querying the target display list yields screen display information of a 3% increase in screen display font size, screen display brightness and screen display font spacing; because the target user wears glasses, this can be adjusted up by 1%, so the first screen display information is an increase of 4% in screen display font size, screen display brightness and screen display font spacing.
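Step 104 with the glasses-based up-adjustment can be sketched as follows; the 1% margin and the list contents follow the example, while the function name is an assumption.

```python
# Base display list for the first age bracket, mirroring the example:
# a 10% change maps to +3%, a 15% change maps to +5%.
FIRST_DISPLAY_LIST = {0.10: 0.03, 0.15: 0.05}

def first_screen_display_info(change_amount, wears_glasses,
                              glasses_margin=0.01):
    """Match the change amount against the target display list, then
    adjust the base value up by a fixed margin if glasses are worn."""
    base = FIRST_DISPLAY_LIST[change_amount]
    return base + glasses_margin if wears_glasses else base
```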
105. And adjusting the screen display configuration of the current terminal according to the first screen display information.
In some possible embodiments, the screen display font size of the current terminal may be adjusted according to the determined first screen display font size in the first screen display information. The screen display brightness of the current terminal can be adjusted according to the first screen display brightness in the first screen display information. And adjusting the screen display font spacing of the current terminal according to the first screen display font spacing in the first screen display information.
In the embodiment of the application, the user face image corresponding to the first moment can be obtained by collecting, through a camera on the terminal or an external camera connected with the terminal, a face image of the target user while the terminal is used at the first moment, wherein the user face image at the first moment at least comprises first micro-expression information, and the first micro-expression information comprises first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip angle bending information, face shape information, pupil color information and the like. By inputting the first facial texture information and/or the first eyebrow spacing information and/or the face shape information and/or the pupil color information in the acquired first micro-expression information into the age detection model, the age bracket of the target user can be output by the age detection model. The target display list may be determined from a plurality of display lists according to the age bracket of the target user. A face image of the target user while using the terminal at a second moment can likewise be acquired by the camera on the terminal or an external camera connected with the terminal, wherein the user face image at the second moment at least comprises second micro-expression information, and the second micro-expression information comprises second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip angle bending information, face shape information, pupil color information and the like.
By comparing each item of information included in the first micro-expression information with the corresponding item in the second micro-expression information, a first facial texture information change value, a first eyebrow spacing information change value, a first eye opening distance information change value and/or a first lip angle bending information change value can be obtained; a first micro-expression change amount can be determined from the obtained information change values, the first screen display information corresponding to the first micro-expression change amount is then determined from the target display list, and the screen display of the current terminal can be adjusted according to the first screen display information. According to the embodiment of the application, the micro-expression change amounts of different user age groups correspond to different display lists, so that different screen display adjustments can be made for users of different age groups with the same micro-expression change amount; and because the micro-expression change amount is measured comprehensively from multiple information change values, the accuracy of the screen display adjustment is improved, user satisfaction is enhanced, and flexibility is high.
Referring to fig. 2, fig. 2 is another flow chart of a method for adjusting a screen display based on a micro-expression according to an embodiment of the present application. The method for adjusting the screen display based on the micro-expressions provided by the embodiment of the application can be illustrated by the implementation manner provided by the following steps 201 to 206:
201. and acquiring first micro-expression information in a user face image acquired at a first moment when the target user uses the terminal, and determining a target user age bracket corresponding to the first micro-expression information according to the age detection model.
202. And determining a target display list from the display lists according to the age range of the target user.
203. And acquiring second micro-expression information in a user face image acquired at a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information.
204. And determining first screen display information corresponding to the first micro-expression change amount from the target display list according to the first micro-expression change amount.
For the specific implementation of steps 201 to 204, reference may be made to steps 101 to 104 in the embodiment corresponding to fig. 1, which is not repeated here.
205. And adjusting the screen display configuration of the current terminal according to the first screen display information.
In some possible embodiments, the screen display font size of the current terminal may be adjusted according to the determined first screen display font size in the first screen display information. The screen display brightness of the current terminal can be adjusted according to the first screen display brightness in the first screen display information. And adjusting the screen display font spacing of the current terminal according to the first screen display font spacing in the first screen display information.
Optionally, in some possible embodiments, after one adjustment of the screen display information has been performed, in order to continue to observe whether the user is satisfied with the adjusted screen display of the terminal, a camera provided on the terminal or an external camera connected to the terminal may be used to collect a third user face image corresponding to a third moment at which the target user uses the terminal. A second adjustment, or further adjustments, of the screen display information can then be performed according to the specific implementation of steps 201-205, and each time the screen display information is adjusted, the number of adjustments of the screen display during the target user's use of the terminal is recorded.
206. And acquiring the adjustment times of the screen display of the terminal in the process of using the terminal by the target user, and if the adjustment times are greater than or equal to the preset times, adjusting the screen display information corresponding to each micro expression change amount in the target display list.
In some possible embodiments, the number of adjustments of the screen display of the terminal during the target user's use of the terminal is obtained and compared with the preset number. If the number of adjustments is greater than or equal to the preset number, it indicates that the target user is still dissatisfied with the screen display even after multiple adjustments of the screen display information. At this time, the history of adjustments to the screen display information of the terminal can be obtained and analyzed, and each item of screen display information corresponding to each micro-expression change amount in the target display list can be adjusted, optimized or updated accordingly.
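Step 206 can be sketched as follows. The patent does not specify how the history is analyzed, so the averaging rule below (moving each list entry halfway toward each applied adjustment) and the threshold value are illustrative assumptions.

```python
# Sketch of step 206: once the adjustment count reaches a preset
# threshold, refit each display-list entry from the history of
# actually applied adjustments.  The averaging rule is an assumption.

PRESET_COUNT = 3

def maybe_optimize(display_list, history):
    """history: list of (change_amount, applied_adjustment) records
    accumulated during the target user's use of the terminal."""
    if len(history) < PRESET_COUNT:
        return display_list  # user appears satisfied; keep the list
    updated = dict(display_list)
    for change_amount, applied in history:
        old = updated.get(change_amount, applied)
        # Move the stored value toward what was actually applied.
        updated[change_amount] = (old + applied) / 2
    return updated
```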
In the embodiment of the application, the user face image corresponding to the first moment can be obtained by collecting, through a camera on the terminal or an external camera connected with the terminal, a face image of the target user while the terminal is used at the first moment, wherein the user face image at the first moment at least comprises first micro-expression information, and the first micro-expression information comprises first facial texture information, first eyebrow spacing information, first eye opening distance information, first lip angle bending information, face shape information, pupil color information and the like. By inputting the first facial texture information and/or the first eyebrow spacing information and/or the face shape information and/or the pupil color information in the acquired first micro-expression information into the age detection model, the age bracket of the target user can be output by the age detection model. The target display list may be determined from a plurality of display lists according to the age bracket of the target user. A face image of the target user while using the terminal at a second moment can likewise be acquired by the camera on the terminal or an external camera connected with the terminal, wherein the user face image at the second moment at least comprises second micro-expression information, and the second micro-expression information comprises second facial texture information, second eyebrow spacing information, second eye opening distance information, second lip angle bending information, face shape information, pupil color information and the like.
By comparing each item of information included in the first micro-expression information with the corresponding item in the second micro-expression information, a first facial texture information change value, a first eyebrow spacing information change value, a first eye opening distance information change value and/or a first lip angle bending information change value can be obtained; a first micro-expression change amount can be determined from the obtained information change values, and the first screen display information corresponding to the first micro-expression change amount is then determined from the target display list. The screen display of the current terminal is adjusted according to the first screen display information, and each time the screen display is adjusted, the number of adjustments during the target user's use of the terminal is recorded. If the number of adjustments is greater than or equal to the preset number, the history of adjustments is obtained and analyzed so as to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. According to the embodiment of the application, the micro-expression change amounts of different user age groups correspond to different display lists, so that different screen display adjustments can be made for users of different age groups with the same micro-expression change amount, improving the accuracy of screen display adjustment. Because the micro-expression change amount is measured comprehensively from multiple information change values, its accuracy, and hence the accuracy of the screen display adjustment, is further improved; and by optimizing the display lists, user satisfaction is improved and flexibility and applicability are enhanced.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a micro-expression based adjustment screen display device according to an embodiment of the present application. The device for adjusting the screen display based on the micro-expressions provided by the embodiment of the application comprises the following steps:
the micro-expression information obtaining module 31 is configured to obtain first micro-expression information in a user face image acquired at a first moment when the target user uses the terminal, and determine a target user age bracket corresponding to the first micro-expression information according to an age detection model;
a target display list determining module 32, configured to determine a target display list from a plurality of display lists according to the target user age bracket determined by the micro-expression information obtaining module 31, where each display list corresponds to one user age group and includes a plurality of micro-expression change amounts and the screen display information corresponding to each micro-expression change amount;
a micro-expression change amount determining module 33, configured to obtain second micro-expression information in a face image of the user acquired at a second moment when the target user uses the terminal, and determine a first micro-expression change amount according to the first micro-expression information determined by the micro-expression information obtaining module 31 and the second micro-expression information, where the second moment is a moment after the first moment;
a screen display information determining module 34, configured to determine, from the target display list determined by the target display list determining module 32, first screen display information corresponding to the first micro-expression change amount according to the first micro-expression change amount determined by the micro-expression change amount determining module 33;
a screen display information adjusting module 35 for adjusting the current screen display configuration of the terminal according to the first screen display information determined by the screen display information determining module 34.
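The display lists these modules operate on can be pictured as a small data structure keyed by age bracket. The sketch below is illustrative only — the age brackets, change-amount bounds, and setting values are assumptions, since the embodiment does not fix concrete numbers:

```python
# Hypothetical per-age-bracket display lists. Each entry pairs an upper bound
# on the micro-expression change amount with the screen display information
# to apply. All numbers here are invented for illustration.
DISPLAY_LISTS = {
    "18-40": [
        (0.3, {"font_size": 14, "brightness": 0.6, "font_spacing": 1.0}),
        (0.6, {"font_size": 16, "brightness": 0.7, "font_spacing": 1.1}),
        (1.0, {"font_size": 18, "brightness": 0.8, "font_spacing": 1.2}),
    ],
    "60+": [
        (0.3, {"font_size": 18, "brightness": 0.8, "font_spacing": 1.2}),
        (0.6, {"font_size": 22, "brightness": 0.9, "font_spacing": 1.4}),
        (1.0, {"font_size": 26, "brightness": 1.0, "font_spacing": 1.6}),
    ],
}

def get_target_display_list(age_bracket):
    """Select the target display list for the detected user age bracket."""
    return DISPLAY_LISTS[age_bracket]
```

Note how the same change amount maps to larger fonts for the older bracket, which is the point of keeping one list per age bracket.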
In some possible embodiments, the foregoing micro-expression information obtaining module 31 is configured to:
acquiring first facial texture information from the first micro-expression information, inputting the first facial texture information into the age detection model, and outputting the target user age bracket corresponding to the first facial texture information based on the age detection model.
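As a rough illustration of this age detection step, the stand-in below maps a facial-texture feature vector to an age bracket with a hand-made threshold rule. A real age detection model would be a trained classifier; the feature interpretation, thresholds, and bracket labels here are all assumptions:

```python
# Toy stand-in for the age detection model: treats the mean of a facial
# texture feature vector (e.g. a wrinkle-density score per region) as a
# proxy for age. Thresholds and labels are invented.
def detect_age_bracket(texture_features):
    """Map facial texture features to a user age bracket."""
    wrinkle_density = sum(texture_features) / len(texture_features)
    if wrinkle_density < 0.3:
        return "18-40"
    elif wrinkle_density < 0.6:
        return "40-60"
    return "60+"
```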
In some possible embodiments, the foregoing micro-expression change amount determining module 33 includes:
a facial texture information change determining unit 331, configured to obtain second facial texture information in the second micro-expression information, and compare the second facial texture information with first facial texture information in the first micro-expression information to obtain a first facial texture information change value; and/or
an eyebrow spacing information change determining unit 332, configured to obtain second eyebrow spacing information in the second micro-expression information, and compare the second eyebrow spacing information with first eyebrow spacing information in the first micro-expression information to obtain a first eyebrow spacing information change value; and/or
an eye opening distance information change determining unit 333, configured to acquire second eye opening distance information in the second micro-expression information, and compare the second eye opening distance information with first eye opening distance information in the first micro-expression information to obtain a first eye opening distance information change value; and/or
a lip angle bending information change determining unit 334, configured to obtain second lip angle bending information in the second micro-expression information, and compare the second lip angle bending information with first lip angle bending information in the first micro-expression information to obtain a first lip angle bending information change value;
the micro-expression change amount determining unit 335 is configured to determine the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value, and/or the first lip angle bending information change value.
In some possible embodiments, the foregoing micro-expression change amount determining unit 335 is configured to:
multiplying the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and the first lip angle bending information change value by their corresponding weight values, summing the products, and determining the sum as the first micro-expression change amount.
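The weighted combination described above can be sketched directly. The weight values below are assumptions — the embodiment leaves them to be configured:

```python
def micro_expression_change_amount(changes, weights):
    """Weighted sum of the individual information change values: each change
    value is multiplied by its weight and the products are summed."""
    return sum(c * w for c, w in zip(changes, weights))

# Order: facial texture, eyebrow spacing, eye opening distance, lip angle
# bending. Both the change values and the weights are illustrative.
changes = [0.2, 0.1, 0.4, 0.3]
weights = [0.25, 0.15, 0.35, 0.25]
amount = micro_expression_change_amount(changes, weights)  # 0.28
```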
In some possible embodiments, the above-mentioned screen display information determining module 34 is configured to:
matching the first micro-expression change amount against the plurality of micro-expression change amounts in the target display list, and determining the screen display information corresponding to the successfully matched micro-expression change amount as the first screen display information.
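One plausible reading of this matching step is a bucket lookup: the first micro-expression change amount is matched to the display-list entry whose change-amount range contains it. The bucket bounds and settings below are illustrative assumptions:

```python
def match_screen_display_info(change_amount, display_list):
    """Return the screen display info of the first bucket whose upper bound
    the measured change amount does not exceed; clamp to the last bucket."""
    for upper_bound, screen_info in display_list:
        if change_amount <= upper_bound:
            return screen_info
    return display_list[-1][1]

# Illustrative target display list: (upper bound, screen display info).
display_list = [
    (0.3, {"font_size": 16}),
    (0.6, {"font_size": 20}),
    (1.0, {"font_size": 24}),
]
```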
In some possible embodiments, the apparatus for adjusting a screen display based on a micro-expression further includes:
a display list updating module 36, configured to obtain the number of times the screen display of the terminal has been adjusted while the target user uses the terminal;
and, if the number of adjustments is greater than or equal to a preset count, adjust the screen display information corresponding to each micro-expression change amount in the target display list.
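A minimal sketch of this counting-and-update behaviour might look as follows. The averaging heuristic in `optimized_font_size` is an assumption, since the text only says the adjustment history is analysed:

```python
class DisplayListUpdater:
    """Counts screen display adjustments and, once a preset count is reached,
    derives updated display-list settings from the adjustment history."""

    def __init__(self, preset_count=5):
        self.preset_count = preset_count
        self.history = []  # (micro-expression change amount, applied font size)

    def record_adjustment(self, change_amount, font_size):
        """Record one adjustment; return True when the preset count is met."""
        self.history.append((change_amount, font_size))
        return len(self.history) >= self.preset_count

    def optimized_font_size(self):
        """Illustrative optimization: average of the historically applied sizes."""
        sizes = [size for _, size in self.history]
        return sum(sizes) / len(sizes)
```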
In some possible embodiments, the above-mentioned screen display information adjustment module 35 is configured to:
adjusting the current screen display font size of the terminal according to the first screen display font size in the first screen display information; and/or
adjusting the screen display brightness of the terminal according to the first screen display brightness in the first screen display information; and/or
adjusting the current screen display font spacing of the terminal according to the first screen display font spacing in the first screen display information.
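Since each of the three settings is optional ("and/or"), applying the first screen display information amounts to copying whichever fields are present. In the sketch below, a plain dict stands in for the terminal's real display-settings API, and the field names are assumptions:

```python
def apply_screen_display_info(screen, info):
    """Apply whichever fields are present in the first screen display info
    (font size and/or brightness and/or font spacing) to the screen state."""
    for key in ("font_size", "brightness", "font_spacing"):
        if key in info:
            screen[key] = info[key]
    return screen

# Example: only font size and brightness are provided; spacing is untouched.
screen = {"font_size": 14, "brightness": 0.5, "font_spacing": 1.0}
apply_screen_display_info(screen, {"font_size": 20, "brightness": 0.8})
```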
In a specific implementation, the device for adjusting screen display based on micro-expressions can execute, through its built-in function modules, the implementations provided by the steps in fig. 1 to 2. For example, the micro-expression information obtaining module 31 may perform the steps of collecting the user face image at the first moment, obtaining the first micro-expression information from it, and determining the target user age bracket; the target display list determining module 32 may perform the step of determining the target display list; the micro-expression change amount determining module 33 may perform the steps of collecting the user face image at the second moment, obtaining the second micro-expression information from it, and determining the first micro-expression change amount; and the screen display information determining module 34 may perform the step of determining the first screen display information corresponding to the first micro-expression change amount.
The screen display information adjusting module 35 may perform the step of adjusting the current screen display information according to the first screen display information, and the display list updating module 36 may perform the step of adjusting the screen display information corresponding to each micro-expression change amount in the display list. For details, refer to the implementations provided by the corresponding steps, which are not repeated here.
In the embodiment of the application, based on the user face image of the target user acquired at the first moment, the device for adjusting screen display based on micro-expressions can input the first facial texture information and/or the first eyebrow spacing information and/or the facial shape information and/or the pupil color information in the first micro-expression information into the age detection model, thereby obtaining the target user age bracket corresponding to the target user. The target display list can then be determined from the plurality of display lists based on the determined age bracket. The user face image acquired at the second moment, which at least includes the second micro-expression information, is also obtained. The first facial texture information, first eyebrow interval information, first eye opening distance information and/or first lip angle bending information included in the first micro-expression information are respectively compared with the second facial texture information, second eyebrow interval information, second eye opening distance information and/or second lip angle bending information included in the second micro-expression information, yielding the corresponding first facial texture information change value, first eyebrow interval information change value, first eye opening distance information change value and/or first lip angle bending information change value. A first micro-expression change amount is determined from the obtained information change values, and the first screen display information corresponding to the first micro-expression change amount is then determined in the target display list.
The screen display information of the current display screen is adjusted according to the first screen display information, and each time the screen display information is adjusted, the number of screen display adjustments made during the target user's use is recorded. If the number of adjustments is greater than or equal to a preset count, the historical adjustment records are obtained and analysed to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. In the embodiment of the application, the micro-expression change amounts of different user age brackets correspond to different display lists, so users in different age brackets with the same micro-expression change amount can receive different screen display adjustments, improving the accuracy of the adjustment. Because the micro-expression change amount is measured comprehensively from several information change values, its accuracy is improved, which in turn improves the accuracy of the screen display adjustment and user satisfaction; optimizing the display list further increases flexibility and applicability.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 4, the terminal in this embodiment may include: one or more processors 401 and a memory 402. The processor 401 and the memory 402 are connected via a bus 403. The memory 402 is used for storing a computer program comprising program instructions, and the processor 401 is used for executing the program instructions stored in the memory 402 to perform the following operations:
acquiring first micro-expression information in a user face image acquired at a first moment when a target user uses a terminal, and determining a target user age bracket corresponding to the first micro-expression information according to an age detection model;
determining a target display list from a plurality of display lists according to the target user age bracket, wherein each display list corresponds to one user age bracket, and each display list comprises a plurality of micro-expression change amounts of the corresponding user age bracket and screen display information corresponding to each micro-expression change amount;
acquiring second micro-expression information in a user face image acquired at a second moment when the target user uses the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information, wherein the second moment is a moment after the first moment;
Determining first screen display information corresponding to the first micro-expression change amount from the target display list according to the first micro-expression change amount;
and adjusting the screen display configuration of the terminal according to the first screen display information.
In some possible embodiments, the processor 401 is configured to:
acquiring first facial texture information from the first micro-expression information, inputting the first facial texture information into the age detection model, and outputting the target user age bracket corresponding to the first facial texture information based on the age detection model.
In some possible embodiments, the processor 401 is configured to:
acquiring second facial texture information in the second micro-expression information, and comparing the second facial texture information with first facial texture information in the first micro-expression information to obtain a first facial texture information change value; and/or
Acquiring second eyebrow spacing information in the second micro-expression information, and comparing the second eyebrow spacing information with first eyebrow spacing information in the first micro-expression information to obtain a first eyebrow spacing information change value; and/or
Acquiring second eye opening distance information in the second micro-expression information, and comparing the second eye opening distance information with first eye opening distance information in the first micro-expression information to obtain a first eye opening distance information change value; and/or
Acquiring second lip angle bending information in the second micro-expression information, and comparing the second lip angle bending information with first lip angle bending information in the first micro-expression information to obtain a first lip angle bending information change value;
and determining the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and/or the first lip angle bending information change value.
In some possible embodiments, the processor 401 is configured to:
multiplying the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and the first lip angle bending information change value by their corresponding weight values, summing the products, and determining the sum as the first micro-expression change amount.
In some possible embodiments, the processor 401 is configured to:
matching the first micro-expression change amount against the plurality of micro-expression change amounts in the target display list, and determining the screen display information corresponding to the successfully matched micro-expression change amount as the first screen display information.
In some possible embodiments, the processor 401 is configured to:
acquiring the adjustment times of the screen display of the terminal in the process of using the terminal by the target user;
and if the adjustment times are greater than or equal to the preset times, adjusting screen display information corresponding to each micro-expression change amount in the target display list.
In some possible embodiments, the processor 401 is configured to:
adjusting the current screen display font size of the terminal according to the first screen display font size in the first screen display information; and/or
adjusting the screen display brightness of the terminal according to the first screen display brightness in the first screen display information; and/or
adjusting the current screen display font spacing of the terminal according to the first screen display font spacing in the first screen display information.
It should be appreciated that, in some possible embodiments, the processor 401 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory 402 may include read-only memory and random access memory and provides instructions and data to the processor 401. A portion of the memory 402 may also include non-volatile random access memory. For example, the memory 402 may also store information of the device type.
In a specific implementation, the terminal may execute, through its built-in functional modules, the implementations provided by the steps in fig. 1 to 2; for details, refer to the implementations provided by those steps, which are not repeated here.
In the embodiment of the application, based on the user face image of the target user acquired at the first moment, the terminal can input the first facial texture information and/or the first eyebrow spacing information and/or the facial shape information and/or the pupil color information in the first micro-expression information into the age detection model, thereby obtaining the target user age bracket corresponding to the target user. The target display list can then be determined from the plurality of display lists based on the determined age bracket. The user face image acquired at the second moment, which at least includes the second micro-expression information, is also obtained. The first facial texture information, first eyebrow interval information, first eye opening distance information and/or first lip angle bending information included in the first micro-expression information are respectively compared with the second facial texture information, second eyebrow interval information, second eye opening distance information and/or second lip angle bending information included in the second micro-expression information, yielding the corresponding first facial texture information change value, first eyebrow interval information change value, first eye opening distance information change value and/or first lip angle bending information change value. A first micro-expression change amount is determined from the obtained information change values, and the first screen display information corresponding to the first micro-expression change amount is then determined in the target display list.
The screen display information of the current terminal is adjusted according to the first screen display information, and each time the screen display information is adjusted, the number of screen display adjustments made during the target user's use of the terminal is recorded. If the number of adjustments is greater than or equal to a preset count, the historical adjustment records are obtained and analysed to adjust or optimize the screen display information corresponding to each micro-expression change amount in the target display list. In the embodiment of the application, the micro-expression change amounts of different user age brackets correspond to different display lists, so users in different age brackets with the same micro-expression change amount can receive different screen display adjustments, improving the accuracy of the adjustment. Because the micro-expression change amount is measured comprehensively from several information change values, its accuracy is improved, which in turn improves the accuracy of the screen display adjustment and user satisfaction; optimizing the display list further increases flexibility and applicability.
The embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program comprises program instructions which, when executed by a processor, implement the method for adjusting screen display based on micro-expressions provided by the steps in fig. 1 to 2; for details, refer to the implementations provided by those steps, which are not repeated here.
The computer-readable storage medium may be the device for adjusting screen display based on micro-expressions provided in any of the foregoing embodiments, or an internal storage unit of the terminal, for example, a hard disk or a memory of the electronic device. The computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
The terms "first," "second," "third," "fourth" and the like in the claims and in the description and drawings of the present application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments. The term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and related apparatus provided in the embodiments of the present application are described with reference to the flowchart and/or schematic structural diagrams of the method provided in the embodiments of the present application, and each flow and/or block of the flowchart and/or schematic structural diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks.

Claims (9)

1. A method for adjusting a screen display based on a microexpressive expression, the method comprising:
acquiring first micro-expression information in a user face image acquired at a first moment when a target user uses a terminal, and determining a target user age bracket corresponding to the first micro-expression information according to an age detection model;
determining a target display list from a plurality of display lists according to the target user age bracket, wherein each display list corresponds to one user age bracket, and each display list comprises a plurality of micro expression change amounts of the corresponding user age bracket and screen display information corresponding to each micro expression change amount;
acquiring second micro-expression information in a user face image acquired by the target user at a second moment of using the terminal, and determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information, wherein the second moment is a moment after the first moment;
determining first screen display information corresponding to the first micro-expression change amount from the target display list according to the first micro-expression change amount;
adjusting the screen display configuration of the terminal according to the first screen display information;
The micro-expression information comprises facial texture information, eyebrow space information, eye opening distance information, lip angle bending information, facial shape information and pupil color information;
the determining, according to an age detection model, a target user age group corresponding to the first micro-expression information includes:
acquiring first facial texture information, first eyebrow space information, first eye opening distance information, first lip angle bending information, facial shape information and pupil color information in the first micro-expression information;
inputting the first face texture information, the first eyebrow distance information, the first eye opening distance information, the first lip angle bending information, the face shape information and the pupil color information into an age detection model, and outputting a corresponding target user age bracket based on the age detection model.
2. The method of claim 1, wherein the determining a first micro-expression change amount according to the first micro-expression information and the second micro-expression information comprises:
acquiring second facial texture information in the second micro-expression information, and comparing the second facial texture information with first facial texture information in the first micro-expression information to obtain a first facial texture information change value; and/or
Acquiring second eyebrow spacing information in the second micro-expression information, and comparing the second eyebrow spacing information with first eyebrow spacing information in the first micro-expression information to obtain a first eyebrow spacing information change value; and/or
Acquiring second eye opening distance information in the second micro-expression information, and comparing the second eye opening distance information with first eye opening distance information in the first micro-expression information to obtain a first eye opening distance information change value; and/or
Acquiring second lip angle bending information in the second micro-expression information, and comparing the second lip angle bending information with first lip angle bending information in the first micro-expression information to obtain a first lip angle bending information change value;
and determining the first micro-expression change amount according to the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and/or the first lip angle bending information change value.
3. The method of claim 2, wherein the determining the first micro-expression change amount from the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value, and the first lip angle bending information change value comprises:
And multiplying the first facial texture information change value, the first eyebrow interval information change value, the first eye opening distance information change value and the first lip angle bending information change value by corresponding weight values respectively, and then summing, and determining the sum value as the first micro expression change amount.
4. The method according to any one of claims 1-3, wherein the determining, from the target display list, first screen display information corresponding to the first micro-expression change amount according to the first micro-expression change amount includes:
and matching the first micro-expression change amount with a plurality of micro-expression change amounts in the target display list, and determining screen display information corresponding to the successfully matched micro-expression change amounts as the first screen display information.
5. The method of claim 1, wherein after adjusting the current screen display configuration of the terminal according to the first screen display information, the method further comprises:
acquiring the adjustment times of screen display of the terminal in the process of using the terminal by the target user;
and if the adjustment times are greater than or equal to the preset times, adjusting screen display information corresponding to each micro-expression change amount in the target display list.
6. The method of claim 1, wherein the adjusting the current screen display configuration of the terminal according to the first screen display information comprises:
adjusting the current screen display font size of the terminal according to the first screen display font size in the first screen display information; and/or
adjusting the current screen display brightness of the terminal according to the first screen display brightness in the first screen display information; and/or
adjusting the current screen display font spacing of the terminal according to the first screen display font spacing in the first screen display information.
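The "and/or" in claim 6 means each setting is applied only when present in the first screen display information. A minimal sketch, in which the `Terminal` class and its attribute names are assumptions for illustration:

```python
class Terminal:
    """Stand-in for the terminal's current screen display configuration."""
    def __init__(self):
        self.font_size = 14
        self.brightness = 50
        self.font_spacing = 1

def apply_display_info(terminal: Terminal, info: dict) -> None:
    """Apply only the settings present in the screen display information."""
    if "font_size" in info:
        terminal.font_size = info["font_size"]
    if "brightness" in info:
        terminal.brightness = info["brightness"]
    if "font_spacing" in info:
        terminal.font_spacing = info["font_spacing"]

t = Terminal()
apply_display_info(t, {"font_size": 16, "brightness": 65})  # no spacing given
```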
7. An apparatus for adjusting a screen display based on micro-expressions, the apparatus comprising:
a micro-expression information acquiring module, configured to acquire first micro-expression information from a user face image captured at a first moment while a target user uses the terminal, and to determine, according to an age detection model, a target user age group corresponding to the first micro-expression information;
a target display list determining module, configured to determine a target display list from a plurality of display lists according to the target user age group determined by the micro-expression information acquiring module, wherein each display list corresponds to one user age group and comprises a plurality of micro-expression change amounts for the corresponding user age group and screen display information corresponding to each micro-expression change amount;
a micro-expression change amount determining module, configured to acquire second micro-expression information from a user face image captured at a second moment while the target user uses the terminal, and to determine a first micro-expression change amount from the first micro-expression information determined by the micro-expression information acquiring module and the second micro-expression information, wherein the second moment is later than the first moment;
a screen display information determining module, configured to determine, from the target display list determined by the target display list determining module, first screen display information corresponding to the first micro-expression change amount determined by the micro-expression change amount determining module; and
a screen display information adjusting module, configured to adjust the current screen display configuration of the terminal according to the first screen display information determined by the screen display information determining module.
8. A terminal comprising a processor and a memory, the processor and the memory being interconnected, wherein
the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of any one of claims 1-6.
9. A computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-6.
CN201910421947.2A 2019-05-21 2019-05-21 Method and device for adjusting screen display based on micro-expressions Active CN110222597B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910421947.2A CN110222597B (en) 2019-05-21 2019-05-21 Method and device for adjusting screen display based on micro-expressions
PCT/CN2019/101947 WO2020232855A1 (en) 2019-05-21 2019-08-22 Method and apparatus for adjusting screen display on the basis of subtle expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910421947.2A CN110222597B (en) 2019-05-21 2019-05-21 Method and device for adjusting screen display based on micro-expressions

Publications (2)

Publication Number Publication Date
CN110222597A CN110222597A (en) 2019-09-10
CN110222597B true CN110222597B (en) 2023-09-22

Family

ID=67821445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910421947.2A Active CN110222597B (en) 2019-05-21 2019-05-21 Method and device for adjusting screen display based on micro-expressions

Country Status (2)

Country Link
CN (1) CN110222597B (en)
WO (1) WO2020232855A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459587A (en) * 2020-03-27 2020-07-28 北京三快在线科技有限公司 Information display method, device, equipment and storage medium
CN112527106A (en) * 2020-11-30 2021-03-19 崔刚 Control system based on full vision
CN112766238B (en) * 2021-03-15 2023-09-26 电子科技大学中山学院 Age prediction method and device
CN115499538B (en) * 2022-08-23 2023-08-22 广东以诺通讯有限公司 Screen display font adjusting method, device, storage medium and computer equipment
CN117974853A (en) * 2024-03-29 2024-05-03 成都工业学院 Self-adaptive switching generation method, system, terminal and medium for homologous micro-expression image

Citations (12)

Publication number Priority date Publication date Assignee Title
CN101133438A (en) * 2005-03-01 2008-02-27 松下电器产业株式会社 Electronic display medium and screen display control method used for electronic display medium
CN105607733A (en) * 2015-08-25 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Regulation method, regulation device and terminal
CN106057171A (en) * 2016-07-21 2016-10-26 广东欧珀移动通信有限公司 Control method and device
CN107507602A (en) * 2017-09-22 2017-12-22 深圳天珑无线科技有限公司 Screen intensity Automatic adjustment method, terminal and storage medium
CN107895146A (en) * 2017-11-01 2018-04-10 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device, system and computer-readable recording medium
CN108256469A (en) * 2018-01-16 2018-07-06 华中师范大学 facial expression recognition method and device
CN108960022A (en) * 2017-09-19 2018-12-07 炬大科技有限公司 A kind of Emotion identification method and device thereof
CN108989571A (en) * 2018-08-15 2018-12-11 浙江大学滨海产业技术研究院 A kind of adaptive font method of adjustment and device for mobile phone word read
CN109063679A (en) * 2018-08-24 2018-12-21 广州多益网络股份有限公司 A kind of human face expression detection method, device, equipment, system and medium
CN109523852A (en) * 2018-11-21 2019-03-26 合肥虹慧达科技有限公司 The study interactive system and its exchange method of view-based access control model monitoring
CN109543603A (en) * 2018-11-21 2019-03-29 山东大学 A kind of micro- expression recognition method based on macro sheet feelings knowledge migration
CN109697421A (en) * 2018-12-18 2019-04-30 深圳壹账通智能科技有限公司 Evaluation method, device, computer equipment and storage medium based on micro- expression

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2000305746A (en) * 1999-04-16 2000-11-02 Mitsubishi Electric Corp System for controlling image plane
US20170092150A1 (en) * 2015-09-30 2017-03-30 Sultan Hamadi Aljahdali System and method for intelligently interacting with users by identifying their gender and age details
US10049263B2 (en) * 2016-06-15 2018-08-14 Stephan Hau Computer-based micro-expression analysis
US10515393B2 (en) * 2016-06-30 2019-12-24 Paypal, Inc. Image data detection for micro-expression analysis and targeted data services
CN106778623A (en) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 A kind of terminal screen control method, device and electronic equipment
CN107292778A (en) * 2017-05-19 2017-10-24 华中师范大学 A kind of cloud classroom learning evaluation method and its device based on cognitive emotion perception
CN108345874A (en) * 2018-04-03 2018-07-31 苏州欧孚网络科技股份有限公司 A method of according to video image identification personality characteristics


Non-Patent Citations (2)

Title
Huang Jie; Chen Ruiqi. "The Interactive Design of the E-Books for Children in the 'Screen Reader Age'." 2015 8th International Conference on Intelligent Computation Technology and Automation (ICICTA), 2016, pp. 989-992. *
李清霞 (Li Qingxia). "Design of an Interactive Intelligent Video Terminal for Children's Vision Protection." Software Engineering, 2018, Vol. 21, No. 4, pp. 49-51, 22. *

Also Published As

Publication number Publication date
CN110222597A (en) 2019-09-10
WO2020232855A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
CN110222597B (en) Method and device for adjusting screen display based on micro-expressions
Vogelsang et al. Potential downside of high initial visual acuity
CN107491166B (en) Method for adjusting parameters of virtual reality equipment and virtual reality equipment
US11178389B2 (en) Self-calibrating display device
US10706281B2 (en) Controlling focal parameters of a head mounted display based on estimated user age
JP6845795B2 (en) How to determine the lens design of an optical lens suitable for the wearer
CN107563325B (en) Method and device for testing fatigue degree and terminal equipment
CN105679253B (en) A kind of terminal backlight adjusting method and device
CN110610768B (en) Eye use behavior monitoring method and server
CN108537026A (en) application control method, device and server
CN106526857B (en) Focus adjustment method and device
US20210042498A1 (en) Eye state detecting method and eye state detecting system
US9959635B2 (en) State determination device, eye closure determination device, state determination method, and storage medium
CN116634920A (en) Subjective refraction inspection system
Gavas et al. Enhancing the usability of low-cost eye trackers for rehabilitation applications
CN113033413A (en) Glasses recommendation method and device, storage medium and terminal
CN111588345A (en) Eye disease detection method, AR glasses and readable storage medium
CN114943924B (en) Pain assessment method, system, equipment and medium based on facial expression video
JP7439932B2 (en) Information processing system, data storage device, data generation device, information processing method, data storage method, data generation method, recording medium, and database
CN111163680B (en) Method and system for adapting the visual and/or visual motor behaviour of an individual
CN114762029A (en) Blue light reduction
CN112734701A (en) Fundus focus detection method, fundus focus detection device and terminal equipment
Baboianu et al. Processing of captured digital images for measuring the optometric parameters required in the construction of ultra-personalized special lenses
CN117437249B (en) Segmentation method, terminal equipment and storage medium for fundus blood vessel image
CN113039478A (en) Method for determining the degree to which a lens design is adapted to a user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant