WO2022022077A1 - Display method and apparatus for interactive interface, and storage medium - Google Patents

Display method and apparatus for interactive interface, and storage medium - Download PDF

Info

Publication number
WO2022022077A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
emotion
display
information
emotional
Prior art date
Application number
PCT/CN2021/098954
Other languages
English (en)
French (fr)
Inventor
徐志红
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US 17/773,371 (published as US11960640B2)
Publication of WO2022022077A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present disclosure relates to the field of human-computer interaction, and in particular, to a method for displaying an interactive interface, a display device, and a computer-readable storage medium.
  • The rapid development of AI (Artificial Intelligence) technology has promoted the application of human-computer interaction products.
  • Human-computer interaction products that can monitor and manage people's emotions have been applied in the field of health care.
  • Such a human-computer interaction product can replace or assist medical personnel in assessing a patient's mental state for further treatment.
  • However, the recorded results may be inaccurate due to the patient's resistance to the interactive product.
  • a first aspect of the embodiments of the present disclosure provides a method for displaying an interactive interface, including:
  • the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
  • determining a target object to be interacted with through the interactive interface from the at least one object includes:
  • an object appearing for the first time in the image or an object located at the front of the at least one object in the image is determined as a target object to be interacted with through the interactive interface.
  • the attribute information includes age and gender
  • acquiring the attribute information of the target object includes:
  • the gender of the target object is acquired according to a gender recognition algorithm.
  • the emotion information includes an emotion value
  • acquiring the emotion information of the target object includes:
  • the emotion value of the target object is acquired according to an emotion recognition algorithm.
  • controlling the display of the first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object includes:
  • the displayed image in the first object area is changed according to the emotional information of the target object.
  • the emotion information includes an emotion value
  • changing the display image in the first object area according to the emotion information of the target object includes:
  • the display image is displayed in a second display manner as the emotion feature value increases.
  • the method for displaying an interactive interface further includes:
  • the display of the second object area of the interactive interface is controlled according to the emotion information of each of the at least one object.
  • the emotion information includes an emotion value
  • controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object includes:
  • the background pattern in the second object region is displayed in a fourth manner as the average value of the emotion feature values increases.
  • the emotion recognition algorithm includes one of a K-nearest neighbor algorithm, a support vector machine algorithm, a clustering algorithm, a genetic algorithm, a particle swarm optimization algorithm, a convolutional neural network algorithm, and a multi-task convolutional neural network algorithm.
  • a second aspect of the embodiments of the present disclosure provides a display device for an interactive interface, including:
  • a memory configured to store program instructions
  • a processor configured to execute the program instructions to:
  • the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
  • the processor is further configured to:
  • the display of the second object area of the interactive interface is controlled according to the emotion information of each of the at least one object.
  • a third aspect of embodiments of the present disclosure provides a computer-readable storage medium having executable instructions stored thereon that, when executed by a processor, cause the processor to perform the display method of the interactive interface provided in the first aspect of the embodiments of the present disclosure.
  • the emotion information and attribute information of a target object to be interacted with through the interactive interface are obtained, and the emotion information and attribute information of the target object are combined to control the display of the first object area of the interactive interface, so that the interest of the display is increased by displaying an appropriate picture, the resistance of the target object to interacting through the interactive interface is alleviated and eliminated, and the emotional state information of the target object is obtained more accurately, which is beneficial to evaluating and treating the emotional state of the target object.
  • FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure
  • 3A and 3B illustrate display examples of an interactive interface according to an embodiment of the present disclosure
  • FIG. 4 shows another flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure
  • 5A and 5B illustrate another display example of an interactive interface according to an embodiment of the present disclosure
  • FIG. 6 shows an example of a display device for an interactive interface according to an embodiment of the present disclosure.
  • FIG. 7 shows another example of a display device for an interactive interface according to an embodiment of the present disclosure.
  • an expression such as "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.
  • the interactive interface according to the embodiment of the present disclosure can be set and applied in a human-computer interaction system capable of monitoring or managing human emotions, and can be used to obtain emotional state information of a target object and evaluate the emotional state of the target object and treatment.
  • the description of the interactive interface in the following embodiments will take the above-mentioned human-computer interaction system as an example to illustrate, but those skilled in the art should understand that the interactive interface and the display method and display device of the interactive interface in the embodiments of the present disclosure are not limited to this, and can be Apply it to any other suitable product or application.
  • FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure.
  • the human-computer interaction system 100 includes an interactive interface 101 and a functional area 102 .
  • the interactive interface 101 can provide a screen display to the user based on display technology, and the functional area 102 can receive the user's input and operate the human-computer interaction system 100 based on the input, for example turning the human-computer interaction system 100 on or off, setting parameters of the human-computer interaction system 100, or selecting functions of the human-computer interaction system 100, etc.
  • the human-computer interaction system 100 also includes an image sensor 103, which may be configured to capture an object in order to provide the interactive interface 101 with an image containing the object, so that an object to be interacted with through the interactive interface 101 can be selected by recognizing the image.
  • the human-computer interaction system 100 in FIG. 1 is only an example, and does not constitute a limitation on the human-computer interaction system 100 and the interactive interface 101 provided on the human-computer interaction system 100 .
  • the human-computer interaction system 100 may be implemented by a mobile terminal such as a smart phone and an application installed on the mobile terminal.
  • the function of the interactive interface 101 can be realized by the screen of the smart phone
  • the function of the functional area 102 can be realized by the operation of the application
  • the function of the image sensor 103 can be realized by the camera of the smart phone.
  • the method 200 for displaying an interactive interface includes the following steps.
  • step S210 a target object to be interacted with through the interactive interface is determined from at least one object.
  • step S220 the attribute information of the target object and the emotion information of the target object are acquired.
  • step S230 the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
  • the display method 200 may determine a target object to be interacted with through an interactive interface in a scene where multiple objects exist.
  • the method for determining a target object from at least one object may include: tracking the at least one object to obtain an image of the at least one object, recognizing the image to obtain face information of the at least one object in the image, and, based on the face information of the at least one object, determining the object that appears for the first time in the image, or the frontmost object among the at least one object in the image, as the target object to be interacted with through the interactive interface.
  • the display method 200 may capture at least one object present within the field of view of the image sensor 103, i.e., using the image sensor 103 to capture an image of the object in real time. Then, the human face in the captured image is detected, and a face detection algorithm can be used to detect the human face in the captured image.
  • face detection algorithms include the AInnoFace face detection algorithm, the cascaded CNN (convolutional neural network) face detection algorithm, the OpenCV face detection algorithm, the Seetaface face detection algorithm, the libfacedetect face detection algorithm, the FaceNet face detection algorithm, the MTCNN (multi-task convolutional neural network) face detection algorithm, etc.
  • the embodiment of the present disclosure does not limit the face detection algorithm used, and any suitable method may be used to detect the face.
  • the display method of the interactive interface according to the embodiment of the present disclosure can provide the display of two kinds of interaction scenarios of a single person and a multi-person.
  • the single-person interaction scene is aimed at the situation where there is only a single interaction object in the scene, and interacts with the single interaction object.
  • an interaction object can be selected from among the multiple interaction objects for the situation where there are multiple interaction objects in the scene.
  • whether a specific interaction is based on a single-person interaction scenario or a multi-person interaction scenario can be pre-selected by setting system parameters. For example, for a scene with multiple interaction objects, the interaction can be set as a multi-person interaction scenario.
  • the display method of the interactive interface can determine the target object in different ways.
  • an object that appears for the first time in a captured image can be determined as a target object to be interacted with through an interactive interface, which is suitable for both a single-person interaction scene and a multi-person interaction scene.
  • the frontmost object among the multiple objects in the captured image can also be determined as the target object to be interacted with through the interactive interface, which is suitable for a multi-person interaction scenario in which multiple interaction objects appear in the captured image at the same time.
  • the image sensor 103 may be a depth image sensor and the captured image may be a depth image, and the frontmost object is determined by recognizing the depth information of each object in the captured image.
  • step S220 by tracking and detecting the target object, the image of the target object can be obtained in real time, the face information of the target object in the image can be obtained in real time, and the attribute information and emotional information of the target object can be obtained in real time according to the face information.
  • the target object can also be tracked, so as to obtain the emotional value of the target object in real time.
  • face tracking and smoothing algorithms are used to track and detect the target object.
  • after the display method detects the face in the image captured by the image sensor 103 and determines the target object, the position of the face of the target object is identified in the image, and at the same time an image representing the face of the target object is displayed on the interactive interface 101.
  • the face image in the interactive interface can move with the movement of the target object, so as to achieve smooth tracking of the target object.
  • commonly used face tracking and smoothing algorithms include MTCNN algorithm, Laplace algorithm, particle filter algorithm, etc., and a combined technology of Kalman filter and Hungarian algorithm may also be used, which is not limited in this embodiment of the present disclosure.
  • the attribute information of the target object may include the age and gender of the target object, but is not limited thereto.
  • the step of obtaining the age and gender of the target object includes obtaining face information of the target object by recognizing an image including the face of the target object, and obtaining the age of the target object according to an age recognition algorithm based on the face information.
  • age recognition algorithms include SVM (Support Vector Machine, support vector machine), CNN and so on.
  • the gender of the target object can also be acquired according to the gender recognition algorithm based on the face information.
  • Commonly used gender recognition algorithms include SVM, CNN, etc.
  • the embodiments of the present disclosure do not limit the age recognition algorithm and gender recognition algorithm used, and any suitable method may be used.
  • the emotion information of the target object may be represented by the emotion value of the target object.
  • the step of acquiring the emotion value of the target object includes acquiring the facial information of the target object by recognizing an image including the human face of the target object, and acquiring the emotion value of the target object according to an emotion recognition algorithm based on the facial information.
  • emotion recognition algorithms include KNN (K-Nearest Neighbor, K nearest neighbor) algorithm, SVM algorithm, clustering algorithm, genetic algorithm, PSO (Particle Swarm Optimization, particle swarm optimization) algorithm, CNN algorithm, MTCNN algorithm, etc. This embodiment of the present disclosure does not limit the emotion recognition algorithm used, and any suitable method may be used.
  • 8 emotions of the target object can be identified by an emotion recognition algorithm, including neutral, happy, surprised, sad, angry, scared, disgusted and contemptuous, and each emotion corresponds to a different emotion value.
  • the emotion of the target object may be a complex state in which various emotions are intertwined. For example, the target object may be in a state of contempt while its overall emotion remains stable without fluctuations, that is, the target object is still in a neutral state. Therefore, the actual emotion category of the target object also needs to be comprehensively judged according to the above emotion values.
  • neutrality and surprise can be considered as neutral emotions, that is, when the target object is in a state of neutrality or surprise, the target object is in a calm state as a whole, and there will be no large emotional fluctuations.
  • Sadness, anger, fear, disgust, and contempt can be considered as negative emotions, that is, when the target object is in a state of sadness, anger, fear, disgust, or contempt, the target object's mood is low, or there are large negative fluctuations.
  • happiness can be considered a positive emotion; it is easy to understand that when the target object is in a happy state, its emotion fluctuates positively or is high.
  • the emotion recognition algorithm expresses the emotion of the target object with different emotion values.
  • negative emotions such as sadness, anger, fear, disgust and contempt have lower emotion values
  • positive emotions such as happiness have higher emotion values
  • neutral emotions such as neutral and surprised have emotion values between those of positive and negative emotions. Therefore, the emotion of the target object can be represented by different numerical values.
  • controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object includes determining the display image of the target object on the interactive interface according to the attribute information of the target object.
  • the interactive interface 101 is further divided into different object areas, and the display method according to the embodiment of the present disclosure can control and display the different object areas respectively, thereby increasing the display flexibility.
  • the interactive interface 101 includes a first object area 1011 and a second object area 1012 .
  • the first object area 1011 may be an area configured to display the target object, which presents the display image of the target object on the interactive interface 101 .
  • the second object area 1012 may be an area configured to display other content than the display avatar of the target object, such as a background area displayed on the interactive interface 101 .
  • the position of the first object area 1011 on the interactive interface 101 can be changed, and the first object area 1011 can be moved on the interactive interface 101, thereby providing a dynamic display effect.
  • Determining the display image of the target object on the interactive interface according to the attribute information of the target object may be determining the display image of the target object in the first object area of the display interface according to the age and gender of the target object.
  • the tulip shown in the figures is the display image determined according to the age and gender of the target object and used to represent the target object.
  • controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object further includes changing the display image in the first object area according to the emotion information of the target object.
  • changing the display image in the first object area according to the emotion information of the target object may include determining the emotion feature value of the target object according to the emotion value of the target object, and controlling the display of the display image based on a comparison between the emotion feature value of the target object and emotion thresholds.
  • when the emotion feature value is smaller than the first emotion threshold, the display image of the target object is displayed in a first display manner as the emotion feature value decreases.
  • when the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to the second emotion threshold, the display image of the target object is maintained.
  • when the emotion feature value is greater than the second emotion threshold, the display image of the target object is displayed in a second display manner as the emotion feature value increases.
  • the first emotional threshold and the second emotional threshold are predetermined thresholds according to the emotional state of the object.
  • the value of the first emotional threshold is smaller than the value of the second emotional threshold.
  • the first emotional threshold and the second emotional threshold can be adjusted according to the actual situation of different objects.
  • the first display manner and the second display manner may be display manners associated with the display image, and may be determined in combination with the specific display image. For example, when the display image is a tulip as shown in FIG. 3A and FIG. 3B, the first display manner may be that the tulip gradually closes from an open state (FIG. 3A), and the second display manner may be that the tulip gradually blooms from an open or closed state (FIG. 3B).
  • the gradual closing of the tulip from the open state can indicate that the target object is in a negative emotional state and is unwilling to communicate.
  • the gradual blooming of tulips from an open or closed state indicates that the target person is in a positive emotional state and is willing to communicate.
  • the display of the displayed image is adjusted based on the change of the emotional characteristic value, and the change of the target object's emotion can be presented through the change of the displayed image. For example, when the tulip gradually closes from the open state, it means that the target object's mood is becoming lower and lower. When the tulip gradually blooms from an open or closed state, it indicates that the target's emotions are gradually rising.
  • controlling the display of the display image in the first display manner and the second display manner respectively can not only more accurately represent the emotional state of the target object and present its emotional changes, which is beneficial to real-time monitoring of the emotional state of the target object, but can also increase the interest of the display, which helps mobilize the emotion of the target object so as to provide auxiliary treatment for the target object.
  • the emotion threshold is not limited to the first emotion threshold and the second emotion threshold, and the display manner is not limited to the first display manner and the second display manner; more display manners can be defined according to the emotional state of the target object to provide richer information about the target object. For example, when the emotion feature value is smaller than the first emotion threshold and gradually increases without reaching it, the display image of the target object can show a tendency toward the open state (taking the tulip in FIG. 3A as an example, the closed bud remains unopened but grows larger, showing a tendency to open).
  • Embodiments of the present disclosure provide a method for displaying an interactive interface based on emotion recognition.
  • the display method can recognize the emotion of the target object in real time and dynamically adjust the display of the interactive interface.
  • the target object can be classified differently by combining the age recognition algorithm and the gender recognition algorithm to display different images. Therefore, by displaying an appropriate picture, the interest of the display is increased, so as to alleviate and eliminate the resistance of the target object to interacting through the interactive interface, so that the emotional state information of the target object can be obtained more accurately, which is conducive to evaluating and treating the emotional state of the target object.
  • FIG. 4 shows another flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 4 , the display method 400 may include the following steps.
  • step S410 a target object to be interacted with through the interactive interface is determined from at least one object.
  • step S420 the attribute information of the target object and the emotion information of the target object are acquired.
  • step S430 the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
  • step S440 the emotion information of each object in the at least one object is acquired.
  • step S450 the display of the second object area of the interactive interface is controlled according to the emotion information of each object in the at least one object.
  • steps S410 , S420 and S430 are the same as those performed in steps S210 , S220 and S230 in the display method 200 , and their operations will not be described in detail here. Also, step S440 and step S450 may be performed in parallel with step S420 and step S430. Steps S440 and S450 are described in detail below with reference to the embodiments.
  • controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object specifically includes: determining the emotion feature value of each of the at least one object according to the emotion value of each object; obtaining the average value of the emotion feature values according to the emotion feature value of each object; when the average value of the emotion feature values is less than the first emotion average threshold, displaying the background pattern in the second object area in a third manner as the average value decreases; when the average value is greater than or equal to the first emotion average threshold and less than or equal to the second emotion average threshold, maintaining the background pattern in the second object area; and when the average value is greater than the second emotion average threshold, displaying the background pattern in the second object area in a fourth manner as the average value increases.
  • the average value of the emotion feature values is a value obtained by averaging the emotion feature values of every object, including the target object, in the acquired image, and can roughly represent the overall emotional state of all objects in the image.
  • FIGS. 5A and 5B illustrate another display example of an interactive interface according to an embodiment of the present disclosure.
  • when the average value of the emotion feature values is smaller than the first emotion average threshold, it means that the overall emotional state of all objects in the image is low and negative, so that elements such as wind or rain can be added to the second object area (for example, the background image).
  • when the average value of the emotion feature values is greater than the second emotion average threshold, it indicates that the overall emotional state of all objects in the image is relatively positive, so that elements such as sunlight or a rainbow can be added to the second object area (for example, the background image).
  • by adjusting the display of the second object area according to the average emotion feature value of the crowd, the emotional information of other people in the scene where the target object is located can be better shown.
  • since a person's emotion is easily influenced by the environment, the emotion of the target object can be monitored more comprehensively, and diversified information can be provided for analyzing and treating the target object.
  • the emotion average threshold is not limited to the first emotion average threshold and the second emotion average threshold, and the display manner is not limited to the third display manner and the fourth display manner; more display manners can be defined according to the emotional state of the crowd in which the target object is located, to provide richer information about the target object.
  • in the case of single-person interaction, the display of the second object area may also be adjusted based on the emotion feature value of the target object itself.
  • in this case, the emotion feature value of the target object is taken as the average value of the emotion feature values.
  • FIG. 6 shows an example of a display device for an interactive interface according to an embodiment of the present disclosure.
  • the display device 600 of the interactive interface includes an image acquisition module 601, a face detection module 602, an age detection module 603, a gender detection module 604, a classification module 605, a tracking detection module 606, an emotion recognition module 607, a single-person human-computer interaction module 608, a multi-person human-computer interaction module 609 and an emotion record analysis module 610.
  • the image acquisition module 601 is configured to receive an image about at least one object captured by an image sensor.
  • the face detection module 602 is configured to identify the image to obtain face information of at least one object in the image, and to determine the target object.
  • the age detection module 603 is configured to acquire the age of the target object according to an age recognition algorithm based on the face information of the target object in the face information of the at least one object.
  • the gender detection module 604 is configured to obtain the gender of the target object according to a gender recognition algorithm based on the face information of the target object in the face information of the at least one object.
  • the classification module 605 is configured to determine the displayed avatar of the target object according to the identified age information and gender information. In this example, the displayed image of the target object can be determined according to the information shown in Table 1. For example, when the gender of the target object is female and the age is between 30-50 years old, a tulip can be used as the display image of the target object.
  • the tracking detection module 606 is configured to track and detect the target object using a face tracking and smoothing algorithm, and to identify the position of the face of the target object in the image, so as to display the image representing the face of the target object on the interactive interface.
  • the emotion recognition module 607 is configured to acquire face information from the tracking detection module 606 in real time, and acquire the emotion value of the target object according to the emotion recognition algorithm.
  • the single-person human-computer interaction module 608 provides an interactive interface under the single-person scene, and is configured to perform the following processing according to the emotional value of the target object identified by the emotional recognition module 607:
  • a₁, a₂, a₃, a₄, a₅, a₆, a₇ and a₈ respectively represent the emotion values of the eight emotions of the target object, and the emotion values are fitted according to the following expression (1), all values in the fitting process being rounded up: w₁ = a₂, w₂ = a₈ + 2a₄ + a₅ + a₆ + a₇ - a₃; if w₁ > w₂, W_target = w₁; if w₁ = w₂, W_target = k; if w₁ < w₂, W_target = w₂.
  • w₁ and w₂ are preset fitting variables
  • k is a preset constant
  • W_target is the emotion feature value of the target object.
  • the single-person human-computer interaction module 608 also controls the display of the first object area according to the following conditions: when W_target < k₁ (the first emotion threshold, for example 30), the tulip gradually closes as W_target decreases; when k₁ ≤ W_target ≤ k₂ (the second emotion threshold, for example 80), the tulip remains in the normal open state; and when W_target > k₂, the tulip gradually blooms as W_target increases.
  • the multi-person human-computer interaction module 609 provides the interactive interface in the multi-person scenario; it determines the emotion feature value of the target object, and the process of controlling the display of the display image of the target object according to the emotion feature value is the same as that of the single-person human-computer interaction module 608 and is not repeated here.
  • the multi-person human-computer interaction module 609 is also configured to perform the following processing according to the emotion value of each object identified by the emotion recognition module 607:
  • h₁ is used to represent the first emotion average threshold and may be 30, and h₂ is used to represent the second emotion average threshold and may be 80.
  • the multi-person human-computer interaction module 609 also controls the display of the second object area according to the following conditions: when W_average < h₁, elements such as wind and rain are added to the background of the displayed picture; when h₁ ≤ W_average ≤ h₂, the background remains unchanged; and when W_average > h₂, elements such as sunlight and a rainbow are added to the background.
  • the emotional record analysis module 610 is configured to record the basic information of each target object and the emotional state information in the monitoring process.
  • FIG. 7 shows another example of a display device for an interactive interface according to an embodiment of the present disclosure.
  • the display device 700 of the interactive interface includes a memory 701 and a processor 702 .
  • the memory 701 is configured to store program instructions.
  • the processor 702 is configured to execute the program instructions to perform the following operations: determine a target object to be interacted with through the interactive interface from at least one object, obtain attribute information of the target object and emotional information of the target object, and perform the following operations according to the attribute information of the target object and the emotional information of the target object to control the display of the first object area of the interactive interface.
  • the processor 702 is further configured to acquire emotional information of each of the at least one object, and control the display of the second object area of the interactive interface according to the emotional information of each of the at least one object.
  • the electronic components of one or more systems or devices may include, but are not limited to, at least one processing unit, a memory, and a communication bus or communication device that couples various components including the memory to the processing unit.
  • a system or device may include or have access to various device-readable media.
  • System memory may include device-readable storage media in the form of volatile and/or nonvolatile memory (eg, read only memory (ROM) and/or random access memory (RAM)).
  • ROM read only memory
  • RAM random access memory
  • system memory may also include an operating system, application programs, other program modules, and program data.
  • Embodiments may be implemented as a system, method or program product. Accordingly, an embodiment may take the form of an entirely hardware embodiment or an embodiment including software (including firmware, resident software, microcode, etc.), which may be collectively referred to herein as a "circuit,” “module,” or “system.” Furthermore, embodiments may take the form of a program product embodied in at least one device-readable medium having device-readable program code embodied thereon.
  • a device-readable storage medium can be any tangible, non-signal medium that can contain or store a program of code configured for use by or in connection with an instruction execution system, apparatus, or device.
  • a storage medium or device should be construed as non-transitory, ie, not including a signal or propagation medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Social Psychology (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display method of an interactive interface, a display device, and a computer-readable storage medium. The display method of the interactive interface includes: determining, from at least one object, a target object to be interacted with through the interactive interface (S210); acquiring attribute information of the target object and emotion information of the target object (S220); and controlling display of a first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object (S230).

Description

Display method and apparatus for interactive interface, and storage medium
This application claims priority to Chinese patent application No. 202010743658.7, filed on July 29, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of human-computer interaction, and in particular to a display method of an interactive interface, a display device, and a computer-readable storage medium.
Background
The rapid development of AI (Artificial Intelligence) technology has promoted the application of human-computer interaction products. At present, human-computer interaction products capable of monitoring and managing people's emotions have been applied in the healthcare field. Such products can replace or assist medical personnel in assessing a patient's mental state for further treatment. However, a patient's resistance to the interactive product may make the recorded results inaccurate.
Summary
A first aspect of the embodiments of the present disclosure provides a display method of an interactive interface, including:
determining, from at least one object, a target object to be interacted with through the interactive interface;
acquiring attribute information of the target object and emotion information of the target object; and
controlling display of a first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
According to an embodiment, determining, from at least one object, a target object to be interacted with through the interactive interface includes:
tracking and detecting the at least one object to acquire an image of the at least one object;
recognizing the image to acquire face information of the at least one object in the image; and
based on the face information of the at least one object, determining an object appearing for the first time in the image, or the frontmost object among the at least one object in the image, as the target object to be interacted with through the interactive interface.
According to an embodiment, the attribute information includes age and gender, and acquiring the attribute information of the target object includes:
acquiring the age of the target object according to an age recognition algorithm based on the face information of the target object in the face information of the at least one object; and
acquiring the gender of the target object according to a gender recognition algorithm based on the face information of the target object in the face information of the at least one object.
According to an embodiment, the emotion information includes an emotion value, and acquiring the emotion information of the target object includes:
acquiring the emotion value of the target object according to an emotion recognition algorithm based on the face information of the target object in the face information of the at least one object.
According to an embodiment, controlling the display of the first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object includes:
determining a display image of the target object in the first object area of the interactive interface according to the attribute information of the target object; and
changing the display image in the first object area according to the emotion information of the target object.
According to an embodiment, the emotion information includes an emotion value, and changing the display image in the first object area according to the emotion information of the target object includes:
determining an emotion feature value of the target object according to the emotion value of the target object;
when the emotion feature value is smaller than a first emotion threshold, displaying the display image in a first display manner as the emotion feature value decreases;
when the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to a second emotion threshold, maintaining the display image; and
when the emotion feature value is greater than the second emotion threshold, displaying the display image in a second display manner as the emotion feature value increases.
According to an embodiment, the display method of the interactive interface further includes:
acquiring emotion information of each of the at least one object; and
controlling display of a second object area of the interactive interface according to the emotion information of each of the at least one object.
According to an embodiment, the emotion information includes an emotion value, and controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object includes:
determining an emotion feature value of each of the at least one object according to the emotion value of each of the at least one object;
obtaining an average value of the emotion feature values according to the emotion feature value of each of the at least one object;
when the average value of the emotion feature values is smaller than a first emotion average threshold, displaying a background pattern in the second object area in a third manner as the average value of the emotion feature values decreases;
when the average value of the emotion feature values is greater than or equal to the first emotion average threshold and less than or equal to a second emotion average threshold, maintaining the background pattern in the second object area; and
when the average value of the emotion feature values is greater than the second emotion average threshold, displaying the background pattern in the second object area in a fourth manner as the average value of the emotion feature values increases.
According to an embodiment, the emotion recognition algorithm includes one of a K-nearest neighbor algorithm, a support vector machine algorithm, a clustering algorithm, a genetic algorithm, a particle swarm optimization algorithm, a convolutional neural network algorithm, and a multi-task convolutional neural network algorithm.
A second aspect of the embodiments of the present disclosure provides a display device of an interactive interface, including:
a memory configured to store program instructions; and
a processor configured to execute the program instructions to perform the following operations:
determining, from at least one object, a target object to be interacted with through the interactive interface;
acquiring attribute information of the target object and emotion information of the target object; and
controlling display of a first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
According to an embodiment, the processor is further configured to:
acquire emotion information of each of the at least one object; and
control display of a second object area of the interactive interface according to the emotion information of each of the at least one object.
A third aspect of the embodiments of the present disclosure provides a computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to perform the display method of the interactive interface provided by the first aspect of the embodiments of the present disclosure.
In the display method of the interactive interface according to the embodiments of the present disclosure, the emotion information and attribute information of a target object to be interacted with through the interactive interface are acquired, and the display of the first object area of the interactive interface is controlled in combination with the emotion information and attribute information of the target object, so that the interest of the display is increased by displaying an appropriate picture, the resistance of the target object to interacting through the interactive interface is alleviated and eliminated, and the emotional state information of the target object is obtained more accurately, which is beneficial to evaluating and treating the emotional state of the target object.
Brief Description of the Drawings
The above and other objects, features and advantages of the present disclosure will become clearer from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a display method of an interactive interface according to an embodiment of the present disclosure;
FIGS. 3A and 3B show display examples of an interactive interface according to an embodiment of the present disclosure;
FIG. 4 shows another flowchart of a display method of an interactive interface according to an embodiment of the present disclosure;
FIGS. 5A and 5B show another display example of an interactive interface according to an embodiment of the present disclosure;
FIG. 6 shows an example of a display device of an interactive interface according to an embodiment of the present disclosure; and
FIG. 7 shows another example of a display device of an interactive interface according to an embodiment of the present disclosure.
Throughout the drawings, the same reference numerals indicate the same elements.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In the following detailed description, for ease of explanation, numerous specific details are set forth to provide a thorough understanding of the embodiments of the present disclosure. It is apparent, however, that one or more embodiments may be practiced without these specific details. In addition, descriptions of well-known structures and techniques are omitted in the following description to avoid unnecessarily obscuring the concepts of the present disclosure.
The terms used herein are only intended to describe specific embodiments and are not intended to limit the present disclosure. The terms "including", "comprising" and the like used herein indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression such as "at least one of A, B and C" is used, it should generally be interpreted according to the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.).
The interactive interface according to the embodiments of the present disclosure can be provided in and applied to a human-computer interaction system capable of monitoring or managing human emotions, and can be used to acquire emotional state information of a target object and to evaluate and treat the emotional state of the target object. The description of the interactive interface in the following embodiments takes the above human-computer interaction system as an example, but those skilled in the art should understand that the interactive interface and the display method and display device of the interactive interface in the embodiments of the present disclosure are not limited thereto, and may be applied to any other suitable product or application scenario.
FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure, and FIG. 2 shows a flowchart of a display method of an interactive interface according to an embodiment of the present disclosure.
As shown in FIG. 1, the human-computer interaction system 100 includes an interactive interface 101 and a functional area 102. The interactive interface 101 can provide a screen display to a user based on display technology, and the functional area 102 can receive a user's input and operate the human-computer interaction system 100 based on the input, for example turning the human-computer interaction system 100 on or off, setting parameters of the human-computer interaction system 100, or selecting functions of the human-computer interaction system 100. As shown in FIG. 1, the human-computer interaction system 100 further includes an image sensor 103, which may be configured to capture an object so as to provide the interactive interface 101 with an image containing the object, so that an object to be interacted with through the interactive interface 101 can be selected by recognizing the image.
It should be noted that the human-computer interaction system 100 in FIG. 1 is only an example, and does not limit the human-computer interaction system 100 or the interactive interface 101 provided thereon. For example, the human-computer interaction system 100 may be implemented by a mobile terminal such as a smart phone and an application installed on the mobile terminal. The function of the interactive interface 101 may be realized by, for example, the screen of the smart phone, the function of the functional area 102 may be realized by operating the application, and the function of the image sensor 103 may be realized by the camera of the smart phone.
As shown in FIG. 2, the display method 200 of the interactive interface according to an embodiment of the present disclosure includes the following steps.
In step S210, a target object to be interacted with through the interactive interface is determined from at least one object.
In step S220, attribute information of the target object and emotion information of the target object are acquired.
In step S230, the display of a first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
Specifically, in step S210, the display method 200 may determine the target object to be interacted with through the interactive interface in a scene where multiple objects exist. The method of determining the target object from at least one object may include: tracking and detecting the at least one object to acquire an image of the at least one object, recognizing the image to acquire face information of the at least one object in the image, and, based on the face information of the at least one object, determining an object appearing for the first time in the image, or the frontmost object among the at least one object in the image, as the target object to be interacted with through the interactive interface.
According to an embodiment, the display method 200 may capture at least one object appearing within the field of view of the image sensor 103, that is, use the image sensor 103 to capture an image of the object in real time. Then, a face in the captured image is detected; a face detection algorithm may be used to detect the face in the captured image. At present, commonly used face detection algorithms include the AInnoFace face detection algorithm, the cascaded CNN (convolutional neural network) face detection algorithm, the OpenCV face detection algorithm, the Seetaface face detection algorithm, the libfacedetect face detection algorithm, the FaceNet face detection algorithm, the MTCNN (multi-task convolutional neural network) face detection algorithm, and so on. The embodiments of the present disclosure do not limit the face detection algorithm used, and any suitable method may be used to detect faces.
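Purely as an illustration, the face detection step could be sketched with OpenCV's Haar cascade detector, one of the OpenCV-based approaches mentioned above; the cascade file and detection parameters here are illustrative assumptions rather than choices made by the disclosure.

```python
import cv2

# Minimal sketch of the face detection step using OpenCV's Haar cascade
# detector. The cascade file and parameters below are illustrative only.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return a list of (x, y, w, h) face rectangles found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return list(face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5))
```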
The display method of the interactive interface according to the embodiments of the present disclosure can provide display for two interaction scenarios: single-person and multi-person. The single-person interaction scenario is aimed at the situation where only a single interaction object exists in the scene, and interaction is performed with that single object. The multi-person interaction scenario is aimed at the situation where multiple interaction objects exist in the scene, and one interaction object is selected from the multiple interaction objects for interaction. Whether a specific interaction is based on the single-person interaction scenario or the multi-person interaction scenario can be pre-selected by setting system parameters. For example, for a scene with multiple interaction objects, the interaction can be set as a multi-person interaction scenario.
The display method of the interactive interface according to the embodiments of the present disclosure can determine the target object in different ways. According to an embodiment, an object appearing for the first time in the captured image can be determined as the target object to be interacted with through the interactive interface, which is suitable for both the single-person interaction scenario and the multi-person interaction scenario. According to an embodiment, the frontmost object among multiple objects in the captured image can also be determined as the target object to be interacted with through the interactive interface, which is suitable for the multi-person interaction scenario in which multiple interaction objects appear in the captured image at the same time. It can be understood that the image sensor 103 may be a depth image sensor and the captured image may be a depth image, and the display method according to the embodiments of the present disclosure determines the frontmost object by recognizing the depth information of each object in the captured image.
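As a hedged sketch of the frontmost-object variant, the following assumes a list of detected face rectangles and a depth map aligned with the captured frame; both inputs and the selection rule (smallest mean depth) are illustrative.

```python
import numpy as np

def select_frontmost(faces, depth_map):
    """Sketch of determining the target object in a multi-person scenario:
    pick the face whose region has the smallest mean depth in the depth
    image captured by a depth image sensor. `faces` is a list of
    (x, y, w, h) rectangles aligned with `depth_map`; both are illustrative
    inputs, not terms from the patent."""
    if not faces:
        return None

    def mean_depth(rect):
        x, y, w, h = rect
        return float(np.mean(depth_map[y:y + h, x:x + w]))

    return min(faces, key=mean_depth)
```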
Next, in step S220, by tracking and detecting the target object, an image of the target object can be acquired in real time, the face information of the target object in the image can be acquired in real time, and the attribute information and emotion information of the target object can be acquired in real time according to the face information.
After the target object is determined, the target object can also be tracked so as to acquire the emotion value of the target object in real time. Specifically, a face tracking and smoothing algorithm is used to track and detect the target object. After the display method detects a face in the image captured by the image sensor 103 and determines the target object, the position of the face of the target object is identified in the image, and at the same time an image representing the face of the target object is displayed on the interactive interface 101. When the target object moves freely, the face image in the interactive interface can move with the movement of the target object, thereby achieving smooth tracking of the target object. At present, commonly used face tracking and smoothing algorithms include the MTCNN algorithm, the Laplace algorithm, the particle filter algorithm, etc., and a combination of a Kalman filter and the Hungarian algorithm may also be used, which is not limited in the embodiments of the present disclosure.
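The tracking and smoothing step could, for illustration only, be approximated by an exponential moving average of the detected face centre; the algorithms actually listed above (for example a Kalman filter combined with the Hungarian algorithm) would take its place in practice.

```python
class FacePositionSmoother:
    """Toy stand-in for the face tracking/smoothing step: exponentially
    smooth the detected face centre so that the image representing the
    target object moves smoothly on the interactive interface."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # smoothing factor, an illustrative choice
        self.position = None    # last smoothed (x, y) centre

    def update(self, x, y):
        if self.position is None:
            self.position = (float(x), float(y))
        else:
            px, py = self.position
            self.position = (px + self.alpha * (x - px),
                             py + self.alpha * (y - py))
        return self.position
```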
According to an embodiment, the attribute information of the target object may include, but is not limited to, the age and gender of the target object. The step of acquiring the age and gender of the target object includes acquiring the face information of the target object by recognizing an image including the face of the target object, and acquiring the age of the target object according to an age recognition algorithm based on the face information. Commonly used age recognition algorithms include SVM (Support Vector Machine), CNN, and so on. Further, the gender of the target object can also be acquired according to a gender recognition algorithm based on the face information. Commonly used gender recognition algorithms include SVM, CNN, and so on. The embodiments of the present disclosure do not limit the age recognition algorithm or the gender recognition algorithm used, and any suitable method may be used.
According to an embodiment, the emotion information of the target object may be represented by the emotion value of the target object. The step of acquiring the emotion value of the target object includes acquiring the face information of the target object by recognizing an image including the face of the target object, and acquiring the emotion value of the target object according to an emotion recognition algorithm based on the face information. Commonly used emotion recognition algorithms include the KNN (K-Nearest Neighbor) algorithm, the SVM algorithm, clustering algorithms, genetic algorithms, the PSO (Particle Swarm Optimization) algorithm, CNN algorithms, the MTCNN algorithm, and so on. The embodiments of the present disclosure do not limit the emotion recognition algorithm used, and any suitable method may be used.
According to an embodiment, eight emotions of the target object can be recognized by the emotion recognition algorithm, including neutral, happy, surprised, sad, angry, scared, disgusted and contemptuous, and each emotion corresponds to a different emotion value. The actual emotion of the target object may be complicated, and may be a complex state in which various emotions are intertwined. For example, the target object may be in a state of contempt while its overall emotion remains stable without fluctuations, that is, the target object is still in a neutral state. Therefore, the actual emotion category of the target object also needs to be comprehensively judged according to the above emotion values. Generally, neutral and surprised can be considered neutral emotions, that is, when the target object is in a neutral or surprised state, the target object is calm as a whole and will not have large emotional fluctuations. Sad, angry, scared, disgusted and contemptuous can be considered negative emotions, that is, when the target object is in a sad, angry, scared, disgusted or contemptuous state, the emotion of the target object is low, or has large negative fluctuations. Happy can be considered a positive emotion; it is easy to understand that when the target object is in a happy state, the emotion of the target object fluctuates positively or is high.
The emotion recognition algorithm represents the emotion of the target object with different emotion values. Generally, negative emotions such as sad, angry, scared, disgusted and contemptuous have lower emotion values, positive emotions such as happy have higher emotion values, and neutral emotions such as neutral and surprised have emotion values between those of positive and negative emotions. Therefore, the emotion of the target object can be represented by different numerical values.
Next, in step S230, controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object includes determining the display image of the target object on the interactive interface according to the attribute information of the target object. According to an embodiment, the interactive interface 101 is further divided into different object areas, and the display method according to the embodiments of the present disclosure can control and display the different object areas separately, thereby increasing the flexibility of the display.
FIGS. 3A and 3B show display examples of an interactive interface according to an embodiment of the present disclosure. As shown in FIGS. 3A and 3B, the interactive interface 101 includes a first object area 1011 and a second object area 1012. The first object area 1011 may be an area configured to display the target object, and presents the display image of the target object on the interactive interface 101. The second object area 1012 may be an area configured to display content other than the display image of the target object, for example, a background area displayed on the interactive interface 101. The position of the first object area 1011 on the interactive interface 101 may change, and the first object area 1011 may move on the interactive interface 101, thereby providing a dynamic display effect.
Determining the display image of the target object on the interactive interface according to the attribute information of the target object may be determining the display image of the target object in the first object area of the display interface according to the age and gender of the target object. As shown in FIGS. 3A and 3B, the tulip shown in the figures is the display image determined according to the age and gender of the target object and used to represent the target object.
Next, in step S230, controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object further includes changing the display image in the first object area according to the emotion information of the target object. According to an embodiment, changing the display image in the first object area according to the emotion information of the target object may include determining the emotion feature value of the target object according to the emotion value of the target object, and controlling the display of the display image based on a comparison between the emotion feature value of the target object and emotion thresholds. When the emotion feature value is smaller than a first emotion threshold, the display image of the target object is displayed in a first display manner as the emotion feature value decreases. When the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to a second emotion threshold, the display image of the target object is maintained. When the emotion feature value is greater than the second emotion threshold, the display image of the target object is displayed in a second display manner as the emotion feature value increases.
Since the actual emotional state of an object is complex, the comprehensive emotional state of the target object is represented according to the recognized emotion information (emotion values) of the target object. The first emotion threshold and the second emotion threshold are thresholds predetermined according to the emotional state of the object; the value of the first emotion threshold is smaller than that of the second emotion threshold, and the values of the first emotion threshold and the second emotion threshold can be adjusted according to the actual situation of different objects. The first display manner and the second display manner may be display manners associated with the display image, and may be determined in combination with the specific display image. For example, when the display image is a tulip as shown in FIGS. 3A and 3B, the first display manner may be that the tulip gradually closes from an open state (FIG. 3A), and the second display manner may be that the tulip gradually blooms from an open or closed state (FIG. 3B). The gradual closing of the tulip from the open state may indicate that the target object is in a negative emotional state and is unwilling to communicate. The gradual blooming of the tulip from an open or closed state indicates that the target object is in a positive emotional state and is willing to communicate. In addition, according to an embodiment, the display of the display image is adjusted based on the change of the emotion feature value, and the change of the target object's emotion can be presented through the change of the display image. For example, when the tulip gradually closes from the open state, it means that the emotion of the target object is becoming lower and lower; when the tulip gradually blooms from an open or closed state, it means that the emotion of the target object is gradually rising.
According to the embodiments of the present disclosure, controlling the display of the display image in the first display manner and the second display manner respectively can not only represent the emotional state of the target object more accurately and present the change of the emotion of the target object, which is beneficial to real-time monitoring of the emotional state of the target object, but can also increase the interest of the display, which helps mobilize the emotion of the target object so as to provide auxiliary treatment for the target object.
It is easy to understand that the emotion thresholds are not limited to the first emotion threshold and the second emotion threshold, and the display manners are not limited to the first display manner and the second display manner; more display manners can be defined according to the emotional state of the target object to provide richer information about the target object. For example, when the emotion feature value is smaller than the first emotion threshold and gradually increases without reaching the first emotion threshold, the display image of the target object can be displayed in the following manner (still taking the tulip in FIG. 3A as an example): the tulip shows a change toward the open state while remaining closed, that is, as shown in FIG. 3A, the tulip may initially be a small closed bud, and as the emotion feature value increases, the closed bud remains unopened but grows larger, showing a tendency to open. Similarly, when the emotion feature value is greater than the second emotion threshold and gradually decreases without falling to the second emotion threshold, the display image of the target object can be displayed in the following manner (still taking the tulip in FIG. 3B as an example): the tulip shows a change toward the closed state while remaining open, that is, as shown in FIG. 3B, the tulip may initially be a large open flower, and as the emotion feature value decreases, the large flower becomes smaller but remains open.
The embodiments of the present disclosure provide a display method of an interactive interface based on emotion recognition. The display method can recognize the emotion of the target object in real time and dynamically adjust the display of the interactive interface. When a face is detected in the image, a corresponding display image appears, and the target object can be classified differently by combining the age recognition algorithm and the gender recognition algorithm so as to show different images. Therefore, the interest of the display is increased by displaying an appropriate picture, so as to alleviate and eliminate the resistance of the target object to interacting through the interactive interface, so that the emotional state information of the target object can be obtained more accurately, which is beneficial to evaluating and treating the emotional state of the target object.
FIG. 4 shows another flowchart of a display method of an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 4, the display method 400 may include the following steps.
In step S410, a target object to be interacted with through the interactive interface is determined from at least one object.
In step S420, attribute information of the target object and emotion information of the target object are acquired.
In step S430, the display of a first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
In step S440, emotion information of each of the at least one object is acquired.
In step S450, the display of a second object area of the interactive interface is controlled according to the emotion information of each of the at least one object.
The operations performed in steps S410, S420 and S430 are the same as those performed in steps S210, S220 and S230 of the display method 200, and are not described in detail here. In addition, steps S440 and S450 may be performed in parallel with steps S420 and S430. Steps S440 and S450 are described in detail below with reference to embodiments.
According to an embodiment, controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object specifically includes: determining the emotion feature value of each of the at least one object according to the emotion value of each of the at least one object; obtaining the average value of the emotion feature values according to the emotion feature value of each of the at least one object; when the average value of the emotion feature values is smaller than a first emotion average threshold, displaying the background pattern in the second object area in a third manner as the average value of the emotion feature values decreases; when the average value of the emotion feature values is greater than or equal to the first emotion average threshold and less than or equal to a second emotion average threshold, maintaining the background pattern in the second object area; and when the average value of the emotion feature values is greater than the second emotion average threshold, displaying the background pattern in the second object area in a fourth manner as the average value of the emotion feature values increases.
In this embodiment, the average value of the emotion feature values is a value obtained by averaging the emotion feature values of every object, including the target object, in the acquired image, and it can roughly represent the overall emotional state of all objects in the image. FIGS. 5A and 5B show another display example of an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 5A, when the average value of the emotion feature values is smaller than the first emotion average threshold, it indicates that the overall emotional state of all objects in the image is low and negative, so elements such as wind or rain can be added to the second object area (for example, the background image). As shown in FIG. 5B, when the average value of the emotion feature values is greater than the second emotion average threshold, it indicates that the overall emotional state of all objects in the image is relatively positive, so elements such as sunlight or a rainbow can be added to the second object area (for example, the background image).
By adjusting the display of the second object area according to the average value of the emotion feature values of the crowd, the emotional information of other people in the scene where the target object is located can be better shown. Since a person's emotion is easily influenced by the environment, according to the embodiments of the present disclosure, the emotion of the target object can be monitored more comprehensively, and diversified information can be provided for analyzing and treating the target object.
It is easy to understand that the emotion average thresholds are not limited to the first emotion average threshold and the second emotion average threshold, and the display manners are not limited to the third display manner and the fourth display manner; more display manners can be defined according to the emotional state of the crowd in which the target object is located, to provide richer information about the target object.
In addition, in the case of single-person interaction, the display of the second object area may also be adjusted based on the emotion feature value of the target object itself. In this case, the emotion feature value of the target object is taken as the average value of the emotion feature values.
FIG. 6 shows an example of a display device of an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 6, the display device 600 of the interactive interface includes an image acquisition module 601, a face detection module 602, an age detection module 603, a gender detection module 604, a classification module 605, a tracking detection module 606, an emotion recognition module 607, a single-person human-computer interaction module 608, a multi-person human-computer interaction module 609 and an emotion record analysis module 610. The image acquisition module 601 is configured to receive an image of at least one object captured by an image sensor. The face detection module 602 is configured to recognize the image to acquire face information of the at least one object in the image, and to determine the target object. The age detection module 603 is configured to acquire the age of the target object according to an age recognition algorithm based on the face information of the target object in the face information of the at least one object. The gender detection module 604 is configured to acquire the gender of the target object according to a gender recognition algorithm based on the face information of the target object in the face information of the at least one object. The classification module 605 is configured to determine the display image of the target object according to the recognized age information and gender information. In this example, the display image of the target object can be determined according to the information shown in Table 1. For example, when the gender of the target object is female and the age is between 30 and 50 years old, a tulip can be used as the display image of the target object.
Table 1
Gender | Age   | Image    | Gender | Age   | Image
Male   | 0-15  | grass    | Female | 0-15  | flower bud
Male   | 15-30 | sapling  | Female | 15-30 | rose
Male   | 30-50 | big tree | Female | 30-50 | tulip
Male   | 50-65 | eagle    | Female | 50-65 | briar rose
Male   | 65+   | seagull  | Female | 65+   | peony
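A small sketch of the classification of Table 1 follows, mapping the recognized gender and age to a display image; the English image names and the handling of ages exactly on a band boundary are illustrative choices.

```python
# Sketch of the classification module 605 based on Table 1.
# Age bands follow the table; how boundary ages are assigned is an
# implementation choice made here for illustration only.
AGE_BANDS = [15, 30, 50, 65]
DISPLAY_IMAGES = {
    "male":   ["grass", "sapling", "big tree", "eagle", "seagull"],
    "female": ["flower bud", "rose", "tulip", "briar rose", "peony"],
}

def display_image(gender, age):
    band = sum(age >= limit for limit in AGE_BANDS)
    return DISPLAY_IMAGES[gender][band]

# Example: a female target object aged 40 is represented by a tulip.
assert display_image("female", 40) == "tulip"
```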
跟踪检测模块606被配置为对采用人脸跟踪和平滑算法对目标对象进行跟踪检测,在图像上识别出该目标对象的人脸的位置,以便在交互界面上显示代表该目标对象的人脸的形象。情绪识别模块607被配置为从跟踪检测模块606实时地获取人脸信息,并根据情绪识别算法来获取目标对象的情绪值。
单人人机交互模块608提供单人场景下的交互界面,其被配置为根据情绪识别模块607所识别的目标对象的情绪值进行如下处理:
Let a1, a2, a3, a4, a5, a6, a7 and a8 denote the emotion values of the eight emotions of the target object. The emotion values are fitted according to expression (1) below to obtain the emotion feature value of the target object, with all values rounded up during the fitting:

w1 = a2,  w2 = a8 + 2a4 + a5 + a6 + a7 - a3

If w1 > w2, then W_target = w1
If w1 = w2, then W_target = k        (1)
If w1 < w2, then W_target = w2

where w1 and w2 are preset fitting variables, k is a preset constant, and W_target is the emotion feature value of the target object.
Further, if the first emotion threshold is set to k1 = 30 and the second emotion threshold to k2 = 80, the preset constant may be taken as k = 50, so that when w1 = w2 the emotion feature value of the target object lies between k1 and k2.
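A non-limiting Python sketch of expression (1) with these example values is shown below; rounding up is applied to the input values here, which is one possible reading of "all values are rounded up during the fitting":

```python
import math

def emotion_feature_value(a, k=50):
    """Fit the eight emotion values a = [a1, ..., a8] into the emotion feature
    value W_target according to expression (1)."""
    a1, a2, a3, a4, a5, a6, a7, a8 = (math.ceil(v) for v in a)  # round every value up
    w1 = a2
    w2 = a8 + 2 * a4 + a5 + a6 + a7 - a3
    if w1 > w2:
        return w1
    if w1 == w2:
        return k  # preset constant, e.g. k = 50, between k1 = 30 and k2 = 80
    return w2
```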
The single-person human-computer interaction module 608 further controls the display of the first object region according to the following conditions (a sketch follows these conditions):
When W_target < k1, the tulip gradually closes as W_target decreases.
When k1 ≤ W_target ≤ k2, the tulip remains in its ordinary open state without change.
When W_target > k2, the tulip gradually blooms as W_target increases.
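These three conditions can be read as the following sketch (the state names are illustrative only):

```python
def tulip_state(w_target, k1=30, k2=80):
    """Illustrative single-person rule: map W_target to the tulip's display state."""
    if w_target < k1:
        return "closing"    # gradually closes as W_target decreases
    if w_target <= k2:
        return "open"       # remains in the ordinary open state
    return "blooming"       # gradually blooms as W_target increases
```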
The multi-person human-computer interaction module 609 provides an interactive interface in a multi-person scenario. The process by which it determines the emotion feature value of the target object and controls the display of the target object's display image according to that value is the same as for the single-person human-computer interaction module 608 and is not repeated here.
In addition, the multi-person human-computer interaction module 609 is further configured to perform the following processing according to the emotion value of each object recognized by the emotion recognition module 607:
Let W1, W2, W3, ..., Wn denote the emotion feature values of the n objects in the captured image. The average of the emotion feature values of all objects in the entire scene is then given by expression (2):

W_average = (W1 + W2 + ... + Wn) / n        (2)
Further, let h1 denote the first emotion average threshold, which may be 30, and let h2 denote the second emotion average threshold, which may be 80.
The multi-person human-computer interaction module 609 further controls the display of the second object region according to the following conditions:
When W_average < h1, elements such as wind and rain are added to the background of the displayed picture.
When h1 ≤ W_average ≤ h2, the background of the displayed picture is kept unchanged.
When W_average > h2, elements such as sunshine and a rainbow are added to the background of the displayed picture.
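A corresponding sketch of the multi-person rule, combining expression (2) with the conditions above (the element names are illustrative), might look like this:

```python
def background_elements(feature_values, h1=30, h2=80):
    """Illustrative multi-person rule: average the emotion feature values per
    expression (2) and decide which elements to add to the background."""
    w_average = sum(feature_values) / len(feature_values)
    if w_average < h1:
        return ["wind", "rain"]          # overall negative emotional state
    if w_average <= h2:
        return []                        # keep the background unchanged
    return ["sunshine", "rainbow"]       # overall positive emotional state
```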
The emotion recording and analysis module 610 is configured to record each target object's basic information and the emotional state information gathered during monitoring.
FIG. 7 shows another example of a display device for an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 7, the display device 700 for an interactive interface includes a memory 701 and a processor 702. The memory 701 is configured to store program instructions. The processor 702 is configured to execute the program instructions to perform the following operations: determining, from at least one object, a target object to interact through the interactive interface; obtaining attribute information of the target object and emotion information of the target object; and controlling the display of a first object region of the interactive interface according to the attribute information and the emotion information of the target object. In addition, the processor 702 is further configured to obtain emotion information of each of the at least one object and to control the display of a second object region of the interactive interface according to the emotion information of each of the at least one object.
Furthermore, although multiple components are shown in the block diagrams above, those skilled in the art should understand that embodiments of the present disclosure may be implemented with one or more of these components absent or with certain components combined.
Furthermore, although the steps have been described above in the order shown in the drawings, those skilled in the art should understand that embodiments of the present disclosure may be implemented without one or more of the steps described above.
As can be understood from the foregoing, the electronic components of one or more systems or devices may include, but are not limited to, at least one processing unit, a memory, and a communication bus or communication means coupling the various components, including the memory, to the processing unit. A system or device may include or have access to various device-readable media. The system memory may include device-readable storage media in the form of volatile and/or non-volatile memory (e.g., read-only memory (ROM) and/or random-access memory (RAM)). By way of example and not limitation, the system memory may further include an operating system, application programs, other program modules, and program data.
Embodiments may be implemented as a system, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment or an embodiment including software (including firmware, resident software, microcode, etc.), which may collectively be referred to herein as a "circuit", "module", or "system". Furthermore, embodiments may take the form of a program product embodied in at least one device-readable medium having device-readable program code embodied thereon.
A combination of device-readable storage media may be used. In the context of this document, a device-readable storage medium ("storage medium") may be any tangible, non-signal medium that can contain or store a program consisting of program code for use by or in connection with an instruction execution system, apparatus, or device. For the purposes of the present disclosure, a storage medium or device shall be construed as non-transitory, i.e., excluding signals or propagation media.
The present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to explain the principles and practical applications, and to enable others of ordinary skill in the art to understand the various embodiments of the present disclosure with the various modifications suited to the particular use contemplated.

Claims (12)

  1. A display method for an interactive interface, comprising:
    determining, from at least one object, a target object to interact through the interactive interface;
    obtaining attribute information of the target object and emotion information of the target object; and
    controlling display of a first object region of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
  2. The display method according to claim 1, wherein determining, from at least one object, a target object to interact through the interactive interface comprises:
    performing tracking detection on the at least one object to obtain an image of the at least one object;
    recognizing the image to obtain face information of the at least one object in the image; and
    determining, based on the face information of the at least one object, the object that first appears in the image or the foremost object among the at least one object in the image as the target object to interact through the interactive interface.
  3. The display method according to claim 2, wherein the attribute information comprises age and gender, and obtaining the attribute information of the target object comprises:
    obtaining the age of the target object according to an age recognition algorithm based on the face information of the target object among the face information of the at least one object; and
    obtaining the gender of the target object according to a gender recognition algorithm based on the face information of the target object among the face information of the at least one object.
  4. The display method according to claim 2, wherein the emotion information comprises an emotion value, and obtaining the emotion information of the target object comprises:
    obtaining the emotion value of the target object according to an emotion recognition algorithm based on the face information of the target object among the face information of the at least one object.
  5. The display method according to claim 1, wherein controlling the display of the first object region of the interactive interface according to the attribute information of the target object and the emotion information of the target object comprises:
    determining, according to the attribute information of the target object, a display image of the target object in the first object region of the interactive interface; and
    changing the display image in the first object region according to the emotion information of the target object.
  6. The display method according to claim 5, wherein the emotion information comprises an emotion value, and changing the display image in the first object region according to the emotion information of the target object comprises:
    determining an emotion feature value of the target object according to the emotion value of the target object;
    when the emotion feature value is less than a first emotion threshold, displaying the display image in a first display manner as the emotion feature value decreases;
    when the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to a second emotion threshold, maintaining the display image; and
    when the emotion feature value is greater than the second emotion threshold, displaying the display image in a second display manner as the emotion feature value increases.
  7. The display method according to claim 1, further comprising:
    obtaining emotion information of each of the at least one object; and
    controlling display of a second object region of the interactive interface according to the emotion information of each of the at least one object.
  8. The display method according to claim 7, wherein the emotion information comprises an emotion value, and controlling the display of the second object region of the interactive interface according to the emotion information of each of the at least one object comprises:
    determining an emotion feature value of each of the at least one object according to the emotion value of each of the at least one object;
    obtaining an average of the emotion feature values according to the emotion feature value of each of the at least one object;
    when the average of the emotion feature values is less than a first emotion average threshold, displaying a background pattern in the second object region in a third manner as the average of the emotion feature values decreases;
    when the average of the emotion feature values is greater than or equal to the first emotion average threshold and less than or equal to a second emotion average threshold, maintaining the background pattern in the second object region; and
    when the average of the emotion feature values is greater than the second emotion average threshold, displaying the background pattern in the second object region in a fourth manner as the average of the emotion feature values increases.
  9. The display method according to claim 4, wherein the emotion recognition algorithm comprises one of a K-nearest-neighbor algorithm, a support vector machine algorithm, a clustering algorithm, a genetic algorithm, a particle swarm optimization algorithm, a convolutional neural network algorithm, and a multi-task convolutional neural network algorithm.
  10. A display device for an interactive interface, comprising:
    a memory configured to store program instructions; and
    a processor configured to execute the program instructions to perform the following operations:
    determining, from at least one object, a target object to interact through the interactive interface;
    obtaining attribute information of the target object and emotion information of the target object; and
    controlling display of a first object region of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
  11. The display device according to claim 10, wherein the processor is further configured to:
    obtain emotion information of each of the at least one object; and
    control display of a second object region of the interactive interface according to the emotion information of each of the at least one object.
  12. A computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 9.
PCT/CN2021/098954 2020-07-29 2021-06-08 Display method and device for interactive interface, and storage medium WO2022022077A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/773,371 US11960640B2 (en) 2020-07-29 2021-06-08 Display method and display device for interactive interface and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010743658.7A CN114093461A (zh) 2020-07-29 2020-07-29 Display method and device for interactive interface, and storage medium
CN202010743658.7 2020-07-29

Publications (1)

Publication Number Publication Date
WO2022022077A1 true WO2022022077A1 (zh) 2022-02-03

Family

ID=80037458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098954 WO2022022077A1 (zh) 2020-07-29 2021-06-08 Display method and device for interactive interface, and storage medium

Country Status (3)

Country Link
US (1) US11960640B2 (zh)
CN (1) CN114093461A (zh)
WO (1) WO2022022077A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827339A (zh) * 2022-04-02 2022-07-29 维沃移动通信有限公司 Message output method and device, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186327A (zh) * 2011-12-28 2013-07-03 宇龙计算机通信科技(深圳)有限公司 Unlocking method and device for changing standby interface
CN104063147A (zh) * 2014-06-10 2014-09-24 百度在线网络技术(北京)有限公司 Method and device for controlling pages in mobile terminal
CN105955490A (zh) * 2016-06-28 2016-09-21 广东欧珀移动通信有限公司 Augmented-reality-based information processing method and device, and mobile terminal
CN110070879A (zh) * 2019-05-13 2019-07-30 吴小军 Method for producing intelligent expressions and sound-sensing games based on voice-changing technology
US20200110927A1 (en) * 2018-10-09 2020-04-09 Irene Rogan Shaffer Method and apparatus to accurately interpret facial expressions in american sign language
CN111326235A (zh) * 2020-01-21 2020-06-23 京东方科技集团股份有限公司 Emotion regulation method, device and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202749023U (zh) * 2012-07-26 2013-02-20 王晓媛 Patient emotion barometer
US10475351B2 (en) * 2015-12-04 2019-11-12 Saudi Arabian Oil Company Systems, computer medium and methods for management training systems
CN105930035A (zh) * 2016-05-05 2016-09-07 北京小米移动软件有限公司 Method and device for displaying interface background
US10732722B1 (en) * 2016-08-10 2020-08-04 Emaww Detecting emotions from micro-expressive free-form movements
CN111048016B (zh) * 2018-10-15 2021-05-14 广东美的白色家电技术创新中心有限公司 Product display method, device and system
KR102689884B1 (ko) * 2018-11-13 2024-07-31 현대자동차주식회사 Vehicle and control method thereof
US10860864B2 (en) * 2019-01-16 2020-12-08 Charter Communications Operating, Llc Surveillance and image analysis in a monitored environment
CN109875579A (zh) * 2019-02-28 2019-06-14 京东方科技集团股份有限公司 Emotional health management system and emotional health management method
CN111797249A (zh) * 2019-04-09 2020-10-20 华为技术有限公司 Content pushing method, device and apparatus
CN111222444A (zh) * 2019-12-31 2020-06-02 的卢技术有限公司 Augmented reality head-up display method and system taking driver emotion into account


Also Published As

Publication number Publication date
CN114093461A (zh) 2022-02-25
US11960640B2 (en) 2024-04-16
US20220404906A1 (en) 2022-12-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850941

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21850941

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/08/2023)
