WO2022022077A1 - Display method and apparatus for an interactive interface, and storage medium - Google Patents
Display method and apparatus for an interactive interface, and storage medium
- Publication number: WO2022022077A1
- Application number: PCT/CN2021/098954
- Authority: WIPO (PCT)
- Prior art keywords: target object, emotion, display, information, emotional
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. using a touch-screen or digitiser for input of commands through traced gestures
- G06F9/451—Execution arrangements for user interfaces
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06V40/174—Facial expression recognition
- G06V40/176—Facial expression recognition: Dynamic expression
- G06V40/178—Estimating age from a face image; using age information for improving recognition
- G10L25/63—Speech or voice analysis specially adapted for estimating an emotional state
- G16H20/70—ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Definitions
- the present disclosure relates to the field of human-computer interaction, and in particular, to a method for displaying an interactive interface, a display device, and a computer-readable storage medium.
- with the development of artificial intelligence (AI) technology, human-computer interaction products that can monitor and manage people's emotions have been applied in the field of health care.
- such a human-computer interaction product can replace or assist medical personnel in assessing the patient's mental state for further treatment.
- however, the recorded results may be inaccurate due to the patient's resistance to the interactive product.
- a first aspect of the embodiments of the present disclosure provides a method for displaying an interactive interface, including:
- the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
- determining a target object to be interacted with through the interactive interface from the at least one object includes:
- an object appearing for the first time in the image or an object located at the front of the at least one object in the image is determined as a target object to be interacted with through the interactive interface.
- the attribute information includes age and gender
- acquiring the attribute information of the target object includes:
- the age of the target object is acquired according to an age recognition algorithm, and the gender of the target object is acquired according to a gender recognition algorithm, both based on the face information of the target object.
- the emotion information includes an emotion value
- acquiring the emotion information of the target object includes:
- the emotion value of the target object is acquired according to an emotion recognition algorithm.
- controlling the display of the first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object includes:
- the display image of the target object in the first object area is determined according to the attribute information of the target object, and the display image is changed according to the emotion information of the target object.
- the emotion information includes an emotion value
- changing the display image in the first object area according to the emotion information of the target object includes:
- an emotion feature value is determined from the emotion value; the display image is displayed in a first display manner as the emotion feature value decreases when it is below a first emotion threshold, is maintained when it lies between the first and second emotion thresholds, and is displayed in a second display manner as the emotion feature value increases when it exceeds the second emotion threshold.
- the method for displaying an interactive interface further includes:
- the display of the second object area of the interactive interface is controlled according to the emotion information of each of the at least one object.
- the emotion information includes an emotion value
- controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object includes:
- the background pattern in the second object area is displayed in a third manner as the average of the emotion feature values decreases when the average is below a first emotion average threshold, is maintained when the average lies between the first and second emotion average thresholds, and is displayed in a fourth manner as the average increases when it exceeds the second emotion average threshold.
- the emotion recognition algorithm includes one of a K-nearest neighbor algorithm, a support vector machine algorithm, a clustering algorithm, a genetic algorithm, a particle swarm optimization algorithm, a convolutional neural network algorithm, and a multi-task convolutional neural network algorithm.
- a second aspect of the embodiments of the present disclosure provides a display device for an interactive interface, including:
- a memory configured to store program instructions
- a processor configured to execute the program instructions to:
- the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
- the processor is further configured to:
- the display of the second object area of the interactive interface is controlled according to the emotion information of each of the at least one object.
- a third aspect of embodiments of the present disclosure provides a computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to perform the display method of the interactive interface provided in the first aspect of the embodiments of the present disclosure.
- in the embodiments of the present disclosure, the emotion information and attribute information of a target object to be interacted with through the interactive interface are obtained and combined to control the display of the first object area of the interactive interface.
- displaying an appropriate picture in this way increases the interest of the display, alleviates and eliminates the target object's resistance to interacting through the interactive interface, and allows the emotional state information of the target object to be obtained more accurately, which is beneficial to the evaluation and treatment of the target object's emotional state.
- FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure
- FIG. 2 shows a flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure
- FIGS. 3A and 3B illustrate display examples of an interactive interface according to an embodiment of the present disclosure
- FIG. 4 shows another flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure
- FIGS. 5A and 5B illustrate another display example of an interactive interface according to an embodiment of the present disclosure
- FIG. 6 shows an example of a display device for an interactive interface according to an embodiment of the present disclosure.
- FIG. 7 shows another example of a display device for an interactive interface according to an embodiment of the present disclosure.
- the phrase "at least one of A, B, and C" shall include, but not be limited to, systems with A alone, B alone, C alone, A and B, A and C, B and C, or A, B, and C together.
- the interactive interface according to the embodiments of the present disclosure can be provided and applied in a human-computer interaction system capable of monitoring or managing human emotions, and can be used to obtain emotional state information of a target object and to evaluate and treat the target object's emotional state.
- the description in the following embodiments takes the above-mentioned human-computer interaction system as an example, but those skilled in the art should understand that the interactive interface and its display method and display device in the embodiments of the present disclosure are not limited thereto and can be applied to any other suitable product or application.
- FIG. 1 shows a schematic diagram of a human-computer interaction system provided with an interactive interface according to an embodiment of the present disclosure
- FIG. 2 shows a flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure.
- the human-computer interaction system 100 includes an interactive interface 101 and a functional area 102 .
- the interactive interface 101 can provide a screen display to the user based on display technology, and the functional area 102 can receive the user's input and operate the human-computer interaction system 100 based on that input, such as turning the human-computer interaction system 100 on or off, setting parameters of the human-computer interaction system 100, or selecting functions of the human-computer interaction system 100.
- the human-computer interaction system 100 also includes an image sensor 103, which may be configured to capture an object and provide the interactive interface 101 with an image containing the object, so that the image can be recognized and an object to be interacted with through the interactive interface 101 can be selected.
- the human-computer interaction system 100 in FIG. 1 is only an example, and does not constitute a limitation on the human-computer interaction system 100 and the interactive interface 101 provided on the human-computer interaction system 100 .
- the human-computer interaction system 100 may be implemented by a mobile terminal such as a smart phone and an application installed on the mobile terminal.
- the function of the interactive interface 101 can be realized by the screen of the smart phone
- the function of the functional area 102 can be realized by the operation of the application
- the function of the image sensor 103 can be realized by the camera of the smart phone.
- the method 200 for displaying an interactive interface includes the following steps.
- step S210 a target object to be interacted with through the interactive interface is determined from at least one object.
- step S220 the attribute information of the target object and the emotion information of the target object are acquired.
- step S230 the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
- the display method 200 may determine a target object to be interacted with through an interactive interface in a scene where multiple objects exist.
- the method for determining a target object from at least one object may include: tracking and detecting the at least one object to obtain an image of the at least one object, recognizing the image to obtain face information of the at least one object in the image, and, based on the face information of the at least one object, determining the object that appears for the first time in the image, or the frontmost object among the at least one object in the image, as the target object to be interacted with through the interactive interface.
- the display method 200 may capture at least one object present within the field of view of the image sensor 103, i.e., use the image sensor 103 to capture an image of the object in real time. A face detection algorithm is then used to detect the human faces in the captured image.
- face detection algorithms include the AInnoFace face detection algorithm, cascaded CNN (convolutional neural network) face detection algorithms, OpenCV face detection, the SeetaFace face detection algorithm, the libfacedetection algorithm, the FaceNet algorithm, the MTCNN (multi-task convolutional neural network) face detection algorithm, etc.
- the embodiment of the present disclosure does not limit the face detection algorithm used, and any suitable method may be used to detect the face.
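As an illustration of this face detection step, the minimal sketch below uses OpenCV's bundled Haar cascade detector (one of the options listed above); the camera index, the cascade choice, and the largest-box heuristic standing in for the "frontmost" object are assumptions rather than the disclosure's prescribed implementation.

```python
import cv2

# Illustrative face detection using OpenCV's bundled Haar cascade
# (one of the detectors listed above; MTCNN, SeetaFace, etc. would also work).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def detect_faces(frame):
    """Return a list of (x, y, w, h) face boxes found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Capture a frame in real time from the image sensor (camera index 0 assumed).
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    faces = detect_faces(frame)
    # The largest box is used here as a stand-in for the "frontmost" object;
    # the disclosure also allows choosing the first-appearing object instead.
    target = max(faces, key=lambda box: box[2] * box[3], default=None)
capture.release()
```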
- the display method of the interactive interface according to the embodiments of the present disclosure can provide displays for two kinds of interaction scenarios: single-person and multi-person.
- the single-person interaction scenario targets the situation where there is only a single interaction object in the scene, and interacts with that single object.
- in the multi-person interaction scenario, an interaction object can be selected from among multiple interaction objects when there are multiple interaction objects in the scene.
- whether the interaction is based on a single-person interaction scenario or a multi-person interaction scenario can be pre-selected by setting system parameters. For example, for a scene with multiple interacting objects, the interaction can be set as a multi-person interaction scenario.
- the display method of the interactive interface can determine the target object in different ways.
- an object that appears for the first time in a captured image can be determined as a target object to be interacted with through an interactive interface, which is suitable for both a single-person interaction scene and a multi-person interaction scene.
- the frontmost object among the multiple objects in the captured image can also be determined as the target object to be interacted with through the interactive interface, which is suitable for a multi-person interaction scenario in which multiple interaction objects appear in the captured image at the same time.
- the image sensor 103 may be a depth image sensor, and the captured image may be a depth image, from which the frontmost object can be determined.
- in step S220, by tracking and detecting the target object, the image of the target object and the face information of the target object in the image can be obtained in real time, and the attribute information and emotion information of the target object can be acquired in real time from the face information.
- the target object can also be tracked, so as to obtain the emotional value of the target object in real time.
- face tracking and smoothing algorithms are used to track and detect the target object.
- after the display method detects the faces in the image captured by the image sensor 103 and determines the target object, the position of the face of the target object is identified in the image, and at the same time the interactive interface 101 displays an image representing the face of the target object.
- the face image in the interactive interface can move with the movement of the target object, so as to achieve smooth tracking of the target object.
- commonly used face tracking and smoothing algorithms include MTCNN algorithm, Laplace algorithm, particle filter algorithm, etc., and a combined technology of Kalman filter and Hungarian algorithm may also be used, which is not limited in this embodiment of the present disclosure.
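As a simplified illustration of the smoothing idea only (not the Kalman filter plus Hungarian algorithm combination mentioned above), the tracked face position could be low-pass filtered so that the face image shown on the interactive interface follows the target object without jitter; the smoothing factor below is an assumed value.

```python
class SmoothedFacePosition:
    """Exponentially smooth the tracked face-box centre so that the face image
    shown on the interactive interface follows the target object without jitter."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # smoothing factor (assumed value)
        self.position = None      # smoothed (x, y) centre of the face box

    def update(self, face_box):
        x, y, w, h = face_box
        cx, cy = x + w / 2.0, y + h / 2.0
        if self.position is None:
            self.position = (cx, cy)
        else:
            px, py = self.position
            self.position = (px + self.alpha * (cx - px),
                             py + self.alpha * (cy - py))
        return self.position
```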
- the attribute information of the target object may include the age and gender of the target object, but is not limited thereto.
- the step of obtaining the age and gender of the target object includes obtaining face information of the target object by recognizing an image including the face of the target object, and obtaining the age of the target object according to an age recognition algorithm based on the face information.
- age recognition algorithms include SVM (support vector machine), CNN, and so on.
- the gender of the target object can also be acquired according to the gender recognition algorithm based on the face information.
- Commonly used gender recognition algorithms include SVM, CNN, etc.
- the embodiments of the present disclosure do not limit the age recognition algorithm and gender recognition algorithm used, and any suitable method may be used.
- the emotion information of the target object may be represented by the emotion value of the target object.
- the step of acquiring the emotion value of the target object includes acquiring the facial information of the target object by recognizing an image including the human face of the target object, and acquiring the emotion value of the target object according to an emotion recognition algorithm based on the facial information.
- emotion recognition algorithms include the KNN (K-nearest neighbor) algorithm, the SVM algorithm, clustering algorithms, genetic algorithms, the PSO (particle swarm optimization) algorithm, CNN algorithms, the MTCNN algorithm, etc. This embodiment of the present disclosure does not limit the emotion recognition algorithm used, and any suitable method may be used.
- eight emotions of the target object can be identified by an emotion recognition algorithm, including neutral, happy, surprised, sad, angry, scared, disgusted, and contemptuous, and each emotion corresponds to a different emotion value.
- the emotion of the target object may be a complex state in which various emotions are intertwined. For example, the target object may show contempt while remaining emotionally stable overall with no emotional fluctuation, that is, the target object is essentially still in a neutral state. Therefore, it is also necessary to comprehensively judge the actual emotion category of the target object according to the above emotion values.
- neutrality and surprise can be considered as neutral emotions, that is, when the target object is in a state of neutrality or surprise, the target object is in a calm state as a whole, and there will be no large emotional fluctuations.
- Sadness, anger, fear, disgust, and contempt can be considered as negative emotions, that is, when the target object is in a state of sadness, anger, fear, disgust, or contempt, the target object's mood is low, or there are large negative fluctuations.
- happiness is a positive emotion, which is easy to understand: when the target object is happy, its emotion fluctuates positively or is elevated.
- the emotion recognition algorithm expresses the emotion of the target object with different emotion values.
- negative emotions such as sadness, anger, fear, disgust, and contempt have lower emotion values,
- positive emotions such as happiness have higher emotion values, and
- neutral emotions such as neutral and surprised have emotion values between those of the negative and positive emotions. Therefore, the emotion of the target object can be represented by different numerical values.
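The disclosure fixes only the ordering of the emotion values (negative below neutral below positive), not the numbers themselves, so the values in the sketch below are purely illustrative placeholders.

```python
# Illustrative emotion values only: the disclosure states that negative emotions
# get lower values, positive emotions higher values, and neutral emotions values
# in between, but it does not fix the numbers themselves.
EMOTION_VALUES = {
    "sad": 10, "angry": 10, "scared": 15, "disgusted": 15, "contemptuous": 20,  # negative
    "neutral": 50, "surprised": 55,                                             # neutral
    "happy": 90,                                                                # positive
}

def emotion_value(label):
    """Map a recognised emotion category to its (illustrative) emotion value."""
    return EMOTION_VALUES[label]
```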
- controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object includes determining the display image of the target object on the interactive interface according to the attribute information of the target object.
- the interactive interface 101 is further divided into different object areas, and the display method according to the embodiment of the present disclosure can control and display the different object areas respectively, thereby increasing the display flexibility.
- the interactive interface 101 includes a first object area 1011 and a second object area 1012 .
- the first object area 1011 may be an area configured to display the target object, which presents the display image of the target object on the interactive interface 101 .
- the second object area 1012 may be an area configured to display content other than the display image of the target object, such as the background area displayed on the interactive interface 101.
- the position of the first object area 1011 on the interactive interface 101 can be changed, and the first object area 1011 can be moved on the interactive interface 101, thereby providing a dynamic display effect.
- Determining the display image of the target object on the interactive interface according to the attribute information of the target object may be determining the display image of the target object in the first object area of the display interface according to the age and gender of the target object.
- the tulip shown in the figures is the display image determined according to the age and gender of the target object and used to represent the target object.
- controlling the display of the first object area of the interactive interface according to the attribute information and emotion information of the target object further includes changing the display image in the first object area according to the emotion information of the target object.
- changing the display image in the first object area according to the emotion information of the target object may include determining the emotion feature value of the target object according to the emotion value of the target object, and controlling the display of the display image based on the result of comparing the emotion feature value of the target object with the emotion thresholds.
- when the emotion feature value is smaller than the first emotion threshold, the display image of the target object is displayed in a first display manner as the emotion feature value decreases.
- when the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to the second emotion threshold, the display image of the target object is maintained.
- when the emotion feature value is greater than the second emotion threshold, the display image of the target object is displayed in a second display manner as the emotion feature value increases.
- the first emotional threshold and the second emotional threshold are predetermined thresholds according to the emotional state of the object.
- the value of the first emotional threshold is smaller than the value of the second emotional threshold.
- the first emotional threshold and the second emotional threshold can be adjusted according to the actual situation of different objects.
- the first display manner and the second display manner may be display manners associated with a display image, and may be determined in combination with the specific display image. For example, when the display image is a tulip as shown in FIG. 3A and FIG. 3B, the first display manner may be that the tulip gradually closes from an open state (FIG. 3A), and
- the second display manner may be that the tulip gradually blooms from an open or closed state (FIG. 3B).
- the gradual closing of the tulip from the open state can indicate that the target object is in a negative emotional state and is unwilling to communicate.
- the gradual blooming of the tulip from an open or closed state indicates that the target object is in a positive emotional state and is willing to communicate.
- the display of the displayed image is adjusted based on the change of the emotional characteristic value, and the change of the target object's emotion can be presented through the change of the displayed image. For example, when the tulip gradually closes from the open state, it means that the target object's mood is becoming lower and lower. When the tulip gradually blooms from an open or closed state, it indicates that the target's emotions are gradually rising.
- controlling the display of the display image in the first display manner and the second display manner can not only represent the emotional state of the target object more accurately, but also present the emotional changes of the target object, which is beneficial to real-time monitoring of the target object's emotional state.
- it can also increase the interest of the display, which helps to mobilize the target object's emotion and thereby provide auxiliary treatment for the target object.
- the emotion threshold is not limited to the first emotion threshold and the second emotion threshold, and
- the display manner is not limited to the first display manner and the second display manner; more display manners can be defined according to the emotional state of the target object to provide richer information about the target object. For example, when the emotion feature value is smaller than the first emotion threshold but gradually increases without reaching it, the display image of the target object can be displayed in a further manner defined for that case (taking the tulip of FIG. 3A and FIG. 3B as an example).
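A minimal sketch of the comparison logic described above for the first object area follows; the threshold values and the avatar's close and bloom methods are assumptions used only for illustration.

```python
FIRST_EMOTION_THRESHOLD = 30   # assumed value
SECOND_EMOTION_THRESHOLD = 80  # assumed value

def update_first_object_area(emotion_feature_value, previous_value, avatar):
    """Choose how the display image (e.g. the tulip) is shown, following the
    three cases described above."""
    if emotion_feature_value < FIRST_EMOTION_THRESHOLD:
        # First display manner: close gradually as the emotion feature value decreases.
        if emotion_feature_value < previous_value:
            avatar.close_slightly()      # hypothetical rendering call
    elif emotion_feature_value <= SECOND_EMOTION_THRESHOLD:
        # Between the two thresholds: keep the current display image.
        pass
    else:
        # Second display manner: bloom gradually as the emotion feature value increases.
        if emotion_feature_value > previous_value:
            avatar.bloom_slightly()      # hypothetical rendering call
```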
- Embodiments of the present disclosure provide a method for displaying an interactive interface based on emotion recognition.
- the display method can recognize the emotion of the target object in real time and dynamically adjust the display of the interactive interface.
- the target object can be classified by combining the age recognition algorithm and the gender recognition algorithm so as to display different images. Therefore, by displaying an appropriate picture, the interest of the display is increased, the target object's resistance to interacting through the interactive interface is alleviated and eliminated, and the emotional state information of the target object can be obtained more accurately, which is conducive to the evaluation and treatment of the target object's emotional state.
- FIG. 4 shows another flowchart of a method for displaying an interactive interface according to an embodiment of the present disclosure. As shown in FIG. 4 , the display method 400 may include the following steps.
- step S410 a target object to be interacted with through the interactive interface is determined from at least one object.
- step S420 the attribute information of the target object and the emotion information of the target object are acquired.
- step S430 the display of the first object area of the interactive interface is controlled according to the attribute information of the target object and the emotion information of the target object.
- step S440 the emotion information of each object in the at least one object is acquired.
- step S450 the display of the second object area of the interactive interface is controlled according to the emotion information of each object in the at least one object.
- steps S410 , S420 and S430 are the same as those performed in steps S210 , S220 and S230 in the display method 200 , and their operations will not be described in detail here. Also, step S440 and step S450 may be performed in parallel with step S420 and step S430. Steps S440 and S450 are described in detail below with reference to the embodiments.
- controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object specifically includes: determining the emotion feature value of each of the at least one object according to the emotion value of each of the at least one object; obtaining the average value of the emotion feature values according to the emotion feature value of each of the at least one object; when the average value of the emotion feature values is less than the first emotion average threshold, displaying the background pattern in the second object area in a third manner as the average value decreases; when the average value of the emotion feature values is greater than or equal to the first emotion average threshold and less than or equal to the second emotion average threshold, maintaining the background pattern in the second object area; and when the average value of the emotion feature values is greater than the second emotion average threshold, displaying the background pattern in the second object area in a fourth manner as the average value increases.
- the average value of the emotion feature values is obtained by averaging the emotion feature values of all objects in the acquired image including the target object, and can roughly represent the overall emotional state of all objects in the image.
- FIGS. 5A and 5B illustrate another display example of an interactive interface according to an embodiment of the present disclosure.
- when the average value of the emotion feature values is smaller than the first emotion average threshold, it means that the overall emotional state of all objects in the image is low and negative, so elements such as wind or rain are added to the background image of the second object area.
- when the average value of the emotion feature values is greater than the second emotion average threshold, it indicates that the overall emotional state of all objects in the image is relatively positive, so elements such as sunlight or rainbows are added to the background image of the second object area.
- the emotional information of other people in the scene where the target object is located can be better shown.
- the emotion of the target object can be monitored more comprehensively, and diversified information can be provided for analyzing and treating the target object.
- the emotion average threshold is not limited to the first emotion average threshold and the second emotion average threshold, and
- the display manner is not limited to the third display manner and the fourth display manner; more display manners can be defined to provide richer information about the target object.
- the display of the second object area may also be adjusted based on the emotion feature value of the target object itself;
- in this case, the emotion feature value of the target object is taken as the average value of the emotion feature values.
- FIG. 6 shows an example of a display device for an interactive interface according to an embodiment of the present disclosure.
- the display device 600 of the interactive interface includes an image acquisition module 601, a face detection module 602, an age detection module 603, a gender detection module 604, a classification module 605, a tracking detection module 606, an emotion recognition module 607, a single-person human-computer interaction module 608, a multi-person human-computer interaction module 609, and an emotion record analysis module 610.
- the image acquisition module 601 is configured to receive an image about at least one object captured by an image sensor.
- the face detection module 602 is configured to identify the image to obtain face information of at least one object in the image, and to determine the target object.
- the age detection module 603 is configured to acquire the age of the target object according to an age recognition algorithm based on the face information of the target object in the face information of the at least one object.
- the gender detection module 604 is configured to obtain the gender of the target object according to a gender recognition algorithm based on the face information of the target object in the face information of the at least one object.
- the classification module 605 is configured to determine the display image of the target object according to the identified age information and gender information. In this example, the display image of the target object can be determined according to the information shown in Table 1. For example, when the gender of the target object is female and the age is between 30 and 50 years old, a tulip can be used as the display image of the target object.
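A simple lookup implementing the Table 1 mapping used by the classification module might look as follows; the boundary handling at the edges of each age range is an assumption, since the table itself only lists the ranges.

```python
# Age/gender to display image, following Table 1 of the description.
DISPLAY_IMAGES = {
    "male":   [(0, 15, "grass"), (15, 30, "sapling"), (30, 50, "big tree"),
               (50, 65, "eagle"), (65, 200, "seagull")],
    "female": [(0, 15, "flower bud"), (15, 30, "rose"), (30, 50, "tulip"),
               (50, 65, "rambler rose"), (65, 200, "peony")],
}

def classify_display_image(gender, age):
    """Return the display image of the target object (classification module 605)."""
    for low, high, image in DISPLAY_IMAGES[gender]:
        if low <= age < high:
            return image
    return None

# Example from the description: a female target object aged between 30 and 50
# is represented by a tulip.
assert classify_display_image("female", 35) == "tulip"
```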
- the tracking detection module 606 is configured to track and detect the target object using face tracking and smoothing algorithms, and to identify the position of the face of the target object in the image, so as to display an image representing the face of the target object on the interactive interface.
- the emotion recognition module 607 is configured to acquire face information from the tracking detection module 606 in real time, and acquire the emotion value of the target object according to the emotion recognition algorithm.
- the single-person human-computer interaction module 608 provides an interactive interface under the single-person scene, and is configured to perform the following processing according to the emotional value of the target object identified by the emotional recognition module 607:
- a1, a2, a3, a4, a5, a6, a7, and a8 respectively represent the emotion values of the target object's eight emotions, and the emotion feature value is fitted from these values according to expression (1), in which
- all values in the fitting process are rounded up,
- w1 and w2 are preset fitting variables,
- k is a preset constant, and
- Wtarget is the emotion feature value of the target object.
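Expression (1) is not reproduced in this text, so the sketch below shows only one plausible form of the fit under stated assumptions: Wtarget is taken as a weighted combination of the positive and negative emotion values, offset by k and rounded up. The weights, the constant, and the combination itself are illustrative, not the published expression.

```python
import math

# Expression (1) is not reproduced in this text; the fit below is only one
# plausible form, assuming W_target combines the positive emotion value with a
# penalty for the negative emotion values, offset by k and rounded up.
w1, w2, k = 1.0, 0.2, 50.0   # assumed values of the preset fitting variables and constant

def fit_emotion_feature_value(a):
    """a maps each of the eight emotion names to its emotion value a1..a8."""
    negative = a["sad"] + a["angry"] + a["scared"] + a["disgusted"] + a["contemptuous"]
    w_target = w1 * a["happy"] - w2 * negative + k
    return math.ceil(w_target)   # all values in the fitting process are rounded up
```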
- the single-person human-computer interaction module 608 also controls the display of the first object area according to the comparison of the emotion feature value Wtarget with the first emotion threshold and the second emotion threshold, in the manner described above.
- the multi-person human-computer interaction module 609 provides an interactive interface in the multi-person scenario; the way it determines the emotion feature value of the target object and controls the display of the target object's display image is the same as in the single-person human-computer interaction module 608 and will not be repeated here.
- the multi-person human-computer interaction module 609 is also configured to perform the following processing according to the emotion value of each object identified by the emotion recognition module 607:
- h1 represents the first emotion average threshold and may be 30, and h2 represents the second emotion average threshold and may be 80.
- the multi-person human-computer interaction module 609 also controls the display of the second object area according to the comparison of the average value of the emotion feature values with h1 and h2, in the manner described above.
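A sketch of the averaging and threshold comparison for the second object area, using h1 = 30 and h2 = 80 as given above; the background object and its add_elements method are assumptions.

```python
H1 = 30  # first emotion average threshold (value given above)
H2 = 80  # second emotion average threshold (value given above)

def update_second_object_area(emotion_feature_values, background):
    """Adjust the background pattern according to the average emotion feature
    value of all objects in the image."""
    if not emotion_feature_values:
        return None
    average = sum(emotion_feature_values) / len(emotion_feature_values)
    if average < H1:
        # Third display manner: overall mood is negative, add wind or rain.
        background.add_elements("wind", "rain")         # hypothetical rendering call
    elif average <= H2:
        # Between the thresholds: keep the current background pattern.
        pass
    else:
        # Fourth display manner: overall mood is positive, add sunlight or rainbows.
        background.add_elements("sunlight", "rainbow")  # hypothetical rendering call
    return average
```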
- the emotional record analysis module 610 is configured to record the basic information of each target object and the emotional state information in the monitoring process.
- FIG. 7 shows another example of a display device for an interactive interface according to an embodiment of the present disclosure.
- the display device 700 of the interactive interface includes a memory 701 and a processor 702 .
- the memory 701 is configured to store program instructions.
- the processor 702 is configured to execute the program instructions to perform the following operations: determine a target object to be interacted with through the interactive interface from at least one object, obtain attribute information of the target object and emotional information of the target object, and perform the following operations according to the attribute information of the target object and the emotional information of the target object to control the display of the first object area of the interactive interface.
- the processor 702 is further configured to acquire emotional information of each of the at least one object, and control the display of the second object area of the interactive interface according to the emotional information of each of the at least one object.
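Putting the operations together, one illustrative pass that the processor 702 might perform is sketched below; the sensor, recognizer, and interface objects are assumptions standing in for the modules of FIG. 6, and classify_display_image refers to the Table 1 lookup sketched earlier.

```python
def interaction_loop(sensor, recognizer, interface):
    """One illustrative pass of the display method: determine the target object,
    acquire its attributes and emotion, and update both object areas."""
    frame = sensor.capture()
    faces = recognizer.detect_faces(frame)
    if not faces:
        return
    target = recognizer.select_target(faces)             # first-appearing or frontmost object
    age, gender = recognizer.attributes(frame, target)   # age and gender recognition
    emotions = [recognizer.emotion_value(frame, face) for face in faces]
    target_value = recognizer.emotion_value(frame, target)

    avatar = classify_display_image(gender, age)         # Table 1 lookup sketched earlier
    interface.show_first_object_area(avatar, target_value)
    interface.show_second_object_area(sum(emotions) / len(emotions))
```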
- the electronic components of one or more systems or devices may include, but are not limited to, at least one processing unit, a memory, and a communication bus or communication device that couples various components including the memory to the processing unit.
- a system or device may include or have access to various device-readable media.
- System memory may include device-readable storage media in the form of volatile and/or nonvolatile memory (e.g., read-only memory (ROM) and/or random access memory (RAM)).
- system memory may also include an operating system, application programs, other program modules, and program data.
- Embodiments may be implemented as a system, method, or program product. Accordingly, an embodiment may take the form of an entirely hardware embodiment or an embodiment including software (including firmware, resident software, microcode, etc.), which may be collectively referred to herein as a "circuit," "module," or "system." Furthermore, embodiments may take the form of a program product embodied in at least one device-readable medium having device-readable program code embodied thereon.
- a device-readable storage medium can be any tangible, non-signal medium that can contain or store program code configured for use by or in connection with an instruction execution system, apparatus, or device.
- a storage medium or device should be construed as non-transitory, ie, not including a signal or propagation medium.
Abstract
Description
Table 1
Gender | Age | Display image | Gender | Age | Display image |
---|---|---|---|---|---|
Male | 0-15 | Grass | Female | 0-15 | Flower bud |
Male | 15-30 | Sapling | Female | 15-30 | Rose |
Male | 30-50 | Big tree | Female | 30-50 | Tulip |
Male | 50-65 | Eagle | Female | 50-65 | Rambler rose |
Male | 65~ | Seagull | Female | 65~ | Peony |
Claims (12)
- A method for displaying an interactive interface, comprising: determining, from at least one object, a target object to be interacted with through the interactive interface; acquiring attribute information of the target object and emotion information of the target object; and controlling display of a first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
- The display method according to claim 1, wherein determining, from the at least one object, the target object to be interacted with through the interactive interface comprises: tracking and detecting the at least one object to acquire an image of the at least one object; recognizing the image to acquire face information of the at least one object in the image; and, based on the face information of the at least one object, determining the object that appears for the first time in the image, or the frontmost object among the at least one object in the image, as the target object to be interacted with through the interactive interface.
- The display method according to claim 2, wherein the attribute information comprises age and gender, and acquiring the attribute information of the target object comprises: acquiring the age of the target object according to an age recognition algorithm based on the face information of the target object among the face information of the at least one object; and acquiring the gender of the target object according to a gender recognition algorithm based on the face information of the target object among the face information of the at least one object.
- The display method according to claim 2, wherein the emotion information comprises an emotion value, and acquiring the emotion information of the target object comprises: acquiring the emotion value of the target object according to an emotion recognition algorithm based on the face information of the target object among the face information of the at least one object.
- The display method according to claim 1, wherein controlling the display of the first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object comprises: determining, according to the attribute information of the target object, a display image of the target object in the first object area of the interactive interface; and changing the display image in the first object area according to the emotion information of the target object.
- The display method according to claim 5, wherein the emotion information comprises an emotion value, and changing the display image in the first object area according to the emotion information of the target object comprises: determining an emotion feature value of the target object according to the emotion value of the target object; when the emotion feature value is smaller than a first emotion threshold, displaying the display image in a first display manner as the emotion feature value decreases; when the emotion feature value is greater than or equal to the first emotion threshold and less than or equal to a second emotion threshold, maintaining the display image; and when the emotion feature value is greater than the second emotion threshold, displaying the display image in a second display manner as the emotion feature value increases.
- The display method according to claim 1, further comprising: acquiring emotion information of each of the at least one object; and controlling display of a second object area of the interactive interface according to the emotion information of each of the at least one object.
- The display method according to claim 7, wherein the emotion information comprises an emotion value, and controlling the display of the second object area of the interactive interface according to the emotion information of each of the at least one object comprises: determining an emotion feature value of each of the at least one object according to the emotion value of each of the at least one object; obtaining an average value of the emotion feature values according to the emotion feature value of each of the at least one object; when the average value of the emotion feature values is smaller than a first emotion average threshold, displaying a background pattern in the second object area in a third manner as the average value of the emotion feature values decreases; when the average value of the emotion feature values is greater than or equal to the first emotion average threshold and less than or equal to a second emotion average threshold, maintaining the background pattern in the second object area; and when the average value of the emotion feature values is greater than the second emotion average threshold, displaying the background pattern in the second object area in a fourth manner as the average value of the emotion feature values increases.
- The display method according to claim 4, wherein the emotion recognition algorithm comprises one of a K-nearest neighbor algorithm, a support vector machine algorithm, a clustering algorithm, a genetic algorithm, a particle swarm optimization algorithm, a convolutional neural network algorithm, and a multi-task convolutional neural network algorithm.
- A display device for an interactive interface, comprising: a memory configured to store program instructions; and a processor configured to execute the program instructions to perform the following operations: determining, from at least one object, a target object to be interacted with through the interactive interface; acquiring attribute information of the target object and emotion information of the target object; and controlling display of a first object area of the interactive interface according to the attribute information of the target object and the emotion information of the target object.
- The display device according to claim 10, wherein the processor is further configured to: acquire emotion information of each of the at least one object; and control display of a second object area of the interactive interface according to the emotion information of each of the at least one object.
- A computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/773,371 US11960640B2 (en) | 2020-07-29 | 2021-06-08 | Display method and display device for interactive interface and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010743658.7A CN114093461A (zh) | 2020-07-29 | 2020-07-29 | Display method and apparatus for an interactive interface, and storage medium |
CN202010743658.7 | 2020-07-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022022077A1 (zh) | 2022-02-03 |
Family
ID=80037458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/098954 WO2022022077A1 (zh) | 2020-07-29 | 2021-06-08 | Display method and apparatus for an interactive interface, and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US11960640B2 (zh) |
CN (1) | CN114093461A (zh) |
WO (1) | WO2022022077A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114827339A (zh) * | 2022-04-02 | 2022-07-29 | Vivo Mobile Communication Co., Ltd. | Message output method and apparatus, and electronic device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103186327A (zh) * | 2011-12-28 | 2013-07-03 | Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. | Unlocking method and device for changing a standby interface |
CN104063147A (zh) * | 2014-06-10 | 2014-09-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for controlling pages in a mobile terminal |
CN105955490A (zh) * | 2016-06-28 | 2016-09-21 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Augmented-reality-based information processing method, device, and mobile terminal |
CN110070879A (zh) * | 2019-05-13 | 2019-07-30 | Wu Xiaojun | Method for producing intelligent expressions and sound-sensing games based on voice-changing technology |
US20200110927A1 (en) * | 2018-10-09 | 2020-04-09 | Irene Rogan Shaffer | Method and apparatus to accurately interpret facial expressions in american sign language |
CN111326235A (zh) * | 2020-01-21 | 2020-06-23 | BOE Technology Group Co., Ltd. | Emotion regulation method, device, and system |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202749023U (zh) * | 2012-07-26 | 2013-02-20 | Wang Xiaoyuan | Patient emotion barometer |
US10475351B2 (en) * | 2015-12-04 | 2019-11-12 | Saudi Arabian Oil Company | Systems, computer medium and methods for management training systems |
CN105930035A (zh) * | 2016-05-05 | 2016-09-07 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and device for displaying an interface background |
US10732722B1 (en) * | 2016-08-10 | 2020-08-04 | Emaww | Detecting emotions from micro-expressive free-form movements |
CN111048016B (zh) * | 2018-10-15 | 2021-05-14 | Guangdong Midea White Home Appliances Technology Innovation Center Co., Ltd. | Product display method, device, and system |
KR102689884B1 (ko) * | 2018-11-13 | 2024-07-31 | Hyundai Motor Company | Vehicle and control method thereof |
US10860864B2 (en) * | 2019-01-16 | 2020-12-08 | Charter Communications Operating, Llc | Surveillance and image analysis in a monitored environment |
CN109875579A (zh) * | 2019-02-28 | 2019-06-14 | BOE Technology Group Co., Ltd. | Emotional health management system and emotional health management method |
CN111797249A (zh) * | 2019-04-09 | 2020-10-20 | Huawei Technologies Co., Ltd. | Content pushing method, apparatus, and device |
CN111222444A (zh) * | 2019-12-31 | 2020-06-02 | Dilu Technology Co., Ltd. | Augmented reality head-up display method and system considering driver emotion |
- 2020-07-29: CN application CN202010743658.7A filed (published as CN114093461A, status: pending)
- 2021-06-08: US application US17/773,371 filed (granted as US11960640B2, status: active)
- 2021-06-08: PCT application PCT/CN2021/098954 filed (published as WO2022022077A1)
Also Published As
Publication number | Publication date |
---|---|
CN114093461A (zh) | 2022-02-25 |
US11960640B2 (en) | 2024-04-16 |
US20220404906A1 (en) | 2022-12-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21850941; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 21850941; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/08/2023) |
122 | Ep: pct application non-entry in european phase | Ref document number: 21850941; Country of ref document: EP; Kind code of ref document: A1 |