WO2019205642A1 - Soothing method, apparatus, system, computer device, and computer-readable storage medium based on emotion recognition - Google Patents
Soothing method, apparatus, system, computer device, and computer-readable storage medium based on emotion recognition
- Publication number
- WO2019205642A1 (PCT/CN2018/119384)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- comfort
- emotion
- voice
- audio
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/326—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for microphones
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0098—Details of control systems ensuring comfort, safety or stability not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/593—Recognising seat occupancy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W2040/0881—Seat occupation; Driver or passenger presence
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0062—Adapting control system settings
- B60W2050/0075—Automatic parameter input, automatic initialising or calibrating means
- B60W2050/0083—Setting, resetting, calibration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/043—Identity of occupants
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/21—Voice
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/10—Historical data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
Definitions
- Embodiments of the present disclosure relate to a soothing method, apparatus, system, computer device, and computer-readable storage medium based on emotion recognition.
- In situations where mood swings are undesirable, such as driving a car, drivers and other occupants nevertheless inevitably experience mood swings due to various circumstances.
- A soothing method based on emotion recognition includes: acquiring at least one of a voice and an image of a user; determining, according to the at least one of the voice and the image of the user, whether the user's emotion is abnormal; and, in response to an emotional abnormality of the user, determining a soothing mode according to the user's emotion and emotionally soothing the user.
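The three claimed steps can be sketched as a small pipeline. This is an illustrative sketch, not the patented implementation; the `recognize` and `choose_mode` callables stand in for the trained models and the mode-selection logic described later in the text.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Observation:
    voice: Optional[bytes] = None   # raw audio, if a microphone captured any
    image: Optional[bytes] = None   # camera frame, if a camera captured any

def soothe(obs: Observation,
           recognize: Callable[[Observation], Optional[str]],
           choose_mode: Callable[[str], str]) -> Optional[str]:
    # Step 1: at least one of voice and image must have been acquired.
    if obs.voice is None and obs.image is None:
        raise ValueError("need at least one of voice or image")
    # Step 2: determine whether the user's emotion is abnormal.
    emotion = recognize(obs)        # e.g. "anger", or None when calm
    if emotion is None:
        return None                 # no abnormality, nothing to soothe
    # Step 3: determine a soothing mode for that emotion and apply it.
    return choose_mode(emotion)
```

With a stub recognizer that reports anger, `soothe(Observation(voice=b"..."), lambda o: "anger", lambda e: "audio")` would return `"audio"`; when the recognizer returns `None`, no soothing is performed.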
- Acquiring at least one of the user's voice and image includes: acquiring at least one of the user's voice and image, and determining the user's seating position in the vehicle. In response to the user's emotional abnormality, the soothing mode is determined according to both the user's emotion and the user's seating position, and the user is emotionally soothed.
- Determining the soothing mode according to the user's emotion and seating position, and emotionally soothing the user, includes: obtaining the user's gender and age from the collected image of the user, and selecting at least one soothing mode according to the user's emotion, gender, age, and seating position to emotionally soothe the user.
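A possible selection rule combining these signals is sketched below. The text fixes only the inputs (plus the later constraint that the driving position gets no video); the concrete rules here are our own illustration.

```python
def select_soothing_modes(emotion: str, gender: str, age: int, seat: str) -> list:
    # Video is excluded at the driving position (stated later in the text).
    allowed = {"audio", "voice_interaction"}
    if seat != "driving":
        allowed.add("video")
    modes = []
    if emotion in ("anger", "disgust"):
        modes.append("audio")                 # e.g. calming music
    if emotion in ("fear", "sadness"):
        modes.append("voice_interaction")     # e.g. comforting dialogue
    if age < 12 and "video" in allowed:
        modes.append("video")                 # e.g. cartoons for children
    # In this sketch gender would influence the concrete resource choice
    # (which playlist, which clip) rather than the mode itself.
    modes = [m for m in modes if m in allowed]
    return modes or sorted(allowed)[:1]       # always offer at least one mode
```

An angry driver would be offered audio only, while a sad child in a rear seat could be offered both voice interaction and video.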
- Determining a soothing mode according to the user's emotion, and emotionally soothing the user, includes: providing the user with a soothing mode that matches the soothing-mode preference currently set by the user.
- The method further includes: sending the user identifier and the user-set soothing-mode preferences and/or audio-video entertainment resource preferences to the server.
- According to the user identifier, the soothing-mode preferences and/or audio-video entertainment resource preferences set by the user are obtained.
- Selecting at least one soothing mode according to the user's emotion, gender, age, and/or seating position includes: sending the user's emotion, gender, age, and seating position to the server; receiving soothing modes pushed by the server according to weighted statistics of mood changes by age and gender for the driving and non-driving positions during trips, together with the current user's emotion, gender, age, and seating position; and selecting, from the soothing modes pushed by the server, at least one soothing mode to emotionally soothe the user.
- Acquiring at least one of a user's voice and image includes: collecting voice features through a directional microphone on the vehicle. Determining whether the user's emotion is abnormal includes: performing emotion recognition with a language feature model, a language dictionary, or an image model trained by deep learning, to determine whether the user's emotion is abnormal.
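As a toy stand-in for the trained models named above, the sketch below extracts two crude prosodic features from a waveform and classifies by nearest centroid. The centroids are hand-set placeholders for a trained model; a real system would also cover disgust, fear, and sadness and would use learned features.

```python
import numpy as np

def voice_features(waveform: np.ndarray) -> np.ndarray:
    """Two crude prosodic features: RMS energy and zero-crossing rate."""
    rms = float(np.sqrt(np.mean(waveform ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(waveform)))) / 2.0)
    return np.array([rms, zcr])

# Hand-set centroids standing in for a trained classifier (illustrative only).
CENTROIDS = {
    "anger":   np.array([0.60, 0.10]),   # loud, rapidly varying speech
    "neutral": np.array([0.05, 0.01]),   # quiet, slowly varying speech
}

def classify(waveform: np.ndarray) -> str:
    f = voice_features(waveform)
    return min(CENTROIDS, key=lambda lab: float(np.linalg.norm(f - CENTROIDS[lab])))
```

A loud, fast-oscillating synthetic signal lands near the "anger" centroid, while a quiet, slow one lands near "neutral".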
- the seating position includes a driving position and a non-driving position
- Determining the soothing mode according to the user's emotion and/or seating position, and emotionally soothing the user, includes: for a user in the driving position, soothing through audio and voice interaction; for users in non-driving positions, soothing through video, audio, and voice interaction.
- A directional microphone, a camera, and/or a sound output unit are disposed on the vehicle's rearview mirror and aimed toward the respective door sides of the vehicle, to perform directional voice acquisition, image acquisition, and directional sound output for the driving-position user;
- for each non-driving position, a directional microphone, camera, and sound output unit are deployed above the seat at a first angle relative to that position.
- The central controller is further configured to: determine the soothing mode according to the user's emotion and the location of the terminal device that sent the user's current emotion, and send the soothing mode to that terminal device.
- The terminal device is further configured to: determine the user's gender and age from the collected image of the user and send them to the central controller. The central controller is configured to: select at least one soothing mode according to the user's emotion, gender, age, and the location of the terminal device that sent the user's current emotion, and send it to the terminal device.
- The terminal device is further configured to: receive the soothing-mode preference set by the user and send it to the central controller. The central controller determining the soothing mode according to the user's emotion includes: determining, according to the soothing-mode preference set by the user, a soothing mode that matches the preference currently set by the user.
- The terminal device is further configured to: receive the audio-video entertainment resource preferences set by the user and send them to the central controller. The central controller determining the soothing mode according to the user's emotion includes: in response to the soothing mode using audio-video entertainment, determining, according to the audio-video entertainment resource preferences set by the user, audio-video entertainment resources that match the currently set preferences.
- The central controller determines the audio-video entertainment resources based on the user identifier and a history corresponding to that identifier.
- an apparatus comprising a processor and a memory; wherein: the memory includes instructions executable by the processor to cause the processor to perform the aforementioned method.
- A computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement the aforementioned methods.
- FIG. 2 is a schematic structural diagram of a soothing system based on emotion recognition according to an embodiment of the present disclosure;
- FIG. 3 is a schematic structural diagram of a soothing system based on emotion recognition with a cloud server according to an embodiment of the present disclosure;
- FIG. 4 is a schematic diagram of a data structure defined by the preferences according to an embodiment of the present disclosure;
- FIG. 5 is a schematic structural diagram of a soothing apparatus based on emotion recognition according to an embodiment of the present disclosure;
- FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a soothing apparatus based on emotion recognition according to an embodiment of the present disclosure.
- Step S101: acquire a voice and/or an image of the user;
- Step S102: determine, according to the voice and/or image of the user, whether the user's emotion is abnormal;
- The image may be captured directly by a camera and the voice collected directly from a microphone; alternatively, voice and/or images transmitted by other devices may be received through a wired or wireless communication connection.
- Step S103: in response to the user's emotional abnormality, determine a soothing mode according to the user's emotion and emotionally soothe the user, for example including:
- Soothing modes for the driving position and the non-driving positions can be distinguished to soothe the user more effectively. For example, for the driving position, it is not appropriate to use video for soothing.
- The soothing mode is determined according to the user's emotion and seating position, and the emotionally abnormal user is soothed; for example, at least one soothing mode is selected to emotionally soothe the user.
- The user's emotions include, for example, one or a combination of the following:
- The soothing modes include one or a combination of the following:
- The user is provided with a soothing mode that matches the soothing-mode preference currently set by the user; and/or
- audio-video entertainment is used, and, according to the audio-video entertainment resource preferences set by the user, the user is presented with audio-video entertainment resources that match the currently set preferences.
- When riding, the user can log in manually or through face recognition. According to the logged-in user ID, the soothing-mode preferences and/or audio-video entertainment resource preferences that the user has set can be retrieved, thereby providing the user with more accurate service.
- The user identifier and the user's soothing-mode preferences and/or audio-video entertainment resource preferences may be sent to the cloud server, which stores them and compiles statistics.
- Based on the statistical data and the user's gender, age, and the like, the cloud server can push soothing content suitable for the user.
- At least one terminal device and one central controller may be provided, thereby saving hardware cost. Each terminal device is placed at the position corresponding to a user and is used for collecting the user's voice and/or image and presenting the soothing content, while the central controller is used to determine the soothing mode and soothing resources.
- the central controller is for example a server.
- An embodiment of the present disclosure further provides a soothing system based on emotion recognition, including a central controller 201 and at least one terminal device 202, wherein, for example:
- the terminal device 202 is configured to collect the user's voice and/or image, determine from them whether the user's current emotion is abnormal, send the user's current emotion to the central controller, and emotionally soothe the user according to the soothing mode returned by the central controller;
- the central controller 201 is configured to receive the user's current emotion sent by the terminal device, determine a soothing mode according to the user's emotion, and send the soothing mode to the terminal device.
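The terminal/controller split above can be sketched as a simple request/response exchange. The message field names and the decision rule are our own; the text does not specify a wire format.

```python
def terminal_report(position_id: str, emotion: str) -> dict:
    # The terminal has already judged the emotion abnormal locally.
    return {"type": "emotion_report", "position": position_id, "emotion": emotion}

def controller_handle(msg: dict) -> dict:
    assert msg["type"] == "emotion_report"
    # Minimal stand-in for the controller's decision logic: the driving
    # position never receives video (stated elsewhere in the text).
    mode = "audio" if msg["position"] == "driving" else "video"
    return {"type": "soothe_command", "position": msg["position"], "mode": mode}

def terminal_apply(cmd: dict) -> str:
    # The terminal presents the soothing content at its own seat.
    return f"seat {cmd['position']}: play {cmd['mode']}"
```

In a vehicle deployment these dictionaries would travel over the wired or wireless link between each terminal device and the central controller.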
- The central controller 201 is configured, for example, to: determine the soothing mode according to the user's emotion and the location of the terminal device 202 that sent the user's current emotion, and send the soothing mode to the terminal device 202.
- The gender and age of the user are determined from the image of the user and sent to the central controller 201.
- The central controller 201 is configured, for example, to: select at least one soothing mode according to the user's emotion, gender, age, and the location of the terminal device 202 that sent the user's current emotion, and send it to the terminal device 202.
- the user's emotions include, for example, one or a combination of the following:
- The central controller 201 determining the soothing mode according to the user's emotion includes, for example:
- the soothing mode matching the soothing-mode preference currently set by the user is determined; and/or
- audio-video entertainment is adopted, and audio-video entertainment resources matching the currently set preferences are determined according to the audio-video entertainment resource preferences set by the user.
- the central controller 201 is also used to:
- The cloud server 203 is configured to: receive the user's emotion, gender, age, and seating position; maintain weighted statistics of mood changes by age and gender for the driving and non-driving positions during trips; determine at least one soothing mode from these statistics together with the current user's emotion, gender, age, and seating position; and push it to the central controller 201.
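One plausible reading of the "weighted statistics" is a success count per demographic slice, from which the cloud server ranks the modes to push. The concrete scheme below is our own assumption, not the patented formula.

```python
from collections import defaultdict

# (position_type, gender, age_group, emotion) -> {soothing mode: success count}
_success = defaultdict(lambda: defaultdict(int))

def record_outcome(position_type, gender, age_group, emotion, mode, worked):
    """Update the statistics after a soothing attempt."""
    if worked:
        _success[(position_type, gender, age_group, emotion)][mode] += 1

def push_modes(position_type, gender, age_group, emotion, top_k=2):
    """Rank soothing modes for this demographic slice, best first."""
    counts = _success[(position_type, gender, age_group, emotion)]
    return sorted(counts, key=lambda m: counts[m], reverse=True)[:top_k]
```

After two successful video soothings and one successful audio soothing for sad adult women in non-driving seats, `push_modes("non-driving", "F", "adult", "sadness")` ranks video first.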
- A terminal device that performs audio-video capture and soothing can be disposed at the position of the corresponding seat in the vehicle, with each terminal device connected to the central controller by wire or wirelessly.
- The central controller is responsible for determining the soothing mode and soothing resources, and for interacting with the cloud server.
- The central controller can also be responsible for voice- and/or image-based emotion recognition. Alternatively, a device capable of fully implementing the emotion-recognition-based soothing method can be provided at the position of each seat in the vehicle, with each device independently completing the emotional judgment and soothing of the user in that seat.
- Directional speech-feature extraction is performed by the terminal devices deployed at the respective seating positions, which avoids the influence of background speech on the speech features of the corresponding position. The extracted speech features are subjected to emotion recognition through a language model trained by deep learning and transfer learning, identifying emotions of anger, disgust, fear, and sadness.
- For example, the directional microphones deployed at each position collect voice features, and emotion recognition is performed through the deep-learning and transfer-learning language feature model and language dictionary to identify changes in the anger, disgust, fear, and sadness of the occupant at the corresponding position; the recognition results are transmitted to the central controller.
- An image model trained by deep learning and transfer learning can likewise be used for emotion recognition, identifying changes in anger, disgust, fear, and sadness.
- The central controller determines the soothing mode according to the emotion recognition result and the visual recognition result, and pushes the soothing resources of the local end and the cloud server to the corresponding terminal device, which presents them to the user. For example, soothing-mode matching is performed according to the seating-position ID, the emotion type, and the age group and gender of the occupant, and the soothing resources of the local end and the cloud server are pushed to the emotional soothing module at the seating-position end.
- The terminal device also provides a preference-definition interface for the occupant at its position. Preferences can select resources under the four emotion categories of anger, disgust, fear, and sadness, and resources can be defined according to the cloud server's per-emotion soothing-resource management list or in a custom manner.
- The data structure defined by the preferences is shown in FIG. 4.
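FIG. 4 itself is not reproduced in this text, so the layout below is only a guess consistent with the surrounding description: per-user preferences keyed by the four emotion categories, each naming a soothing mode and a resource list. All identifiers are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionPreference:
    soothing_mode: str                                 # e.g. "audio", "video", "voice"
    resource_ids: list = field(default_factory=list)   # cloud/local resource list

@dataclass
class UserPreferences:
    user_id: str
    seat_id: str
    by_emotion: dict = field(default_factory=dict)     # emotion -> EmotionPreference

# Example: a rear-seat user with custom choices for two of the four categories.
prefs = UserPreferences("user-001", "rear-left", {
    "anger":   EmotionPreference("audio", ["calm-playlist-1"]),
    "sadness": EmotionPreference("video", ["comedy-clip-07"]),
})
```

Emotions without an entry would fall back to the cloud server's per-emotion resource management list.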
- The soothing modes pushed to the terminal device deployed at each seating position depend on that position. For the driving position, emotional soothing can be performed according to the preference data of the driving-position user, and only audio and voice-interaction soothing modes are available at that position; for example, voice interaction can include voice interaction with remote persons in addition to human-machine language interaction. Non-driving positions can provide video, audio, and voice-interaction modes.
- The central controller may further send the user preferences collected by each terminal device to the cloud server. The central controller may also maintain weighted statistics of mood changes by age and gender for the driving and non-driving positions during trips, and deploy the preferred soothing modes and soothing resources locally, so that the user can be soothed promptly when soothing is needed.
- The directional microphone, camera, sound output unit, and video output unit can be arranged as follows:
- A directional microphone, a camera, and a sound output unit may be disposed on the rearview mirror; the two sets on the rearview mirror are aimed respectively toward the two door sides for directional voice acquisition, image acquisition, and directional sound output.
- For rear positions, the directional microphone, camera, and sound output unit can be deployed directly above the seat at a 30-degree angle centered on the corresponding position, for directional voice acquisition, image acquisition, and directional sound output for the occupant in that position.
- the video output unit is deployed on the back of the corresponding front seat.
- the microphone, the camera, the sound output unit, and the video output unit complete the corresponding functions through independent hardware processing modules.
- The microphone, camera, sound output unit, and video output unit can send data to the central controller through a serial port, and the sound output unit and video output unit can receive the audio-video data of the soothing mode transmitted by the central controller through a network port.
- The central controller can be combined with the central control system of the in-vehicle system to implement terminal device management, soothing-mode management, and resource-preference management, and to establish data routing between each terminal device and the cloud server.
- Each terminal device performs language-feature extraction through the directional voice collection module at its location and performs emotion recognition through the speech-emotion recognition model deployed in the terminal device. Each terminal device also performs face recognition of the occupant through its visual recognition module, recognizing the age and gender of the occupant at its location, and transmits the speech-emotion recognition and visual recognition results to the central controller.
- Each occupant can set a soothing-mode preference on the corresponding terminal device, which sends it to the central controller; the central controller applies for resources from the cloud server and synchronizes resources with it. The central controller then determines the soothing mode and soothing resources according to the speech-emotion recognition and visual recognition results and the user's preferences, and sends them to the terminal device.
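The controller-side fusion described above can be sketched as follows. The precedence rules are our own assumption: the user's stored preference first, then a demographic default, with the driving-position video ban enforced last.

```python
def decide_mode(speech_emotion: str, visual: dict, preference: dict,
                seat_is_driving: bool) -> str:
    # 1. A user-defined preference for this emotion wins when present.
    mode = preference.get(speech_emotion)
    if mode is None:
        # 2. Otherwise fall back on a simple demographic default
        #    derived from the visual recognition result (age, gender).
        mode = "video" if visual.get("age", 99) < 12 else "audio"
    # 3. The driving position never receives video.
    if seat_is_driving and mode == "video":
        mode = "audio"
    return mode
```

For example, a frightened child in a rear seat with no stored preference would get video, but the same inputs at the driving position would degrade to audio.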
- An embodiment of the present disclosure further provides a soothing apparatus based on emotion recognition, which corresponds to the aforementioned soothing method.
- The acquiring device 501 is configured to acquire the voice and/or image of the user.
- The determining device 502 is configured to determine, according to the voice and/or image of the user, whether the user's emotion is abnormal.
- The soothing device 503 is configured to, in response to the user's emotional abnormality, determine a soothing mode according to the user's emotion and emotionally soothe the user.
- A unit or module in an embodiment of the present disclosure may be implemented by a general-purpose or dedicated processor, for example a central processing unit or a programmable logic circuit.
- The acquiring device 501 is configured, for example, to: acquire the user's voice and/or image and determine the user's seating position.
- The soothing device 503 is configured, for example, to: determine the soothing mode according to the user's emotion and seating position, and emotionally soothe the user.
- The soothing device 503 determining the soothing mode according to the user's emotion and seating position, and emotionally soothing the emotionally abnormal user, includes, for example:
- The user's emotions include, for example, one or a combination of the following:
- The soothing modes include one or a combination of the following:
- The soothing device 503 is also configured to: provide the user with a soothing mode that matches the soothing-mode preference currently set by the user; and/or
- The soothing device 503 is also configured to:
- The soothing device 503 selecting at least one soothing mode to soothe the user according to the user's emotion, gender, age, and seating position includes, for example: receiving the soothing modes pushed by the cloud server according to the weighted statistics of mood changes by age and gender for the driving and non-driving positions during trips, together with the current user's emotion, gender, age, and seating position.
- The units or modules recited in the apparatus correspond to the respective steps in the method described above.
- The operations and features described above for the method apply equally to the apparatus and its units, and are not repeated here.
- the device may be implemented in a browser or other security application of the electronic device in advance, or may be loaded into a browser of the electronic device or a secure application thereof by downloading or the like.
- Corresponding units in the device can cooperate with units in the electronic device to implement the solution of the embodiments of the present application.
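The cooperation of the acquisition, judging, and comforting devices can be sketched as a minimal pipeline; the class name, emotion labels, and abnormality rule below are illustrative assumptions, not the claimed implementation:

```python
# Illustrative pipeline mirroring the apparatus: acquire -> judge -> comfort.
# The emotion labels and the abnormality rule are assumptions for this sketch.

class ComfortPipeline:
    NEGATIVE = {"angry", "sad", "anxious"}  # assumed "abnormal" emotions

    def __init__(self, recognizer):
        # `recognizer` maps a voice/image sample to an emotion label,
        # e.g. a model trained by deep learning as the disclosure suggests.
        self.recognizer = recognizer

    def run(self, sample, seat):
        emotion = self.recognizer(sample)
        if emotion in self.NEGATIVE:  # emotional abnormality detected?
            return self.comfort(emotion, seat)
        return None  # no comforting needed

    def comfort(self, emotion, seat):
        # Drivers are comforted through audio/voice only; users in
        # non-driving positions may additionally receive video.
        if seat == "driver":
            return ("audio", "voice")
        return ("video", "audio", "voice")

pipeline = ComfortPipeline(lambda sample: "angry")
print(pipeline.run(b"...", "driver"))  # ('audio', 'voice')
```

A real system would replace the lambda with the trained emotion-recognition model and route the returned modes to the in-vehicle output units.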
- FIG. 6 shows a schematic structural diagram of a computer system suitable for implementing the emotion-recognition-based comforting device of the embodiment of the present application; the system may be, for example, a terminal device, a central controller, or a device in which a terminal device is combined with a central controller.
- the computer system includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603.
- in the RAM 603, various programs and data required for system operation are also stored.
- the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
- An input/output (I/O) interface 605 is also coupled to bus 604.
- the following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet.
- a drive 610 is also coupled to the I/O interface 605 as needed.
- a removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
- the central controller may also not include input portion 606 and output portion 607.
- an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of FIG.
- the computer program can be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611.
- each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that includes one or more executable instructions for implementing the specified logical functions.
- it should also be noted that the functions noted in the blocks may occur in an order different from that shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units or modules described in the embodiments of the present application may be implemented by software or by hardware.
- the described unit or module may also be provided in the processor, for example, as a processor including an XX unit, a YY unit, and a ZZ unit.
- the names of these units or modules do not in some cases constitute a limitation on the unit or module itself.
- the XX unit may also be described as "a unit for XX".
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- Automation & Control Theory (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Social Psychology (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Computational Linguistics (AREA)
- Otolaryngology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Hospice & Palliative Care (AREA)
- Child & Adolescent Psychology (AREA)
- Mathematical Physics (AREA)
- User Interface Of Digital Computer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Claims (20)
- An emotion-recognition-based comforting method, comprising: acquiring at least one of a user's voice and image; judging, according to the at least one of the user's voice and image, whether the user is emotionally abnormal; and in response to the user being emotionally abnormal, determining a comfort mode according to the user's emotion and emotionally comforting the user.
- The method of claim 1, wherein acquiring at least one of the user's voice and image comprises: collecting at least one of the user's voice and image, and determining the seating position, on a vehicle, of the user from whom the voice and/or image is collected; and wherein, in response to the user being emotionally abnormal, determining the comfort mode according to the user's emotion and emotionally comforting the user comprises: determining the comfort mode according to the user's emotion and seating position, and emotionally comforting the user.
- The method of claim 2, wherein determining the comfort mode according to the user's emotion and seating position and emotionally comforting the user comprises: obtaining the user's gender and age from the collected image of the user; and selecting at least one comfort mode to emotionally comfort the user according to the user's emotion, gender, age, and seating position.
- The method of any one of claims 1-3, wherein, in response to the user being emotionally abnormal, determining the comfort mode according to the user's emotion and emotionally comforting the user comprises: providing the user, according to the comfort-mode preference set by the user, with a comfort mode matching the comfort-mode preference currently set by that user.
- The method of any one of claims 1-4, wherein, in response to the user being emotionally abnormal, determining the comfort mode according to the user's emotion and emotionally comforting the user comprises: in response to the comfort mode employing audio-video entertainment, presenting to the user, according to the audio-video entertainment resource preference set by the user, audio-video entertainment resources matching the currently set audio-video entertainment resource preference.
- The method of claim 4 or 5, further comprising: sending the user identifier and the comfort-mode preference and/or audio-video entertainment resource preference set by the user to a server.
- The method of any one of claims 4-6, wherein the comfort-mode preference and/or audio-video entertainment resource preference set by the user is obtained according to the user identifier with which the user logs in to the vehicle.
- The method of any one of claims 3-7, wherein selecting at least one comfort mode to emotionally comfort the user according to the user's emotion, gender, age, and/or seating position comprises: sending the user's emotion, gender, age, and seating position to a server; receiving the comfort modes pushed by the server according to weighted statistics of the age, gender, and emotion changes of occupants in the driving position and non-driving positions during driving, together with the current user's emotion, gender, age, and seating position; and selecting, from the comfort modes pushed by the server, at least one comfort mode to emotionally comfort the user.
- The method of any one of claims 1-8, wherein acquiring at least one of the user's voice and image comprises: collecting voice features through a directional microphone located on the vehicle; and judging whether the user is emotionally abnormal according to at least one of the user's voice and image comprises: performing emotion recognition through a language feature model trained by deep learning, a language dictionary library, or an image model, to determine whether the user is emotionally abnormal.
- The method of any one of claims 2-9, wherein the seating position comprises a driving position and a non-driving position, and determining the comfort mode according to the user's emotion and/or seating position and emotionally comforting the user comprises: comforting a user in the driving position through audio and voice interaction; and comforting a user in a non-driving position through video, audio, and voice interaction.
- The method of claim 10, wherein, for the driving position, a directional microphone, a camera, and/or a sound output unit are arranged on a rearview mirror of the vehicle and deployed toward the doors on the two sides of the vehicle, respectively, for directional voice collection, image collection, and directional sound output for the user in the driving position; and, for the non-driving position, a directional microphone, a camera, and a sound output unit are deployed above the seat of the non-driving position, at a first angle relative to the non-driving position, for directional voice collection, image collection, and directional voice output for the user in the non-driving position.
- An emotion-recognition-based comforting apparatus, comprising: an acquisition device configured to acquire at least one of a user's voice and image; a judging device configured to judge, according to at least one of the user's voice and image, whether the user is emotionally abnormal; and a comforting device configured to, in response to the user being emotionally abnormal, determine a comfort mode according to the user's emotion and emotionally comfort the user.
- An emotion-recognition-based comforting system, comprising a central controller and at least one terminal device, wherein: the terminal device is configured to collect at least one of a user's voice and image, send the user's current emotion to the central controller, and emotionally comfort the user according to the comfort mode sent by the central controller; and the central controller is configured to receive the user's current emotion sent by the terminal device, determine the comfort mode according to the user's emotion, and send the comfort mode to the terminal device.
- The system of claim 13, wherein the central controller is further configured to: determine the comfort mode according to the user's emotion and the position of the terminal device that sent the user's current emotion, and send the comfort mode to the terminal device.
- The system of claim 13 or 14, wherein the terminal device is further configured to: determine the user's gender and age from the collected image of the user and send them to the central controller; and the central controller is configured to: select at least one comfort mode according to the user's emotion, gender, age, and the position of the terminal device that sent the user's current emotion, and send it to the terminal device.
- The system of any one of claims 13-15, wherein the terminal device is further configured to: receive the comfort-mode preference set by the user and send it to the central controller; and the central controller determining the comfort mode according to the user's emotion comprises: determining, according to the comfort-mode preference set by the user, a comfort mode matching the comfort-mode preference currently set by that user.
- The system of any one of claims 13-16, wherein the terminal device is further configured to: receive the audio-video entertainment resource preference set by the user and send it to the central controller; and the central controller determining the comfort mode according to the user's emotion comprises: in response to the comfort mode employing audio-video entertainment, determining, according to the audio-video entertainment resource preference set by the user, audio-video entertainment resources matching the currently set audio-video entertainment resource preference.
- The system of claim 17, wherein the central controller determines the audio-video entertainment resources according to the user identifier and the history records corresponding to that user identifier.
- A computer device, comprising a processor and a memory, wherein the memory contains instructions executable by the processor to cause the processor to perform the method of any one of claims 1-11.
- A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/472,012 US11498573B2 (en) | 2018-04-24 | 2018-12-05 | Pacification method, apparatus, and system based on emotion recognition, computer device and computer readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810371545.1 | 2018-04-24 | ||
- CN201810371545.1A CN108549720A (zh) | 2018-04-24 | 2018-04-24 | Emotion-recognition-based comforting method, apparatus and device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019205642A1 true WO2019205642A1 (zh) | 2019-10-31 |
Family
ID=63512191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2018/119384 WO2019205642A1 (zh) | Comforting method, apparatus, and system based on emotion recognition, computer device, and computer-readable storage medium | 2018-04-24 | 2018-12-05 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11498573B2 (zh) |
CN (1) | CN108549720A (zh) |
WO (1) | WO2019205642A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111540358A (zh) * | 2020-04-26 | 2020-08-14 | 云知声智能科技股份有限公司 | Human-computer interaction method, apparatus, device, and storage medium |
- CN114595692A (zh) * | 2020-12-07 | 2022-06-07 | 山东新松工业软件研究院股份有限公司 | Emotion recognition method, system, and terminal device |
EP4064113A4 (en) * | 2019-11-22 | 2023-05-10 | Arcsoft Corporation Limited | USER INFORMATION DETECTION METHOD AND SYSTEM, AND ELECTRONIC DEVICE |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN108549720A (zh) | 2018-04-24 | 2018-09-18 | 京东方科技集团股份有限公司 | Emotion-recognition-based comforting method, apparatus and device, and storage medium |
- CN109471954A (zh) | 2018-09-29 | 2019-03-15 | 百度在线网络技术(北京)有限公司 | Content recommendation method, apparatus, and device based on a vehicle-mounted device, and storage medium |
- CN109302486B (zh) | 2018-10-26 | 2021-09-03 | 广州小鹏汽车科技有限公司 | Method and system for pushing music according to the in-vehicle environment |
- CN109448409A (zh) | 2018-10-30 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device, and computer storage medium for traffic information interaction |
- CN109550133B (zh) | 2018-11-26 | 2021-05-11 | 赵司源 | Emotion comforting method and system |
- CN109616109B (zh) | 2018-12-04 | 2020-05-19 | 北京蓦然认知科技有限公司 | Voice wake-up method, apparatus, and system |
- CN110598611B (zh) | 2019-08-30 | 2023-06-09 | 深圳智慧林网络科技有限公司 | Care system, patient care method based on the care system, and readable storage medium |
- CN112947740A (zh) | 2019-11-22 | 2021-06-11 | 深圳市超捷通讯有限公司 | Human-computer interaction method based on motion analysis, and vehicle-mounted apparatus |
- JP7413055B2 (ja) | 2020-02-06 | 2024-01-15 | 本田技研工業株式会社 | Information processing device, vehicle, program, and information processing method |
- CN113657134B (zh) | 2020-05-12 | 2024-04-23 | 北京地平线机器人技术研发有限公司 | Voice playing method and apparatus, storage medium, and electronic device |
- CN111605556B (zh) | 2020-06-05 | 2022-06-07 | 吉林大学 | Road-rage-prevention recognition and control system |
- CN111741116B (zh) | 2020-06-28 | 2023-08-22 | 海尔优家智能科技(北京)有限公司 | Emotional interaction method and apparatus, storage medium, and electronic apparatus |
- CN112061058B (zh) | 2020-09-07 | 2022-05-27 | 华人运通(上海)云计算科技有限公司 | Scene triggering method, apparatus, device, and storage medium |
- CN112733763B (zh) | 2021-01-15 | 2023-12-05 | 北京华捷艾米科技有限公司 | Method and apparatus for implementing human-machine voice interaction, electronic device, and storage medium |
- CN113183900A (zh) | 2021-03-31 | 2021-07-30 | 江铃汽车股份有限公司 | Vehicle occupant monitoring method and apparatus, readable storage medium, and vehicle-mounted system |
- CN113780062A (zh) | 2021-07-26 | 2021-12-10 | 岚图汽车科技有限公司 | Vehicle-mounted intelligent interaction method based on emotion recognition, storage medium, and chip |
- CN113581187A (zh) | 2021-08-06 | 2021-11-02 | 阿尔特汽车技术股份有限公司 | Control method for a vehicle, and corresponding system, vehicle, device, and medium |
- CN114954332A (zh) | 2021-08-16 | 2022-08-30 | 长城汽车股份有限公司 | Vehicle control method and apparatus, storage medium, and vehicle |
- CN114422742A (zh) | 2022-01-28 | 2022-04-29 | 深圳市雷鸟网络传媒有限公司 | Call-atmosphere enhancement method and apparatus, intelligent device, and storage medium |
- CN117445805B (zh) | 2023-12-22 | 2024-02-23 | 吉林大学 | Occupant warning and driving control method and system for bus driver-passenger conflicts |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN201838333U (zh) * | 2010-11-12 | 2011-05-18 | 北京工业大学 | Music player based on driver state |
- CN102874259A (zh) * | 2012-06-15 | 2013-01-16 | 浙江吉利汽车研究院有限公司杭州分公司 | Automobile driver emotion monitoring and vehicle control system |
- CN203075421U (zh) * | 2012-07-31 | 2013-07-24 | 深圳市赛格导航科技股份有限公司 | Music playing system based on emotion changes |
- CN103873512A (zh) * | 2012-12-13 | 2014-06-18 | 深圳市赛格导航科技股份有限公司 | Method for vehicle-mounted wireless music transmission based on face recognition technology |
- CN106803423A (zh) * | 2016-12-27 | 2017-06-06 | 智车优行科技(北京)有限公司 | Human-computer interaction voice control method and apparatus based on the user's emotional state, and vehicle |
- CN107423351A (zh) * | 2017-05-24 | 2017-12-01 | 维沃移动通信有限公司 | Information processing method and electronic device |
- CN108549720A (zh) * | 2018-04-24 | 2018-09-18 | 京东方科技集团股份有限公司 | Emotion-recognition-based comforting method, apparatus and device, and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9734685B2 (en) * | 2014-03-07 | 2017-08-15 | State Farm Mutual Automobile Insurance Company | Vehicle operator emotion management system and method |
- JP6149824B2 (ja) * | 2014-08-22 | 2017-06-21 | トヨタ自動車株式会社 | In-vehicle device, control method for in-vehicle device, and control program for in-vehicle device |
- 2018
- 2018-04-24 CN CN201810371545.1A patent/CN108549720A/zh active Pending
- 2018-12-05 WO PCT/CN2018/119384 patent/WO2019205642A1/zh active Application Filing
- 2018-12-05 US US16/472,012 patent/US11498573B2/en active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4064113A4 (en) * | 2019-11-22 | 2023-05-10 | Arcsoft Corporation Limited | USER INFORMATION DETECTION METHOD AND SYSTEM, AND ELECTRONIC DEVICE |
- CN111540358A (zh) * | 2020-04-26 | 2020-08-14 | 云知声智能科技股份有限公司 | Human-computer interaction method, apparatus, device, and storage medium |
- CN111540358B (zh) * | 2020-04-26 | 2023-05-26 | 云知声智能科技股份有限公司 | Human-computer interaction method, apparatus, device, and storage medium |
- CN114595692A (zh) * | 2020-12-07 | 2022-06-07 | 山东新松工业软件研究院股份有限公司 | Emotion recognition method, system, and terminal device |
Also Published As
Publication number | Publication date |
---|---|
US11498573B2 (en) | 2022-11-15 |
US20210362725A1 (en) | 2021-11-25 |
CN108549720A (zh) | 2018-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2019205642A1 (zh) | Comforting method, apparatus, and system based on emotion recognition, computer device, and computer-readable storage medium | |
US12084045B2 (en) | Systems and methods for operating a vehicle based on sensor data | |
- CN108725357B (zh) | Face-recognition-based parameter control method and system, and cloud server | |
- JP7053432B2 (ja) | Control device, agent device, and program | |
- KR20220041901A (ko) | Image processing in a motor vehicle cabin | |
- US11014508B2 (en) | Communication support system, communication support method, and storage medium | |
- CN110286745B (zh) | Dialogue processing system, vehicle having a dialogue processing system, and dialogue processing method | |
- CN109302486B (zh) | Method and system for pushing music according to the in-vehicle environment | |
- DE102018126525A1 (de) | In-vehicle system, method, and storage medium | |
- CN111717083A (zh) | Vehicle interaction method and vehicle | |
- JP2018027731A (ja) | In-vehicle device, control method for an in-vehicle device, and content providing system | |
- US20230278426A1 (en) | Vehicle-mounted apparatus, information processing method, and non-transitory storage medium | |
- CN112440900A (zh) | Vehicle control method and apparatus, control device, and automobile | |
- CN112996194A (zh) | Light control method and apparatus | |
- CN111902864A (zh) | Method for operating a sound output device of a motor vehicle, voice analysis and control device, motor vehicle, and server device external to the motor vehicle | |
- CN111703385B (zh) | Content interaction method and vehicle | |
- CN112951216B (zh) | Vehicle-mounted voice processing method and vehicle-mounted infotainment system | |
- CN117198281A (zh) | Voice interaction method and apparatus, electronic device, and vehicle | |
- JP2019105966A (ja) | Information processing method and information processing device | |
- CN113561988A (zh) | Gaze-tracking-based voice control method, automobile, and readable storage medium | |
- JP2019191859A (ja) | Vehicle information presentation device and vehicle information presentation method | |
- US20220105948A1 (en) | Vehicle agent device, vehicle agent system, and computer-readable storage medium | |
- CN114792440A (zh) | Method and apparatus for determining the state of a person inside a vehicle | |
- CN116166835A (zh) | Intelligent recommendation method and system | |
- CN112738447A (zh) | Intelligent-cockpit-based video conference method, and intelligent cockpit | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18916968 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18916968 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 10.05.2021) |
|