WO2019205642A1 - Emotion-recognition-based comforting method, apparatus, system, computer device, and computer-readable storage medium - Google Patents

Emotion-recognition-based comforting method, apparatus, system, computer device, and computer-readable storage medium

Info

Publication number
WO2019205642A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
comfort
emotion
voice
audio
Prior art date
Application number
PCT/CN2018/119384
Other languages
English (en)
French (fr)
Inventor
Yang Xiangdong (杨向东)
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to US16/472,012 (granted as US11498573B2)
Publication of WO2019205642A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/326 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for microphones
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098 Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/593 Recognising seat occupancy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0881 Seat occupation; Driver or passenger presence
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0062 Adapting control system settings
    • B60W2050/0075 Automatic parameter input, automatic initialising or calibrating means
    • B60W2050/0083 Setting, resetting, calibration
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 Input parameters relating to occupants
    • B60W2540/043 Identity of occupants
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 Input parameters relating to occupants
    • B60W2540/21 Voice
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00 Input parameters relating to data
    • B60W2556/10 Historical data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/13 Acoustic transducers and sound field adaptation in vehicles

Definitions

  • Embodiments of the present disclosure relate to an emotion-recognition-based comforting method, apparatus, system, computer device, and computer-readable storage medium.
  • In scenarios where mood swings are undesirable, such as driving a car, drivers and other occupants still inevitably experience mood swings due to various situations.
  • According to an aspect, a comforting method based on emotion recognition comprises: acquiring at least one of a voice and an image of a user; determining, according to the at least one of the voice and the image of the user, whether the user's emotion is abnormal; and, in response to the user's emotion being abnormal, determining a comforting mode according to the user's emotion and emotionally comforting the user.
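The claimed steps can be sketched as a minimal pipeline. The helper names, emotion labels, and comfort-mode table below are illustrative assumptions, not definitions from the patent:

```python
# Hypothetical sketch of the acquire -> detect -> comfort pipeline.
ABNORMAL_EMOTIONS = {"anger", "disgust", "fear", "sadness"}

# Illustrative mapping from abnormal emotion to candidate comforting modes.
COMFORT_MODES = {
    "anger": ["soothing music", "voice interaction"],
    "disgust": ["audio entertainment"],
    "fear": ["voice interaction"],
    "sadness": ["uplifting music", "voice interaction"],
}

def detect_emotion(voice=None, image=None):
    """Placeholder for the trained voice/image emotion-recognition models."""
    # A real system would run the deep-learning models described later.
    return "anger" if voice == "raised_pitch_sample" else "neutral"

def comfort_pipeline(voice=None, image=None):
    emotion = detect_emotion(voice, image)
    if emotion in ABNORMAL_EMOTIONS:       # emotional abnormality detected
        return COMFORT_MODES[emotion]      # select comforting mode(s)
    return []                              # no comforting needed

print(comfort_pipeline(voice="raised_pitch_sample"))
```

The placeholder `detect_emotion` stands in for the language-feature and image models discussed below; only the control flow reflects the claim.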
  • The acquiring at least one of the user's voice and image includes: acquiring at least one of the user's voice and image, and determining the seating position of the user in the vehicle. Responding to the user's emotional abnormality by determining a comforting mode and emotionally comforting the user then includes: determining the comforting mode according to both the user's emotion and seating position.
  • Determining the comforting mode according to the user's emotion and seating position, and emotionally comforting the user, includes: obtaining the user's gender and age from the collected image of the user, and selecting at least one comforting mode according to the user's emotion, gender, age, and seating position to emotionally comfort the user.
  • Determining a comforting mode according to the user's emotion, and emotionally comforting the user, includes: providing the user with a comforting mode that matches the comforting-mode preference currently set by the user.
  • The method further includes: sending the user identifier and the user-set comforting-mode preferences and/or audio-video entertainment resource preferences to the server.
  • According to the user identifier, the comforting-mode preferences and/or audio-video entertainment resource preferences set by the user are obtained.
  • Selecting at least one comforting mode according to the user's emotion, gender, age, and/or seating position includes: sending the user's emotion, gender, age, and seating position to the server; receiving comforting modes pushed by the server, which the server determines from weighted statistics of the emotion changes of occupants by age, gender, and driving/non-driving position collected during driving, together with the current user's emotion, gender, age, and seating position; and selecting, from the comforting modes pushed by the server, at least one comforting mode to emotionally comfort the user.
  • Acquiring at least one of the user's voice and image includes: collecting voice features through a directional microphone on the vehicle. Determining whether the user's emotion is abnormal then includes: performing emotion recognition through a deep-learning-trained language feature model, a language dictionary library, or an image model.
  • The seating position includes a driving position and a non-driving position.
  • Determining a comforting mode according to the user's emotion and/or seating position, and emotionally comforting the user, includes: comforting users in the driving position through audio and voice interaction, and comforting users in non-driving positions through video, audio, and voice interaction.
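The driving/non-driving distinction reduces to a simple channel rule. The function and channel names below are assumptions for illustration; the substance is that video is withheld from the driving position:

```python
# Illustrative seat-based channel rule: the driver gets only audio and voice
# interaction (video would distract), while passengers may also get video.
def allowed_comfort_channels(seat):
    if seat == "driving":
        return ["audio", "voice_interaction"]
    return ["video", "audio", "voice_interaction"]

# Video is never offered at the driving position.
assert "video" not in allowed_comfort_channels("driving")
```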
  • A directional microphone, a camera, and/or a sound output unit are disposed on the vehicle rearview mirror and oriented respectively toward the two door sides of the vehicle, to perform directional voice acquisition, image acquisition, and directional sound output for the driving-position user.
  • For a non-driving position, a directional microphone, a camera, and a sound output unit are deployed above the non-driving-position seat at a first angle relative to the non-driving position.
  • The central controller is further configured to: determine a comforting mode according to the user's emotion and the location of the terminal device that sent the user's current emotion, and send the comforting mode to that terminal device.
  • The terminal device is further configured to: determine the user's gender and age from the collected image and send them to the central controller. The central controller is configured to: select at least one comforting mode according to the user's emotion, gender, age, and the location of the terminal device that sent the user's current emotion, and transmit it to the terminal device.
  • The terminal device is further configured to: receive the comforting-mode preference set by the user and send it to the central controller. The central controller determining the comforting mode according to the user's emotion then includes: determining a comforting mode that matches the comforting-mode preference currently set by the user.
  • The terminal device is further configured to: receive the audio-video entertainment resource preferences set by the user and send them to the central controller. The central controller determining the comforting mode according to the user's emotion then includes: when the comforting mode uses audio-video entertainment, determining, according to the user-set preferences, audio-video entertainment resources that match the currently set audio-video entertainment resource preferences.
  • The central controller determines the audio-video entertainment resource based on the user identifier and a history corresponding to that user identifier.
  • According to an aspect, an apparatus comprises a processor and a memory, wherein the memory stores instructions executable by the processor to cause the processor to perform the aforementioned method.
  • According to an aspect, a computer-readable storage medium has stored thereon computer program instructions that, when executed by a processor, implement the aforementioned methods.
  • FIG. 2 is a schematic structural diagram of an emotion-recognition-based comforting system according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of an emotion-recognition-based comforting system with a cloud server according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of the data structure of a preference definition according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of an emotion-recognition-based comforting apparatus according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic structural diagram of an emotion-recognition-based comforting device according to an embodiment of the present disclosure.
  • Step S101: Acquire a voice and/or an image of the user.
  • Step S102: Determine, according to the voice and/or image of the user, whether the user's emotion is abnormal.
  • The image may be captured directly by a camera and the voice collected directly from a microphone, or the voice and/or image transmitted by other devices may be received through a wired or wireless communication connection.
  • Step S103: In response to the user's emotional abnormality, determine a comforting mode according to the user's emotion and emotionally comfort the user, for example, as follows:
  • The comforting modes for the driver's seat and the non-driver's seats can be distinguished to comfort the user more effectively. For example, for the driver's seat, it is not appropriate to use video for comforting.
  • The comforting mode is determined according to the user's emotion and seating position, and emotional comforting is applied to users with abnormal emotions; for example, at least one comforting mode is selected to emotionally comfort the user.
  • The user's emotion includes, for example, one or a combination of the following:
  • The comforting mode includes, for example, one or a combination of the following:
  • the user is provided with a comforting mode that matches the comforting-mode preference currently set by the user; and/or
  • audio-video entertainment is used, and, according to the audio-video entertainment resource preferences set by the user, the user is presented with audio-video entertainment resources that match the currently set preferences.
  • When the user boards the vehicle, the user can log in manually or through face recognition. According to the logged-in user identifier, the comforting-mode preferences and/or audio-video entertainment resource preferences that the user has set can be obtained, thereby providing the user with more accurate service.
  • The user identifier and the user-set comforting-mode preferences and/or audio-video entertainment resource preferences may be sent to the cloud server.
  • The cloud server performs storage and statistics.
  • The cloud server can then push comforting content suitable for the user according to the statistical data and the user's gender, age, and the like.
  • At least one terminal device and one central controller may be provided, thereby saving hardware cost. Each terminal device is set at the position corresponding to a user and is used for acquisition and presentation, while the central controller is used to determine the comforting mode and comforting resources.
  • the central controller is for example a server.
  • An embodiment of the present disclosure further provides an emotion-recognition-based comforting system, including a central controller 201 and at least one terminal device 202, where, for example:
  • the terminal device 202 is configured to collect the user's voice and/or image, determine from them that the user's current emotion is abnormal, send the user's current emotion to the central controller, and emotionally comfort the user in the comforting mode returned by the central controller;
  • the central controller 201 is configured to receive the user's current emotion sent by the terminal device, determine a comforting mode according to the user's emotion, and send the comforting mode to the terminal device.
  • central controller 201 is used, for example, to:
  • the comforting mode is determined according to the user's emotion and the location of the terminal device 202 that transmitted the user's current emotion, and the comforting mode is transmitted to the terminal device 202.
  • The user's gender and age are determined from the image of the user and sent to the central controller 201.
  • the central controller 201 is used, for example, to:
  • at least one comforting mode is selected according to the user's emotion, gender, age, and the location of the terminal device 202 that transmitted the user's current emotion, and transmitted to the terminal device 202.
  • The user's emotion includes, for example, one or a combination of the following:
  • The central controller 201 determining the comforting mode according to the user's emotion includes, for example:
  • the comforting mode matching the comforting-mode preference currently set by the user is determined; and/or
  • audio-video entertainment is adopted, and audio-video entertainment resources matching the currently set audio-video entertainment resource preferences are determined according to the preferences set by the user.
  • the central controller 201 is also used to:
  • The cloud server 203 is configured to receive the user's emotion, gender, age, and seating position; determine at least one comforting mode based on weighted statistics of the emotion changes of occupants by age, gender, and driving/non-driving position collected during driving, together with the current user's emotion, gender, age, and seating position; and push it to the central controller 201.
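The weighted statistics above might be sketched as follows. The record fields and the +1.0/-0.5 weights are hypothetical, since the patent does not specify the weighting formula; the point is ranking comforting modes by their historical effect on a matching (age group, gender, seat) profile:

```python
from collections import defaultdict

def rank_modes(history, profile):
    """Rank comforting modes by historical emotion-improvement weight
    for a given (age_group, gender, seat) profile. Weights are illustrative."""
    weights = defaultdict(float)
    for record in history:
        if (record["age_group"], record["gender"], record["seat"]) == profile:
            # reward modes that were followed by an emotion improvement
            weights[record["mode"]] += 1.0 if record["improved"] else -0.5
    return sorted(weights, key=weights.get, reverse=True)

# Hypothetical history records collected by the cloud server.
history = [
    {"age_group": "adult", "gender": "F", "seat": "rear", "mode": "video", "improved": True},
    {"age_group": "adult", "gender": "F", "seat": "rear", "mode": "audio", "improved": False},
    {"age_group": "adult", "gender": "F", "seat": "rear", "mode": "video", "improved": True},
]
print(rank_modes(history, ("adult", "F", "rear")))
```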
  • a terminal device that performs audio video capture and comfort can be disposed at a position of a corresponding seat in the vehicle, and each terminal device is connected to the central controller by wire or wirelessly.
  • The central controller is responsible for determining the comforting mode and comforting resources, and for interacting with the cloud server.
  • The central controller can also be responsible for emotion recognition based on voice and/or image. Alternatively, a device capable of fully implementing the emotion-recognition-based comforting method can be provided at the position of each corresponding seat in the vehicle, each device independently completing the emotion judgment and comforting of the user in that seat.
  • Directional voice feature extraction is performed by the terminal devices deployed at the respective riding positions.
  • This avoids the influence of background speech on the voice features of the corresponding position. The voice features are subjected to emotion recognition through a language model trained by deep learning and transfer learning, identifying the emotions of anger, disgust, fear, and sadness.
  • For example, directional microphones deployed at each position perform voice feature collection, and emotion recognition is performed through the deep-learning and transfer-learning language feature model and language dictionary library to identify changes of anger, disgust, fear, and sadness in the occupant at the corresponding position, with the recognition results transmitted to the central controller.
  • An image model trained by deep learning and transfer learning can also be used for emotion recognition, identifying changes of anger, disgust, fear, and sadness.
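As a toy stand-in for the trained language-feature model, a linear classifier over the four emotion categories could look like the sketch below. The two input features and all weights are purely illustrative; a real model would be trained by deep learning and transfer learning:

```python
import math

EMOTIONS = ["anger", "disgust", "fear", "sadness"]

def softmax(scores):
    # numerically stable softmax over raw class scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights):
    # one score per emotion: dot product of the feature vector with each weight row
    scores = [sum(f * w for f, w in zip(features, row)) for row in weights]
    probs = softmax(scores)
    return EMOTIONS[probs.index(max(probs))]

# Illustrative weights over two made-up voice features (e.g. pitch, energy).
weights = [
    [2.0, 0.1],   # anger
    [0.1, 1.0],   # disgust
    [0.5, 1.5],   # fear
    [1.0, 0.2],   # sadness
]
print(classify([0.9, 0.1], weights))
```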
  • The central controller determines the comforting mode according to the emotion-recognition result and the visual-recognition result, pushes comforting resources from the local end and the cloud server to the corresponding terminal device, and the terminal device presents the comforting to the user. For example, the comforting mode is determined according to the seating-position ID, the emotion type, and the age group and gender of the rider, and the comforting resources of the local end and the cloud server are pushed to the emotional-comforting module at the seating-position end.
  • The terminal device also provides a preference-definition interface for the passenger at its position. The definition can select resources for each of the four categories of anger, disgust, fear, and sadness, and the resources can be defined according to the cloud server's per-emotion comforting-resource management list and in a resource-customization manner.
  • The data structure of the preference definition is shown in FIG. 4.
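FIG. 4 is not reproduced on this page, but a preference record covering the four emotion categories might look like the sketch below. All field and resource names are assumptions; only the per-emotion structure follows the description above:

```python
# Hypothetical preference record: per-emotion comforting mode and resources.
preference = {
    "user_id": "seat_rear_left_01",
    "comfort_mode_preference": {
        "anger":   {"mode": "audio", "resources": ["light_music_list"]},
        "disgust": {"mode": "video", "resources": ["comedy_clips"]},
        "fear":    {"mode": "voice_interaction", "resources": ["calming_dialogue"]},
        "sadness": {"mode": "audio", "resources": ["uplifting_playlist"]},
    },
}

def resources_for(pref, emotion):
    """Look up the user's preferred resources for a recognized emotion."""
    entry = pref["comfort_mode_preference"].get(emotion)
    return entry["resources"] if entry else []

print(resources_for(preference, "sadness"))
```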
  • The emotional comforting pushed by the terminal device deployed at the riding position is carried out accordingly. For example, at the driving position, emotional comforting can be performed according to the driving-position user's preference data, and only audio and voice-interaction comforting modes are available at that position. Voice interaction can include, in addition to human-machine language interaction, voice interaction with remote personnel; non-driving positions can additionally provide video, audio, and voice-interaction modes.
  • The central controller may further send the user preferences collected by each terminal device to the cloud server. The central controller may also perform weighted statistics of the emotion changes by age and gender for the driving and non-driving positions during driving, and deploy the preferred comforting modes and resources locally, so that the user can be comforted promptly when comforting is needed.
  • the arrangement of the directional microphone, the camera, the sound output unit, and the video output unit can be as follows:
  • A directional microphone, a camera, and a sound output unit may be disposed on the rearview mirror; the two sets of directional microphones, cameras, and sound output units on the rearview mirror are oriented respectively toward the two door sides for directional voice acquisition, image acquisition, and directional sound output.
  • For rear positions, the directional microphone, camera, and sound output unit can be deployed directly above the corresponding seat within a 30-degree angle centered on that position, for directional voice acquisition, image acquisition, and directional sound output to the passenger in that position.
  • the video output unit is deployed on the back of the corresponding front seat.
  • the microphone, the camera, the sound output unit, and the video output unit complete the corresponding functions through independent hardware processing modules.
  • The microphone, camera, sound output unit, and video output unit can send data to the central controller through a serial port, and the sound output unit and video output unit can receive the audio and video data of the comforting mode transmitted by the central controller through a network port.
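The patent does not define a wire format for the terminal-to-controller link, but the recognition report a terminal device sends could, for example, be serialized as JSON. All field names below are illustrative assumptions:

```python
import json

def build_report(seat_id, emotion, age_group, gender):
    """Serialize a hypothetical recognition report for the central controller."""
    return json.dumps({
        "seat_id": seat_id,      # which riding position sent the report
        "emotion": emotion,      # voice/image emotion-recognition result
        "age_group": age_group,  # from the visual-recognition module
        "gender": gender,
    })

report = build_report("rear_right", "fear", "child", "M")
print(json.loads(report)["emotion"])
```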
  • the central controller can be combined with the central control system of the in-vehicle system to implement terminal device management, comfort mode management, and resource preference management, and establish data routing between each terminal device and the cloud server.
  • Each terminal device performs language-feature extraction through the directional voice-collection module at its location and performs emotion recognition through the voice emotion-recognition model deployed in the terminal device. Each terminal device also performs face recognition of the occupant through its visual-recognition module to recognize the age and gender of the occupant at that location, and transmits the speech emotion-recognition and visual-recognition results to the central controller.
  • Each occupant can set a comforting-mode preference on the corresponding terminal device; the terminal device sends the set preference to the central controller, and the central controller performs resource application to and resource synchronization with the cloud server. The central controller determines the comforting mode and comforting resources according to the speech emotion-recognition and visual-recognition results and the user's preferences, and sends them to the terminal device.
  • The embodiment of the present disclosure further provides an emotion-recognition-based comforting apparatus, which corresponds to the aforementioned comforting method.
  • The acquiring device 501 is configured to acquire the voice and/or image of the user.
  • The determining device 502 is configured to determine, according to the voice and/or image of the user, whether the user's emotion is abnormal.
  • The comforting device 503 is configured to, in response to the user's emotional abnormality, determine a comforting mode according to the user's emotion and emotionally comfort the user.
  • a unit or a module in an embodiment of the present disclosure may be implemented by a general purpose processor or a dedicated processor, for example, a central processing unit, a programmable logic circuit.
  • the acquisition device 501 is used, for example, to:
  • the user's voice and/or image is acquired and the acquired user's seating position is determined.
  • the comforting device 503 is used, for example, to:
  • the comforting mode is determined according to the user's emotion and seating position, and the user is emotionally comforted.
  • The comforting device 503 determining the comforting mode according to the user's emotion and seating position, and emotionally comforting users with abnormal emotions, includes, for example:
  • The user's emotion includes, for example, one or a combination of the following:
  • The comforting mode includes, for example, one or a combination of the following:
  • comforting device 503 is also used to:
  • the user is provided with a comforting mode that matches the comforting-mode preference currently set by the user; and/or
  • comfort device 503 is also used to:
  • The comforting device 503 selecting at least one comforting mode to comfort the user according to the user's emotion, gender, age, and seating position includes, for example:
  • receiving the comforting modes pushed by the cloud server, which are determined from weighted statistics of the emotion changes of occupants by age, gender, and driving/non-driving position during driving, together with the current user's emotion, gender, age, and seating position.
  • the units or modules recited in the apparatus correspond to the various steps in the method described with reference to FIG.
  • The operations and features described above for the method are equally applicable to the apparatus and its units, and are not repeated here.
  • The apparatus may be implemented in a browser or other security application of an electronic device in advance, or may be loaded into the browser or security application of the electronic device by downloading or the like. Corresponding units in the apparatus can cooperate with units in the electronic device to implement the solutions of the embodiments of the present application.
  • Referring to FIG. 6, a schematic structural diagram is shown of a computer system suitable for implementing the emotion-recognition-based comforting device of the embodiment of the present application, which may be, for example, a terminal device or a central controller, or a device combining a terminal device with a central controller.
  • The computer system includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603.
  • In the RAM 603, various programs and data required for system operation are also stored.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also coupled to bus 604.
  • The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet.
  • A driver 610 is also coupled to the I/O interface 605 as needed.
  • A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage portion 608.
  • For example, to reduce hardware cost, the central controller may omit the input portion 606 and the output portion 607.
  • In particular, according to embodiments of the present disclosure, the process described above with reference to FIG. 1 may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method of FIG. 1.
  • In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611.
  • Each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical function.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units or modules described in the embodiments of the present application may be implemented by software or by hardware.
  • The described units or modules may also be provided in a processor; for example, a processor may be described as including an XX unit, a YY unit, and a ZZ unit.
  • The names of these units or modules do not in some cases limit the units or modules themselves; for example, the XX unit may also be described as "a unit for XX".


Abstract

A comforting method based on emotion recognition includes: acquiring at least one of a user's voice and image; judging, from the at least one of the user's voice and image, whether the user's emotion is abnormal; and in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user. A corresponding apparatus, device, and storage medium are also provided.

Description

Comforting method, apparatus, system, computer device, and computer-readable storage medium based on emotion recognition

Technical Field

Embodiments of the present disclosure relate to a comforting method, apparatus, system, computer device, and computer-readable storage medium based on emotion recognition.

Background Art

In some scenarios where emotional fluctuation is undesirable, such as driving a car, drivers and other occupants inevitably experience emotional fluctuations as various situations arise.

Drivers and other occupants in a vehicle often experience emotional changes during a trip, such as anger, disgust, fear, or sadness. If these emotions are not well controlled, inappropriate actions may follow, creating safety hazards for driving.
Summary of the Invention

According to at least one embodiment of the present disclosure, a comforting method based on emotion recognition is provided, comprising: acquiring at least one of a user's voice and image; judging, from the at least one of the user's voice and image, whether the user's emotion is abnormal; and in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user.

For example, acquiring at least one of a user's voice and image includes: collecting at least one of the user's voice and image, and determining the seat position of the user in a vehicle; and, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user includes: determining the comforting manner according to the user's emotion and seat position, and emotionally comforting the user.

For example, determining the comforting manner according to the user's emotion and seat position and emotionally comforting the user includes: obtaining the user's gender and age from the collected image of the user; and selecting at least one comforting manner to emotionally comfort the user according to the user's emotion, gender, age, and seat position.

For example, the comforting in response to the user's abnormal emotion includes: providing the user with a comforting manner matching the comforting-manner preference currently set by the user.

For example, the comforting in response to the user's abnormal emotion includes: in response to the comforting manner being audio/video entertainment, presenting to the user audio/video entertainment resources matching the currently set audio/video entertainment resource preference.

For example, the method further includes: sending the user identifier and the user-set comforting-manner preference and/or audio/video entertainment resource preference to a server.

For example, the comforting-manner preference and/or audio/video entertainment resource preference set by the user are obtained according to the user identifier with which the user logs into the vehicle.

For example, selecting at least one comforting manner to emotionally comfort the user according to the user's emotion, gender, age, and/or seat position includes: sending the user's emotion, gender, age, and seat position to a server; receiving comforting manners pushed by the server based on weighted statistics of emotion changes by age and gender for occupants of driving and non-driving positions during driving, together with the current user's emotion, gender, age, and seat position; and selecting at least one of the pushed comforting manners to emotionally comfort the user.

For example, acquiring at least one of the user's voice and image includes collecting voice features through a directional microphone located on the vehicle, and judging whether the user's emotion is abnormal includes performing emotion recognition through a speech feature model, a language dictionary library, or an image model trained by deep learning, to determine whether the user's emotion is abnormal.

For example, the seat positions include a driving position and non-driving positions, and determining the comforting manner according to the user's emotion and/or seat position and emotionally comforting the user includes: comforting a user in the driving position through audio and voice interaction; and comforting a user in a non-driving position through video, audio, and voice interaction.

For example, for the driving position, a directional microphone, a camera, and/or a sound output unit are provided on the vehicle's rearview mirror and oriented toward the doors on either side of the vehicle, for directional voice collection, image collection, and directional sound output for the driving-position user; for the non-driving positions, a directional microphone, a camera, and a sound output unit are deployed above each non-driving seat at a first angle relative to that position, for directional voice collection, image collection, and directional voice output for the user at that position.
According to at least one embodiment of the present disclosure, a comforting apparatus based on emotion recognition is provided, including: an acquisition device configured to acquire at least one of a user's voice and image; a judgment device configured to judge, from the at least one of the user's voice and image, whether the user's emotion is abnormal; and a comforting device configured to, in response to the user's emotion being abnormal, determine a comforting manner according to the user's emotion and emotionally comfort the user.

According to at least one embodiment of the present disclosure, a comforting system based on emotion recognition is provided, including a central controller and at least one terminal device, wherein the terminal device is configured to collect at least one of a user's voice and image, send the user's current emotion to the central controller, and emotionally comfort the user according to the comforting manner sent by the central controller; and the central controller is configured to receive the user's current emotion sent by the terminal device, determine a comforting manner according to the user's emotion, and send the comforting manner to the terminal device.

For example, the central controller is further configured to determine the comforting manner according to the user's emotion and the position of the terminal device that sent the user's current emotion, and send the comforting manner to the terminal device.

For example, the terminal device is further configured to judge the user's gender and age from the collected image of the user and send them to the central controller; and the central controller is configured to select at least one comforting manner according to the user's emotion, gender, age, and the position of the terminal device that sent the user's current emotion, and send it to the terminal device.

For example, the terminal device is further configured to receive the comforting-manner preference set by the user and send it to the central controller; and the central controller determining a comforting manner according to the user's emotion includes determining a comforting manner matching the comforting-manner preference currently set by the user.

For example, the terminal device is further configured to receive the audio/video entertainment resource preference set by the user and send it to the central controller; and the central controller determining a comforting manner according to the user's emotion includes, in response to the comforting manner being audio/video entertainment, determining audio/video entertainment resources matching the currently set audio/video entertainment resource preference.

For example, the central controller determines the audio/video entertainment resources according to the user identifier and the history records corresponding to that identifier.

According to at least one embodiment of the present disclosure, a device is provided, including a processor and a memory, wherein the memory contains instructions executable by the processor to cause the processor to perform the foregoing method.

According to at least one embodiment of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored that, when executed by a processor, implement the foregoing method.
Brief Description of the Drawings

Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:

FIG. 1 is a flowchart of a comforting method based on emotion recognition provided by an embodiment of the present disclosure;

FIG. 2 is a schematic structural diagram of a comforting system based on emotion recognition provided by an embodiment of the present disclosure;

FIG. 3 is a schematic structural diagram of a comforting system based on emotion recognition with a cloud server provided by an embodiment of the present disclosure;

FIG. 4 is a schematic diagram of the data structure of a preference definition provided by an embodiment of the present disclosure;

FIG. 5 is a schematic structural diagram of a comforting apparatus based on emotion recognition provided by an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a comforting device based on emotion recognition provided by an embodiment of the present disclosure.

Detailed Description

The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.

It should be noted that, where no conflict arises, the embodiments in the present application and the features in those embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Referring to FIG. 1, the comforting method based on emotion recognition provided by an embodiment of the present disclosure includes:

Step S101: acquiring a user's voice and/or image;

Step S102: judging from the user's voice and/or image whether the user's emotion is abnormal;

Step S103: in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user.

With this method, the user's emotion can be recognized in time and the user comforted promptly, so that a user whose emotion becomes abnormal receives timely comfort and safety hazards to driving or other operations are avoided.
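The three steps above can be sketched as a minimal decision routine. The emotion labels are the ones named in this disclosure, but the policy that maps an abnormal emotion to a comforting manner is a hypothetical example, not the method's actual rules:

```python
from typing import Optional

# Emotion labels named in the disclosure; any other label is treated as normal.
ABNORMAL_EMOTIONS = {"anger", "fatigue", "motion_sickness",
                     "disgust", "fear", "sadness"}

def is_emotion_abnormal(emotion: str) -> bool:
    """Step S102: decide whether the recognized emotion is abnormal."""
    return emotion in ABNORMAL_EMOTIONS

def choose_comfort(emotion: str) -> Optional[str]:
    """Step S103: map an abnormal emotion to a comforting manner.

    The mapping below is an illustrative policy only.
    """
    if not is_emotion_abnormal(emotion):
        return None  # no comforting needed
    if emotion in {"fear", "sadness"}:
        return "voice_interaction"
    if emotion == "fatigue":
        return "driving_suggestions"
    return "audio_video_entertainment"
```

A calm user yields no action, while each abnormal label maps to one of the comforting manners described later (voice interaction, audio/video entertainment, driving suggestions).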
In step S101, the image may be captured directly by a camera and the voice collected directly from a microphone, or the voice and/or image sent by another device may be received through a wired or wireless communication connection.

Further, when the method is applied in a car, the comforting manner may be determined with reference to the user's seat position. In this case, step S101, acquiring the user's voice and/or image, includes, for example:

collecting the user's voice and/or image, and determining the seat position of the user;

and step S103, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user, includes, for example:

determining the comforting manner according to the user's emotion and seat position, and emotionally comforting the user.

Determining the comforting manner with reference to the seat position allows the comforting manners for the driving and non-driving positions to be differentiated, so that more effective comfort is provided; for example, playing video is not a suitable comforting manner for a user in the driving position.
The user's gender and age may further be taken into account to select a comforting manner better suited to the user and thereby improve the user experience. In this case, determining the comforting manner according to the user's emotion and seat position and emotionally comforting a user with an abnormal emotion includes, for example:

acquiring an image of the user;

obtaining the user's gender and age from the image;

selecting at least one comforting manner according to the user's emotion, gender, age, and seat position to emotionally comfort the user.
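The seat-aware selection described above can be sketched as follows. The only rule taken from the text is that video is withheld from the driving position; the manner names and the demographic tie-break are assumptions for illustration:

```python
def allowed_manners(seat: str) -> list:
    """Comforting manners permitted for a seat position.

    Video is excluded for the driving position; the manner
    names themselves are illustrative.
    """
    manners = ["voice_interaction", "audio_entertainment"]
    if seat != "driving":
        manners.append("video_entertainment")
    return manners

def select_comfort(emotion: str, gender: str, age: int, seat: str) -> str:
    """Pick one manner for the user; the age rule is a hypothetical example."""
    manners = allowed_manners(seat)
    # Assumed tie-break: prefer video for young non-driving passengers.
    if "video_entertainment" in manners and age < 18:
        return "video_entertainment"
    return manners[0]
```

Whatever policy is used, the constraint that a driving-position user never receives a video manner is what keeps the comforting itself from becoming a driving hazard.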
In the embodiments of the present disclosure, the user's emotion includes, for example, one or a combination of the following:

anger, fatigue, motion sickness, disgust, fear, and sadness;

and the comforting manners include one or a combination of the following:

voice interaction, audio/video entertainment, and playing driving suggestions.
For more targeted and effective comforting, the comforting manner and/or comforting resources may further be selected according to the user's preferences. In this case, the method further includes:

providing the user with a comforting manner matching the comforting-manner preference currently set by the user; and/or

in response to the comforting manner being audio/video entertainment, presenting to the user audio/video entertainment resources matching the currently set audio/video entertainment resource preference.

When boarding, the user may log in manually or through face recognition. From the logged-in user identifier, the comforting-manner preference and/or audio/video entertainment resource preference previously set by the user can be obtained, enabling more precise service.

Furthermore, after the user-set comforting-manner preference and/or audio/video entertainment resource preference is received, the user identifier and those preferences may be sent to a cloud server for storage and statistics. When a user who has not set preferences needs comforting, the cloud server can push comforting content suited to that user based on the statistics and on the user's gender, age, and other information.

In this case, selecting at least one comforting manner according to the user's emotion, gender, age, and seat position to emotionally comfort the user includes, for example:

sending the user's emotion, gender, age, and seat position to the cloud server;

receiving comforting manners pushed by the cloud server based on weighted statistics of emotion changes by age and gender for occupants of driving and non-driving positions during driving, together with the current user's emotion, gender, age, and seat position;

selecting at least one of the comforting manners pushed by the cloud server to emotionally comfort the user.
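A minimal sketch of the cloud-side weighted statistics: past comforting events are grouped by seat type, gender, age band, and emotion, and manners are ranked by how often they were used for that profile. The log schema is an assumption for illustration, not the server's actual data model:

```python
from collections import Counter

def rank_manners(history, seat_type, gender, age_band, emotion):
    """Rank comforting manners by frequency for a given user profile.

    `history` is a hypothetical log of past comforting events: each record
    is a dict with keys seat_type, gender, age_band, emotion, manner.
    Returns manner names ordered from most to least frequent.
    """
    scores = Counter()
    for rec in history:
        if (rec["seat_type"], rec["gender"], rec["age_band"],
                rec["emotion"]) == (seat_type, gender, age_band, emotion):
            scores[rec["manner"]] += 1
    return [manner for manner, _ in scores.most_common()]
```

The server would push the top-ranked manners for the current user's profile; the local side then selects at least one of them.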
It should be noted that although the operations of the disclosed method are depicted in a particular order in the drawings, this does not require or imply that the operations must be performed in that order, or that all of the illustrated operations must be performed to achieve the desired result. On the contrary, the steps depicted in the flowchart may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be decomposed into multiple steps.
For scenarios inside a car, or other scenarios with multiple users, at least one terminal device and one central controller may be provided to save hardware cost: each terminal device is placed at the position of a corresponding user and performs audio/video collection and comforting, while the central controller determines the comforting manner and resources. The central controller is, for example, a server.

For example, as shown in FIG. 2, an embodiment of the present disclosure further provides a comforting system based on emotion recognition, including a central controller 201 and at least one terminal device 202, where, for example:

the terminal device 202 collects the user's voice and/or image, determines from the voice and/or image that the user's current emotion is abnormal, sends the user's current emotion to the central controller, and emotionally comforts the user according to the comforting manner sent by the central controller;

the central controller 201 receives the user's current emotion sent by the terminal device, determines a comforting manner according to the user's emotion, and sends the comforting manner to the terminal device.

Further, the central controller 201 is configured, for example, to:

determine the comforting manner according to the user's emotion and the position of the terminal device 202 that sent the user's current emotion, and send the comforting manner to the terminal device 202.

Furthermore, the terminal device 202 is also configured to:

collect an image of the user;

judge the user's gender and age from the image, and send them to the central controller 201.

The central controller 201 is configured, for example, to:

select at least one comforting manner according to the user's emotion, gender, age, and the position of the terminal device 202 that sent the user's current emotion, and send it to the terminal device 202.
Typically, the user's emotion includes, for example, one or a combination of:

anger, fatigue, motion sickness, disgust, fear, and sadness.

The comforting manners include one or a combination of:

voice interaction, audio/video entertainment, and playing driving suggestions.

Further, the terminal device 202 is also configured to:

receive the comforting-manner preference and/or audio/video entertainment resource preference set by the user, and send them to the central controller 201;

and the central controller 201 determining a comforting manner according to the user's emotion includes, for example:

determining a comforting manner matching the comforting-manner preference currently set by the user; and/or

in response to the comforting manner being audio/video entertainment, preparing audio/video entertainment resources matching the currently set audio/video entertainment resource preference.

Further, as shown in FIG. 3, the system also includes a cloud server 203,

and the central controller 201 is further configured to:

send the user identifier and the user-set comforting-manner preference and/or audio/video entertainment resource preference to the cloud server 203.

Furthermore, the central controller 201 determining the comforting manner according to the user's emotion includes, for example:

sending the user's emotion, gender, age, and seat position to the cloud server 203;

receiving the comforting manners pushed by the cloud server 203, and selecting at least one of them;

and the cloud server 203 is configured to receive the user's emotion, gender, age, and seat position sent by the central controller 201, determine at least one comforting manner based on weighted statistics of emotion changes by age and gender for occupants of driving and non-driving positions during driving, together with the current user's emotion, gender, age, and seat position, and push it to the central controller 201.
For example, in an exemplary embodiment of the present disclosure applied inside a car, the terminal devices that perform audio/video collection and comforting may be installed at the corresponding seat positions in the vehicle, with each terminal device connected to the central controller by wire or wirelessly. The central controller is responsible for determining the comforting manner and resources and for interacting with the cloud server; it may also be responsible for performing emotion recognition from voice and/or images. Alternatively, a device that fully implements the emotion-recognition-based comforting method may be installed at each seat position, with each device independently judging and comforting the emotion of the user in its seat.

When the terminal-device-plus-central-controller arrangement is used, directional voice feature extraction is performed by the terminal devices deployed at the seat positions. For example, the influence of background speech on the voice features of a given position can be avoided; the corresponding voice features are passed through a language model trained by deep learning and transfer learning for emotion recognition, identifying changes of anger, disgust, fear, and sadness. For example, voice features may be collected through the directional microphones deployed at each position, and emotion recognition may be performed through a speech feature model and language dictionary library trained by deep learning and transfer learning, identifying changes in the anger, disgust, fear, and sadness of the occupant at the corresponding position; the recognition results are then transmitted to the central controller.

Through visual recognition in the terminal device, emotion recognition can also be performed with an image model trained by deep learning and transfer learning, identifying changes of anger, disgust, fear, and sadness.

Visual recognition in the terminal device can also identify the age bracket and gender of the occupant at the corresponding position. For example, facial features of occupants are extracted from the cameras deployed at the seat positions, age bracket and gender are recognized based on a face recognition model, and the recognition results are transmitted to the central controller.

The central controller determines the comforting method according to the emotion recognition and visual recognition results, and pushes the local and cloud-side comforting resources to the corresponding terminal device, which presents them to the user for comforting. For example, the comforting manner is prepared according to the seat position ID, the emotion type, and the occupant's age bracket and gender, and the local and cloud-side comforting resources are pushed to the emotion comforting module at the seat position.

When determining the comforting manner, the central controller may perform emotion management specific to the user's position in the car and decide whether to use the comforting manner corresponding to the detected emotion change. The emphasis of emotion management differs between driving and non-driving positions: for the driving position, fear arising from fatigue and anger arising from road rage are emphasized; for non-driving positions, disgust caused by motion sickness, anger arising before the ride, and anger that would affect the driver's safety are emphasized.

The comforting methods of the terminal devices are produced through customization or preference collection, and may be implemented through audio/video playback or voice suggestions.

The terminal device also provides the occupant at its position with a preference definition interface. Preferences can be defined by selecting resources for the four emotion categories of anger, disgust, fear, and sadness; resources may be chosen from the emotion comforting resource management list on the cloud side or defined by the user. The data structure of a preference definition is shown in FIG. 4.
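One possible rendering of such a preference definition as a data structure, under the assumption that each of the four emotion categories maps to a manner and a resource list (the field names and values are illustrative, not the structure of FIG. 4 itself):

```python
# Hypothetical preference definition: one entry per emotion category,
# each naming a comforting manner and the resources chosen from the
# cloud-side list or defined by the user.
preference = {
    "user_id": "u001",  # illustrative identifier
    "anger":   {"manner": "audio_video_entertainment",
                "resources": ["relaxing_playlist"]},
    "disgust": {"manner": "voice_interaction", "resources": []},
    "fear":    {"manner": "voice_interaction",
                "resources": ["calming_program"]},
    "sadness": {"manner": "audio_video_entertainment",
                "resources": ["comedy_clips"]},
}

def lookup_preference(pref, emotion):
    """Return (manner, resources) for an emotion, or (None, []) if unset."""
    entry = pref.get(emotion)
    if entry is None:
        return None, []
    return entry["manner"], entry["resources"]
```

A lookup that finds no entry for the recognized emotion would fall back to the server-pushed statistics described earlier.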
During comforting, the terminal device deployed at the seat position implements the comforting method pushed to it. For example, in the driving position, comforting may be based on the preference data of the driving-position user, with only audio and voice-interaction comforting methods available at that position; voice interaction may include, in addition to human-machine dialogue, voice interaction with a remote person. Non-driving positions can additionally provide video, audio, and voice interaction methods.

The central controller may further send the user preferences collected by the terminal devices to the cloud server. It may also compile weighted statistics of emotion changes by age bracket and gender for the driving and non-driving positions during driving, and locally deploy the comforting methods and the preferred resources within them, so that when a user needs comforting, comfort can be provided in a targeted manner.

The cloud server mainly performs data analysis and data management. Data management creates, for the driving and non-driving positions of the vehicle, a database mapping emotion types to comforting manners and comforting resources; data analysis is performed on occupants' preference interaction data, analyzing comforting methods and their resources along the dimensions of number of rides, emotion type by seat position, gender, and age bracket, so that comforting methods can be selected automatically.

The directional microphones, cameras, sound output units, and video output units of the terminal devices may be arranged as follows:

For the driver and front passenger, a directional microphone, camera, and sound output unit can be provided on the rearview mirror, with the two sets oriented toward the doors on either side, for directional voice collection, image collection, and directional sound output.

For the rear seats, a directional microphone, camera, and sound output unit can be deployed directly above each seat at a 30-degree angle centered on the corresponding position, for directional voice collection, image collection, and directional voice output for the rear-seat occupants.

For the rear positions, a video output unit is deployed on the back of the corresponding front seat.

The microphones, cameras, sound output units, and video output units perform their functions through independent hardware processing modules. They can send data to the central controller through serial ports, and the sound and video output units can receive the audio/video data of the comforting manner from the central controller through network ports. The central controller may be combined with the vehicle's central control system to manage the terminal devices, the comforting manners, and the resource preferences, and to establish data routing between the terminal devices and the cloud side.

Each terminal device extracts speech features through the directional voice collection module at its position and performs emotion recognition through the speech emotion recognition model deployed in it. Each terminal device also performs facial recognition of occupants through its visual recognition module, identifying the age bracket and gender of the occupant at each position, and transmits the speech emotion recognition and visual recognition results to the central controller.

Each occupant can set comforting-manner preferences on the corresponding terminal device; the terminal device sends the set preferences to the central controller, which requests and synchronizes resources on the cloud side. The central controller determines the comforting manner and resources according to the speech emotion recognition and visual recognition results and the user's preferences, and sends them to the terminal device.
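The terminal-to-controller exchange above can be sketched as a small message flow. The JSON schema and field names are assumed for illustration; the actual wire format is not specified in the text:

```python
import json

def build_report(seat_id, emotion, gender, age_band):
    """Terminal side: package recognition results for the central controller.

    The JSON schema here is a hypothetical wire format.
    """
    return json.dumps({"seat_id": seat_id, "emotion": emotion,
                       "gender": gender, "age_band": age_band})

def handle_report(raw, preferences):
    """Controller side: decode a report and answer with manner and resources.

    `preferences` maps seat_id -> per-emotion preference entries; when no
    preference is set, an assumed default of voice interaction is used.
    """
    msg = json.loads(raw)
    seat_prefs = preferences.get(msg["seat_id"], {})
    entry = seat_prefs.get(msg["emotion"],
                           {"manner": "voice_interaction", "resources": []})
    return {"seat_id": msg["seat_id"],
            "manner": entry["manner"],
            "resources": entry["resources"]}
```

The same report/response pair would carry over serial and network ports in the hardware arrangement described above, with the controller additionally consulting the cloud side before answering.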
An embodiment of the present disclosure further provides a comforting apparatus based on emotion recognition. The apparatus corresponds to the foregoing comforting method; for specific embodiments, see the embodiments of that method. As shown in FIG. 5, it includes:

an acquisition device 501 for acquiring the user's voice and/or image;

a judgment device 502 for judging from the user's voice and/or image whether the user's emotion is abnormal;

a comforting device 503 for, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user.

The above units may be implemented in software or in hardware. For example, the units or modules in the embodiments of the present disclosure may be implemented by a general-purpose or dedicated processor, such as a central processing unit or a programmable logic circuit.

Further, the acquisition device 501 is configured, for example, to:

collect the user's voice and/or image, and determine the seat position of the user.

The comforting device 503 is configured, for example, to:

determine the comforting manner according to the user's emotion and seat position, and emotionally comfort the user.

The comforting device 503 determining the comforting manner according to the user's emotion and seat position and emotionally comforting a user with an abnormal emotion includes, for example:

acquiring an image of the user;

judging the user's gender and age from the image;

selecting at least one comforting manner according to the user's emotion, gender, age, and seat position to emotionally comfort the user.
Typically, the user's emotion includes, for example, one or a combination of:

anger, fatigue, motion sickness, disgust, fear, and sadness.

The comforting manners include one or a combination of:

voice interaction, audio/video entertainment, and playing driving suggestions.

Further, the comforting device 503 is also configured to:

provide the user with a comforting manner matching the comforting-manner preference currently set by the user; and/or

in response to the comforting manner being audio/video entertainment, present to the user audio/video entertainment resources matching the currently set audio/video entertainment resource preference.

Furthermore, the comforting device 503 is also configured to:

send the user identifier and the user-set comforting-manner preference and/or audio/video entertainment resource preference to the cloud server.

Furthermore, the comforting device 503 selecting at least one comforting manner according to the user's emotion, gender, age, and seat position to emotionally comfort the user includes, for example:

sending the user's emotion, gender, age, and seat position to the cloud server;

receiving comforting manners pushed by the cloud server based on weighted statistics of emotion changes by age and gender for occupants of driving and non-driving positions during driving, together with the current user's emotion, gender, age, and seat position;

selecting at least one of the comforting manners pushed by the cloud server to emotionally comfort the user.
It should be understood that the units or modules recited in the apparatus correspond to the steps of the method described with reference to FIG. 1. Thus, the operations and features described above for the method apply equally to the apparatus and the units it contains, and are not repeated here. The apparatus may be implemented in advance in a browser or other security application of an electronic device, or may be loaded into such a browser or security application by downloading. The corresponding units in the apparatus can cooperate with units in the electronic device to implement the solutions of the embodiments of the present application.

Referring now to FIG. 6, there is shown a schematic structural diagram of a computer system suitable for implementing the emotion-recognition-based comforting device of an embodiment of the present application; the device may be, for example, a terminal device, a central controller, or a device combining a terminal device with a central controller.

As shown in FIG. 6, the computer system includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for system operation. The CPU 601, ROM 602, and RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage portion 608.

For example, to reduce hardware cost, the central controller may omit the input portion 606 and the output portion 607.

In particular, according to embodiments of the present disclosure, the process described above with reference to FIG. 1 may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method of FIG. 1. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611.

The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. Each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order shown in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units or modules described in the embodiments of the present application may be implemented by software or by hardware. The described units or modules may also be provided in a processor; for example, a processor may be described as including an XX unit, a YY unit, and a ZZ unit. The names of these units or modules do not in some cases limit the units or modules themselves; for example, the XX unit may also be described as "a unit for XX".

As another aspect, the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the apparatus of the above embodiments, or a stand-alone computer-readable storage medium not assembled into a device. The computer-readable storage medium stores one or more programs, and the one or more programs are used by one or more processors to perform the method described in the present application.

The above description is merely a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions formed by substituting the above features with technical features of similar functions disclosed in (but not limited to) the present application.

The present application claims priority of Chinese patent application No. 201810371545.1 filed on April 24, 2018, the entire disclosure of which is incorporated herein by reference as part of the present application.

Claims (20)

  1. A comforting method based on emotion recognition, comprising:
    acquiring at least one of a user's voice and image;
    judging, from the at least one of the user's voice and image, whether the user's emotion is abnormal;
    in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user.
  2. The method of claim 1, wherein acquiring at least one of a user's voice and image comprises:
    collecting at least one of the user's voice and image, and determining the seat position of the user in a vehicle;
    and wherein, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user comprises:
    determining the comforting manner according to the user's emotion and seat position, and emotionally comforting the user.
  3. The method of claim 2, wherein determining the comforting manner according to the user's emotion and seat position and emotionally comforting the user comprises:
    obtaining the user's gender and age from the collected image of the user;
    selecting at least one comforting manner to emotionally comfort the user according to the user's emotion, gender, age, and seat position.
  4. The method of any one of claims 1-3, wherein, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user comprises:
    providing the user with a comforting manner matching the comforting-manner preference currently set by the user.
  5. The method of any one of claims 1-4, wherein, in response to the user's emotion being abnormal, determining a comforting manner according to the user's emotion and emotionally comforting the user comprises:
    in response to the comforting manner being audio/video entertainment, presenting to the user audio/video entertainment resources matching the currently set audio/video entertainment resource preference.
  6. The method of claim 4 or 5, further comprising:
    sending the user identifier and the user-set comforting-manner preference and/or audio/video entertainment resource preference to a server.
  7. The method of any one of claims 4-6, wherein the comforting-manner preference and/or audio/video entertainment resource preference set by the user are obtained according to the user identifier with which the user logs into the vehicle.
  8. The method of any one of claims 3-7, wherein selecting at least one comforting manner to emotionally comfort the user according to the user's emotion, gender, age, and/or seat position comprises:
    sending the user's emotion, gender, age, and seat position to a server;
    receiving comforting manners pushed by the server based on weighted statistics of emotion changes by age and gender for occupants of driving and non-driving positions during driving, together with the current user's emotion, gender, age, and seat position;
    selecting at least one of the comforting manners pushed by the server to emotionally comfort the user.
  9. The method of any one of claims 1-8, wherein
    acquiring at least one of the user's voice and image comprises:
    collecting voice features through a directional microphone located on the vehicle,
    and judging, from the at least one of the user's voice and image, whether the user's emotion is abnormal comprises:
    performing emotion recognition through a speech feature model, a language dictionary library, or an image model trained by deep learning, to determine whether the user's emotion is abnormal.
  10. The method of any one of claims 2-9, wherein the seat positions comprise a driving position and non-driving positions,
    and determining the comforting manner according to the user's emotion and/or seat position and emotionally comforting the user comprises:
    comforting a user in the driving position through audio and voice interaction;
    comforting a user in a non-driving position through video, audio, and voice interaction.
  11. The method of claim 10, wherein
    for the driving position, a directional microphone, a camera, and/or a sound output unit are provided on the vehicle's rearview mirror and oriented toward the doors on either side of the vehicle, for directional voice collection, image collection, and directional sound output for the driving-position user;
    for the non-driving positions, a directional microphone, a camera, and a sound output unit are deployed above each non-driving seat at a first angle relative to that position, for directional voice collection, image collection, and directional voice output for the user at that non-driving position.
  12. A comforting apparatus based on emotion recognition, comprising:
    an acquisition device configured to acquire at least one of a user's voice and image;
    a judgment device configured to judge, from the at least one of the user's voice and image, whether the user's emotion is abnormal;
    a comforting device configured to, in response to the user's emotion being abnormal, determine a comforting manner according to the user's emotion and emotionally comfort the user.
  13. A comforting system based on emotion recognition, comprising a central controller and at least one terminal device, wherein
    the terminal device is configured to collect at least one of a user's voice and image, send the user's current emotion to the central controller, and emotionally comfort the user according to the comforting manner sent by the central controller;
    the central controller is configured to receive the user's current emotion sent by the terminal device, determine a comforting manner according to the user's emotion, and send the comforting manner to the terminal device.
  14. The system of claim 13, wherein the central controller is further configured to:
    determine the comforting manner according to the user's emotion and the position of the terminal device that sent the user's current emotion, and send the comforting manner to the terminal device.
  15. The system of claim 13 or 14, wherein the terminal device is further configured to:
    judge the user's gender and age from the collected image of the user, and send them to the central controller;
    and the central controller is configured to:
    select at least one comforting manner according to the user's emotion, gender, age, and the position of the terminal device that sent the user's current emotion, and send it to the terminal device.
  16. The system of any one of claims 13-15, wherein the terminal device is further configured to:
    receive the comforting-manner preference set by the user, and send it to the central controller;
    and the central controller determining a comforting manner according to the user's emotion comprises:
    determining a comforting manner matching the comforting-manner preference currently set by the user.
  17. The system of any one of claims 13-16, wherein the terminal device is further configured to:
    receive the audio/video entertainment resource preference set by the user, and send it to the central controller;
    and the central controller determining a comforting manner according to the user's emotion comprises:
    in response to the comforting manner being audio/video entertainment, determining audio/video entertainment resources matching the currently set audio/video entertainment resource preference.
  18. The system of claim 17, wherein the central controller determines the audio/video entertainment resources according to the user identifier and the history records corresponding to the user identifier.
  19. A computer device, comprising a processor and a memory, wherein:
    the memory contains instructions executable by the processor to cause the processor to perform the method of any one of claims 1-11.
  20. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1-11.
PCT/CN2018/119384 2018-04-24 2018-12-05 基于情绪识别的安抚方法、装置、系统、计算机设备以及计算机可读存储介质 WO2019205642A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/472,012 US11498573B2 (en) 2018-04-24 2018-12-05 Pacification method, apparatus, and system based on emotion recognition, computer device and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810371545.1 2018-04-24
CN201810371545.1A CN108549720A (zh) 2018-04-24 2018-04-24 一种基于情绪识别的安抚方法、装置及设备、存储介质

Publications (1)

Publication Number Publication Date
WO2019205642A1 true WO2019205642A1 (zh) 2019-10-31

Family

ID=63512191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119384 WO2019205642A1 (zh) 2018-04-24 2018-12-05 基于情绪识别的安抚方法、装置、系统、计算机设备以及计算机可读存储介质

Country Status (3)

Country Link
US (1) US11498573B2 (zh)
CN (1) CN108549720A (zh)
WO (1) WO2019205642A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540358A (zh) * 2020-04-26 2020-08-14 云知声智能科技股份有限公司 人机交互方法、装置、设备和存储介质
CN114595692A (zh) * 2020-12-07 2022-06-07 山东新松工业软件研究院股份有限公司 一种情绪识别方法、系统及终端设备
EP4064113A4 (en) * 2019-11-22 2023-05-10 Arcsoft Corporation Limited USER INFORMATION DETECTION METHOD AND SYSTEM, AND ELECTRONIC DEVICE

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549720A (zh) * 2018-04-24 2018-09-18 京东方科技集团股份有限公司 一种基于情绪识别的安抚方法、装置及设备、存储介质
CN109471954A (zh) * 2018-09-29 2019-03-15 百度在线网络技术(北京)有限公司 基于车载设备的内容推荐方法、装置、设备和存储介质
CN109302486B (zh) * 2018-10-26 2021-09-03 广州小鹏汽车科技有限公司 一种根据车内环境推送音乐的方法和系统
CN109448409A (zh) * 2018-10-30 2019-03-08 百度在线网络技术(北京)有限公司 交通信息交互的方法、装置、设备和计算机存储介质
CN109550133B (zh) * 2018-11-26 2021-05-11 赵司源 一种情绪安抚方法及系统
CN109616109B (zh) * 2018-12-04 2020-05-19 北京蓦然认知科技有限公司 一种语音唤醒方法、装置及系统
CN110598611B (zh) * 2019-08-30 2023-06-09 深圳智慧林网络科技有限公司 看护系统、基于看护系统的病人看护方法和可读存储介质
CN112947740A (zh) * 2019-11-22 2021-06-11 深圳市超捷通讯有限公司 基于动作分析的人机交互方法、车载装置
JP7413055B2 (ja) * 2020-02-06 2024-01-15 本田技研工業株式会社 情報処理装置、車両、プログラム、及び情報処理方法
CN113657134B (zh) * 2020-05-12 2024-04-23 北京地平线机器人技术研发有限公司 语音播放方法和装置、存储介质及电子设备
CN111605556B (zh) * 2020-06-05 2022-06-07 吉林大学 一种防路怒症识别及控制系统
CN111741116B (zh) * 2020-06-28 2023-08-22 海尔优家智能科技(北京)有限公司 情感交互方法、装置、存储介质及电子装置
CN112061058B (zh) * 2020-09-07 2022-05-27 华人运通(上海)云计算科技有限公司 场景触发的方法、装置、设备和存储介质
CN112733763B (zh) * 2021-01-15 2023-12-05 北京华捷艾米科技有限公司 人机语音交互的实现方法及装置、电子设备、存储介质
CN113183900A (zh) * 2021-03-31 2021-07-30 江铃汽车股份有限公司 车辆人员监测方法、装置、可读存储介质及车载系统
CN113780062A (zh) * 2021-07-26 2021-12-10 岚图汽车科技有限公司 一种基于情感识别的车载智能交互方法、存储介质及芯片
CN113581187A (zh) * 2021-08-06 2021-11-02 阿尔特汽车技术股份有限公司 用于车辆的控制方法及相应的系统、车辆、设备和介质
CN114954332A (zh) * 2021-08-16 2022-08-30 长城汽车股份有限公司 一种车辆控制方法、装置、存储介质及车辆
CN114422742A (zh) * 2022-01-28 2022-04-29 深圳市雷鸟网络传媒有限公司 一种通话氛围提升方法、装置、智能设备及存储介质
CN117445805B (zh) * 2023-12-22 2024-02-23 吉林大学 面向公交车司乘冲突的人员预警和行车控制方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201838333U (zh) * 2010-11-12 2011-05-18 北京工业大学 基于驾驶员状态的音乐播放器
CN102874259A (zh) * 2012-06-15 2013-01-16 浙江吉利汽车研究院有限公司杭州分公司 一种汽车驾驶员情绪监视及车辆控制系统
CN203075421U (zh) * 2012-07-31 2013-07-24 深圳市赛格导航科技股份有限公司 一种基于情绪变化的音乐播放系统
CN103873512A (zh) * 2012-12-13 2014-06-18 深圳市赛格导航科技股份有限公司 基于脸部识别技术的车载无线传输音乐的方法
CN106803423A (zh) * 2016-12-27 2017-06-06 智车优行科技(北京)有限公司 基于用户情绪状态的人机交互语音控制方法、装置及车辆
CN107423351A (zh) * 2017-05-24 2017-12-01 维沃移动通信有限公司 一种信息处理方法及电子设备
CN108549720A (zh) * 2018-04-24 2018-09-18 京东方科技集团股份有限公司 一种基于情绪识别的安抚方法、装置及设备、存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9734685B2 (en) * 2014-03-07 2017-08-15 State Farm Mutual Automobile Insurance Company Vehicle operator emotion management system and method
JP6149824B2 (ja) * 2014-08-22 2017-06-21 トヨタ自動車株式会社 車載装置、車載装置の制御方法及び車載装置の制御プログラム


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4064113A4 (en) * 2019-11-22 2023-05-10 Arcsoft Corporation Limited USER INFORMATION DETECTION METHOD AND SYSTEM, AND ELECTRONIC DEVICE
CN111540358A (zh) * 2020-04-26 2020-08-14 云知声智能科技股份有限公司 人机交互方法、装置、设备和存储介质
CN111540358B (zh) * 2020-04-26 2023-05-26 云知声智能科技股份有限公司 人机交互方法、装置、设备和存储介质
CN114595692A (zh) * 2020-12-07 2022-06-07 山东新松工业软件研究院股份有限公司 一种情绪识别方法、系统及终端设备

Also Published As

Publication number Publication date
US11498573B2 (en) 2022-11-15
US20210362725A1 (en) 2021-11-25
CN108549720A (zh) 2018-09-18

Similar Documents

Publication Publication Date Title
WO2019205642A1 (zh) 基于情绪识别的安抚方法、装置、系统、计算机设备以及计算机可读存储介质
US12084045B2 (en) Systems and methods for operating a vehicle based on sensor data
CN108725357B (zh) 基于人脸识别的参数控制方法、系统与云端服务器
JP7053432B2 (ja) 制御装置、エージェント装置及びプログラム
KR20220041901A (ko) 자동차 캐빈 내 이미지 처리
US11014508B2 (en) Communication support system, communication support method, and storage medium
CN110286745B (zh) 对话处理系统、具有对话处理系统的车辆及对话处理方法
CN109302486B (zh) 一种根据车内环境推送音乐的方法和系统
DE102018126525A1 (de) Fahrzeuginternes System, Verfahren und Speichermedium
CN111717083A (zh) 一种车辆交互方法和一种车辆
JP2018027731A (ja) 車載装置、車載装置の制御方法およびコンテンツ提供システム
US20230278426A1 (en) Vehicle-mounted apparatus, information processing method, and non-transitory storage medium
CN112440900A (zh) 一种车辆控制方法、装置、控制设备及汽车
CN112996194A (zh) 一种灯光控制方法及装置
CN111902864A (zh) 用于运行机动车的声音输出装置的方法、语音分析与控制装置、机动车和机动车外部的服务器装置
CN111703385B (zh) 一种内容互动方法以及一种车辆
CN112951216B (zh) 一种车载语音处理方法及车载信息娱乐系统
CN117198281A (zh) 语音交互方法、装置、电子设备及车辆
JP2019105966A (ja) 情報処理方法及び情報処理装置
CN113561988A (zh) 一种基于视线追踪的语音控制方法、汽车及可读存储介质
JP2019191859A (ja) 車両の情報提示装置及び車両の情報提示方法
US20220105948A1 (en) Vehicle agent device, vehicle agent system, and computer-readable storage medium
CN114792440A (zh) 用于确定车辆内部的人员的状态的方法和装置
CN116166835A (zh) 智能推荐方法及系统
CN112738447A (zh) 一种基于智能座舱的视频会议方法和智能座舱

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916968

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18916968

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 10.05.2021)
