CN112947759A - Vehicle-mounted emotional interaction platform and interaction method - Google Patents

Vehicle-mounted emotional interaction platform and interaction method

Info

Publication number
CN112947759A
Authority
CN
China
Prior art keywords
vehicle
information
data
user model
module
Prior art date
Legal status
Pending
Application number
CN202110251326.1A
Other languages
Chinese (zh)
Inventor
卜烨雯
郭呈
吴文斌
Current Assignee
SAIC Volkswagen Automotive Co Ltd
Original Assignee
SAIC Volkswagen Automotive Co Ltd
Priority date
Filing date
Publication date
Application filed by SAIC Volkswagen Automotive Co Ltd
Priority to CN202110251326.1A
Publication of CN112947759A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/55 Push-based network services


Abstract

The invention discloses a vehicle-mounted emotional interaction platform comprising an in-vehicle component and an off-board component. The in-vehicle component includes: an in-vehicle interaction module, an in-vehicle unit adaptation module, an in-vehicle data processing module, an in-vehicle user database, a data communication module, a user model updating module, a user model database, an in-vehicle state management module, a service scenario management module, and an in-vehicle unit control module. The off-board component comprises: a content server, a personnel data server, and a user model machine learning server. The invention also discloses a vehicle-mounted emotional interaction method. The platform and method aim to overcome the shortcomings of existing interaction services by combining the strengths of traditional vehicle manufacturers with internet technology, providing a new interaction platform architecture.

Description

Vehicle-mounted emotional interaction platform and interaction method
Technical Field
The invention relates to the field of automobile components, and in particular to intelligent cockpit components.
Background
Current cockpit interaction designs from traditional vehicle manufacturers emphasize data analysis of the vehicle's electronic control units and sensors and provide mechanized, assembly-line service responses to users; limited in collecting and sharing external data, they fall visibly short in humanization and personalization. In contrast, cockpit interaction designs from newer vehicle makers emphasize internet technologies and data from service/content providers, but lack analysis of the specifics of the cockpit and the driving scenario, so their services are neither very intelligent nor very accurate.
Disclosure of Invention
According to an embodiment of the invention, a vehicle-mounted emotional interaction platform comprising an in-vehicle component and an off-board component is provided. The in-vehicle component includes: an in-vehicle interaction module, an in-vehicle unit adaptation module, an in-vehicle data processing module, an in-vehicle user database, a data communication module, a user model updating module, a user model database, an in-vehicle state management module, a service scenario management module, and an in-vehicle unit control module. The in-vehicle interaction module collects vehicle information and occupant information. The in-vehicle unit adaptation module adapts the in-vehicle interaction module and establishes a connection with it. The in-vehicle data processing module acquires the metadata of the vehicle and occupant information from the in-vehicle interaction module and converts it into relational data. The relational data generated by the in-vehicle data processing module is stored in the in-vehicle user database. The data communication module communicates with the off-board component. The user model updating module acquires, through the data communication module, the user model generated and updated by the user model machine learning server. The updated user model obtained by the user model updating module is stored in the user model database as the current user model. The in-vehicle state management module determines the in-vehicle state from the relational data in the in-vehicle user database. The service scenario management module determines a service scenario mode from the current user model in the user model database and the in-vehicle state in the in-vehicle state management module. The in-vehicle unit control module controls the in-vehicle components to execute corresponding actions according to the service scenario mode determined by the service scenario management module. The off-board component comprises: a content server, a personnel data server, and a user model machine learning server. The content server provides multimedia content data. The personnel data server provides basic occupant information. The user model machine learning server generates a user model for the occupants, obtains the updated relational data in the in-vehicle user database through the data communication module, and updates the user model based on that data.
In one embodiment, the in-vehicle interaction module includes an infotainment device, an occupant monitoring device, and controllers and sensors. The infotainment device collects navigation data, media data, voice data, and cockpit control commands. The occupant monitoring device acquires expression information, behavior information, and health information of the occupants. The controllers and sensors acquire throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information.
In one embodiment, the occupant monitoring device includes a plurality of in-vehicle cameras and a millimeter-wave radar. The in-vehicle cameras acquire a panoramic image of the cabin, extract the occupants' face and body images from it, obtain expression information from the face images, and obtain behavior information from the body images. The millimeter-wave radar is installed in the seat and acquires the occupants' heart rate data as health information.
In one embodiment, the multimedia content data provided by the content server includes weather data, calendars, audio data, video data, and graphic-text data. The basic occupant information provided by the personnel data server includes the occupants' names, ages, genders, family information, and hobby information.
In one embodiment, the user model machine learning server comprises a generic user model learning module and a personalized user model learning module. The generic user model learning module generates a generic user model from mass in-vehicle user data. The personalized user model learning module, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data.
In one embodiment, the in-vehicle unit control module controlling the in-vehicle components according to the service scenario mode determined by the service scenario management module comprises: the in-vehicle unit control module controls the infotainment device to perform destination push, route push, media push, or service push, and controls the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters.
According to an embodiment of the invention, a vehicle-mounted emotional interaction method is provided, comprising the following steps:
a module adaptation and information acquisition step: adapting the in-vehicle interaction module, establishing a connection with it, and collecting vehicle information and occupant information through the in-vehicle interaction module;
a data processing step: converting the metadata of the vehicle and occupant information acquired by the in-vehicle interaction module into relational data and storing it in the in-vehicle user database;
a user model machine learning and generation step: communicating with the off-board user model machine learning server through the data communication module; the server generates a user model for the occupants, obtains the updated relational data in the in-vehicle user database through the data communication module, and updates the user model based on that data;
a user model updating step: obtaining the user model generated and updated by the user model machine learning server through the data communication module, and storing the updated user model in the user model database as the current user model;
an in-vehicle state determination step: determining the in-vehicle state from the relational data in the in-vehicle user database;
a service scenario determination step: determining a service scenario mode from the current user model in the user model database and the in-vehicle state;
and a scenario execution step: controlling the in-vehicle components to execute corresponding actions according to the determined service scenario mode.
In one embodiment, collecting vehicle information and occupant information through the in-vehicle interaction module includes:
collecting navigation data, media data, voice data, and cockpit control commands through the infotainment device;
acquiring the occupants' expression information, behavior information, and health information through the occupant monitoring device;
and acquiring throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information through the controllers and sensors.
In one embodiment, acquiring the occupants' expression, behavior, and health information through the occupant monitoring device includes:
acquiring a panoramic image of the cabin through a plurality of in-vehicle cameras, extracting the occupants' face and body images from it, obtaining expression information from the face images, and obtaining behavior information from the body images;
and acquiring the occupants' heart rate data as health information through a millimeter-wave radar installed in the seat.
In one embodiment, the user model machine learning and generation step
first generates a generic user model from mass in-vehicle user data,
and then, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data; generating the personalized user model also uses basic occupant information provided by the off-board personnel data server, including the occupants' names, ages, genders, family information, and hobby information.
In one embodiment, the scenario execution step includes:
controlling the infotainment device to perform destination push, route push, media push, or service push, the pushed media including multimedia content data provided by the off-board content server: weather data, calendars, audio data, video data, and graphic-text data;
and controlling, through the in-vehicle unit control module, the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters.
The vehicle-mounted emotional interaction platform and interaction method provided by the invention aim to overcome the shortcomings of existing interaction services by combining the strengths of traditional vehicle manufacturers with internet technology, providing a new interaction platform architecture.
Drawings
FIG. 1 discloses a block diagram of a vehicle-mounted emotional interaction platform according to an embodiment of the invention.
FIGS. 2a and 2b disclose the workflow of the vehicle-mounted emotional interaction platform according to an embodiment of the invention.
FIG. 3 discloses a flowchart of a vehicle-mounted emotional interaction method according to an embodiment of the invention.
Detailed Description
To remedy the shortcomings of the prior art, the invention introduces machine learning for building a user model in the automotive environment. Machine learning, a method of implementing artificial intelligence, uses algorithms to parse and learn from data in order to make decisions and predictions about real-world events. The technique is already widely applied in internet enterprises and commercial domains, for example in recommendation systems, personalized services, predictive planning, and precision marketing. Applying it to in-vehicle services, however, requires combining the in-vehicle scenario with the types of data actually available in the vehicle; because that data is specialized and hard to obtain, a brand-new platform architecture is needed to support the technique. The invention focuses on the construction and application of a vehicle software architecture that can be deployed standalone in any controller or electronic unit of the vehicle architecture with sufficient computing capability, or deployed in distributed fashion across several controllers or electronic units after its functions are split up.
FIG. 1 discloses a block diagram of a vehicle-mounted emotional interaction platform according to an embodiment of the invention. Referring to FIG. 1, the platform comprises an in-vehicle component 101 and an off-board component 102. The in-vehicle component 101 includes: an in-vehicle interaction module 111, an in-vehicle unit adaptation module 112, an in-vehicle data processing module 113, an in-vehicle user database 114, a data communication module 115, a user model update module 116, a user model database 117, an in-vehicle state management module 118, a service scenario management module 119, and an in-vehicle unit control module 110. The off-board component 102 includes: a content server 121, a personnel data server 122, and a user model machine learning server 123.
The in-vehicle interaction module 111 collects vehicle information as well as occupant information. In the illustrated embodiment, the in-vehicle interaction module 111 includes: an infotainment device 131, an occupant monitoring device 132, and controllers and sensors 133. The infotainment device 131 collects in-vehicle environment data such as navigation data, media data, voice data, and cockpit control commands. The occupant monitoring device 132 acquires the occupants' expression, behavior, and health information. In one embodiment, the occupant monitoring device 132 includes a plurality of in-vehicle cameras and a millimeter-wave radar. The in-vehicle cameras acquire a panoramic image of the cabin, extract the occupants' face and body images from it, obtain expression information from the face images, and obtain behavior information from the body images. The cameras can include a camera mounted at the front of the cabin to capture the occupants' faces and upper bodies, and a camera mounted on the roof to capture the cabin panorama and the occupants' hands. The millimeter-wave radar can be installed in the seat and acquires the occupants' heart rate data, respiration data, and the like as health information. The controllers and sensors 133 acquire vehicle information such as throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information.
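To make the kinds of signals named above concrete, the following minimal Python sketch defines containers for the collected data. Every class and field name here is an illustrative assumption, not something specified by the patent.

```python
from dataclasses import dataclass

# Hypothetical containers for the signals described above.

@dataclass
class OccupantSignals:
    expression: str          # e.g. "neutral", "angry" (from the face image)
    behavior: str            # e.g. "hands_on_wheel" (from the body image)
    heart_rate_bpm: float    # from the seat-mounted millimeter-wave radar
    respiration_rate: float  # breaths per minute, also from the radar

@dataclass
class VehicleSignals:
    throttle_pct: float
    brake_pct: float
    wheel_speed_kmh: float
    steering_angle_deg: float
    lights_on: bool
    ac_temperature_c: float
    seat_position: int
```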
The in-vehicle unit adaptation module 112 adapts the in-vehicle interaction module and establishes a connection with it. In one embodiment, the in-vehicle unit adaptation module 112 adapts and establishes connections with the infotainment device 131, the occupant monitoring device 132, and the controllers and sensors 133.
The in-vehicle data processing module 113 acquires the metadata of the vehicle and occupant information from the in-vehicle interaction module and converts it into relational data. The data collected by the in-vehicle interaction module 111 is usually in a raw metadata format that the machine learning modules cannot recognize, so it cannot be used for machine learning directly. The in-vehicle data processing module 113 converts this metadata into relational data that the machine learning modules can recognize. The relational data generated by the in-vehicle data processing module 113 is stored in the in-vehicle user database 114.
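One plausible reading of this metadata-to-relational conversion is flattening raw key-value records into rows of a relational store. The sketch below uses SQLite purely for illustration; the schema, table, and function names are assumptions, not the patent's design.

```python
import json
import sqlite3
import time

def metadata_to_rows(raw: dict) -> list:
    """Flatten one raw metadata record into (timestamp, signal, value) rows."""
    ts = raw.get("timestamp", time.time())
    return [(ts, key, json.dumps(value)) for key, value in raw.items() if key != "timestamp"]

# Stand-in for the in-vehicle user database 114.
db = sqlite3.connect("in_vehicle_user.db")
db.execute("CREATE TABLE IF NOT EXISTS signals (ts REAL, signal TEXT, value TEXT)")
db.executemany(
    "INSERT INTO signals VALUES (?, ?, ?)",
    metadata_to_rows({"timestamp": 1700000000.0, "heart_rate_bpm": 72, "throttle_pct": 18.5}),
)
db.commit()
```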
The data communication module 115 communicates with the off-board component, enabling data exchange between the in-vehicle component 101 and the off-board component 102.
The user model update module 116 acquires, through the data communication module 115, the user model generated and updated by the user model machine learning server 123 (described in detail below). The updated user model obtained by the user model update module 116 is stored in the user model database 117 and serves as the current user model for the other modules.
The in-vehicle state management module 118 determines the in-vehicle state from the relational data in the in-vehicle user database. The relational data it uses comprises health information, such as the occupants' heart rate and respiration data, together with vehicle information, such as throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information. By comprehensively evaluating the vehicle information and the occupants' health information, the current in-vehicle state can be determined, for example the occupants' emotion and health condition and the vehicle's current driving state.
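The patent does not give the evaluation rules. A simple rule-based reading of "comprehensively evaluating" the health and vehicle signals might look like the sketch below; the thresholds and state labels are invented for illustration only.

```python
def assess_in_vehicle_state(heart_rate_bpm: float, expression: str,
                            wheel_speed_kmh: float, brake_pct: float) -> str:
    """Toy rules combining occupant health data with vehicle data into a state label."""
    if heart_rate_bpm > 110 and expression == "angry":
        return "road_rage_risk"
    if heart_rate_bpm < 55 and wheel_speed_kmh > 80:
        return "possible_fatigue"
    if brake_pct > 70:
        return "hard_braking"
    return "normal"

print(assess_in_vehicle_state(120, "angry", 60, 10))  # -> "road_rage_risk"
```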
The service scenario management module 119 determines a service scenario mode from the current user model in the user model database 117 and the in-vehicle state held by the in-vehicle state management module. A service scenario mode comprehensively weighs the occupants' emotion and health condition and the vehicle's driving state, evaluates the current safety risk, and then issues control instructions to the vehicle's devices with the aim of adjusting the occupants' emotion or the vehicle's driving state. For example, the platform can react in real time to in-vehicle user states such as inattention, fatigued driving, emotional agitation, or road rage, improving both the driving experience and driving safety. This feedback is proactive: the user does not need to issue any instruction.
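As one illustration of how a scenario mode could be derived from the pair (user model, in-vehicle state), the sketch below uses a lookup table. The user classes, state names, and mode names are all assumptions made for the example.

```python
# Hypothetical mapping from (user class, in-vehicle state) to a scenario mode.
SCENARIO_TABLE = {
    ("calm_commuter", "road_rage_risk"):   "soothing_media_and_cool_cabin",
    ("calm_commuter", "possible_fatigue"): "rest_stop_suggestion",
    ("night_driver",  "possible_fatigue"): "alertness_boost",
}

def choose_scenario(user_class: str, in_vehicle_state: str) -> str:
    # Fall back to passive monitoring when no rule matches.
    return SCENARIO_TABLE.get((user_class, in_vehicle_state), "default_monitoring")
```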
The in-vehicle unit control module 110 controls the in-vehicle components to execute corresponding actions according to the service scenario mode determined by the service scenario management module 119. In one embodiment, this includes: the in-vehicle unit control module 110 controls the infotainment device 131 to perform destination push, route push, media push, or service push, and controls the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters. The in-vehicle unit control module 110 exercises these controls over the in-vehicle devices in order to achieve the emotional or driving-state adjustment targeted by the service scenario management module 119.
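A minimal dispatch sketch for this control step is shown below. The device interfaces (`infotainment`, `climate`) and their methods are hypothetical stand-ins for the real controllers, not an API defined by the patent.

```python
def execute_scenario(scenario: str, infotainment, climate) -> None:
    """Translate a scenario mode into device actions (all interfaces hypothetical)."""
    if scenario == "soothing_media_and_cool_cabin":
        infotainment.push_media(playlist="relaxing")
        climate.set_temperature_c(21.0)
    elif scenario == "rest_stop_suggestion":
        infotainment.push_destination(category="rest_area")
    elif scenario == "alertness_boost":
        climate.set_temperature_c(18.0)  # cooler cabin to raise alertness
```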
With continued reference to FIG. 1, the off-board component 102 includes: a content server 121, a personnel data server 122, and a user model machine learning server 123.
The content server 121 provides multimedia content data. In one embodiment this includes weather data, calendars, audio data, video data, and graphic-text data. The multimedia content data can be used when the in-vehicle unit control module 110 directs the infotainment device 131 to perform the various pushes.
The personnel data server 122 provides basic occupant information. In one embodiment, this includes the occupants' names, ages, genders, family information, and hobby information. The personalized user model learning module uses this information when generating the personalized user model for the occupants of the host vehicle.
The user model machine learning server 123 generates a user model for the occupants, obtains the updated relational data in the in-vehicle user database through the data communication module, and updates the user model based on that data. In the illustrated embodiment, the user model machine learning server 123 includes a generic user model learning module 124 and a personalized user model learning module 125. The generic user model learning module 124 generates generic user models from mass in-vehicle user data. This mass data is typically internet-scale big data, namely the in-vehicle user data collected by every vehicle equipped with a similar platform. From this mass data the characteristics of broad user classes, i.e. the generic user model, are obtained; in one embodiment, the generic user model sorts users into several basic broad classes. The personalized user model learning module 125, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's own in-vehicle user data. It selects the user class in the generic user model that best matches the host vehicle's users as a base model, then refines it with the host vehicle's in-vehicle user data and the basic occupant information provided by the off-board personnel data server 122. The personalized user model for the occupants of the host vehicle is fed back as the updated user model to the user model update module 116 and stored in the user model database 117.
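The patent does not name a learning algorithm. One way to read "generic class plus personalization" is to cluster fleet-wide data into broad user classes and then adapt the best-matching class toward the host vehicle's own data. The sketch below (scikit-learn, random placeholder data) is an assumption of that reading, not the patent's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Generic model: cluster fleet-wide feature vectors into a few broad user classes.
fleet_features = rng.random((10_000, 8))   # placeholder for mass in-vehicle user data
generic_model = KMeans(n_clusters=5, random_state=0).fit(fleet_features)

# Personalized model: pick the class that best matches this vehicle's occupants,
# then nudge its centroid toward the host vehicle's own data (a simple blend).
own_features = rng.random((200, 8))        # placeholder for the host vehicle's data
own_mean = own_features.mean(axis=0, keepdims=True)
base_class = int(generic_model.predict(own_mean)[0])
personalized_centroid = (
    0.5 * generic_model.cluster_centers_[base_class] + 0.5 * own_mean[0]
)
```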
FIGS. 2a and 2b disclose the workflow of the vehicle-mounted emotional interaction platform according to an embodiment of the invention. The platform has two operating modes: a learning mode and a service mode. Referring first to FIG. 2a, which discloses the platform's workflow in the learning mode, the learning mode includes the following processes (a compact sketch of one full pass follows the list):
(1) The infotainment device collects data including, but not limited to: navigation data, media data, voice data, cockpit control commands, etc.
(2) The occupant monitoring device collects data including, but not limited to: expression, motion, heart rate, etc.
(3) The controllers and sensors collect data including, but not limited to: throttle, brake, wheel speed, steering angle, lights, air conditioning, seats, etc.
(4) The in-vehicle unit adaptation module collates the metadata collected by the various components.
(5) The in-vehicle data processing module converts the metadata into relational data that can be recognized and used by the machine learning server.
(6) The in-vehicle state management module determines the in-vehicle state from the relational data in the in-vehicle user database.
(7) The relational data is stored in an in-vehicle user database.
(8) The data communication module communicates with the off-board component and sends the relational data to the generic user model learning module and the personalized user model learning module.
(9) The content server provides multimedia content data including, but not limited to: weather, calendar, media, services, etc.
(10) The personnel data server provides basic occupant information, including but not limited to: name, age, gender, family, hobbies, etc.
(11) The generic user model learning module generates a generic user model from the mass in-vehicle user data.
(12) The personalized user model learning module, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data, and feeds it back to the data communication module as the updated user model.
(13) The data communication module provides the updated user model to the user model update module.
(14) The updated user model is stored in a user model database.
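Tying the fourteen steps together, a single learning-mode pass might be orchestrated as below. Every interface here (collectors, processor, databases, communication module) is a hypothetical stand-in: the patent specifies the flow, not an API.

```python
def learning_mode_cycle(collectors, processor, user_db, comms, model_db):
    """One hypothetical pass through the learning-mode flow, steps (1)-(14)."""
    raw = {}
    for collector in collectors:               # steps (1)-(3): gather metadata
        raw.update(collector.read())
    rows = processor.to_relational(raw)        # steps (4)-(5): collate and convert
    # Step (6), in-vehicle state assessment, is omitted in this sketch.
    user_db.store(rows)                        # step (7): persist relational data
    comms.send_to_learning_server(rows)        # step (8): upload to the off-board servers
    # Steps (9)-(12) run off-board: content and personnel data feed the generic
    # and personalized learning modules.
    updated_model = comms.receive_user_model() # step (13): fetch the updated model
    model_db.save(updated_model)               # step (14): store as current user model
```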
Referring to FIG. 2b, which discloses the platform's workflow in the service mode: the service mode makes decisions from the updated user model and the user's real-time in-vehicle data, directs each controller to respond, and provides proactive emotional services to the user. The service mode involves only the in-vehicle components and includes the following processes (a compact sketch of one full pass follows the list):
(1) The infotainment device collects data including, but not limited to: navigation data, media data, voice data, cockpit control commands, etc.
(2) The occupant monitoring device collects data including, but not limited to: expression, motion, heart rate, etc.
(3) The controllers and sensors collect data including, but not limited to: throttle, brake, wheel speed, steering angle, lights, air conditioning, seats, etc.
(4) The in-vehicle unit adaptation module collates the metadata collected by the various components.
(5) The in-vehicle data processing module converts the metadata into relational data that can be recognized and used by the machine learning server.
(6) The in-vehicle state management module determines the in-vehicle state from the relational data in the in-vehicle user database.
(7) The relational data is stored in an in-vehicle user database.
Steps (1) through (7) above are the same as steps (1) through (7) of the learning mode and produce the user's real-time in-vehicle behavior/emotion data.
(8) The user model database provides the personalized user model, which describes the characteristic model of the specific user.
(9) The service scenario management module decides the response from the real-time in-vehicle user data and the user's characteristic model.
(10) The in-vehicle unit control module outputs control parameters for each in-vehicle controller and feeds them back to the controllers through the in-vehicle unit adaptation module.
(11) The infotainment device receives control parameters including, but not limited to: destination push, route push, media push, service push, etc.
(12) The controllers and sensors receive control data including, but not limited to: air conditioning adjustment, light adjustment, seat adjustment, driving assistance system parameter adjustment, etc.
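Analogously, a service-mode pass stays entirely in the vehicle. The sketch below strings the steps together, with the same caveat that all interfaces are hypothetical stand-ins for the modules of FIG. 1.

```python
def service_mode_cycle(collectors, processor, user_db, state_mgr, model_db,
                       scene_mgr, unit_ctrl):
    """One hypothetical pass through the service-mode flow, steps (1)-(12)."""
    raw = {}
    for collector in collectors:                 # steps (1)-(3): gather metadata
        raw.update(collector.read())
    rows = processor.to_relational(raw)          # steps (4)-(5): collate and convert
    state = state_mgr.assess(rows)               # step (6): real-time in-vehicle state
    user_db.store(rows)                          # step (7): persist relational data
    scenario = scene_mgr.decide(                 # steps (8)-(9): decide the response
        model_db.current_model(), state)
    unit_ctrl.apply(scenario)                    # steps (10)-(12): push content, adjust devices
```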
The invention also provides a vehicle-mounted emotional interaction method implemented on the vehicle-mounted emotional interaction platform described above. FIG. 3 discloses a flowchart of the method according to an embodiment of the invention. Referring to FIG. 3, the method includes the following steps:
s1, module adaptation and information acquisition. In the module adapting and information collecting step S1, the in-vehicle interaction module is adapted, connection with the in-vehicle interaction module is established, and the in-vehicle interaction module collects vehicle information and information of a driver and a passenger. In one embodiment, collecting vehicle information and occupant information by the in-vehicle interaction module includes:
navigation data, media data, voice data, cockpit control commands are collected by the infotainment device.
The occupant monitoring device acquires the expression information, behavior information and health information of the occupant. In one embodiment, obtaining the facial expression information, behavior information, and health information of the occupant by the occupant monitoring device includes: the method comprises the steps of obtaining panoramic images in the automobile through a plurality of cameras in the automobile, extracting face images and body images of drivers and passengers from the panoramic images, obtaining expression information from the face images, and obtaining behavior information from the body images. And acquiring the heart rate data of the driver and the passengers as health information through the millimeter wave radar installed on the seat.
The controller and the sensor acquire throttle information, brake information, wheel speed information, corner information, light information, air conditioner information and seat information.
S2, data processing. In step S2, the metadata of the vehicle and occupant information acquired by the in-vehicle interaction module is converted into relational data and stored in the in-vehicle user database.
S3, user model machine learning and generation. In step S3, the platform communicates with the off-board user model machine learning server through the data communication module; the server generates a user model for the occupants, obtains the updated relational data in the in-vehicle user database through the data communication module, and updates the user model based on that data. In one embodiment, step S3
first generates a generic user model from mass in-vehicle user data,
and then, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data; generating the personalized user model also uses basic occupant information provided by the off-board personnel data server, including the occupants' names, ages, genders, family information, and hobby information.
S4, user model updating. In step S4, the user model generated and updated by the user model machine learning server is obtained through the data communication module, and the updated user model is stored in the user model database as the current user model.
S5, in-vehicle state determination. In step S5, the in-vehicle state is determined from the relational data in the in-vehicle user database.
S6, service scenario determination. In step S6, a service scenario mode is determined from the current user model in the user model database and the in-vehicle state.
S7, scenario execution. In step S7, the in-vehicle components are controlled to execute corresponding actions according to the determined service scenario mode. In one embodiment, the scenario execution step S7 includes:
controlling the infotainment device to perform destination push, route push, media push, or service push, the pushed media including multimedia content data provided by the off-board content server: weather data, calendars, audio data, video data, and graphic-text data;
and controlling, through the in-vehicle unit control module, the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters.
The vehicle-mounted emotional interaction platform and method solve the problem of fusing in-vehicle and off-board user data. Using a machine learning mechanism, they form a unique user model for the occupants and use that model as the decision basis for in-vehicle services, meeting the trend toward intelligent, humanized, emotional, and proactive cockpit services; together they form a basic architecture for realizing in-vehicle artificial intelligence.
The vehicle-mounted emotional interaction platform and interaction method provided by the invention aim to overcome the shortcomings of existing interaction services by combining the strengths of traditional vehicle manufacturers with internet technology, providing a new interaction platform architecture.
It should be noted that the embodiments described above are merely specific embodiments of the invention. The invention is evidently not limited to them, and similar changes or modifications that a person skilled in the art can readily derive from this disclosure fall within its scope of protection. The embodiments are provided to enable persons skilled in the art to make or use the invention; modifications can be made to them without departing from the inventive concept, so the scope of protection is not limited by the embodiments above but should be accorded the widest scope consistent with the innovative features set forth in the claims.

Claims (11)

1. A vehicle-mounted emotional interaction platform, characterized by comprising: an in-vehicle component and an off-board component,
the in-vehicle component comprising:
an in-vehicle interaction module that collects vehicle information and occupant information;
an in-vehicle unit adaptation module that adapts the in-vehicle interaction module and establishes a connection with it;
an in-vehicle data processing module that acquires the metadata of the vehicle and occupant information from the in-vehicle interaction module and converts it into relational data;
an in-vehicle user database in which the relational data generated by the in-vehicle data processing module is stored;
a data communication module that communicates with the off-board component;
a user model updating module that acquires, through the data communication module, the user model generated and updated by the user model machine learning server;
a user model database in which the updated user model obtained by the user model updating module is stored as the current user model;
an in-vehicle state management module that determines the in-vehicle state from the relational data in the in-vehicle user database;
a service scenario management module that determines a service scenario mode from the current user model in the user model database and the in-vehicle state in the in-vehicle state management module;
and an in-vehicle unit control module that controls the in-vehicle components to execute corresponding actions according to the service scenario mode determined by the service scenario management module;
the off-board component comprising:
a content server that provides multimedia content data;
a personnel data server that provides basic occupant information;
and a user model machine learning server that generates a user model for the occupants, obtains the updated relational data in the in-vehicle user database through the data communication module, and updates the user model based on that data.
2. The vehicle-mounted emotional interaction platform of claim 1, wherein the in-vehicle interaction module comprises:
an infotainment device that collects navigation data, media data, voice data, and cockpit control commands;
an occupant monitoring device that acquires the occupants' expression information, behavior information, and health information;
and controllers and sensors that acquire throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information.
3. The vehicle-mounted emotional interaction platform of claim 2, wherein the occupant monitoring device comprises:
a plurality of in-vehicle cameras that acquire a panoramic image of the cabin, extract the occupants' face and body images from it, obtain expression information from the face images, and obtain behavior information from the body images;
and a millimeter-wave radar installed in the seat that acquires the occupants' heart rate data as health information.
4. The vehicle-mounted emotional interaction platform of claim 1, wherein
the multimedia content data provided by the content server includes: weather data, calendars, audio data, video data, and graphic-text data;
and the basic occupant information provided by the personnel data server includes the occupants' names, ages, genders, family information, and hobby information.
5. The vehicle-mounted emotional interaction platform of claim 1, wherein the user model machine learning server comprises:
a generic user model learning module that generates a generic user model from mass in-vehicle user data;
and a personalized user model learning module that, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data.
6. The vehicle-mounted emotional interaction platform of claim 1, wherein the in-vehicle unit control module controlling the in-vehicle components according to the service scenario mode determined by the service scenario management module comprises:
the in-vehicle unit control module controlling the infotainment device to perform destination push, route push, media push, or service push;
and the in-vehicle unit control module controlling the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters.
7. A vehicle-mounted emotional interaction method, characterized by comprising the following steps:
a module adaptation and information acquisition step: adapting the in-vehicle interaction module, establishing a connection with it, and collecting vehicle information and occupant information through the in-vehicle interaction module;
a data processing step: converting the metadata of the vehicle and occupant information acquired by the in-vehicle interaction module into relational data and storing it in the in-vehicle user database;
a user model machine learning and generation step: communicating with the off-board user model machine learning server through the data communication module, the server generating a user model for the occupants, obtaining the updated relational data in the in-vehicle user database through the data communication module, and updating the user model based on that data;
a user model updating step: obtaining the user model generated and updated by the user model machine learning server through the data communication module, and storing the updated user model in the user model database as the current user model;
an in-vehicle state determination step: determining the in-vehicle state from the relational data in the in-vehicle user database;
a service scenario determination step: determining a service scenario mode from the current user model in the user model database and the in-vehicle state;
and a scenario execution step: controlling the in-vehicle components to execute corresponding actions according to the determined service scenario mode.
8. The vehicle-mounted emotional interaction method of claim 7, wherein collecting vehicle information and occupant information through the in-vehicle interaction module comprises:
collecting navigation data, media data, voice data, and cockpit control commands through the infotainment device;
acquiring the occupants' expression information, behavior information, and health information through the occupant monitoring device;
and acquiring throttle, brake, wheel speed, steering angle, light, air conditioning, and seat information through the controllers and sensors.
9. The vehicle-mounted emotional interaction method of claim 8, wherein acquiring the occupants' expression, behavior, and health information through the occupant monitoring device comprises:
acquiring a panoramic image of the cabin through a plurality of in-vehicle cameras, extracting the occupants' face and body images from it, obtaining expression information from the face images, and obtaining behavior information from the body images;
and acquiring the occupants' heart rate data as health information through a millimeter-wave radar installed in the seat.
10. The vehicle-mounted emotional interaction method of claim 7, wherein the user model machine learning and generation step
first generates a generic user model from mass in-vehicle user data,
and then, starting from the generic user model, generates a personalized user model suited to the occupants of the host vehicle from the host vehicle's in-vehicle user data, the generation of the personalized user model also using basic occupant information provided by the off-board personnel data server, including the occupants' names, ages, genders, family information, and hobby information.
11. The vehicle-mounted emotional interaction method of claim 7, wherein the scenario execution step comprises:
controlling the infotainment device to perform destination push, route push, media push, or service push, the pushed media including multimedia content data provided by the off-board content server: weather data, calendars, audio data, video data, and graphic-text data;
and controlling, through the in-vehicle unit control module, the controllers to adjust the air conditioning, lights, seats, and driving assistance system parameters.
Application CN202110251326.1A, filed 2021-03-08: Vehicle-mounted emotional interaction platform and interaction method (pending).

Priority Applications (1)

CN202110251326.1A, priority/filing date 2021-03-08: Vehicle-mounted emotional interaction platform and interaction method

Publications (1)

CN112947759A, published 2021-06-11

Family

ID=76228650

Family Applications (1)

CN202110251326.1A, priority/filing date 2021-03-08: Vehicle-mounted emotional interaction platform and interaction method

Country Status (1)

CN: CN112947759A


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219319A1 (en) * 2016-06-23 2017-12-28 驭势科技(北京)有限公司 Automatic vehicle driving method and automatic vehicle driving system
CN106649502A (en) * 2016-10-12 2017-05-10 卡桑德电子科技(扬州)有限公司 Intelligent vehicle-mounted multimedia system and method for active push service and function
CN109606386A (en) * 2018-12-12 2019-04-12 北京车联天下信息技术有限公司 Cockpit in intelligent vehicle
CN110196593A (en) * 2019-05-16 2019-09-03 济南浪潮高新科技投资发展有限公司 A kind of more scene environments detections of automatic Pilot and decision system and method
CN111591237A (en) * 2020-04-21 2020-08-28 汉腾汽车有限公司 Scene-based vehicle-mounted information service system
CN112035034A (en) * 2020-08-27 2020-12-04 芜湖盟博科技有限公司 Vehicle-mounted robot interaction method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486239A (en) * 2021-07-05 2021-10-08 上海优咔网络科技有限公司 Intelligent travel scene engine and pushing method
CN114136307A (en) * 2021-12-07 2022-03-04 上汽大众汽车有限公司 Full-automatic updating method for vehicle-mounted navigation map
CN114136307B (en) * 2021-12-07 2024-01-26 上汽大众汽车有限公司 Full-automatic map updating method for vehicle navigation system
CN114323005A (en) * 2021-12-28 2022-04-12 上汽大众汽车有限公司 Method for positioning micro divergent road
CN114323005B (en) * 2021-12-28 2023-08-11 上汽大众汽车有限公司 Positioning method for micro bifurcation road
CN116767255A (en) * 2023-07-03 2023-09-19 深圳市哲思特科技有限公司 Intelligent cabin linkage method and system for new energy automobile
CN116767255B (en) * 2023-07-03 2024-02-06 深圳市哲思特科技有限公司 Intelligent cabin linkage method and system for new energy automobile
CN116767256A (en) * 2023-07-14 2023-09-19 深圳市哲思特科技有限公司 Active human-computer interaction method and new energy automobile


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210611)