CN113212448A - Intelligent interaction method and device

Intelligent interaction method and device

Info

Publication number
CN113212448A
CN113212448A (application CN202110480342.8A)
Authority
CN
China
Prior art keywords
vehicle
scene
people
data
car
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110480342.8A
Other languages
Chinese (zh)
Inventor
于红超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evergrande New Energy Automobile Investment Holding Group Co Ltd
Original Assignee
Evergrande New Energy Automobile Investment Holding Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evergrande New Energy Automobile Investment Holding Group Co Ltd
Priority to CN202110480342.8A
Publication of CN113212448A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of such parameters related to drivers or passengers
    • B60W40/09: Driving style or behaviour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an intelligent interaction method and device. The method includes: acquiring target data of an in-vehicle person, the target data including at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person; determining the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data; determining interaction content between a vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and controlling the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content. By deep learning at least one of the behavior data and the vehicle-use habit data, the method learns the person's current vehicle-use scene, so the interaction content of the vehicle-mounted virtual assistant can be set flexibly for that scene and the assistant can interact accordingly. This provides in-vehicle persons with a richer and more flexible interaction mode and meets personalized interaction needs.

Description

Intelligent interaction method and device
Technical Field
The present application relates to the field of computers, and in particular, to an intelligent interaction method and apparatus.
Background
With the increasing intelligence and connectivity of automobiles, Artificial Intelligence (AI) technology is applied ever more deeply in the vehicle-mounted field, and new intelligent connected vehicles mostly carry a Virtual Personal Assistant (VPA), i.e., a vehicle-mounted virtual assistant. Like a person, the vehicle-mounted virtual assistant can interact with in-vehicle persons through voice, can communicate through rich expressions and actions, and can proactively provide related services such as message pushing, fault reminders, and safety prompts. Moreover, as people place more emphasis on the interactive experience, they expect the vehicle-mounted virtual assistant to become more intelligent.
At present, a vehicle-mounted virtual assistant initially offers only its factory-set functions. Although character images and voice broadcast sounds can later be added continuously through Over-the-Air (OTA) technology, enriching the interaction functions and thus raising the assistant's intelligent service level, the interaction mode remains mechanical and fixed and cannot meet people's ever-growing personalized needs.
Disclosure of Invention
The embodiments of the present application provide an intelligent interaction method and device that offer a richer and more flexible interaction mode and meet people's personalized interaction needs.
In a first aspect, an embodiment of the present application provides an intelligent interaction method, including:
acquiring target data of an in-vehicle person, wherein the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
determining the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
determining interaction content between a vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
controlling the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
In a second aspect, an embodiment of the present application further provides an intelligent interaction apparatus, including:
a data acquisition module, configured to acquire target data of an in-vehicle person, wherein the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
a vehicle-use scene determination module, configured to determine the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
an interaction content determination module, configured to determine interaction content between a vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
an interaction module, configured to control the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor, and computer-executable instructions stored on the memory and executable on the processor, where the instructions, when executed by the processor, implement the steps of the method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing computer-executable instructions which, when executed by a processor, implement the steps of the method described in the first aspect.
With at least one of the above technical solutions, the current vehicle-use scene of an in-vehicle person can be learned by deep learning at least one of the person's behavior data and vehicle-use habit data. The interaction content of the vehicle-mounted virtual assistant can then be set flexibly for that scene, and the assistant interacts with the in-vehicle person accordingly. A richer and more flexible interaction mode can therefore be provided, meeting people's personalized interaction needs.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a schematic flowchart of an intelligent interaction method according to an embodiment of the present application.
Fig. 2 is a diagram illustrating a deep learning network structure according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating an intelligent interaction method according to another embodiment of the present application.
Fig. 4 is a system block diagram of an intelligent interaction scheme according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an intelligent interaction apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to provide a richer and more flexible interaction mode, embodiments of the present application provide an intelligent interaction method and device. The method and device provided by the embodiments may be executed by an electronic device, for example a terminal device; in other words, the method may be performed by software or hardware installed on the terminal device. Terminal devices may include, but are not limited to, smart terminal devices such as a Virtual Personal Assistant (VPA), a smartphone, a Personal Computer (PC), a notebook computer, a tablet computer, an e-reader, and a wearable device.
Specific forms of the vehicle-mounted virtual assistant may include, but are not limited to, the following three: the first is a virtual personal assistant presented in an electronic device with a display screen, the second is a physical robot acting as the virtual personal assistant, and the third is a virtual personal assistant presented by holographic projection on a physical medium.
An intelligent interaction method provided by an embodiment of the present application is explained first.
Fig. 1 shows a flowchart of an intelligent interaction method provided by an embodiment of the present application. The method is applicable to a vehicle-mounted virtual assistant and, as shown in Fig. 1, may include:
Step 101: acquire target data of an in-vehicle person, where the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person.
In-vehicle persons include the driver and other occupants; the driver may or may not be the vehicle owner.
The target data of the in-vehicle person refers to raw data on which feature extraction has not yet been performed. It should be understood that the target data may include, but is not limited to, at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person.
The behavior data of the in-vehicle person includes, but is not limited to, the person's behavior inside the vehicle and/or data such as schedules the person has set on at least one terminal device. The in-vehicle behavior data may include, but is not limited to, at least one of visual data and voice data of the in-vehicle person, which can be collected with the in-vehicle camera. The in-vehicle person may share schedule data set on at least one terminal device to the cloud, and that schedule data can then be retrieved from the cloud.
The vehicle-use habit data of the in-vehicle person may include, but is not limited to: the person's current trip; the person's mental state (for example, whether the driver is driving while fatigued); the person's historical vehicle-use habits (for example, commuting by car at fixed times on working days, or taking a child to an interest class at fixed times on weekends); the person's preferences (for example, the driver likes to listen to ball game news); and the like.
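Purely for illustration, the target data described above can be modeled as a simple structure. The sketch below is a minimal, non-authoritative rendering in Python; every field name is hypothetical and chosen for readability, not taken from the application:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BehaviorData:
    """Behavior data of an in-vehicle person while riding (hypothetical fields)."""
    visual_frames: List[bytes] = field(default_factory=list)   # frames from the in-vehicle camera
    voice_clips: List[bytes] = field(default_factory=list)     # audio captured in the cabin
    schedule_entries: List[str] = field(default_factory=list)  # schedule data retrieved from the cloud

@dataclass
class HabitData:
    """Vehicle-use habit data of an in-vehicle person (hypothetical fields)."""
    current_trip: Optional[str] = None    # e.g. "commute", "interest_class_trip"
    mental_state: Optional[str] = None    # e.g. "alert", "fatigued"
    historical_habits: List[str] = field(default_factory=list)
    preferences: List[str] = field(default_factory=list)       # e.g. "ball game news"

@dataclass
class TargetData:
    """Target data: at least one of the two components should be present."""
    behavior: Optional[BehaviorData] = None
    habits: Optional[HabitData] = None
```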
Step 102: determine the current vehicle-use scene of the in-vehicle person based on the deep learning network and the target data.
A vehicle-use scene can be understood as an abstraction of the in-vehicle person's behavior state and/or vehicle-use habits. For example, if an in-vehicle person is singing, the scene can be abstracted as a singing scene; if the driver is driving while fatigued, the scene can be abstracted as a fatigue driving scene.
The deep learning network is trained in advance on sample data; its input is the target data and its output is the vehicle-use scene determined from that data. Thus, in one embodiment, step 102 may include: inputting the target data of the in-vehicle person into the deep learning network to obtain the person's current vehicle-use scene.
In a more detailed embodiment, the deep learning network may include a feature extraction module and a vehicle-use scene matching module, and step 102 may include: inputting the target data into the feature extraction module to extract feature information of the in-vehicle person; and inputting the extracted feature information into the vehicle-use scene matching module to match it against a preset scene library, obtaining the person's current vehicle-use scene. The preset scene library may be a pre-built database storing correspondences between features of in-vehicle persons and vehicle-use scenes.
Fig. 2 shows a schematic structural diagram of a deep learning network provided by an embodiment of the present application. As shown in Fig. 2, the deep learning network 22 includes a feature extraction module 221 and a vehicle-use scene matching module 222; its input is the target data 21 and its output is the current vehicle-use scene 23. After the target data 21 is input into the network 22, the feature extraction module 221 extracts the user's behavior and/or habit feature information, and that feature information is input into the vehicle-use scene matching module 222 and matched against the scene library to obtain the current vehicle-use scene 23.
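As a concrete illustration of the pipeline in Fig. 2, the following sketch (continuing the structure above) wires a feature extraction module and a vehicle-use scene matching module together. The keyword features and the dictionary scene library are simple stand-ins for the trained network components and the pre-built scene database; all names are assumptions made for the example:

```python
from typing import Dict, Set

# Stand-in for the preset scene library: each vehicle-use scene is keyed by
# the feature tags that characterize it.
PRESET_SCENE_LIBRARY: Dict[str, Set[str]] = {
    "singing_scene": {"singing", "song_title"},
    "chat_scene": {"conversation", "preset_topic"},
    "reading_scene": {"reading"},
    "fatigue_driving_scene": {"driving_time_exceeded"},
}

def extract_features(target_data: TargetData) -> Set[str]:
    """Feature extraction module (stand-in): map raw target data to feature tags.

    A trained implementation would run visual recognition, speech recognition,
    and habit detection here; this sketch only derives tags from set fields.
    """
    features: Set[str] = set()
    behavior, habits = target_data.behavior, target_data.habits
    if behavior is not None and behavior.voice_clips:
        features.add("singing")  # pretend speech recognition detected singing
    if habits is not None and habits.mental_state == "fatigued":
        features.add("driving_time_exceeded")
    return features

def match_scene(features: Set[str]) -> str:
    """Scene matching module: pick the library scene with the largest feature overlap."""
    best_scene, best_overlap = "default_scene", 0
    for scene, scene_features in PRESET_SCENE_LIBRARY.items():
        overlap = len(features & scene_features)
        if overlap > best_overlap:
            best_scene, best_overlap = scene, overlap
    return best_scene

def determine_current_scene(target_data: TargetData) -> str:
    """End-to-end pipeline of Fig. 2: target data -> features -> vehicle-use scene."""
    return match_scene(extract_features(target_data))
```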
In more detail, as shown in Fig. 2, the feature extraction module 221 may include at least one of a feature recognition sub-module 2211 and a feature detection sub-module 2212, from which the following three cases can be derived:
first case
The feature extraction module 221 includes the feature recognition sub-module 2211, and the target data includes behavior data of the in-vehicle person while riding. Correspondingly, inputting the target data into the feature extraction module to extract feature information of the in-vehicle person may include: inputting the behavior data into the feature recognition sub-module 2211 of the feature extraction module 221 and recognizing the person's behavior feature information. For example, in-vehicle visual data (e.g., image data) containing the person's movements and voice data 211 containing the person's speech are input into the feature recognition sub-module 2211, and visual and speech recognition yields the user's behavior feature information. Inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library may then include: inputting the behavior feature information into the vehicle-use scene matching module 222 to match it against the preset scene library, obtaining the current vehicle-use scene 23 matching the person's behavior features.
Second case
The feature extraction module 221 includes the feature detection sub-module 2212, and the target data includes vehicle-use habit data of the in-vehicle person. Correspondingly, inputting the target data into the feature extraction module to extract feature information of the in-vehicle person may include: inputting the vehicle-use habit data into the feature detection sub-module 2212 of the feature extraction module 221 and detecting the person's habit feature information. For example, vehicle-use habit data 212 covering the person's trips, driving state, personal habits, and personal preferences is input into the feature detection sub-module 2212, and the user's habit feature information is detected. Inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library then includes: inputting the habit feature information into the vehicle-use scene matching module 222 to match it against the preset scene library, obtaining the current vehicle-use scene 23 matching the person's habit features.
Third case
The feature extraction module 221 may include both the feature recognition sub-module 2211 and the feature detection sub-module 2212, and the target data includes both the behavior data of the in-vehicle person while riding and the person's vehicle-use habit data.
Correspondingly, inputting the target data into the feature extraction module to extract feature information of the in-vehicle person may include: inputting the behavior data into the feature recognition sub-module of the feature extraction module to recognize the person's behavior feature information; and inputting the vehicle-use habit data into the feature detection sub-module of the feature extraction module to detect the person's habit feature information.
Correspondingly, inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library includes: inputting both the behavior feature information and the habit feature information into the vehicle-use scene matching module, obtaining the current vehicle-use scene matching the person's behavior and habit features.
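For this third case, in which both sub-modules run, the combined feature set is simply the union of the recognized behavior features and the detected habit features before matching. A minimal continuation of the earlier sketch, with the two sub-modules again as hypothetical stand-ins:

```python
from typing import Set

def recognize_behavior_features(behavior: BehaviorData) -> Set[str]:
    """Feature recognition sub-module (stand-in): behavior data -> behavior features."""
    features: Set[str] = set()
    if behavior.voice_clips:
        features.add("conversation")  # a real sub-module would run speech recognition
    if behavior.visual_frames:
        features.add("reading")       # a real sub-module would run visual recognition
    return features

def detect_habit_features(habits: HabitData) -> Set[str]:
    """Feature detection sub-module (stand-in): habit data -> habit features."""
    features: Set[str] = set()
    if habits.current_trip is not None:
        features.add(habits.current_trip)
    if habits.mental_state == "fatigued":
        features.add("driving_time_exceeded")
    return features

def extract_combined_features(target_data: TargetData) -> Set[str]:
    """Third case: run both sub-modules and merge their outputs before matching."""
    features: Set[str] = set()
    if target_data.behavior is not None:
        features |= recognize_behavior_features(target_data.behavior)
    if target_data.habits is not None:
        features |= detect_habit_features(target_data.habits)
    return features
```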
And 103, determining interactive contents of the vehicle-mounted virtual assistant and the people in the vehicle based on the current vehicle using scene.
Examples for the first case listed above:
(1) If the behavior feature information recognized from the behavior data includes singing and a song title, and the vehicle-use scene obtained by inputting that information into the scene matching module is a singing scene, the interaction content between the vehicle-mounted virtual assistant and the in-vehicle person is determined to include at least one of: setting the assistant's character image to a singer image, playing music matching the song title, having the character sway with the music, and having the character play a preset musical instrument. Thus, when a passenger is recognized to be singing, the assistant's character automatically changes into a singer, automatically plays the music for that song, and dances along, so that the user enters an in-vehicle karaoke mode directly.
(2) If the behavior feature information recognized from the behavior data includes conversation content related to a preset topic, and the vehicle-use scene obtained from the scene matching module is a chat scene, the interaction content is determined to include at least one of: setting the assistant's character image to a figure related to the preset topic, and joining the discussion of information related to the topic. For example, when the in-vehicle persons are recognized to be talking about the NBA, the assistant's character automatically changes into the corresponding NBA star, performs that star's classic basketball moves, and actively supplements the conversation from time to time with the latest news about the star or basketball games.
(3) If the behavior feature information recognized from the behavior data includes reading, and the vehicle-use scene obtained from the scene matching module is a reading scene, the interaction content is determined to include at least one of: setting the assistant's character image to a studious image, automatically adjusting the in-vehicle lighting, lowering the audio volume, and recommending online learning content, so as to facilitate quiet study. For example, if a child is studying, online learning content is automatically recommended to the child.
Examples for the second case listed above:
(1) If the habit feature information identified from the vehicle-use habit data indicates that the driving time has exceeded a preset duration, the interaction content is determined to include at least one of: reminding the driver to park and rest, reminding occupants other than the driver to move about to relieve fatigue, and playing exercise routines and music suitable for performing in the vehicle. Specifically, after the owner has driven a long distance for a certain time, the vehicle-mounted virtual assistant automatically recommends parking and resting, avoiding the harm of fatigued driving; it reminds passengers to do simple stretching exercises to relieve the fatigue of a long ride, plays stretching routines suitable for the cabin, and leads the passengers in moving together to liven up the atmosphere.
(2) If the habit feature information identified from the vehicle-use habit data indicates that the current trip is taking a child to an interest class, the interaction content is determined to include at least one of: playing learning content related to the interest class, and reminding the child to preview or review the class material.
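The scene-to-content rules in the examples above amount to a lookup from the current vehicle-use scene to a list of interaction actions. The sketch below encodes a few of the listed rules; the scene keys match the earlier stand-in library, and the action strings are illustrative only:

```python
from typing import Dict, List

# Hypothetical mapping from vehicle-use scene to interaction content,
# following the examples in the description.
INTERACTION_CONTENT: Dict[str, List[str]] = {
    "singing_scene": ["set_avatar:singer", "play_music:matched_song", "dance_to_music"],
    "chat_scene": ["set_avatar:topic_figure", "join_topic_discussion"],
    "reading_scene": [
        "set_avatar:studious",
        "adjust_cabin_lighting",
        "lower_audio_volume",
        "recommend_online_learning",
    ],
    "fatigue_driving_scene": [
        "remind_driver_to_rest",
        "remind_passengers_to_stretch",
        "play_stretching_routine_and_music",
    ],
}

def determine_interaction_content(scene: str) -> List[str]:
    """Step 103 (stand-in): look up interaction content for the current scene."""
    return INTERACTION_CONTENT.get(scene, [])
```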
Step 104: control the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
Specifically, the vehicle-mounted virtual assistant is controlled to interact with the in-vehicle person according to the determined interaction content.
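Step 104 then reduces to dispatching each item of the determined interaction content to the assistant. A minimal sketch with an entirely hypothetical assistant interface, tying steps 101 to 104 together:

```python
class OnboardVirtualAssistant:
    """Hypothetical control interface of the vehicle-mounted virtual assistant."""

    def perform(self, action: str) -> None:
        # A real assistant would map each action to avatar changes, audio
        # playback, cabin control, and so on; this stub just logs it.
        print(f"[assistant] executing action: {action}")

def interact(assistant: OnboardVirtualAssistant, target_data: TargetData) -> None:
    """Full pipeline of Fig. 1: scene determination, content lookup, dispatch."""
    scene = determine_current_scene(target_data)    # step 102
    content = determine_interaction_content(scene)  # step 103
    for action in content:                          # step 104
        assistant.perform(action)
```

For example, passing target data whose habit component is marked "fatigued" would drive the assistant through the fatigue-driving actions listed above.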
With the intelligent interaction method provided by this embodiment, the current vehicle-use scene of an in-vehicle person can be learned by deep learning at least one of the person's behavior data and vehicle-use habit data. The assistant's interaction content can then be set flexibly for that scene, and the assistant interacts with the person accordingly, providing a richer and more flexible interaction mode and meeting people's personalized interaction needs.
Fig. 3 shows a flowchart of an intelligent interaction method provided by an embodiment of the present application in a practical application. As shown in Fig. 3, the method may include:
Step 301: start.
Step 302: the vehicle-mounted virtual assistant keeps its factory-level intelligent service capability: various character images, actions, and voice broadcast sounds pre-adapted to different scenes. At least one of step 304 and step 305 is then performed.
Step 303: the vehicle-mounted virtual assistant vendor continuously upgrades character images, actions, voice broadcasts, and so on through OTA.
Step 304: collect in-vehicle visual data and audio data through the in-vehicle camera, obtain behavior data such as the owner's cloud schedule, and perform a comprehensive analysis (visual recognition, speech recognition, synchronization with the owner's cloud schedule, and the like) to determine the current vehicle-use scene of the in-vehicle person, then provide the corresponding personalized intelligent interaction service. Then proceed to step 306.
Visual recognition and speech recognition can be implemented with a deep learning network.
Step 305: autonomously learn the owner's vehicle-use habit data, such as driving state, trips, and habits; determine the owner's current vehicle-use scene; and provide corresponding proactive services for that scene, the proactive services differing across vehicle-use scenes. Then proceed to step 306.
Vehicle-use habit learning can likewise be implemented with a deep learning network.
Step 306: the vehicle-mounted virtual assistant provides the user with personalized intelligent interaction services for different scenes.
Step 307: end.
Thus, with the intelligent interaction method provided by this embodiment, on the one hand, the user's current vehicle-use scene can be obtained by deep learning of user behavior, so the vehicle-mounted virtual assistant can interact intelligently in different scenes, such as studying, exercising, and singing, with corresponding changes of image and interaction capability; on the other hand, the owner's driving state, trips, and habits can be learned autonomously to provide more proactive interaction, such as automatically recommending rest and exercise based on driving time, or reminding an accompanying child to focus on study. This method lets the vehicle-mounted virtual assistant interact with the user more intelligently and gives the user the feeling of an attentive butler.
Fig. 4 shows a system block diagram of an intelligent interaction scheme provided by an embodiment of the present application. As shown in Fig. 4, the system may include a vehicle-mounted virtual assistant 42 and a cloud platform 43. In this system, the vehicle-mounted virtual assistant 42 can collect target data 421 of the in-vehicle person 41, such as visual data, voice data, and vehicle-use habit data, and send it to the onboard computing platform 422. The onboard computing platform 422 can determine the interaction content through big-data learning and local comprehensive analysis and processing (for example, the operations of steps 102 and 103 in Fig. 1); in doing so, it can also connect, through the cloud platform 43, to the in-vehicle person's schedules on multiple terminal devices and to the network ecosystem, obtaining online resources such as news, entertainment gossip, and online education as a basis for determining the interaction content. The vehicle-mounted virtual assistant control unit 423 can then control the virtual assistant to interact with the in-vehicle person according to the determined interaction content.
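To make the data flow of Fig. 4 concrete, the sketch below shows a toy cloud platform that aggregates schedules shared from terminal devices, and an onboard step that merges the fetched schedule into the target data before the pipeline runs. Both classes and all method names are assumptions for illustration, not an actual cloud API:

```python
from typing import Dict, List

class CloudPlatform:
    """Hypothetical cloud platform aggregating schedules across terminal devices."""

    def __init__(self) -> None:
        self._schedules: Dict[str, List[str]] = {}

    def share_schedule(self, user_id: str, entries: List[str]) -> None:
        """Called from a terminal device to share schedule entries to the cloud."""
        self._schedules.setdefault(user_id, []).extend(entries)

    def fetch_schedule(self, user_id: str) -> List[str]:
        """Called from the onboard computing platform to retrieve the schedule."""
        return list(self._schedules.get(user_id, []))

def build_target_data(user_id: str, cloud: CloudPlatform) -> TargetData:
    """Onboard platform: merge locally captured data with cloud schedule data."""
    behavior = BehaviorData(schedule_entries=cloud.fetch_schedule(user_id))
    return TargetData(behavior=behavior)
```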
The above describes the intelligent interaction method provided by the embodiments of the present application; correspondingly, the embodiments also provide an intelligent interaction apparatus, described below.
As shown in Fig. 5, an intelligent interaction apparatus provided by an embodiment of the present application may be applied to a vehicle-mounted virtual assistant and may include: a data acquisition module 501, a vehicle-use scene determination module 502, an interaction content determination module 503, and an interaction module 504.
The data acquisition module 501 is configured to acquire target data of an in-vehicle person, where the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person.
The vehicle-use scene determination module 502 is configured to determine the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data.
The interaction content determination module 503 is configured to determine interaction content between the vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene.
The interaction module 504 is configured to control the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
It should be noted that, since the intelligent interaction apparatus corresponds to the intelligent interaction method described above and achieves the same technical effects, its description here is brief; for relevant details, refer to the description of the method above.
Fig. 6 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application. Referring to Fig. 6, at the hardware level the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as Random-Access Memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code comprising computer operating instructions. The memory may include both volatile memory and non-volatile storage, and it provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into memory and runs it, forming the intelligent interaction apparatus at the logical level. The processor is specifically configured to perform the following operations:
acquiring target data of an in-vehicle person, wherein the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
determining the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
determining interaction content between the vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
controlling the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
The intelligent interaction method disclosed in the embodiment of Fig. 1 may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may thus be implemented or performed. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
Therefore, an electronic device executing the method provided by the embodiments of the present application can perform the methods described in the foregoing method embodiments and achieve their functions and beneficial effects, which are not repeated here.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to the following devices.
(1) Mobile communication devices: these feature mobile communication functions and mainly aim to provide voice and data communication. Such terminals include smartphones (e.g., the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also support mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
(3) Servers: similar in architecture to general-purpose computers, but with higher requirements on processing capability, stability, reliability, security, scalability, manageability, and the like, because highly reliable services must be provided.
(4) Other electronic devices with data interaction functions.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by an electronic device comprising a plurality of application programs, enable the electronic device to perform the intelligent interaction method of the embodiment shown in Fig. 1, and specifically to perform the following operations:
acquiring target data of an in-vehicle person, wherein the target data includes at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
determining the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
determining interaction content between the vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
controlling the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that all the embodiments in the present application are described in a related manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An intelligent interaction method, comprising:
acquiring target data of an in-vehicle person, wherein the target data comprises at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
determining the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
determining interaction content between a vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
controlling the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
2. The method of claim 1, wherein the deep learning network comprises a feature extraction module and a vehicle-use scene matching module, and wherein determining the current vehicle-use scene of the in-vehicle person based on the deep learning network and the target data comprises:
inputting the target data into the feature extraction module to extract feature information of the in-vehicle person; and
inputting the feature information into the vehicle-use scene matching module to match it against a preset scene library, obtaining the current vehicle-use scene of the in-vehicle person.
3. The method of claim 2, wherein the target data comprises the behavior data of the in-vehicle person while riding, and wherein inputting the target data into the feature extraction module to extract feature information of the in-vehicle person comprises:
inputting the behavior data into a feature recognition sub-module of the feature extraction module and recognizing behavior feature information of the in-vehicle person;
and wherein inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library comprises:
inputting the behavior feature information into the vehicle-use scene matching module to match it against the preset scene library, obtaining the current vehicle-use scene matching the behavior features of the in-vehicle person.
4. The method of claim 3, wherein the behavior data of the in-vehicle person comprises at least one of:
visual data of the in-vehicle person,
voice data of the in-vehicle person, and
schedule data set by the in-vehicle person on at least one terminal device.
5. The method of claim 3 or 4, wherein determining the interaction content between the vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene comprises:
if the behavior feature information recognized from the behavior data comprises singing and a song title, and the vehicle-use scene obtained by inputting the behavior feature information into the vehicle-use scene matching module is a singing scene, determining that the interaction content comprises at least one of: setting a character image of the vehicle-mounted assistant to a singer image, playing music matching the song title, having the character image sway with the music, and having the character image play a preset musical instrument;
if the behavior feature information recognized from the behavior data comprises conversation content related to a preset topic, and the vehicle-use scene obtained by inputting the behavior feature information into the vehicle-use scene matching module is a chat scene, determining that the interaction content comprises at least one of: setting the character image of the vehicle-mounted assistant to a figure related to the preset topic, and participating in the discussion of information related to the preset topic; and
if the behavior feature information recognized from the behavior data comprises reading, and the vehicle-use scene obtained by inputting the behavior feature information into the vehicle-use scene matching module is a reading scene, determining that the interaction content comprises at least one of: setting the character image of the vehicle-mounted assistant to a studious image, automatically adjusting the in-vehicle lighting, lowering the volume of the vehicle audio, and recommending online learning content.
6. The method of claim 2, wherein the target data comprises the vehicle-use habit data of the in-vehicle person, and wherein inputting the target data into the feature extraction module to extract feature information of the in-vehicle person comprises: inputting the vehicle-use habit data into a feature detection sub-module of the feature extraction module and detecting habit feature information of the in-vehicle person;
and wherein inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library comprises:
inputting the habit feature information into the vehicle-use scene matching module to match it against the preset scene library, obtaining the current vehicle-use scene matching the habit features of the in-vehicle person.
7. The method of claim 6, wherein the vehicle-use habit data of the in-vehicle person comprises at least one of:
the current trip of the in-vehicle person,
the current mental state of the in-vehicle person, and
the historical vehicle-use habits of the in-vehicle person.
8. The method of claim 6 or 7, wherein determining the interaction content between the vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene comprises:
if the habit feature information identified from the vehicle-use habit data indicates that the driving time exceeds a preset duration, determining that the interaction content comprises at least one of: reminding the driver to park and rest, reminding occupants other than the driver to move about to relieve fatigue, and playing exercise routines and music suitable for performing in the vehicle; and
if the habit feature information identified from the vehicle-use habit data indicates that the current trip is taking a child to an interest class, determining that the interaction content comprises at least one of: playing learning content related to the interest class, and reminding the child to preview or review the class material.
9. The method of claim 2, wherein the target data comprises the behavior data of the in-vehicle person while riding and the vehicle-use habit data of the in-vehicle person;
wherein inputting the target data into the feature extraction module to extract feature information of the in-vehicle person comprises: inputting the behavior data into a feature recognition sub-module of the feature extraction module and recognizing behavior feature information of the in-vehicle person; and inputting the vehicle-use habit data into a feature detection sub-module of the feature extraction module and detecting habit feature information of the in-vehicle person;
and wherein inputting the feature information into the vehicle-use scene matching module to match it against the preset scene library comprises:
inputting the behavior feature information and the habit feature information into the vehicle-use scene matching module to match them against the preset scene library, obtaining the current vehicle-use scene matching the behavior and habit features of the in-vehicle person.
10. An intelligent interaction apparatus, comprising:
a data acquisition module, configured to acquire target data of an in-vehicle person, wherein the target data comprises at least one of behavior data of the in-vehicle person while riding and vehicle-use habit data of the in-vehicle person;
a vehicle-use scene determination module, configured to determine the current vehicle-use scene of the in-vehicle person based on a deep learning network and the target data;
an interaction content determination module, configured to determine interaction content between a vehicle-mounted virtual assistant and the in-vehicle person based on the current vehicle-use scene; and
an interaction module, configured to control the vehicle-mounted virtual assistant to interact with the in-vehicle person based on the interaction content.
CN202110480342.8A 2021-04-30 2021-04-30 Intelligent interaction method and device Pending CN113212448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110480342.8A CN113212448A (en) 2021-04-30 2021-04-30 Intelligent interaction method and device

Publications (1)

Publication Number Publication Date
CN113212448A 2021-08-06

Family

ID=77090450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110480342.8A Pending CN113212448A (en) 2021-04-30 2021-04-30 Intelligent interaction method and device

Country Status (1)

Country Link
CN (1) CN113212448A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101669090A (en) * 2007-04-26 2010-03-10 福特全球技术公司 Emotive advisory system and method
CN102529978A (en) * 2010-12-31 2012-07-04 华晶科技股份有限公司 Vehicle equipment control system and method thereof
US20140188920A1 (en) * 2012-12-27 2014-07-03 Sangita Sharma Systems and methods for customized content
CN107697069A (en) * 2017-10-31 2018-02-16 上海汽车集团股份有限公司 Fatigue of automobile driver driving intelligent control method
US20180053102A1 (en) * 2016-08-16 2018-02-22 Toyota Jidosha Kabushiki Kaisha Individualized Adaptation of Driver Action Prediction Models
CN107878467A (en) * 2017-11-10 2018-04-06 江西爱驰亿维实业有限公司 voice broadcast method and system for automobile
CN108657186A (en) * 2018-05-08 2018-10-16 奇瑞汽车股份有限公司 Intelligent driving cabin exchange method and device
CN109131355A (en) * 2018-07-31 2019-01-04 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and its vehicle-mounted scene interactive approach based on user's identification
CN110641476A (en) * 2019-08-16 2020-01-03 广汽蔚来新能源汽车科技有限公司 Interaction method and device based on vehicle-mounted robot, controller and storage medium
CN110770772A (en) * 2017-09-19 2020-02-07 谷歌有限责任公司 Virtual assistant configured to automatically customize an action group
CN110871813A (en) * 2018-08-31 2020-03-10 比亚迪股份有限公司 Control method and device of virtual robot, vehicle, equipment and storage medium
CN112035034A (en) * 2020-08-27 2020-12-04 芜湖盟博科技有限公司 Vehicle-mounted robot interaction method
CN112034989A (en) * 2020-09-04 2020-12-04 华人运通(上海)云计算科技有限公司 Intelligent interaction system
CN112052056A (en) * 2020-08-05 2020-12-08 恒大新能源汽车投资控股集团有限公司 Interaction method and device of vehicle-mounted intelligent assistant, vehicle-mounted equipment and vehicle
CN112193255A (en) * 2020-09-24 2021-01-08 北京百度网讯科技有限公司 Human-computer interaction method, device, equipment and storage medium of vehicle-machine system
CN112307813A (en) * 2019-07-26 2021-02-02 浙江吉智新能源汽车科技有限公司 Virtual butler system of intelligence and vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023173657A1 (en) * 2022-03-18 2023-09-21 北京百度网讯科技有限公司 Intelligent interaction method and apparatus, device, and storage medium
CN117667002A (en) * 2023-10-30 2024-03-08 上汽通用汽车有限公司 Vehicle interaction method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210806)