CN114237395A - Information processing method, information processing device, electronic equipment and storage medium - Google Patents

Information processing method, information processing device, electronic equipment and storage medium

Info

Publication number
CN114237395A
CN114237395A
Authority
CN
China
Prior art keywords
vehicle
assistant
information
determining
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111526981.XA
Other languages
Chinese (zh)
Inventor
丁春晓 (Ding Chunxiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111526981.XA
Publication of CN114237395A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G06F 9/453 Help systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides an information processing method, an information processing apparatus, an electronic device, and a storage medium, and relates to the field of information processing, in particular to the technical fields of virtual/augmented reality, intelligent transportation, and human-computer interaction. The specific implementation scheme is as follows: determining a response mode of the vehicle-mounted assistant based on collected first multimedia information, the response mode comprising an emotion mode and/or a working mode; determining the voice and action to be output by the vehicle-mounted assistant based on the response mode; and controlling the vehicle-mounted assistant to output the corresponding voice and action.

Description

Information processing method, information processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to an information processing method, apparatus, electronic device, and storage medium in the fields of virtual/augmented reality and intelligent transportation.
Background
With the development of the fifth generation mobile communication technology (5G) and the digitization of the global automobile industry, the automobile industry is moving beyond traditional mechanical manufacturing toward intelligence and digitization. Among the growing number of automobiles in use, the proportion of smart automobiles is rising rapidly, and vehicle enterprises and users alike place ever more emphasis on the user's intelligent experience in driving and travel scenarios. At present, smart automobiles generally share two defining characteristics, networking and multiple screens, and an Artificial Intelligence (AI) three-dimensional intelligent assistant that coordinates the whole vehicle is the new entrance for human-machine interaction in the era of intelligent connected vehicles.
Disclosure of Invention
The disclosure provides an information processing method, an information processing apparatus, an electronic device, and a storage medium.
According to a first aspect of the present disclosure, there is provided an information processing method including:
determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode comprises an emotion mode and/or a working mode;
determining voice and action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant;
and controlling the vehicle-mounted assistant to output corresponding voice and actions.
According to a second aspect of the present disclosure, there is provided an information processing apparatus comprising:
the first determining unit is used for determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode comprises an emotion mode and/or a working mode; (ii) a
The second determination unit is used for determining the voice and the action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant;
and the control unit is used for controlling the vehicle-mounted assistant to output corresponding voice and action.
A third aspect of the present disclosure provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the information processing method described above.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the information processing method described above.
A fifth aspect of the present disclosure provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the information processing method described above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart illustrating an alternative information processing method provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating another alternative information processing method provided by the embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a further alternative information processing method provided by the embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating a further alternative information processing method provided by the embodiment of the present disclosure;
FIG. 5 is a block diagram of an architecture of an information handling system provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating an alternative structure of an information processing apparatus provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an application of the information processing method according to the embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of 5G technology and the digitization of the global automobile industry, the automobile industry is moving beyond traditional mechanical manufacturing toward intelligence and digitization. Among the growing number of automobiles in use, the proportion of smart automobiles is rising rapidly, and vehicle enterprises and users alike place ever more emphasis on the user's intelligent experience in driving and travel scenarios. At present, smart automobiles generally share two defining characteristics, networking and multiple screens, and the AI three-dimensional (3D) intelligent assistant that coordinates the whole vehicle is the new entrance for human-machine interaction in the era of intelligent connected vehicles.
Most existing intelligent assistants in the automobile industry remain at the hardware, two-dimensional (2D), or voice-only stage, and vehicles equipped with a visualized intelligent assistant are still exceedingly rare. Emotional interaction with a personalized, user-generated vehicle-mounted assistant on vehicle-mounted intelligent devices is becoming an important way to improve the user experience of the vehicle-mounted assistant. Popularizing and landing such an assistant in mass-production products will be a powerful tool for vehicle enterprises to win market competition in the future.
With the information processing method provided by the embodiments of the present disclosure, a vehicle-mounted avatar personally generated by the user can accompany the user during driving and travel, hold voice conversations with the user, and meet a series of needs of the driver and passengers, such as navigation and schedule reminders. It can thus serve not only as a private butler in the user's intelligent mobile space, but also bring emotional companionship to private journeys.
Fig. 1 shows an alternative flow chart of an information processing method provided by an embodiment of the present disclosure, which will be described according to various steps.
Step S101, determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode includes an emotional mode and/or an operational mode.
In some embodiments, the information processing apparatus (hereinafter referred to as the device) determines the response mode of the vehicle-mounted assistant based on the collected first multimedia information and the information of the vehicle-mounted assistant.
Wherein the first multimedia information may include at least one of: voice information, image information, and video information; the voice information may include at least one of: speech content, intonation, speech rate, and volume. The information of the vehicle-mounted assistant includes start-up information of the vehicle-mounted assistant after a first time of the current day, where the first time may be set before the assistant's average start-up time after a second time on working days.
In some optional embodiments, the information of the vehicle-mounted assistant may further include historical start-up information of the vehicle-mounted assistant, such as start-up times on holidays, the average start-up time, and the interval between start-ups before a third time and after the second time on working days. The second time may be set according to actual requirements, for example 18:00; the third time may likewise be set according to actual requirements, for example 8:00. The average start-up time may be the average start-up time after the second time on working days.
In a specific implementation, the device determines the working mode of the vehicle-mounted assistant based on the current date, the current time, the average start-up time of the vehicle-mounted assistant after the second time on working days, and the start-up information of the vehicle-mounted assistant after the first time.
Alternatively, in a specific implementation, the device determines expression information corresponding to the first multimedia information, and determines the emotion mode of the vehicle-mounted assistant based on that expression information. For example, if the device determines from the first multimedia information that the corresponding expression information is sadness, it determines that the emotion mode of the vehicle-mounted assistant is a sad mode.
Alternatively, in a specific implementation, the device determines action information corresponding to the first multimedia information, and determines the emotion mode of the vehicle-mounted assistant based on that action information. For example, if the device determines that the first multimedia information includes the action of vigorously slapping the steering wheel, it determines that the corresponding emotion information is impatience, and based on that emotion information determines the emotion mode of the vehicle-mounted assistant to be an emotionally agitated mode (such as a fidgety mode). Optionally, the device may further combine navigation information, road condition information, and the like to assist in determining the emotion information corresponding to the first multimedia information.
Alternatively, in a specific implementation, the device determines, based on the speech content included in the first multimedia information, whether the speech content includes an emotion keyword, and in response to the speech content including an emotion keyword, determines the emotion mode of the vehicle-mounted assistant based on the emotion information corresponding to that keyword. For example, if the collected speech content includes phrases such as "so annoyed", "so sad", or "hurry up", the emotion mode of the vehicle-mounted assistant may be determined from the corresponding emotion keyword.
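To make the keyword matching concrete, a minimal Python sketch follows; the keyword table, mode labels, and data structure are illustrative assumptions, since the disclosure names only example phrases.

```python
from typing import Optional

# Hypothetical keyword-to-emotion table; the disclosure gives example phrases
# but does not specify a concrete data structure or mode labels.
EMOTION_KEYWORDS = {
    "so annoyed": "fidgety",
    "so sad": "sad",
    "hurry up": "urgent",
}

def emotion_mode_from_speech(speech_content: str) -> Optional[str]:
    """Return the assistant's emotion mode if an emotion keyword is present."""
    text = speech_content.lower()
    for keyword, mode in EMOTION_KEYWORDS.items():
        if keyword in text:
            return mode   # the assistant's mode mirrors the detected user emotion
    return None           # no keyword found: fall back to other signals

print(emotion_mode_from_speech("Hurry up, we are going to be late!"))  # -> urgent
```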
Alternatively, in a specific implementation, the device determines emotion information corresponding to the first multimedia information based on at least one of the intonation, the speech rate, and the volume included in the first multimedia information, and determines the emotion mode of the vehicle-mounted assistant based on that emotion information. For example, the device determines from one of the intonation, the speech rate, and the volume that the corresponding emotion information is urgency, and accordingly determines the emotion mode of the vehicle-mounted assistant to be an urgent mode.
Step S102, determining the voice and action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant.
In some embodiments, in response to the response mode of the vehicle-mounted assistant, the device determines, from pre-stored information, the voice and action corresponding to that response mode, and determines the voice and action to be output by the vehicle-mounted assistant as the voice and action corresponding to the response mode of the vehicle-mounted assistant.
In some embodiments, the pre-stored information may include a correspondence between an emotion mode and/or working mode and at least one of a voice and an action; for example, the fidgety mode corresponds to soothing voice and actions, the overtime mode corresponds to soothing voice and actions, and so on.
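As a concrete picture of such pre-stored information, a minimal sketch follows, assuming a simple lookup table; the mode names, voice lines, and action identifiers are illustrative, not taken from the disclosure.

```python
from typing import Tuple

# Hypothetical pre-stored correspondence table from response mode to
# (voice line, action identifier); all entries are illustrative.
RESPONSE_TABLE = {
    "fidgety":  ("Take it easy, I'll put on some music.", "soothing_gesture"),
    "sad":      ("I'm right here with you.",              "comforting_gesture"),
    "overtime": ("You've worked hard today!",             "encouraging_gesture"),
}
DEFAULT_RESPONSE = ("How can I help you?", "idle_gesture")

def select_response(response_mode: str) -> Tuple[str, str]:
    """Look up the voice line and the animation the assistant should output."""
    return RESPONSE_TABLE.get(response_mode, DEFAULT_RESPONSE)

voice, action = select_response("overtime")
print(voice, action)
```

A plain table lookup keeps the mode-to-response mapping editable without touching the selection logic, which fits the pre-generation idea described next.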
In some optional embodiments, the device may generate the voices and actions output by the vehicle-mounted assistant in advance, so that once the emotion mode and/or working mode of the vehicle-mounted assistant is confirmed, the corresponding voice and action can be confirmed and output quickly, improving the emotional-companionship efficiency of the vehicle-mounted assistant.
Step S103, controlling the vehicle-mounted assistant to output corresponding voice and actions.
In some embodiments, the device controls the vehicle-mounted assistant to output the corresponding voice and action, so that the vehicle-mounted assistant can communicate emotionally with the user. To make this emotional communication more natural, the device may also confirm the orientation of the vehicle-mounted assistant in the vehicle based on the first multimedia information, and adjust that orientation so that the vehicle-mounted assistant faces the source of the first multimedia information and directs its output action toward that source, thereby enhancing the interaction effect of the vehicle-mounted assistant.
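The disclosure does not say how the orientation adjustment is computed; a minimal sketch under the assumption that the bearing of the source has already been estimated (e.g., by a microphone array or by identifying which in-cabin camera captured the speaker) might look as follows.

```python
# Minimal sketch of the orientation adjustment; the source-bearing estimation
# itself is outside both the patent text and this sketch.
def rotation_towards_source(current_heading_deg: float,
                            source_bearing_deg: float) -> float:
    """Signed rotation (degrees) that turns the avatar to face the source."""
    # Normalize the difference into [-180, 180) so the avatar takes the shorter turn.
    return (source_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0

print(rotation_towards_source(0.0, 270.0))  # -> -90.0 (turn 90 degrees left)
```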
Therefore, the information processing method provided by the embodiments of the present disclosure determines the response mode of the vehicle-mounted assistant based on the collected first multimedia information, the response mode comprising an emotion mode and/or a working mode; determines the voice and action output by the vehicle-mounted assistant based on that response mode; and controls the vehicle-mounted assistant to output the corresponding voice and action. Once the response mode of the vehicle-mounted assistant is determined, the corresponding voice and action can be output, realizing the emotional companionship of the vehicle-mounted assistant and improving the user experience.
Fig. 2 shows another alternative flow chart of the information processing method provided by the embodiment of the present disclosure, which will be described according to various steps.
Step S201, determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode includes an emotional mode and/or an operational mode.
The specific step flow of step S201 is the same as step S101, and is not repeated here.
Step S202, determining the voice and the action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant.
The specific step flow of step S202 is the same as step S102, and is not repeated here.
Step S203, controlling the vehicle-mounted assistant to output corresponding voice and action.
The specific step flow of step S203 is the same as step S103, and is not repeated here.
Step S204, second multimedia information is collected, and whether the second multimedia information comprises preset reaction information or not is confirmed.
In some embodiments, after the device controls the vehicle-mounted assistant to output the corresponding voice and action, it collects second multimedia information through a microphone or an image acquisition device and determines whether the second multimedia information includes preset reaction information. In response to the second multimedia information not including preset reaction information, the device stops outputting the corresponding voice and action, and/or takes the first multimedia information together with the corresponding voice and action output by the vehicle-mounted assistant as a negative sample, so that the vehicle-mounted assistant no longer outputs that voice and action for the corresponding emotion mode and/or working mode. The preset reaction information includes at least one of a pre-stored expression, action, and voice presented by the user in response to the voice and action output by the vehicle-mounted assistant.
For example, suppose the first multimedia information includes the action of vigorously slapping the steering wheel, and the device determines from that action that the emotion mode of the vehicle-mounted assistant is an anger mode, so it outputs a corresponding soothing voice and action. If the response fits, the user's preset reaction information may include a relaxing of the brow, a loosening of the body, and the like. However, vigorously slapping the steering wheel may also correspond to an excitement mode; if the device wrongly determines the emotion mode to be an anger mode and outputs a soothing voice and action, the user's reaction may instead be confusion. Since confusion is not among the preset reaction information corresponding to the soothing voice and action, the device can control the vehicle-mounted assistant to stop the output, and/or take the first multimedia information and the corresponding output as a negative sample, so that the vehicle-mounted assistant no longer outputs that voice and action for the corresponding emotion mode and/or working mode.
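A minimal sketch of this feedback check follows; the dictionary shapes, reaction labels, and list-based storage of negative samples are assumptions for illustration.

```python
from typing import Dict, List, Tuple

negative_samples: List[Tuple[dict, str, str]] = []  # would be persisted in practice

def check_reaction(first_info: dict, voice: str, action: str,
                   second_info: Dict[str, list], preset_reactions: set) -> bool:
    """Verify the user's reaction to the assistant's output.

    second_info carries the reactions recognized from the newly collected
    multimedia information; preset_reactions is the pre-stored set of
    expressions, actions, and speech that mark the response as appropriate.
    """
    observed = set(second_info.get("reactions", []))
    if observed & preset_reactions:
        return True  # the output fit the user's emotion or the context
    # No preset reaction observed: record a negative sample so this
    # (input, output) pairing is not produced again for this situation.
    negative_samples.append((first_info, voice, action))
    return False

ok = check_reaction({"action": "slap_steering_wheel"}, "Calm down...",
                    "soothing_gesture", {"reactions": ["confusion"]},
                    {"brow_relaxed", "body_relaxed"})
print(ok)  # -> False: the soothing response is logged as a negative sample
```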
Therefore, with the information processing method provided by the embodiments of the present disclosure, the device keeps collecting multimedia information while the vehicle-mounted assistant communicates with and emotionally accompanies the user, and confirms whether the output voice and actions fit the user's emotion and the context of the conversation. Inappropriate voices and actions, i.e. those that do not elicit the preset reaction information, are marked as negative samples, continuously optimizing the assistant's voice and action feedback. This further improves the user's emotional experience in the in-vehicle scenario, provides the user with a personalized, visualized, exclusive avatar for intimate companionship, makes in-vehicle human-machine communication warm, brings a brand-new driving experience for the era of human-machine co-driving, and assists vehicle enterprises in their intelligent transformation and upgrading.
Fig. 3 shows a schematic flow chart of yet another alternative of the information processing method provided by the embodiment of the present disclosure, which will be described according to various steps.
Step S301, determining the working mode of the vehicle-mounted assistant based on the start-up information of the vehicle-mounted assistant and the first multimedia information.
In some embodiments, the first multimedia information may include at least one of: voice information, image information, and video information; the voice information may include at least one of: speech content, intonation, speech rate, and volume. The information of the vehicle-mounted assistant includes start-up information of the vehicle-mounted assistant after a first time of the current day, where the first time may be set before the assistant's average start-up time after a second time on working days.
In some embodiments, the working modes of the vehicle-mounted assistant may include: an overtime mode, an early off-duty mode, an outing mode, and a normal off-duty mode. The overtime mode corresponds to the start-up time of the vehicle-mounted assistant being later than the average start-up time after the second time on working days; the early off-duty mode corresponds to the start-up time being earlier than that average; the normal off-duty mode corresponds to the start-up time being near that average (i.e., the difference is less than a preset threshold); and the outing mode likewise corresponds to the start-up time being earlier than that average.
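A minimal sketch of this classification follows, assuming a hypothetical threshold for "near the average"; since an earlier-than-usual start cannot by itself separate the early off-duty mode from the outing mode, the sketch returns a combined label to be resolved later from the first multimedia information.

```python
from datetime import datetime, timedelta

NEAR_AVERAGE = timedelta(minutes=15)  # hypothetical threshold for "near" the average

def working_mode(now: datetime, avg_evening_start: datetime,
                 started_after_first_time: bool) -> str:
    """Classify the working mode on a working day from start-up times.

    avg_evening_start: historical average start-up time after the second time;
    started_after_first_time: whether the assistant already started after the
    first time earlier today (which rules out overtime).
    """
    if now > avg_evening_start + NEAR_AVERAGE and not started_after_first_time:
        return "overtime"
    if now < avg_evening_start - NEAR_AVERAGE:
        # Earlier than usual: early off-duty or an outing; the first
        # multimedia information is needed to tell the two apart.
        return "early_off_duty_or_outing"
    return "normal_off_duty"

avg = datetime(2021, 12, 14, 18, 30)
print(working_mode(datetime(2021, 12, 14, 21, 5), avg, False))   # -> overtime
print(working_mode(datetime(2021, 12, 14, 18, 35), avg, False))  # -> normal_off_duty
```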
In some optional embodiments, the information of the vehicle-mounted assistant may further include historical start-up information of the vehicle-mounted assistant, such as start-up times on holidays, the average start-up time after the second time on working days, and the interval between start-ups before a third time and after the second time on working days. The second time may be set according to actual requirements, for example 18:00; the third time may likewise be set, for example 8:00.
In a specific implementation, in response to the current date being a working day and the start-up time of the vehicle-mounted assistant being earlier than the average start-up time after the second time, the device determines the working mode of the vehicle-mounted assistant to be the early off-duty mode or the outing mode, and may further distinguish between the two by combining the first multimedia information. The first time may be set according to actual requirements, for example before the second time.
In specific implementation, in response to that the current date is a working day, the starting time of the vehicle-mounted assistant is later than the average starting time after the second time, and the vehicle-mounted assistant is not started after the first time, the device determines that the working mode of the vehicle-mounted assistant is an overtime mode.
Alternatively, in a specific implementation, on the basis that the current date is a working day, the start-up time of the vehicle-mounted assistant is later than the average start-up time after the second time, and the vehicle-mounted assistant has not been started after the first time, the device may further combine the voice, image, and action information included in the first multimedia information to assist in determining the working mode.
In the embodiments of the present disclosure, by combining the vehicle-mounted assistant's average start-up time after the second time, its current start-up time, whether the current date is a working day, and whether it was started after the first time, the device can judge more accurately whether the user is working overtime. This avoids errors caused by judging overtime from the current time and the average start-up time alone (for example, a user may leave work on time but, because of an errand, start driving later than the usual off-duty time; judging only from the current time and the average start-up time would wrongly conclude that the user worked overtime).
Step S302, determining the voice and the action output by the vehicle-mounted assistant based on the working mode of the vehicle-mounted assistant.
In some embodiments, in response to the working mode of the vehicle-mounted assistant being the overtime mode, the voice and action output by the vehicle-mounted assistant are determined to be encouraging or soothing voice and actions; in response to the working mode being the normal off-duty mode, the voice and action are determined to be celebratory or default voice and actions.
In other embodiments, in response to the working mode of the vehicle-mounted assistant being the early off-duty mode, the device determines the emotional state or physical state corresponding to the first multimedia information by combining the first multimedia information, and determines the voice and action output by the vehicle-mounted assistant accordingly.
For example, if the working mode of the vehicle-mounted assistant is the early off-duty mode and the emotional state corresponding to the first multimedia information is urgent or emotionally agitated, the device determines that the vehicle-mounted assistant outputs a soothing voice and action; optionally, the vehicle-mounted assistant may further output soothing music according to its history of played content. Or, if the working mode is the early off-duty mode and the physical state corresponding to the first multimedia information is unhealthy, the device determines that the vehicle-mounted assistant outputs a comforting voice and action; optionally, the vehicle-mounted assistant may further judge the severity of the unhealthy state from the first multimedia information and decide whether to start the vehicle, contact a preset emergency contact, or even send an alarm to an emergency center.
Step S303, controlling the vehicle-mounted assistant to output corresponding voice and action.
In some embodiments, the device controls the vehicle-mounted assistant to output corresponding voice and actions, so that the vehicle-mounted assistant can realize emotional communication with the user.
To make the emotional communication process more natural, the device may also confirm the orientation of the vehicle-mounted assistant in the vehicle based on the first multimedia information, and adjust that orientation so that the vehicle-mounted assistant faces the source of the first multimedia information and directs its output action toward that source, thereby enhancing the interaction effect of the vehicle-mounted assistant.
In this way, with the information processing method provided by the embodiments of the present disclosure, the working mode of the vehicle-mounted assistant is determined based on the start-up information of the vehicle-mounted assistant and the first multimedia information; the voice and action output by the vehicle-mounted assistant are determined based on that working mode; and the vehicle-mounted assistant is controlled to output the corresponding voice and action. Once the working mode is determined, the corresponding voice and action can be output, realizing the emotional companionship of the vehicle-mounted assistant and improving the user experience.
Fig. 4 shows a schematic flow chart of yet another alternative of the information processing method provided by the embodiment of the present disclosure, which will be described according to various steps.
Step S501, determining the emotion mode of the vehicle-mounted assistant based on the first multimedia information.
In some embodiments, the first multimedia information may include at least one of: voice information, image information, and action information; the voice information includes at least one of: speech content, intonation, speech rate, and volume.
In some embodiments, the device determines expression information corresponding to an image included in the first multimedia information, and determines the emotion mode of the vehicle-mounted assistant based on that expression information. For example, if the device determines from the collected first multimedia information that the expression information corresponding to the image is sadness, it determines that the emotion mode of the vehicle-mounted assistant is a sad mode. Optionally, the device may further combine the voice information included in the first multimedia information to assist in determining the emotion mode of the vehicle-mounted assistant.
In some embodiments, the device determines emotion information corresponding to the action information included in the collected first multimedia information, and determines the emotion mode of the vehicle-mounted assistant based on that emotion information. For example, if the action information is vigorously slapping the steering wheel, the device determines that the corresponding emotion information is impatience, and determines the emotion mode of the vehicle-mounted assistant to be an emotionally agitated state. Optionally, the device may further combine navigation information, road condition information, and the like to assist in determining the emotion information corresponding to the action information; for example, if the current road condition is congestion, combining it with the action of vigorously slapping the steering wheel supports determining the emotion mode to be an emotionally agitated state.
In some embodiments, the device determines, based on the voice information included in the collected first multimedia information, whether the speech content includes an emotion keyword, and in response to the speech content including an emotion keyword, determines the emotion mode of the vehicle-mounted assistant based on the emotion information corresponding to that keyword. For example, if the collected speech content includes phrases such as "so annoyed", "so sad", or "hurry up", the emotion mode of the vehicle-mounted assistant may be determined from the emotion keywords in the speech content.
In some embodiments, the device determines emotion information corresponding to the first multimedia information based on at least one of the received intonation, speech rate, and volume, and determines the emotion mode of the vehicle-mounted assistant based on that emotion information. For example, if the device determines that the intonation exceeds a preset intonation threshold, the speech rate exceeds a preset speech-rate threshold, and the volume exceeds a preset decibel threshold, it determines that the user is in an emotionally agitated state.
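A minimal sketch of these threshold checks follows; the numeric thresholds and units are assumptions, since the disclosure names the three signals but specifies no values.

```python
# Hypothetical prosody thresholds; real systems would calibrate these
# per speaker and microphone rather than hard-code them.
PITCH_THRESHOLD_HZ = 250.0
RATE_THRESHOLD_WPS = 4.0    # words per second
VOLUME_THRESHOLD_DB = 70.0

def is_emotionally_agitated(pitch_hz: float, rate_wps: float,
                            volume_db: float) -> bool:
    """Flag agitation when all three prosodic signals exceed their thresholds."""
    return (pitch_hz > PITCH_THRESHOLD_HZ
            and rate_wps > RATE_THRESHOLD_WPS
            and volume_db > VOLUME_THRESHOLD_DB)

print(is_emotionally_agitated(310.0, 5.2, 78.0))  # -> True: choose a soothing mode
```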
Step S502, determining the voice and the action output by the vehicle-mounted assistant based on the emotion mode of the vehicle-mounted assistant.
In some embodiments, the device determines, from pre-stored information, a voice and an action corresponding to an emotional mode of the vehicle assistant in response to the emotional mode of the vehicle assistant; and determining the voice and the action output by the vehicle-mounted assistant as the voice and the action corresponding to the emotion mode of the vehicle-mounted assistant.
In some embodiments, the prestored information may include a correspondence of emotional states to at least one of speech and actions; for example, a fidgety state corresponds to soothing voices and actions, an overtime state corresponds to soothing voices and actions, and so on.
In some optional embodiments, the device may generate the voices and actions output by the vehicle-mounted assistant in advance, so that once the emotion mode of the vehicle-mounted assistant is confirmed, the corresponding voice and action can be confirmed and output quickly, improving the emotional-companionship efficiency of the vehicle-mounted assistant.
Step S503, controlling the vehicle-mounted assistant to output corresponding voice and action.
In some embodiments, the device controls the vehicle-mounted assistant to output corresponding voice and actions, so that the vehicle-mounted assistant can realize emotional communication with the user.
To make the emotional communication process more natural, the device may also confirm the orientation of the vehicle-mounted assistant in the vehicle based on the first multimedia information, and adjust that orientation so that the vehicle-mounted assistant faces the source of the first multimedia information and directs its output action toward that source, thereby enhancing the interaction effect of the vehicle-mounted assistant.
In this way, with the information processing method provided by the embodiments of the present disclosure, the emotion mode of the vehicle-mounted assistant is determined based on the first multimedia information; the voice and action output by the vehicle-mounted assistant are determined based on that emotion mode; and the vehicle-mounted assistant is controlled to output the corresponding voice and action. Once the emotion mode or working mode of the vehicle-mounted assistant is determined, the corresponding voice and action can be output, realizing the emotional companionship of the vehicle-mounted assistant and improving the user experience.
Referring to fig. 5, fig. 5 is a schematic diagram of an architecture of the information processing system 100 provided by the embodiment of the present disclosure, in order to support an exemplary application, the information processing apparatus 400 is connected to the server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of both, and data transmission is implemented using a wireless or wired link.
In some embodiments, the information processing method provided by the embodiments of the present disclosure may be implemented by an information processing apparatus. For example, the information processing apparatus 400 runs a client, and the client 410 may be a client for information processing. The client may collect the first multimedia information and the information of the vehicle assistant and transmit the first multimedia information and the information of the vehicle assistant to the server 200 through the network 300.
When emotional companionship is needed, the client collects the first multimedia information and the information of the vehicle-mounted assistant. The client may capture the vehicle interior through a camera inside the information processing apparatus 400 or receive the first multimedia information from a camera independent of the apparatus; likewise, it may collect the first multimedia information through a microphone inside the information processing apparatus 400 or through a microphone independent of it.
In some embodiments, taking the electronic device as a server as an example, the information processing method provided by the embodiments of the present disclosure may be cooperatively implemented by the server and the information processing apparatus.
When emotional companionship is needed, the client collects the first multimedia information and the information of the vehicle-mounted assistant in the same ways as described above. The server 200 then determines the response mode of the vehicle-mounted assistant based on the first multimedia information and the information of the vehicle-mounted assistant, determines the voice and action output by the vehicle-mounted assistant based on the emotion mode and/or working mode of the vehicle-mounted assistant, and controls the vehicle-mounted assistant to output the corresponding voice and action. The voices and actions output by the vehicle-mounted assistant may be stored in the database 500.
In some embodiments, the information processing apparatus 400 or the server 200 may implement the information processing method provided by the embodiments of the present disclosure by executing a computer program, for example: a native program or software module in an operating system; a native Application (APP), i.e. a program that must be installed in the operating system to run; an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module, or plug-in.
In practical applications, the server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. Cloud technology refers to a hosting technology that unifies resources such as hardware, software, and networks within a wide area network or a local area network to implement the computing, storage, processing, and sharing of data. The information processing apparatus 400 and the server 200 may be directly or indirectly connected by wired or wireless communication, and the present disclosure is not limited thereto.
Fig. 6 is a schematic diagram showing an alternative structure of an information processing apparatus provided in an embodiment of the present disclosure, which will be described according to various parts.
In some embodiments, the information processing apparatus 400 may include a first determination unit 401, a second determination unit 402, and a control unit 403.
The first determining unit 401 is configured to determine a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode comprises an emotion mode and/or a working mode;
the second determination unit 402 is configured to determine a voice and an action output by the vehicle-mounted assistant based on a response mode of the vehicle-mounted assistant;
the control unit 403 is configured to control the vehicle-mounted assistant to output corresponding voice and actions.
In some embodiments, the information processing apparatus 400 may further include an optimization unit 404.
The optimization unit 404 is configured to, after the vehicle-mounted assistant is controlled to output the corresponding voice and action, collect second multimedia information and determine whether the second multimedia information includes preset reaction information; and, in response to the second multimedia information not including preset reaction information, stop outputting the corresponding voice and action, and/or take the first multimedia information and the corresponding voice and action output by the vehicle-mounted assistant as a negative sample, so that the vehicle-mounted assistant no longer outputs that voice and action based on the first multimedia information. The preset reaction information includes at least one of a pre-stored expression, action, and voice presented by the user in response to the voice and action output by the vehicle-mounted assistant.
The first determining unit 401 is specifically configured to determine an operating mode of the vehicle-mounted assistant based on a current date, a current time, an average starting time of the vehicle-mounted assistant, and starting information of the vehicle-mounted assistant after the first time.
The first determining unit 401 is specifically configured to determine expression information corresponding to the first multimedia information, and determine the emotion mode of the vehicle-mounted assistant based on the expression information corresponding to the first multimedia information.
The first determining unit 401 is specifically configured to determine action information corresponding to the first multimedia information; and determining the emotion mode of the vehicle-mounted assistant based on the action information corresponding to the first multimedia information.
The first determining unit 401 is specifically configured to determine whether the content of the first multimedia information includes an emotional keyword; and in response to the first multimedia information including the emotion keyword, determining an emotion mode of the vehicle-mounted assistant based on emotion information corresponding to the emotion keyword.
The first determining unit 401 is specifically configured to determine emotion information corresponding to the first multimedia information based on at least one of a intonation, a speech rate, and a sound size included in the first multimedia information; and determining the emotion mode of the vehicle-mounted assistant based on the emotion information corresponding to the first multimedia information.
The second determining unit 402 is specifically configured to determine, in response to the response mode of the vehicle-mounted assistant, the voice and action corresponding to that response mode from pre-stored information, and determine the voice and action output by the vehicle-mounted assistant as the voice and action corresponding to the response mode of the vehicle-mounted assistant.
In some embodiments, the information processing apparatus 400 may further include an adjusting unit 405.
The adjusting unit 405 is configured to, before the vehicle-mounted assistant is controlled to output the corresponding voice and action, confirm the orientation of the vehicle-mounted assistant in the vehicle based on the first multimedia information, and adjust the orientation so that the vehicle-mounted assistant faces the source of the first multimedia information.
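To summarize how the units fit together, the following structural sketch models the apparatus of fig. 6 with plain callables standing in for the units described above; all names and signatures are illustrative assumptions, not taken from the disclosure.

```python
from typing import Callable, Optional, Tuple

# Structural sketch only: each callable plays the role of one unit, and the
# internal logic of the units is omitted.
class InformationProcessingApparatus:
    def __init__(self,
                 first_determining: Callable[[dict], str],
                 second_determining: Callable[[str], Tuple[str, str]],
                 control: Callable[[str, str], None],
                 adjusting: Optional[Callable[[dict], None]] = None):
        self.first_determining = first_determining    # multimedia -> response mode
        self.second_determining = second_determining  # response mode -> (voice, action)
        self.control = control                        # drives the avatar's output
        self.adjusting = adjusting                    # optional orientation control

    def handle(self, first_multimedia: dict) -> None:
        mode = self.first_determining(first_multimedia)
        voice, action = self.second_determining(mode)
        if self.adjusting is not None:
            self.adjusting(first_multimedia)  # face the avatar toward the speaker
        self.control(voice, action)
```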
Fig. 7 shows an application diagram of the information processing method provided by the embodiment of the disclosure.
As shown in fig. 7, after the terminal generates the virtual character, the virtual character is sent to the server, and the server sends the virtual character to the vehicle-mounted assistant, and the vehicle-mounted assistant displays the virtual character on a display screen included by the vehicle-mounted assistant.
After determining the response mode of the vehicle-mounted assistant based on the first multimedia information and/or the information of the vehicle-mounted assistant, the vehicle-mounted assistant determines the voice and action to output based on that response mode, outputs the voice, and controls the virtual character displayed on its display screen to perform the action.
For example, as shown in fig. 7, the device determines that the emotion mode of the vehicle-mounted assistant is an anxious mode based on the action information included in the first multimedia information (i.e., vigorously slapping the steering wheel), controls the vehicle-mounted assistant to output "Don't worry, I'll sing a song for you", controls the virtual character displayed on the display screen of the vehicle-mounted assistant to perform a corresponding soothing action, and then outputs corresponding soothing music, realizing the emotional companionship of the vehicle-mounted assistant.
Alternatively, as shown in fig. 7, the device confirms, based on the information of the vehicle-mounted assistant, that the working mode of the vehicle-mounted assistant is the overtime mode, controls the vehicle-mounted assistant to output a line such as "Working overtime is hard; would you like to hear something fun?", and controls the virtual character displayed on the display screen of the vehicle-mounted assistant to perform a corresponding comforting or humorous action. After receiving the play instruction, it outputs a corresponding amusing anecdote, realizing the emotional companionship of the vehicle-mounted assistant.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 executes the methods and processes described above, such as the information processing method. For example, in some embodiments, the information processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the information processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the information processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps of the various flows shown above may be reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (14)

1. An information processing method comprising:
determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode comprises an emotion mode and/or a working mode;
determining voice and action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant;
and controlling the vehicle-mounted assistant to output the corresponding voice and action.
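As an illustration only, the three claimed steps can be read as a small pipeline: derive a response mode from the captured multimedia, look up the matching voice and action, and drive the output. The sketch below is a hypothetical reading; the names (ResponseMode, determine_response_mode, OUTPUT_TABLE) and the mode vocabulary are invented, not the patent's implementation.

```python
# A minimal, hypothetical sketch of the claim-1 pipeline.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ResponseMode:
    emotion_mode: Optional[str] = None   # e.g. "cheerful", "soothing"
    working_mode: Optional[str] = None   # e.g. "commute", "leisure"

def determine_response_mode(first_multimedia: dict) -> ResponseMode:
    """Step 1: derive the response mode from collected multimedia info."""
    return ResponseMode(
        emotion_mode=first_multimedia.get("detected_emotion"),
        working_mode=first_multimedia.get("usage_context"),
    )

# Step 2: pre-stored (voice, action) pairs per mode; contents are assumed.
OUTPUT_TABLE = {
    ("cheerful", None): ("Glad you're in a good mood!", "wave"),
    ("soothing", None): ("I'm here with you.", "slow_nod"),
}

def determine_output(mode: ResponseMode) -> Tuple[str, str]:
    key = (mode.emotion_mode, mode.working_mode)
    return OUTPUT_TABLE.get(key, ("How can I help?", "idle"))

# Step 3: "controlling the assistant" reduces here to printing the pair.
voice, action = determine_output(
    determine_response_mode({"detected_emotion": "cheerful"}))
print(voice, action)  # -> Glad you're in a good mood! wave
```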
2. The method of claim 1, wherein after controlling the vehicle-mounted assistant to output the corresponding voice and action, the method further comprises:
acquiring second multimedia information, and determining whether the second multimedia information comprises preset reaction information;
in response to the second multimedia information not comprising the preset reaction information, stopping outputting the corresponding voice and action, and/or taking the first multimedia information and the corresponding voice and action output by the vehicle-mounted assistant as a negative sample, so that the vehicle-mounted assistant no longer outputs the voice and action based on the first multimedia information;
wherein the preset reaction information comprises at least one of a pre-stored expression, action, and voice presented in response to the voice and action output by the vehicle-mounted assistant.
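One possible reading of claim 2's feedback loop, sketched below with assumed names (PRESET_REACTIONS, stop_output): observe the occupant after the output; if none of the pre-stored reactions appears, cancel the output and record the input/output pair as a negative sample so the pairing is suppressed in future.

```python
# Hypothetical sketch of the claim-2 feedback loop; the reaction labels and
# the stop_output stub are assumptions, not the patent's implementation.
PRESET_REACTIONS = {"smile", "nod", "verbal_thanks"}  # assumed pre-stored set
negative_samples = []  # (first_multimedia, (voice, action)) pairs to suppress

def stop_output(output) -> None:
    print(f"stopping output: {output}")  # stand-in for the real actuator call

def handle_feedback(first_multimedia, output, second_multimedia) -> None:
    observed = set(second_multimedia.get("reactions", []))
    if observed & PRESET_REACTIONS:
        return  # the occupant reacted as expected; keep the behaviour
    stop_output(output)
    negative_samples.append((first_multimedia, output))  # avoid this pairing later
```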
3. The method of claim 1, wherein determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information comprises:
and determining the working mode of the vehicle-mounted assistant based on the current date, the current time, the average start-up time of the vehicle-mounted assistant, and the start-up information of the vehicle-mounted assistant after the first time.
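Claim 3 infers the working mode from calendar and usage statistics rather than from the occupant's state. The toy decision rule below uses invented thresholds and mode names, and reads "the first time" as a reference start-up moment; the claim itself does not fix an implementation.

```python
# Illustrative working-mode inference for claim 3; all thresholds and labels
# are assumptions.
from datetime import datetime, time, timedelta

def determine_working_mode(now: datetime,
                           average_start: time,
                           started_after_first_time: bool) -> str:
    if now.weekday() >= 5:
        return "leisure"  # weekend use
    near_usual = abs(
        datetime.combine(now.date(), average_start) - now
    ) <= timedelta(minutes=30)
    if started_after_first_time and near_usual:
        return "commute"  # started near the usual weekday time
    return "default"

print(determine_working_mode(datetime(2021, 12, 14, 8, 10), time(8, 0), True))
# -> commute
```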
4. The method of claim 1, wherein determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information comprises:
determining expression information corresponding to the first multimedia information;
and determining an emotion mode of the vehicle-mounted assistant based on the expression information corresponding to the first multimedia information.
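Claim 4 maps a recognized facial expression to an emotion mode; the recognizer itself is out of scope. A simple lookup conveys the idea, with all labels assumed for illustration:

```python
# Illustrative-only expression-to-mode mapping for claim 4.
EXPRESSION_TO_MODE = {
    "smile": "cheerful",
    "frown": "soothing",
    "yawn": "alert",  # e.g. prompt a drowsy driver to rest
}

def emotion_mode_from_expression(expression: str) -> str:
    return EXPRESSION_TO_MODE.get(expression, "neutral")
```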
5. The method of claim 1, wherein determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information comprises:
determining action information corresponding to the first multimedia information;
and determining the emotion mode of the vehicle-mounted assistant based on the action information corresponding to the first multimedia information.
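Claim 5 follows the same pattern as claim 4 with body actions in place of facial expressions; only the lookup key changes. The labels below are again purely illustrative:

```python
# Illustrative action-to-mode mapping for claim 5.
ACTION_TO_MODE = {
    "rubbing_eyes": "alert",
    "tapping_wheel": "calming",
    "stretching": "energizing",
}

def emotion_mode_from_action(action: str) -> str:
    return ACTION_TO_MODE.get(action, "neutral")
```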
6. The method of claim 1, wherein determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information comprises:
determining whether an emotional keyword is included in the content of the first multimedia information;
and in response to the first multimedia information including the emotion keyword, determining an emotion mode of the vehicle-mounted assistant based on emotion information corresponding to the emotion keyword.
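Claim 6 keys the emotion mode off keywords in the spoken content. A minimal keyword spotter over an ASR transcript could look like the following; the lexicon is an invented example, and a real system would likely use a larger lexicon or a text classifier:

```python
# Hypothetical emotion-keyword lexicon for claim 6.
EMOTION_KEYWORDS = {
    "tired": "soothing",
    "great": "cheerful",
    "stuck in traffic": "calming",
}

def emotion_mode_from_text(transcript: str):
    lowered = transcript.lower()
    for keyword, mode in EMOTION_KEYWORDS.items():
        if keyword in lowered:
            return mode
    return None  # no emotion keyword: fall back to the other claimed signals

print(emotion_mode_from_text("I'm so tired today"))  # -> soothing
```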
7. The method of claim 1, wherein determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information comprises:
determining emotion information corresponding to the first multimedia information based on at least one of intonation, speech rate, and volume included in the first multimedia information;
and determining the emotion mode of the vehicle-mounted assistant based on the emotion information corresponding to the first multimedia information.
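Claim 7 relies on prosodic features of the audio itself. The rule below is invented purely to show the shape of such an inference; in practice this would typically be a trained classifier rather than fixed thresholds:

```python
# Toy prosody-based inference for claim 7; all thresholds are assumptions.
def emotion_from_prosody(pitch_hz: float, rate_wps: float, volume_db: float) -> str:
    if volume_db > 70 and rate_wps > 3.5:
        return "agitated"       # loud and fast speech
    if pitch_hz < 120 and rate_wps < 2.0:
        return "low_spirits"    # flat, slow speech
    return "neutral"
```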
8. The method of claim 1, wherein determining the voice and action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant comprises:
in response to the response mode of the vehicle-mounted assistant, determining, from pre-stored information, the voice and action corresponding to the response mode of the vehicle-mounted assistant;
and determining the voice and the action output by the vehicle-mounted assistant as the voice and the action corresponding to the response mode of the vehicle-mounted assistant.
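Claim 8 reads as a lookup from the response mode into pre-stored assets. A dictionary keyed by mode is one natural realization; the asset file names here are assumptions:

```python
# Hypothetical pre-stored asset table for claim 8.
PRESTORED_ASSETS = {
    "cheerful": {"voice": "cheerful_greeting.wav", "action": "wave"},
    "soothing": {"voice": "calm_reply.wav", "action": "slow_nod"},
}

def assets_for_mode(mode: str) -> dict:
    return PRESTORED_ASSETS.get(mode, {"voice": "default.wav", "action": "idle"})
```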
9. The method of claim 1, wherein before controlling the vehicle-mounted assistant to output the corresponding voice and action, the method further comprises:
determining the orientation of the vehicle-mounted assistant in the vehicle based on the first multimedia information;
and adjusting the orientation of the vehicle-mounted assistant to enable the vehicle-mounted assistant to face the source of the first multimedia information.
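Claim 9 turns the assistant toward the source of the input, e.g. using a direction-of-arrival estimate from a microphone array. A sketch of the turn computation follows; the turn() actuator is a hypothetical stand-in for the real servo interface:

```python
# Illustrative orientation adjustment for claim 9: compute the signed
# shortest rotation from the current heading to the source bearing.
def face_source(current_heading_deg: float, source_bearing_deg: float, turn) -> None:
    delta = (source_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    turn(delta)  # rotate the on-board figure by the signed shortest angle

face_source(0.0, 270.0, lambda d: print(f"turn {d:+.0f} deg"))  # -> turn -90 deg
```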
10. An information processing apparatus comprising:
the first determining unit is used for determining a response mode of the vehicle-mounted assistant based on the collected first multimedia information; the response mode comprises an emotion mode and/or a working mode;
the second determination unit is used for determining the voice and the action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant;
and the control unit is used for controlling the vehicle-mounted assistant to output corresponding voice and action.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
14. A vehicle-mounted terminal on which a vehicle-mounted assistant runs, the vehicle-mounted terminal being configured to: determine a response mode of the vehicle-mounted assistant based on collected first multimedia information, the response mode comprising an emotion mode and/or a working mode; determine the voice and action output by the vehicle-mounted assistant based on the response mode of the vehicle-mounted assistant; and control the vehicle-mounted assistant to output the corresponding voice and action.
CN202111526981.XA 2021-12-14 2021-12-14 Information processing method, information processing device, electronic equipment and storage medium Pending CN114237395A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111526981.XA CN114237395A (en) 2021-12-14 2021-12-14 Information processing method, information processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114237395A (en) 2022-03-25

Family

ID=80755766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111526981.XA Pending CN114237395A (en) 2021-12-14 2021-12-14 Information processing method, information processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114237395A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124123A (en) * 2019-12-24 2020-05-08 苏州思必驰信息科技有限公司 Voice interaction method and device based on virtual robot image and intelligent control system of vehicle-mounted equipment
CN111368609A (en) * 2018-12-26 2020-07-03 深圳Tcl新技术有限公司 Voice interaction method based on emotion engine technology, intelligent terminal and storage medium
CN112434139A (en) * 2020-10-23 2021-03-02 北京百度网讯科技有限公司 Information interaction method and device, electronic equipment and storage medium
CN112910761A (en) * 2021-01-29 2021-06-04 北京百度网讯科技有限公司 Instant messaging method, device, equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
JP2020522776A (en) Virtual assistant configured to recommend actions to facilitate existing conversations
US20170103756A1 (en) Information processing system, and vehicle-mounted device
GB2518002A (en) Vehicle interface system
CN111640429A (en) Method of providing voice recognition service and electronic device for the same
CN113554180B (en) Information prediction method, information prediction device, electronic equipment and storage medium
CN114416012A (en) Audio continuous playing method and device
CN113658586A (en) Training method of voice recognition model, voice interaction method and device
CN111160002B (en) Method and device for analyzing abnormal information in output spoken language understanding
CN117492743A (en) Target application generation method and device based on large language model and storage medium
CN112382294A (en) Voice recognition method and device, electronic equipment and storage medium
CN114237395A (en) Information processing method, information processing device, electronic equipment and storage medium
CN115879469B (en) Text data processing method, model training method, device and medium
CN114722171B (en) Multi-round dialogue processing method and device, electronic equipment and storage medium
CN113360590B (en) Method and device for updating interest point information, electronic equipment and storage medium
CN114428917A (en) Map-based information sharing method, map-based information sharing device, electronic equipment and medium
CN115268821A (en) Audio playing method and device, equipment and medium
CN113808410B (en) Vehicle driving prompting method and device, electronic equipment and readable storage medium
CN112817463A (en) Method, equipment and storage medium for acquiring audio data by input method
CN110288683B (en) Method and device for generating information
CN114221960B (en) Data pushing method based on automatic driving bus and automatic driving bus
CN114756023A (en) Automatic driving vehicle control method, device, electronic equipment and storage medium
CN116757921A (en) User head portrait updating method and device, electronic equipment and storage medium
EP4050533A2 (en) Method for providing state information of taxi service order, device and storage medium
CN113012679A (en) Method, apparatus and medium for broadcasting message by voice
CN114154491A (en) Interface skin updating method, device, equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination