CN114582335A - Vehicle information interaction method and related device - Google Patents

Vehicle information interaction method and related device

Info

Publication number
CN114582335A
CN114582335A (application CN202011373553.3A)
Authority
CN
China
Prior art keywords
vehicle
target
target user
voice
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011373553.3A
Other languages
Chinese (zh)
Inventor
应臻恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect and Technology Shanghai Corp
Original Assignee
Pateo Connect and Technology Shanghai Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pateo Connect and Technology Shanghai Corp filed Critical Pateo Connect and Technology Shanghai Corp
Priority to CN202011373553.3A priority Critical patent/CN114582335A/en
Publication of CN114582335A publication Critical patent/CN114582335A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/14 Systems for determining distance or velocity not using reflection or reradiation using ultrasonic, sonic, or infrasonic waves
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/20 Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a vehicle information interaction method and a related device. First, voice data of a target user is acquired; the voice data is then recognized to determine the target user's vehicle interaction request for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the vehicle interaction request. Because the target user's voice can be recognized and the corresponding interaction performed, automatic interaction between the vehicle and the user becomes more complete, saving manpower while greatly improving the target user's interaction experience.

Description

Vehicle information interaction method and related device
Technical Field
The present application relates to the field of the Internet of Vehicles, and in particular to a vehicle information interaction method and a related device.
Background
With the development of society, vehicles have become an indispensable means of transportation. In the era of fifth-generation mobile communication technology, where everything is interconnected, vehicles are expected to provide more intelligent functions to improve the user experience.
When a user buys a car, a salesperson is usually needed to introduce the relevant information of the vehicle, and training such salespeople consumes considerable manpower and material resources. How the vehicle and the user can interact in a car-purchasing scenario has therefore become a problem to be solved.
Disclosure of Invention
Based on the above problems, the present application provides a vehicle information interaction method and a related device that can recognize a target user's voice and perform the corresponding interaction, making automatic interaction between the vehicle and the user more complete and greatly improving the target user's interaction experience while saving manpower. In a first aspect, an embodiment of the present application provides a vehicle information interaction method, where the method includes:
acquiring voice data of a target user;
recognizing the voice data, and determining a vehicle interaction request of the target user for a target vehicle;
and controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
In a second aspect, an embodiment of the present application provides a vehicle information interaction device, where the device includes:
the voice acquisition unit is used for acquiring voice data of a target user;
the voice recognition unit is used for recognizing the voice data and determining a vehicle interaction request of the target user for a target vehicle;
and the interaction control unit is used for controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
In a third aspect, an embodiment of the present application provides an in-vehicle device including a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the method according to any one of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of the first aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application provides a vehicle information interaction method and a related device. First, voice data of a target user is acquired; the voice data is then recognized to determine the target user's vehicle interaction request for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the vehicle interaction request. Because the target user's voice can be recognized and the corresponding interaction performed, automatic interaction between the vehicle and the user becomes more complete, saving manpower while greatly improving the target user's interaction experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an on-board device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a vehicle information interaction method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of another vehicle information interaction method according to an embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating functional units of a vehicle information interaction device according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating functional units of another vehicle information interaction device according to an embodiment of the present disclosure.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
First, the software and hardware architecture in the embodiment of the present application will be described.
Fig. 1 is a schematic structural diagram of an in-vehicle device provided in an embodiment of the present application, where the in-vehicle device 100 includes one or more of the following components: a processor 110, a memory 120, and an input-output device 130.
The processor 110 connects the various parts of the vehicle-mounted device 100 using various interfaces and lines, and performs the functions of the device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. The processor 110 may include one or more processing units; for example, it may include a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-network Processing Unit (NPU). The controller can act as the neural center and command center of the vehicle-mounted device 100, generating operation control signals from instruction operation codes and timing signals to control instruction fetching and execution. The CPU mainly handles the operating system, user interface, applications, and the like; the GPU renders and draws display content; the modem handles wireless communications. The DSP processes digital signals, and can process digital signals other than digital image signals as well; for example, when the vehicle-mounted device 100 is selecting a frequency point, the DSP performs a Fourier transform or the like on the frequency-point energy. Video codecs compress or decompress digital video. The vehicle-mounted device 100 may support one or more video codecs, so it can play or record video in multiple encoding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU can rapidly process input information by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also continuously learn by itself, and applications such as intelligent cognition of the vehicle-mounted device 100 can be realized by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
A memory may be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which holds instructions or data that the processor 110 has just used or recycled. If the processor 110 needs that instruction or data again, it can be fetched directly from this memory, avoiding repeated accesses, reducing the latency of the processor 110, and increasing system efficiency.
The processor 110 may include one or more interfaces, such as an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
For a better understanding of the function of the various interfaces of the processor 110, each interface is described in detail below:
the I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). The processor 110 may include multiple sets of I2C interfaces, and may be coupled to a touch sensor, a charger, a flash, a camera, etc., through different I2C interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor through an I2C interface, such that the processor 110 and the touch sensor communicate through an I2C interface to implement the touch function of the in-vehicle device 100.
The I2S interface may be used for audio communication. The processor 110 may include multiple sets of I2S interfaces coupled to the audio module via I2S interfaces to enable communication between the processor 110 and the audio module. The audio module can transmit audio signals to the wireless communication module through the I2S interface, and the function of answering the call through the Bluetooth headset is realized.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. The audio module and the wireless communication module can be coupled through the PCM interface, and particularly, an audio signal can be transmitted to the wireless communication module through the PCM interface, so that the function of answering a call through the Bluetooth headset is realized. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. The UART interface is generally used to connect the processor 110 with the wireless communication module. For example: the processor 110 communicates with a bluetooth module in the wireless communication module through a UART interface to implement a bluetooth function. The audio module can transmit audio signals to the wireless communication module through the UART interface, and the function of playing music through the Bluetooth headset is achieved.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as a display screen. The MIPI interface includes a Display Serial Interface (DSI) and the like. In some embodiments, the processor 110 and the display screen communicate through the DSI interface to implement the display function of the in-vehicle device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with a camera, display screen, wireless communication module, audio module, sensor module, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. It may be used to connect a charger to charge the vehicle-mounted device 100, or to transmit data between the vehicle-mounted device 100 and peripheral devices. It can also be used to connect earphones and play audio through them, or to connect other electronic devices such as AR devices.
It is understood that the processor 110 may be mapped to a System on a Chip (SOC) in an actual product, and the processing unit and/or the interface may not be integrated into the processor 110, and the corresponding functions may be implemented by a communication Chip or an electronic component alone. The above-described interface connection relationship between the modules is merely illustrative, and does not constitute a unique limitation on the structure of the in-vehicle device 100.
The Memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like. The storage data area may also store data (such as vehicle information) created by the in-vehicle device 100 in use, and the like.
The input-output device 130 may include a touch display screen for receiving touch operations on or near it by any suitable object such as a finger or a stylus, and for displaying the user interface of each application. The touch display screen is generally provided on the front panel of the vehicle-mounted device 100. It may be designed as a full screen, a curved screen, or a special-shaped screen, or as a combination of a full screen and a curved screen, or of a special-shaped screen and a curved screen, which is not limited in the embodiments of the present application.
The input/output device 130 may include a microphone array for collecting voice data and a smart speaker for outputting an introduction to the vehicle information.
Through the vehicle-mounted device, voice data of the target user can be acquired; the voice data can be recognized to determine the target user's vehicle interaction request for a target vehicle; and the target vehicle can be controlled to execute the operation indicated by the request. Because the target user's voice can be recognized and the corresponding interaction performed, automatic interaction between the vehicle and the user becomes more complete, saving manpower while greatly improving the target user's interaction experience.
In a possible embodiment, vehicles can be sold in a self-service manner, saving manpower and greatly improving the target user's car-purchasing experience.
A vehicle information interaction method in the embodiment of the present application is described below with reference to fig. 2, where fig. 2 is a schematic flow chart of the vehicle information interaction method provided in the embodiment of the present application, and specifically includes the following steps:
step 201, acquiring voice data of a target user.
The current voice data of the target user can be acquired through the vehicle-mounted device.
Specifically, the vehicle-mounted device may be equipped with a voice interaction assistant. After the target user enters the vehicle, the voice interaction assistant may greet the target user through the screen of the vehicle-mounted device and the smart speaker, for example with "I am your new vehicle sales counselor and will introduce the vehicle to you", and then collect the target user's voice data. It is understood that the presentation form of the voice interaction assistant may include any one or any combination of virtual reality images, videos, and voices, and is not specifically limited here.
In this way, acquiring the target user's voice data through the voice interaction assistant makes the voice interaction process more engaging, lets the target user understand how to interact by voice, and greatly improves the target user's interaction experience.
In an alternative embodiment, the location of the target user may be determined from the voice intensity. For example, the decibel level of the voice data is analyzed within the first 0.5 second of collection; when the level is higher than a preset decibel value, the target user can be determined to be inside the target vehicle, and when it is lower than the preset value, the target user can be determined to be outside the target vehicle, in which case the voice data captured by the vehicle-mounted device will clearly not be distinct enough. When the target user is inside the target vehicle, the vehicle-mounted device continues to collect the voice data; when the target user is outside, the vehicle-mounted device establishes communication with a target device carried by the target user, and the target device collects the voice data and uploads it to a server corresponding to the vehicle-mounted device. In that case both the voice input and the information-introduction output are handled by the target device.
In this way, the device used to collect the voice data is selected by proximity based on voice intensity, so that clearer voice data can be obtained, improving subsequent interaction efficiency and recognition accuracy.
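The proximity rule above can be sketched roughly as follows. This is a hypothetical illustration, not the patent's implementation: the function names (`rms_to_db`, `select_capture_device`) and the 50 dB threshold are invented for the example.

```python
import math

PRESET_DB = 50.0  # assumed threshold; the patent leaves the value open

def rms_to_db(samples):
    """Convert raw PCM samples to an RMS level in decibels."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def select_capture_device(samples, preset_db=PRESET_DB):
    """Return which device should record: the head unit if the user is
    judged to be inside the vehicle, otherwise the user's own device."""
    if rms_to_db(samples) >= preset_db:
        return "in_vehicle_device"  # user inside: keep using cabin mics
    return "user_device"            # user outside: hand off to the phone
```

In a real system the level would be computed over the first 0.5 s window mentioned above and the threshold calibrated per microphone.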
Step 202, recognizing the voice data, and determining a vehicle interaction request of the target user for the target vehicle.
The voice data can first be preprocessed to eliminate the influence of noise and speaker differences, so that the processed signal better reflects the essential characteristics of the speech. The most common front-end processing steps are endpoint detection and speech enhancement. Endpoint detection distinguishes the speech segments from the non-speech segments in the signal and accurately determines the starting point of the speech; after endpoint detection, only the speech segment needs further processing, which plays an important role in improving model accuracy and recognition accuracy. The main task of speech enhancement is to eliminate the effect of ambient noise on the speech; Wiener filtering can be used, which outperforms other filters under high noise.
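The endpoint-detection step described above can be illustrated with a minimal energy-based sketch: find the first and last frames whose short-time energy exceeds a threshold. The frame length and energy threshold below are assumed values; a production front end would use a more robust detector.

```python
def detect_endpoints(samples, frame_len=160, threshold=0.01):
    """Return (start, end) sample indices of the detected speech span,
    or None when no frame crosses the energy threshold."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    # Short-time energy per frame: mean of squared samples.
    energies = [sum(s * s for s in f) / frame_len for f in frames]
    voiced = [i for i, e in enumerate(energies) if e > threshold]
    if not voiced:
        return None
    return voiced[0] * frame_len, (voiced[-1] + 1) * frame_len
```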
After the preprocessed voice data is obtained, voice features may be extracted; the voice features may include Mel-Frequency Cepstral Coefficients (MFCC) and the like, which is not limited here.
Finally, the voice features are input into a speech recognition model, and the vehicle interaction request is obtained from the model's output. The speech recognition model may include an acoustic model and a language model, corresponding respectively to computing the probability from speech to syllables and from syllables to text. The acoustic model may be a hidden Markov model and the language model an N-Gram model. First, the voice features are input into the acoustic model, and phoneme features are obtained from its output; then the phoneme features are input into the language model, and a semantic text is obtained from its output; finally, the vehicle interaction request is determined from the semantic text. Details are not repeated here.
In this way, the voice data is recognized and the target user's vehicle interaction request for the target vehicle is determined, making it convenient for the target user to express vehicle interaction needs and improving the interaction experience.
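The N-Gram language-model step above can be illustrated with a toy bigram model that scores candidate transcripts and keeps the most probable one. The training corpus, smoothing, and candidates below are invented for the example; a real system would train on a large corpus and take its candidates from the acoustic model's output.

```python
from collections import Counter

def train_bigram(corpus):
    """Build unigram and bigram counts from a list of token sequences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        uni.update(sent)
        bi.update(zip(sent, sent[1:]))
    return uni, bi

def score(sentence, uni, bi, vocab_size, alpha=1.0):
    """Add-one smoothed bigram probability of a sentence (product form)."""
    p = 1.0
    for a, b in zip(sentence, sentence[1:]):
        p *= (bi[(a, b)] + alpha) / (uni[a] + alpha * vocab_size)
    return p

corpus = [["open", "the", "window"], ["open", "the", "door"],
          ["close", "the", "window"]]
uni, bi = train_bigram(corpus)
vocab = {w for s in corpus for w in s}
candidates = [["open", "the", "window"], ["window", "the", "open"]]
best = max(candidates, key=lambda s: score(s, uni, bi, len(vocab)))
```

The word order seen in training wins, which is exactly the disambiguation the language model contributes on top of the acoustic model.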
And step 203, controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
In an optional embodiment, the vehicle interaction request may include an information introduction request. The vehicle information to be introduced corresponding to the request may be selected from a preset vehicle information base, and the voice output device and image output device of the target vehicle are then controlled to output that information to the target user. For example, the target user may say "price configuration"; the price configuration information of the target vehicle is then selected from the preset vehicle information base, "the price of the vehicle is between 88,000 and 128,000" is broadcast by voice followed by the configuration corresponding to each price, and the in-vehicle screen synchronously displays the configuration information for each price. The target user may also say "introduce the highlights of this car"; similarly, the selling points of the target vehicle are selected from the preset vehicle information base and broadcast to the target user, for example: "I have 5 main functional highlights, unmatched among cars of the same class: mobile phone key, automatic parking, one-key sharing, and safe driving assistance." These are not listed one by one here.
In this way, vehicle information is introduced to the target user both by voice and on screen, so the target user can understand the vehicle more intuitively; the interaction experience is improved, no salesperson is needed, and labor costs are reduced.
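The information-introduction lookup can be sketched as a simple mapping from the recognized topic to a preset vehicle information base. The database contents, function name, and fallback message here are placeholders, not the patent's actual data.

```python
# Hypothetical preset vehicle information base keyed by topic.
VEHICLE_INFO_DB = {
    "price configuration": "The price of the vehicle is between 88,000 and 128,000.",
    "highlights": "Main functional highlights: mobile phone key, automatic parking, ...",
}

def handle_introduction_request(topic):
    """Return the introduction text for a topic, or a fallback prompt
    when the topic is not in the information base."""
    return VEHICLE_INFO_DB.get(topic.lower(),
                               "Sorry, I have no information on that yet.")
```

The returned text would then be sent to both the voice output device and the image output device, as described above.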
In an optional embodiment, the vehicle interaction request may include a function display request. A module to be displayed may be determined according to the request, where the module to be displayed is any functional module mounted on the target vehicle; the module is then controlled to execute the function corresponding to the request. For example, when the target user says "open all windows", the module to be displayed can be determined to be the windows and the displayed content to be "open", and the windows are controlled to open fully for demonstration; when the target user says "turn on the air conditioner", the module is the air conditioner and the content is "turn on", and the air conditioner in the target vehicle is switched on for demonstration; when the target user says "turn on the windscreen wiper", the module is the windscreen wiper and the content is "turn on", and the wiper is switched on for demonstration. The modules to be displayed are not listed one by one and can be customized according to user requirements.
Therefore, related functions can be automatically displayed according to the requirements of the target user, the vehicle can be introduced more comprehensively, and the interaction experience of the target user is greatly improved.
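The function-display dispatch described above might be sketched as a keyword match from the recognized text to a table of module controls. The module names and the `controls` table are illustrative assumptions, not the patent's implementation.

```python
def handle_display_request(text, controls):
    """controls maps (module, action) keyword pairs to callables;
    invoke the first control whose keywords both appear in the text."""
    words = text.lower().split()
    for (module, action), fn in controls.items():
        if module in words and action in words:
            return fn()
    return None

log = []  # records which vehicle controls were triggered
controls = {
    ("windows", "open"): lambda: log.append("windows opened") or "windows opened",
    ("wiper", "open"): lambda: log.append("wiper on") or "wiper on",
}
result = handle_display_request("open all windows", controls)
```

In a real vehicle the callables would issue the actual actuation commands; here they only record the action taken.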
With the above method, voice data of the target user is first acquired; the voice data is then recognized to determine the target user's vehicle interaction request for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the request. Because the target user's voice can be recognized and the corresponding interaction performed, self-service vehicle sales can be achieved, saving manpower and greatly improving the target user's car-purchasing experience.
Another vehicle information interaction method in the embodiment of the present application is described below with reference to fig. 3, where fig. 3 is a schematic flow chart of another vehicle information interaction method provided in the embodiment of the present application, and specifically includes the following steps:
step 301, acquiring voice data of a target user.
Step 302, recognizing the voice data, and determining a vehicle interaction request of the target user for the target vehicle.
And step 303, controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
Step 304, determining vehicle attention data of the target user according to the vehicle interaction request.
After the interaction with the target user is completed, a plurality of attention-degree tags may be generated from the content of the vehicle interaction requests, and the vehicle attention data may include all of these tags. For example, if the target user first asks about price and then asks about functions such as air-conditioning performance and acceleration, it can be inferred automatically that the target user pays high attention to cost performance, and the "cost performance" tag can be ranked first in the vehicle attention data. It can be understood that the vehicle attention data reflects, to a certain extent, the target user's preferences regarding the vehicle: the weight of each aspect of vehicle information can be derived from the target user's first question, the length of time spent asking about a given aspect, content that is asked about repeatedly, and so on. Making subsequent recommendations based on these weights can greatly improve recommendation accuracy.
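One way to combine the signals named above (first question, dwell time, repeated questions) into ranked attention tags is sketched below. The specific weighting formula is an assumption for illustration; the description does not fix an exact formula.

```python
from collections import defaultdict

def vehicle_attention_data(requests):
    """Rank attention-degree tags from a user's interaction history.

    `requests` is a list of (topic, duration_seconds) pairs in the
    order the target user asked about them.
    """
    weights = defaultdict(float)
    for i, (topic, duration) in enumerate(requests):
        weights[topic] += duration  # longer questioning -> more weight
        weights[topic] += 1.0       # each repeated question adds weight
        if i == 0:
            weights[topic] += 5.0   # the first question carries extra weight
    # Tags sorted by descending weight; the top tag leads the data.
    return sorted(weights, key=weights.get, reverse=True)

# The user asks about price first, returns to it later, and also asks
# about the air conditioner and acceleration:
tags = vehicle_attention_data([
    ("price", 30), ("air_conditioner", 10), ("acceleration", 12), ("price", 8),
])
```

With these hypothetical weights, "price" ranks first because it was both the opening question and asked about twice.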
Step 305, pushing recommended vehicle data to a target device according to the vehicle attention data.
Here, the target device is a device bound to the target user.
The recommended vehicle data may include information such as vehicle model, price, and functions. When the vehicle attention data shows that the tag the target user focuses on is "high performance", vehicle information related to high performance is pushed to the target device corresponding to the target user; when the tag the target user focuses on is "top configuration", vehicle information on the top-configuration variants is recommended to the target user. Further examples are not listed here.
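A minimal sketch of this push step follows. The catalog entries, tag names, and the `send_to_device` channel are hypothetical placeholders; the disclosure does not specify a catalog format or push protocol.

```python
# Hypothetical catalog keyed by attention-degree tag.
CATALOG = {
    "high performance": [{"model": "sport trim", "price": 300000}],
    "top configuration": [{"model": "flagship trim", "price": 350000}],
}

def push_recommendations(attention_tags, send_to_device):
    """Push vehicle data matching the user's highest-ranked tag.

    `attention_tags` is ordered by descending attention; if no catalog
    entry matches the top tag, we fall through to the next one.
    """
    for tag in attention_tags:
        vehicles = CATALOG.get(tag)
        if vehicles:
            return send_to_device(vehicles)
    return send_to_device([])  # nothing matched any tag

# Usage: the push channel is stubbed out with an identity function.
sent = push_recommendations(["high performance"], send_to_device=lambda v: v)
```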
In this method, voice data of a target user is first acquired; the voice data is then recognized to determine a vehicle interaction request of the target user for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the vehicle interaction request. Because the target user's voice can be recognized and responded to, the vehicle can effectively sell itself, saving manpower and greatly improving the target user's car-buying experience. At the same time, vehicle data matching the target user's requirements is pushed to the target user automatically, which further improves the interaction experience and reduces sales-promotion costs.
For parts not described in detail above, reference may be made to all or part of the method in fig. 2, which is not repeated here.
The solutions of the embodiments of the present application have been described above mainly from the perspective of the method-side implementation. It can be understood that, in order to realize the above functions, the vehicle-mounted device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments of the present application, functional units may be divided according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division by logical function; other division manners are possible in actual implementation.
In the case where functional modules are divided corresponding to respective functions, fig. 4 shows a block diagram of the functional units of a vehicle information interaction device according to the above embodiment. The vehicle information interaction device 400 includes:
a voice acquiring unit 410, configured to acquire voice data of a target user;
a voice recognition unit 420, configured to recognize the voice data, and determine a vehicle interaction request of the target user for a target vehicle;
an interaction control unit 430, configured to control the target vehicle to perform an operation indicated by the vehicle interaction request.
In this device, voice data of a target user is first acquired; the voice data is then recognized to determine a vehicle interaction request of the target user for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the vehicle interaction request. Because the target user's voice can be recognized and responded to, automatic interaction between the vehicle and the user becomes more complete, saving manpower while greatly improving the target user's interaction experience. For example, the vehicle can sell itself, saving manpower while greatly improving the target user's car-buying experience.
In the case where an integrated unit is used, fig. 5 shows a block diagram of the functional units of a vehicle information interaction device according to the above embodiment. As shown in fig. 5, the vehicle information interaction device 500 includes a processing unit 501, a communication unit 502, and a storage unit 503. The processing unit 501 is configured to execute any step of the above method embodiments and, when performing data transmission such as sending, may optionally invoke the communication unit 502 to complete the corresponding operation. The storage unit 503 is configured to store program codes and data of the electronic device.
The processing unit 501 may be a central processing unit, the communication unit 502 may be a radio frequency module, and the storage unit 503 may be a memory.
In this device, voice data of a target user is first acquired; the voice data is then recognized to determine a vehicle interaction request of the target user for a target vehicle; finally, the target vehicle is controlled to execute the operation indicated by the vehicle interaction request. Because the target user's voice can be recognized and responded to, automatic interaction between the vehicle and the user becomes more complete, saving manpower while greatly improving the target user's interaction experience. For example, the vehicle can sell itself, saving manpower while greatly improving the target user's car-buying experience.
It can be understood that, since the method embodiments and the apparatus embodiments are different presentations of the same technical concept, the content of the method embodiments in the present application applies correspondingly to the apparatus embodiments and is not repeated here.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of units described above is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program codes, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing embodiments have been described in detail to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A vehicle information interaction method is characterized by comprising the following steps:
acquiring voice data of a target user;
recognizing the voice data, and determining a vehicle interaction request of the target user for a target vehicle;
and controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
2. The method of claim 1, wherein the recognizing the voice data and determining a vehicle interaction request of the target user for a target vehicle comprises:
preprocessing the voice data to obtain preprocessed voice data;
determining voice characteristics according to the preprocessed voice data;
and inputting the voice characteristics into a voice recognition model for voice recognition to obtain a vehicle interaction request.
3. The method of claim 2, wherein the voice recognition model comprises an acoustic model and a language model; and the inputting the voice characteristics into the voice recognition model for voice recognition to obtain the vehicle interaction request comprises:
inputting the voice features into the acoustic model, and obtaining phoneme features according to the output of the acoustic model;
inputting the phoneme characteristics into the language model, and obtaining a semantic text according to the output of the language model;
and determining the vehicle interaction request according to the semantic text.
4. The method of claim 1, wherein the vehicle interaction request comprises an information introduction request; and the controlling the target vehicle to perform the operation indicated by the vehicle interaction request comprises:
selecting to-be-introduced vehicle information corresponding to the information introduction request from a preset vehicle information database;
and controlling a voice output device and an image output device of the target vehicle to output the vehicle information to be introduced to the target user.
5. The method of claim 1, wherein the vehicle interaction request comprises a function display request; and the controlling the target vehicle to perform the operation indicated by the vehicle interaction request comprises:
determining a module to be displayed according to the function display request, wherein the module to be displayed is any function module of the target vehicle;
and controlling the module to be displayed to execute the function corresponding to the function display request.
6. The method of claim 1, wherein after the controlling the target vehicle to perform the operation indicated by the vehicle interaction request, the method further comprises:
determining vehicle attention data of the target user according to the vehicle interaction request;
and pushing recommended vehicle data to target equipment according to the vehicle attention data, wherein the target equipment is equipment bound with the target user.
7. The method of claim 6, wherein the voice data comprises a voice intensity; and the acquiring the voice data of the target user comprises:
determining the position of the target user according to the voice intensity, wherein the position is the interior of the target vehicle or the exterior of the target vehicle;
when the target user is located inside the target vehicle, acquiring the voice data through vehicle-mounted equipment on the target vehicle;
and when the target user is positioned outside the target vehicle, acquiring the voice data through the target device.
8. A vehicle information interaction device, characterized by comprising:
the voice acquisition unit is used for acquiring voice data of a target user;
the voice recognition unit is used for recognizing the voice data and determining a vehicle interaction request of the target user for a target vehicle;
and the interaction control unit is used for controlling the target vehicle to execute the operation indicated by the vehicle interaction request.
9. An in-vehicle device comprising a processor, a memory, and one or more programs stored in the memory and configured for execution by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-7.
CN202011373553.3A 2020-11-30 2020-11-30 Vehicle information interaction method and related device Pending CN114582335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011373553.3A CN114582335A (en) 2020-11-30 2020-11-30 Vehicle information interaction method and related device


Publications (1)

Publication Number Publication Date
CN114582335A true CN114582335A (en) 2022-06-03

Family

ID=81766821




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination