CN117672209A - Voice interaction method based on vehicle, electronic equipment and storage medium - Google Patents
- Publication number
- CN117672209A (application number CN202211063149.5A)
- Authority
- CN
- China
- Prior art keywords
- user
- information
- vehicle
- voice interaction
- identification information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Abstract
An embodiment of the invention provides a vehicle-based voice interaction method applied to a server, comprising the following steps: acquiring a voice interaction instruction from a vehicle-mounted device, wherein the voice interaction instruction comprises user identification information of a first user; matching the user identification information of the first user to obtain voiceprint data of a corresponding second user, or matching the user identification information of the first user together with current date information to obtain the voiceprint data of the second user; and returning a voice interaction result to the vehicle-mounted device according to the voiceprint data, so that the vehicle-mounted device plays the voice interaction result. Because the voice interaction result is returned according to the voiceprint data of the second user matched to the first user, the method avoids the situation where a user instruction can only be answered with the default voiceprint data of the intelligent vehicle system, enriches the voiceprint data available to the system, and improves the voice interaction experience between the intelligent vehicle system and the first user.
Description
Technical Field
The present invention relates to the field of voice interaction technologies, and in particular, to a vehicle-based voice interaction method, an electronic device, and a storage medium.
Background
As people's living standards improve, automobiles have become increasingly popular and are now an important means of transportation. At the same time, the degree of intelligence in automobiles has correspondingly increased.
Currently, most vehicles are equipped with an intelligent vehicle-mounted system. The driver or a passenger may interact with the system by voice to perform certain specific functions, such as querying the weather, checking road conditions, or navigating a route.
However, after receiving a user instruction, a conventional intelligent vehicle-mounted system responds to it using only its own default voiceprint data. As a result, the voice interaction experience between the intelligent vehicle-mounted system and the user is not ideal.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a vehicle-based voice interaction method, an electronic device, and a storage medium, so as to improve the voice interaction experience.
In one aspect of the embodiment of the invention, a vehicle-based voice interaction method applied to a server is provided, comprising the following steps: acquiring a voice interaction instruction from a vehicle-mounted device, wherein the voice interaction instruction comprises user identification information of a first user; matching the user identification information of the first user to obtain voiceprint data of a corresponding second user, or matching the user identification information of the first user together with current date information to obtain the voiceprint data of the second user; and returning a voice interaction result to the vehicle-mounted device according to the voiceprint data, so that the vehicle-mounted device plays the voice interaction result.
In another aspect of an embodiment of the present invention, there is provided an electronic device including: one or more processors; and one or more computer-readable storage media having instructions stored thereon; the instructions, when executed by the one or more processors, cause the processors to perform the vehicle-based voice interaction method as described above.
In yet another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a vehicle-based voice interaction method as described above.
In the embodiment of the invention, a vehicle-based voice interaction scheme is provided, which can be applied to a server. In the scheme, a voice interaction instruction from the vehicle-mounted device is acquired, where the voice interaction instruction may contain user identification information of a first user. Voiceprint data of a corresponding second user is obtained by matching according to the user identification information of the first user, or by matching according to the user identification information of the first user together with the current date information. A voice interaction result is then returned to the vehicle-mounted device according to the voiceprint data of the second user, so that the vehicle-mounted device plays the voice interaction result.
In the embodiment of the invention, on one hand, the first user can be matched to the voiceprint data of a corresponding second user according to the user identification information alone; on the other hand, the match can be made according to the user identification information together with the current date information. Finally, the voice interaction result is returned according to the voiceprint data of the second user. Because the voice interaction result is returned according to the voiceprint data of the second user matched to the first user, the scheme avoids the situation where a user instruction can only be answered with the default voiceprint data of the intelligent vehicle system, enriches the voiceprint data available to the system, and improves the voice interaction experience between the intelligent vehicle system and the first user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of steps of a vehicle-based voice interaction method according to an embodiment of the present invention.
Fig. 2 is a flow chart of an intelligent voice interaction scheme for an occupant in a vehicle according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the embodiments of the present invention.
Referring to fig. 1, a flowchart of steps of a vehicle-based voice interaction method according to an embodiment of the present invention is shown. The voice interaction method can be applied to a server. The server can be a voice interaction server or an intelligent vehicle-mounted system server and the like, and the embodiment of the invention does not limit the model, configuration, parameters, network and the like of the server. The voice interaction method may comprise the steps of:
step 101, acquiring a voice interaction instruction from the vehicle-mounted equipment.
In the embodiment of the invention, the vehicle-mounted device can be deployed or installed in a vehicle and can be communicatively connected with the server. In practical application, the vehicle-mounted device may receive a voice interaction instruction from a user. For example, the vehicle-mounted device may be an in-vehicle navigation terminal or the like, and the user may be a driver or a passenger. When the user speaks in the vehicle, the vehicle-mounted device can collect the user's voice data and recognize the voice interaction instruction from it. The voice interaction instruction may contain user identification information of the first user, where the first user may be a driver or a passenger. Since the embodiment of the present invention involves a plurality of users, they are referred to as the first user, the second user, and so on, to distinguish them. The user identification information serves as a user's unique identifier and may be letters, numbers, symbols, a character string, or the like.
Step 102, matching according to the user identification information of the first user to obtain the corresponding voiceprint data of the second user, or matching according to the user identification information of the first user and the current date information to obtain the voiceprint data of the second user.
In the embodiment of the invention, after receiving the voice interaction instruction from the vehicle-mounted device, the server needs to match corresponding voiceprint data for the first user. Voiceprint data is the acoustic spectrum data carrying speech information, as displayed by an electro-acoustic instrument. Voiceprint data is not only specific to each person but also relatively stable: each individual's voiceprint remains largely unchanged over a long period. Experiments have shown that a speaker's voiceprint data stays the same whether the speaker deliberately imitates another person's voice and tone or speaks in a whisper, even when the imitation is remarkably lifelike.
The voiceprint data matched to the first user may be the voiceprint data of a second user; accordingly, there may be a correspondence between the first user and the second user. The embodiment of the invention can match the first user to the voiceprint data of the second user in at least two ways: in one way, the voiceprint data of the second user is obtained by matching according to the user identification information of the first user; in the other, it is obtained by matching according to the user identification information of the first user together with the current date information.
And step 103, returning a voice interaction result to the vehicle-mounted equipment according to the voiceprint data so that the vehicle-mounted equipment plays the voice interaction result.
In the embodiment of the invention, after the voiceprint data of the second user is obtained by matching, a voice interaction result can be generated according to the voiceprint data of the second user, and then the voice interaction result is returned to the vehicle-mounted equipment. For example, if the voiceprint data of the second user is voiceprint data of a child, a voice interaction result can be generated according to the voiceprint data of the child, so as to simulate the child to make a voice corresponding to the voice interaction result. After the vehicle-mounted device receives the voice interaction result, the voice interaction result can be played. The voice interaction result can correspond to the voice interaction instruction. For example, if the voice interaction instruction is question 01, the voice interaction result is an answer to question 01.
In the embodiment of the invention, a vehicle-based voice interaction scheme is provided, which can be applied to a server. In the scheme, a voice interaction instruction from the vehicle-mounted device is acquired, where the voice interaction instruction may contain user identification information of a first user. Voiceprint data of a corresponding second user is obtained by matching according to the user identification information of the first user, or by matching according to the user identification information of the first user together with the current date information. A voice interaction result is then returned to the vehicle-mounted device according to the voiceprint data of the second user, so that the vehicle-mounted device plays the voice interaction result.
In the embodiment of the invention, on one hand, the first user can be matched to the voiceprint data of a corresponding second user according to the user identification information alone; on the other hand, the match can be made according to the user identification information together with the current date information. Finally, the voice interaction result is returned according to the voiceprint data of the second user. Because the voice interaction result is returned according to the voiceprint data of the second user matched to the first user, the scheme avoids the situation where a user instruction can only be answered with the default voiceprint data of the intelligent vehicle system, enriches the voiceprint data available to the system, and improves the voice interaction experience between the intelligent vehicle system and the first user.
In an exemplary embodiment of the present invention, an implementation manner of matching the user identification information of the first user to obtain the corresponding voiceprint data of the second user is that, according to the user identification information of the first user and a correspondence between the user identification information of the first user and the user identification information of the second user, the user identification information of the second user is obtained by matching; and reading the voiceprint data of the second user from a preset voiceprint database according to the user identification information of the second user.
In the server, a correspondence between the user identification information of the first user and that of the second user may be preset, and a voiceprint database may be created in advance. The voiceprint database may include the user identification information and voiceprint data of at least one user; for example, it may contain the user identification information t01 and voiceprint data v01 of user01, and the user identification information t02 and voiceprint data v02 of user02. It should be noted that the correspondence between the user identification information of the first user and that of the second user can be understood as one-to-one: the user identification information of the first user corresponds only to that of the second user, so a unique user's identification information, namely that of the second user, can be matched according to the correspondence.
In an exemplary embodiment of the present invention, an implementation manner of obtaining voiceprint data of a second user according to matching between user identification information of a first user and current date information is to detect whether the current date information is located in a preset date database; if the current date information is positioned in the date database, acquiring associated user information of the current date information; matching in a preset user database according to the associated user information to obtain user identification information of the second user; and reading the voiceprint data of the second user from the voiceprint database according to the user identification information of the second user.
In the server, a date database, a user database, and a voiceprint database may be preset. The date database may contain certain specific date information, for example holiday information or anniversary information. In addition, the date database may contain associated user information corresponding to certain specific dates: for example, if the holiday is Women's Day, the associated user information is women's information; if the holiday is Children's Day (June 1), the associated user information is information on children under 14 years old. The user database may contain the user identification information and identity information of users having a correspondence with the user identification information of the first user. For example, the user database may contain the user identification information t02 of user02, which has a correspondence with the user identification information t01 of user01, together with the identity information of user02. The identity information may include, but is not limited to, age information, gender information, and the like.
In practical application, if the current date information is Children's Day information, the server detects whether Children's Day is in the date database; if it is, the associated user information of Children's Day, namely children's information, is acquired. The user identification information t02 of user02 is then obtained by matching the children's information in the user database, because the identity information of user02 indicates a child under 14 years old. Finally, the voiceprint data v02 of user02 is read from the voiceprint database.
In an exemplary embodiment of the present invention, one implementation of obtaining the associated user information of the current date information is to read the associated user information corresponding to the current date information from the date database. For example, when the current date information is Children's Day information, the associated user information read from the date database is children's information.
In an exemplary embodiment of the present invention, another implementation of acquiring the associated user information of the current date information is to derive it from the date attribute of the current date information. For example, when the current date information is Double Ninth Festival information and the date attribute of that festival is an elderly attribute, the associated user information acquired according to the elderly attribute is elderly information.
In an exemplary embodiment of the present invention, one implementation of the creation process of the user database is to obtain the number of times the first user has logged in to the server through the vehicle-mounted device; if the login count is greater than a login threshold, the first user is set as the vehicle owner, and the user identification information and identity information of the first user are stored in the user database. In practical application, the first user can log in to the server by scanning, with an application on a mobile terminal, the two-dimensional code provided by the vehicle-mounted device, and may input user identity information in the application. If the first user logs in to the server for the first time, the server may allocate user identification information to the first user. For example, when user01 has logged in to the server 15 times and the preset login threshold is 12, the login count exceeds the threshold, so user01 is set as the vehicle owner, and the user identification information t01 and identity information s01 of user01 are stored in the user database. Furthermore, user01 may also be recorded in the user database as the vehicle owner.
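The owner-promotion rule above can be sketched as follows; the threshold value, the `user_db` dictionary, and the function name are hypothetical, chosen only to mirror the worked example (15 logins against a threshold of 12).

```python
# Hypothetical sketch: promote a user to vehicle owner once their login
# count exceeds a preset threshold, then store them in the user database.
LOGIN_THRESHOLD = 12
user_db: dict[str, dict] = {}

def record_login(user_id: str, identity: str, login_count: int) -> bool:
    """Store the user; return True if they qualify as the vehicle owner."""
    is_owner = login_count > LOGIN_THRESHOLD
    user_db[user_id] = {"identity": identity, "is_owner": is_owner}
    return is_owner
```

With the example values, `record_login("t01", "s01", 15)` marks user01 as the owner, while a user with only a few logins is stored without the owner flag.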
In an exemplary embodiment of the present invention, another implementation of the creation process of the user database is that, after any user logs in to the server through the vehicle-mounted device, the server stores the user identification information and identity information of that user in the user database. In this case, the server does not restrict how many times any user may log in, but it may still count each user's logins in order to determine whether the user is the vehicle owner.
In an exemplary embodiment of the present invention, the vehicle-mounted device may also collect voice interaction data between a third user and the first user and transmit it to the server. The server may identify interaction keywords from the voice interaction data. If the frequency of an interaction keyword is greater than an interaction threshold, association relationship information is added for the third user according to that keyword, and the identity information and association relationship information of the third user are then stored in the user database.
In practical application, the third user may log in to the server through the vehicle-mounted device. For example, user03 logs in to the server, and interaction keywords that are terms of endearment between spouses, such as "dear" or "honey", are often identified from the voice interaction data between user03 and user01. When the frequency of such interaction keywords is greater than a certain interaction threshold, association relationship information can be added for user03 accordingly. The association relationship information indicates the identity relationship between user03 and user01; in this case, it is a couple relationship or a lover relationship. If the frequency of the identified interaction keyword "son" is greater than another interaction threshold, the association relationship information added for user03 according to that keyword can be a father-son or mother-son relationship. Besides couple, lover, father-son, and mother-son relationships, the association relationship information may also include father-daughter relationships, mother-daughter relationships, and the like.
In an exemplary embodiment of the present invention, in addition to the voice interaction data of the third user and the first user, image data of the third user may be acquired by an image acquisition device such as an in-vehicle camera. One way of adding association relationship information for the third user according to the interaction keywords is to add it according to both the image data and the keywords. In practical application, the gender information of the third user can be identified from the image data, and several candidate association relationships can be generated for an interaction keyword; the association relationship is then screened out from the candidates according to the gender information and added for the third user. For example, if the gender of user03 is identified as female from the image data, the candidate relationships generated for the interaction keyword "son" are a father-son relationship and a mother-son relationship. The mother-son relationship, which corresponds to the female gender information, is screened out from the candidates and added for user03.
That is, the server may determine not only which user is the vehicle owner, but also the identity of a user relative to the vehicle owner. For example, the server may determine a user's relationship to the vehicle owner based on the number of times the user has logged in to the server and the voice interaction data between the user and the vehicle owner.
In an exemplary embodiment of the present invention, the server may acquire the voice data of each user from the vehicle-mounted device and extract corresponding voiceprint data for each user, storing it in the voiceprint database. In addition, the vehicle owner may log in to the server and select a default user in the user database, so that voice interaction results are returned using that default user's voiceprint data. For example, the vehicle owner may choose to have voice interaction results returned by default using his wife's voiceprint data.
Based on the above description of an embodiment of a vehicle-based voice interaction method, an intelligent voice interaction scheme for an occupant in a vehicle is described below. Referring to fig. 2, a flow chart of an intelligent voice interaction scheme for an occupant in a vehicle according to an embodiment of the present invention is shown.
The intelligent voice interaction scheme can involve a user, a vehicle-mounted device, and a server. The user sends a voice wake-up instruction, "Hey, Xiaobai", to the vehicle-mounted device. The vehicle-mounted device receives the wake-up instruction and uploads it to the server. The server parses the wake-up instruction, and if the parsed wake-up keyword is the same as the preset keyword, the wake-up is considered successful. The server then determines whether the current date is a holiday; if the current date is Children's Day, it returns a voice response to the vehicle-mounted device using the preset voiceprint data of the child, and the vehicle-mounted device plays the voice response "Hello, Dad". Next, the user issues a voice query instruction, "How is the weather today?", to the vehicle-mounted device, which receives it and uploads it to the server. The server parses the query instruction to obtain a query keyword and searches for related query results. The server may also determine again whether the current date is a holiday; if it is still Children's Day, the query result is returned to the vehicle-mounted device using the child's voiceprint data, and the vehicle-mounted device plays the query result "The weather today is clear".
In the embodiment of the invention, the sound data of a plurality of users can be uploaded to the server in advance. For example, the sound data of the mother, the child, the grandpa, the grandma, and so on is uploaded to the server. The server analyzes the sound data to obtain the voiceprint data of each user and stores it in the voiceprint database. The vehicle owner can log in to the server and set which user's voiceprint data is used to answer voice instructions in certain specific scenarios. For example, when the vehicle owner (the father) logs in to the server through the vehicle-mounted device, voice instructions are answered by default using the mother's voiceprint data. For another example, when the vehicle owner (the father) logs in to the server through the vehicle-mounted device and the current date is Children's Day, voice instructions are answered by default using the child's voiceprint data.
The embodiment of the invention provides a personalized voice interaction mode for occupants of the vehicle (including the driver and passengers). On the one hand, the voiceprint data of a certain user can be selected for voice interaction according to a preset correspondence between users; on the other hand, the voiceprint data of a certain user can be selected for voice interaction according to the current date information and the correspondence, thereby improving the voice interaction experience.
In an embodiment of the invention, an electronic device is also provided. The electronic device may include one or more processors and one or more computer-readable storage media having instructions stored thereon, such as an application program. The instructions, when executed by the one or more processors, cause the one or more processors to perform the vehicle-based voice interaction method of any of the embodiments described above.
Fig. 3 shows a schematic structural diagram of an electronic device 300 according to an embodiment of the invention. As shown in Fig. 3, the electronic device 300 includes a Central Processing Unit (CPU) 301 that can perform various suitable actions and processes in accordance with computer program instructions stored in a Read-Only Memory (ROM) 302 or loaded from a storage unit 308 into a Random Access Memory (RAM) 303. The RAM 303 may also store various programs and data required for the operation of the electronic device 300. The CPU 301, the ROM 302, and the RAM 303 are connected to one another through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Various components in the electronic device 300 are connected to the I/O interface 305, including: an input unit 306 such as a keyboard, mouse, microphone, etc.; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, an optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the electronic device 300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes and operations described above may be performed by the CPU 301. For example, the method of any of the embodiments described above may be implemented as a computer software program tangibly embodied on a computer-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into the RAM 303 and executed by the CPU 301, one or more steps of the methods described above may be performed.
In an embodiment of the present invention, there is also provided a non-transitory computer-readable storage medium having stored thereon a computer program executable by a processor of an electronic device to perform the vehicle-based voice interaction method of any of the above embodiments. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts among the embodiments, reference may be made to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The foregoing describes in detail a vehicle-based voice interaction method, an electronic device, and a storage medium provided by the present invention. Specific examples are used herein to illustrate the principles and embodiments of the present invention; the above examples are only intended to help understand the method and core ideas of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the ideas of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.
Claims (10)
1. A vehicle-based voice interaction method, characterized by being applied to a server, the method comprising:
acquiring a voice interaction instruction from vehicle-mounted equipment, wherein the voice interaction instruction comprises user identification information of a first user;
matching, according to the user identification information of the first user, to obtain voiceprint data of a corresponding second user, or matching, according to the user identification information of the first user and current date information, to obtain voiceprint data of the second user;
and returning a voice interaction result to the vehicle-mounted equipment according to the voiceprint data so that the vehicle-mounted equipment plays the voice interaction result.
2. The method according to claim 1, wherein the matching the voiceprint data of the corresponding second user according to the user identification information of the first user includes:
matching to obtain the user identification information of the second user according to the user identification information of the first user and a correspondence between the user identification information of the first user and the user identification information of the second user;
and reading out the voiceprint data of the second user from a preset voiceprint database according to the user identification information of the second user.
3. The method according to claim 2, wherein the matching the voiceprint data of the second user according to the user identification information and the current date information of the first user includes:
detecting whether the current date information is present in a preset date database;
if the current date information is present in the date database, acquiring associated user information of the current date information;
according to the associated user information, matching in a preset user database to obtain user identification information of the second user;
and reading out the voiceprint data of the second user from the voiceprint database according to the user identification information of the second user.
4. A method according to claim 3, wherein said obtaining associated user information for said current date information comprises:
reading the associated user information corresponding to the current date information from the date database, or acquiring the associated user information according to the date attribute of the current date information;
wherein the associated user information comprises at least one of: gender information and age information.
5. A method according to claim 3, characterized in that the method further comprises:
acquiring login times of the first user logging in the server through the vehicle-mounted equipment;
if the login times are larger than a login threshold value, setting the first user as a vehicle owner;
and storing the user identification information and the identity information of the first user into the user database.
6. A method according to claim 3, characterized in that the method further comprises:
acquiring voice interaction data of a third user and the first user;
identifying interaction keywords from the voice interaction data;
if the frequency of identifying the interaction keywords from the voice interaction data is greater than an interaction threshold value, adding association relation information for the third user according to the interaction keywords;
and storing the identity information and the association relation information of the third user into the user database.
7. The method of claim 6, wherein the method further comprises:
acquiring image data of the third user;
the adding association relation information for the third user according to the interaction keywords comprises the following steps:
and adding the association relation information for the third user according to the image data and the interaction keywords.
8. The method of claim 7, wherein the adding the association information for the third user according to the image data and the interaction keyword comprises:
identifying gender information of the third user from the image data;
generating a plurality of candidate association relation information of the interaction keywords;
screening out the association relation information from a plurality of candidate association relation information according to the gender information;
adding the association relation information for the third user;
wherein the association relation information comprises: a father-son relationship, a mother-son relationship, a father-daughter relationship, a mother-daughter relationship, a couple relationship, or a lover relationship.
9. An electronic device, comprising:
one or more processors; and
one or more computer-readable storage media having instructions stored thereon;
the instructions, when executed by the one or more processors, cause the processor to perform the vehicle-based voice interaction method of any of claims 1-8.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which program, when being executed by a processor, implements the vehicle-based voice interaction method according to any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211063149.5A CN117672209A (en) | 2022-09-01 | 2022-09-01 | Voice interaction method based on vehicle, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117672209A true CN117672209A (en) | 2024-03-08 |
Family
ID=90068766
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211063149.5A Pending CN117672209A (en) | 2022-09-01 | 2022-09-01 | Voice interaction method based on vehicle, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117672209A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109785828B (en) | Natural language generation based on user speech styles | |
US9809185B2 (en) | Method and apparatus for subjective command control of vehicle systems | |
CN106816149A (en) | The priorization content loading of vehicle automatic speech recognition system | |
JP2019191828A (en) | Information provision system and information provision method | |
DE102018125966A1 (en) | SYSTEM AND METHOD FOR RECORDING KEYWORDS IN A ENTERTAINMENT | |
DE102019105269A1 (en) | VOICE RECOGNITION arbitration logic | |
CN107705793B (en) | Information pushing method, system and equipment based on voiceprint recognition | |
CN111261151B (en) | Voice processing method and device, electronic equipment and storage medium | |
CN109302486B (en) | Method and system for pushing music according to environment in vehicle | |
CN110648671A (en) | Voiceprint model reconstruction method, terminal, device and readable storage medium | |
CN111161742A (en) | Directional person communication method, system, storage medium and intelligent voice device | |
US20200365139A1 (en) | Information processing apparatus, information processing system, and information processing method, and program | |
CN105869631B (en) | The method and apparatus of voice prediction | |
CN117672209A (en) | Voice interaction method based on vehicle, electronic equipment and storage medium | |
CN106686267A (en) | Method and system for implementing personalized voice service | |
CN113409797A (en) | Voice processing method and system, and voice interaction device and method | |
CN116259320A (en) | Voice-based vehicle control method and device, storage medium and electronic device | |
CN111444377A (en) | Voiceprint identification authentication method, device and equipment | |
CN113763920B (en) | Air conditioner, voice generating method thereof, voice generating device and readable storage medium | |
CN115447588A (en) | Vehicle control method and device, vehicle and storage medium | |
CN117275522A (en) | Voice interaction method, device, equipment, storage medium and vehicle | |
US20180314979A1 (en) | Systems and methods for processing radio data system feeds | |
CN116105307A (en) | Air conditioner control method, device, electronic equipment and storage medium | |
CN112102854A (en) | Recording filtering method and device and computer readable storage medium | |
US20200050742A1 (en) | Personal identification apparatus and personal identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||