CN116645964A - Control method and device for vehicle voice zone, storage medium and electronic device - Google Patents


Info

Publication number: CN116645964A
Application number: CN202310622735.7A
Authority: CN (China)
Prior art keywords: voice, zone, control, vehicle, text information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 杨林举
Assignee (current and original): Chongqing Changan Automobile Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Chongqing Changan Automobile Co Ltd

Classifications

    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/16 — Speech classification or search using artificial neural networks
    • G10L15/1822 — Parsing for meaning understanding
    • G10L15/20 — Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise
    • G10L15/26 — Speech-to-text systems
    • G10L21/0272 — Voice signal separating
    • G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique using neural networks
    • G10L2015/223 — Execution procedure of a spoken command
    • Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The invention provides a control method and device for a vehicle voice zone, a storage medium, and an electronic device, belonging to the field of vehicle control. The method comprises: collecting a first voice signal in the vehicle cabin of a target vehicle, wherein the cabin comprises a plurality of voice zones, each corresponding to one physical space of the cabin; converting the first voice signal into text information; parsing the control intention category of the text information, wherein the category indicates the intention definition of the control command corresponding to the text information; and performing voice zone control on the target vehicle according to the control intention category. Embodiments of the invention solve the technical problem in the related art that in-vehicle voice interaction is easily interfered with by other voice zones, improve the personalization of in-vehicle voice interaction and the flexibility of voice command recognition, protect user privacy, and bring users a smarter, more pleasant, and safer driving experience.

Description

Control method and device for vehicle voice zone, storage medium and electronic device
Technical Field
The invention relates to the field of vehicle control, in particular to a method and a device for controlling a vehicle voice zone, a storage medium and an electronic device.
Background
In the related art, voice interaction services in the vehicle cabin allow occupants to perform navigation, booking services, vehicle control, and so on by voice. Sound-separation technology divides the cabin space into multiple voice zones, and occupants in different zones can carry out voice interaction independently. In practice, there are cases where only one or more specified zones should be allowed to interact, for example when setting a navigation destination, paying, or accessing private information; if voices from several zones operate the same task simultaneously, interference or leakage of private information may result. Likewise, a zone with no passenger does not need voice interaction service at all. As a result, the flexibility of vehicle voice interaction is poor.
In view of the above problems in the related art, no efficient and accurate solution has yet been found.
Disclosure of Invention
The invention provides a control method and device for a vehicle voice zone, a storage medium and an electronic device, and aims to solve the technical problems in the related art.
According to an embodiment of the present invention, there is provided a control method for a vehicle voice zone, including: collecting a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of voice zones, and each voice zone corresponds to one physical space of the vehicle cabin; converting the first voice signal into text information; parsing a control intention category of the text information, wherein the control intention category is used for indicating the intention definition of the control command corresponding to the text information; and performing voice zone control on the target vehicle according to the control intention category.
Further, parsing the control intention category of the text information includes: judging whether the text format of the text information is a preset explicit format or not; if the text format of the text information is a preset explicit format, determining that the text information is an explicit voice zone control command; if the text format of the text information is not a preset explicit format, judging whether the text format of the text information is an implicit format or not; if the text format of the text information is an implicit format, determining that the text information is an implicit voice zone control command; if the text format of the text information is not the implicit format, determining that the text information is not a voice zone control command.
Further, determining whether the text format of the text information is an implicit format includes: judging whether the text format of the text information is a preset implicit format or not; and/or judging whether the text format of the text information is an implicit format or not through a machine learning model; and/or judging whether the text format of the text information is an implicit format or not based on the deep neural network model.
Further, performing zone control on the target vehicle according to the control intention category includes: identifying a target voice zone and a target operation in the text information according to the control intention type, wherein the target operation comprises an opening operation or a closing operation; and executing the target operation on the target voice zone.
Further, performing the target operation on the target voice zone includes: if the target operation is an opening operation, connecting a data channel between the target voice zone and the voice interaction service; and if the target operation is a closing operation, disconnecting the data channel between the target voice zone and the voice interaction service.
Further, performing the target operation on the target voice zone includes: if the target operation is an opening operation, setting the position information and sound source enhancement control information corresponding to the physical space of the target voice zone in a voice zone separation module; and if the target operation is a closing operation, setting the position information and sound source suppression control information corresponding to the physical space of the target voice zone in the voice zone separation module.
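For illustration only, the two execution strategies described above (connecting/disconnecting the data channel, and setting enhancement/suppression in the separation module) can be sketched as follows. All names here (ZoneSeparationModule, VoiceZoneController, the zone numbering and position labels) are hypothetical, not taken from the disclosed implementation.

```python
class ZoneSeparationModule:
    """Holds per-zone position and control settings for sound separation."""

    def __init__(self):
        self.settings = {}  # zone_id -> (position_info, control_info)

    def configure(self, zone_id, position_info, control_info):
        self.settings[zone_id] = (position_info, control_info)


class VoiceZoneController:
    def __init__(self, separation_module, zone_positions):
        self.separation = separation_module
        self.positions = zone_positions  # zone_id -> physical position of zone
        self.channels_open = set()       # zones with a live data channel

    def execute(self, zone_id, operation):
        if operation == "open":
            # Strategy 1: connect the zone's data channel to the voice service.
            self.channels_open.add(zone_id)
            # Strategy 2: enhance the sound source at the zone's position.
            self.separation.configure(zone_id, self.positions[zone_id], "enhance")
        elif operation == "close":
            # Mirror image: disconnect the channel and suppress the source.
            self.channels_open.discard(zone_id)
            self.separation.configure(zone_id, self.positions[zone_id], "suppress")


ctrl = VoiceZoneController(ZoneSeparationModule(),
                           {1: "front-left", 2: "front-right"})
ctrl.execute(2, "open")
ctrl.execute(2, "close")
```

In practice either strategy alone suffices; doing both, as here, keeps the channel state and the separation settings consistent with each other.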
Further, after the voice zone control is performed on the target vehicle according to the control intention category, the method further includes: collecting a second voice signal in the vehicle cabin of the target vehicle; performing sound separation on the second voice signal to obtain first audio data from a first voice zone and second audio data from a second voice zone; and upon determining that the first voice zone is in an on state and the second voice zone is in an off state in the vehicle cabin, outputting the first audio data and prohibiting output of the second audio data.
Further, after the voice zone control is performed on the target vehicle according to the control intention category, the method further includes: collecting a third voice signal in the vehicle cabin of the target vehicle; performing sound separation on the third voice signal to obtain third audio data from a third voice zone and fourth audio data from a fourth voice zone; and upon determining that the third voice zone is in an on state and the fourth voice zone is in an off state in the vehicle cabin, inputting the third audio data to the voice interaction service of the target vehicle and prohibiting the fourth audio data from being input to the voice interaction service.
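The post-control behavior in the two paragraphs above (gating the audio output, and gating the input to the voice interaction service) reduces to the same per-zone filter. A minimal sketch, assuming the separation stage yields a dict of per-zone sub-audio (an assumed interface, not the patent's):

```python
def route_separated_audio(separated, zone_states):
    """Split per-zone sub-audio according to each zone's on/off state.

    separated:   dict mapping zone_id -> sub-audio data from that zone
    zone_states: dict mapping zone_id -> True (on) / False (off)
    Returns (passed, blocked): audio from zones in the on state is passed on
    (to the output stage or to the voice interaction service); audio from
    zones in the off state is withheld.
    """
    passed = {z: a for z, a in separated.items() if zone_states.get(z, False)}
    blocked = {z: a for z, a in separated.items() if not zone_states.get(z, False)}
    return passed, blocked
```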
According to another embodiment of the present invention, there is provided a control device for a vehicle voice zone, including: a first acquisition module, configured to collect a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of voice zones, and each voice zone corresponds to one physical space of the vehicle cabin; a conversion module, configured to convert the first voice signal into text information; a parsing module, configured to parse the control intention category of the text information, wherein the control intention category is used for indicating the intention definition of the control command corresponding to the text information; and a first control module, configured to perform voice zone control on the target vehicle according to the control intention category.
Further, the parsing module includes: the judging unit is used for judging whether the text format of the text information is a preset explicit format or not; the first processing unit is used for determining that the text information is an explicit voice zone control command if the text format of the text information is a preset explicit format; if the text format of the text information is not a preset explicit format, judging whether the text format of the text information is an implicit format or not; the second processing unit is used for determining that the text information is an implicit voice zone control command if the text format of the text information is an implicit format; if the text format of the text information is not the implicit format, determining that the text information is not a voice zone control command.
Further, the first processing unit includes: the first judging subunit is used for judging whether the text format of the text information is a preset implicit format or not; and/or a second judging subunit, configured to judge whether the text format of the text information is an implicit format through a machine learning model; and/or a third judging subunit, configured to judge whether the text format of the text information is an implicit format based on the deep neural network model.
Further, the first control module includes: the identification unit is used for identifying a target voice zone and a target operation in the text information according to the control intention type, wherein the target operation comprises an opening operation or a closing operation; and the execution unit is used for executing the target operation on the target voice zone.
Further, the execution unit includes: a connecting subunit, configured to connect the data channel between the target voice zone and the voice interaction service if the target operation is an opening operation; and a disconnecting subunit, configured to disconnect the data channel between the target voice zone and the voice interaction service if the target operation is a closing operation.
Further, the execution unit includes: an enhancement subunit, configured to set the position information and sound source enhancement control information corresponding to the physical space of the target voice zone in a voice zone separation module if the target operation is an opening operation; and a suppression subunit, configured to set the position information and sound source suppression control information corresponding to the physical space of the target voice zone in the voice zone separation module if the target operation is a closing operation.
Further, the apparatus further comprises: a second acquisition module, configured to collect a second voice signal in the vehicle cabin of the target vehicle after the first control module performs voice zone control on the target vehicle according to the control intention category; a first separation module, configured to perform sound separation on the second voice signal to obtain first audio data from a first voice zone and second audio data from a second voice zone; and a second control module, configured to determine that the first voice zone is in an on state and the second voice zone is in an off state in the vehicle cabin, output the first audio data, and prohibit output of the second audio data.
Further, the apparatus further comprises: a third acquisition module, configured to collect a third voice signal in the vehicle cabin of the target vehicle after the first control module performs voice zone control on the target vehicle according to the control intention category; a second separation module, configured to perform sound separation on the third voice signal to obtain third audio data from a third voice zone and fourth audio data from a fourth voice zone; and a third control module, configured to determine that the third voice zone is in an on state and the fourth voice zone is in an off state in the vehicle cabin, input the third audio data to the voice interaction service of the target vehicle, and prohibit the fourth audio data from being input to the voice interaction service.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when run, performs the steps of the above method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is used for storing a computer program, and the processor is used for executing the steps of the above method by running the program stored in the memory.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the above method.
According to the embodiments of the present application, a first voice signal in the vehicle cabin of the target vehicle is collected, wherein the cabin comprises a plurality of voice zones, each corresponding to one physical space of the cabin; the first voice signal is converted into text information; the control intention category of the text information is parsed, wherein the category indicates the intention definition of the control command corresponding to the text information; and voice zone control is performed on the target vehicle according to the control intention category. By parsing the control intention category and controlling the voice zones accordingly, per-zone vehicle voice interaction is realized, and the vehicle's voice interaction service can be controlled zone by zone. This solves the technical problem in the related art that vehicle voice interaction is easily interfered with by other voice zones, improves the personalization of in-vehicle voice interaction and the flexibility of voice command recognition, protects user privacy, and brings users a smarter, more pleasant, and safer driving experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a hardware block diagram of a vehicle-mounted terminal according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of controlling a vehicle soundtrack in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of classifying the text obtained from a voice signal in accordance with an embodiment of the present application;
FIG. 4 is a flow chart of voice interaction before voice zone control in accordance with an embodiment of the present application;
FIG. 5 is a flow chart of voice interaction after voice zone control according to an embodiment of the present application;
FIG. 6 is a flow chart of an implementation of an embodiment of the present application;
fig. 7 is a block diagram of a control apparatus for a vehicle audio zone according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application. It should be noted that, without conflict, the embodiments of the present application and the features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method embodiment provided in the first embodiment of the present application may be executed in a vehicle-mounted terminal, a vehicle control module, a voice control module, or a similar processing device. Taking execution on the vehicle-mounted terminal as an example, fig. 1 is a hardware structure block diagram of the vehicle-mounted terminal according to an embodiment of the present application. As shown in fig. 1, the vehicle-mounted terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or the like) and a memory 104 for storing data, and optionally a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the vehicle-mounted terminal; for example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store software programs and modules, such as the program corresponding to the method for controlling a vehicle voice zone in an embodiment of the present invention; the processor 102 executes the program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, connected to the vehicle-mounted terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the in-vehicle terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for controlling a vehicle audio zone is provided, and fig. 2 is a flowchart of a method for controlling a vehicle audio zone according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
step S202, collecting a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of sound areas, and each sound area corresponds to one physical space of the vehicle cabin;
the sound zones of the embodiment may be divided according to the conditions of the vehicle space, type, number of seats, layout of seats, etc., for example, each sound zone corresponds to one seat of the vehicle, one sound zone of the seat in the longitudinal space where the seat is located is divided into 5 sound zones, for example, a 5-seat vehicle, or the arrangement mode of the seats may be arranged, and the front row and the rear row are respectively divided into one sound zone, for example, the vehicle with two rows of seats is divided into 2 sound zones. Of course, the layout situation of the in-vehicle microphone can be divided according to the layout situation of the in-vehicle microphone, and if the vehicle is respectively provided with 4 groups of microphones at 4 positions, the vehicle can be divided into 4 sound areas, and the physical space of each sound area contains the position of the corresponding microphone.
In this embodiment, the first voice signal may be audio data that has already undergone sound-separation processing, or raw audio data that has not. Speech recognition of the input audio may be performed by a recognizer on the vehicle-mounted terminal or by a recognizer on the Internet-of-Vehicles server side. Voice zone separation in this embodiment may be based on an acoustic signal-processing algorithm or on a neural-network algorithm; in either case, the zone that emitted a voice signal is determined by localizing the sound source. After zone separation is performed on the raw audio data, the mixed raw audio is separated into multiple streams of sub-audio data from different zones, each sub-audio stream corresponding to one voice zone.
Step S204, converting the first voice signal into text information;
step S206, analyzing the control intention type of the text information, wherein the control intention type is used for indicating the intention definition of the control command corresponding to the text information;
alternatively, according to the intention definition, the text information corresponding control command may be divided into an explicit voice zone control command and an implicit voice zone control command.
Step S208, the target vehicle is subjected to voice zone control according to the control intention type.
Optionally, when voice zone control is performed, a given voice zone may be turned on, a given voice zone may be turned off, and so on.
Through the above steps, a first voice signal in the vehicle cabin of the target vehicle is collected, wherein the cabin comprises a plurality of voice zones, each corresponding to one physical space of the cabin; the first voice signal is converted into text information; the control intention category of the text information is parsed, wherein the category indicates the intention definition of the control command corresponding to the text information; and voice zone control is performed on the target vehicle according to the control intention category. By parsing the control intention category and controlling the voice zones accordingly, per-zone vehicle voice interaction is realized. This solves the technical problem in the related art that vehicle voice interaction is easily interfered with by other voice zones, improves the personalization of in-vehicle voice interaction and the flexibility of voice command recognition, protects user privacy, and brings users a smarter, more pleasant, and safer driving experience.
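Steps S202 through S208 can be strung together as a minimal pipeline sketch. The transcriber, intent parser, and controller here are stand-in callables, assumed for illustration rather than the modules of the actual embodiment:

```python
def control_pipeline(signal, transcribe, parse_intent, controller):
    """S202: `signal` has already been collected from the cabin microphones.
    S204: convert it to text; S206: parse the control intention;
    S208: apply the resulting voice zone operations, if any."""
    text = transcribe(signal)
    intent = parse_intent(text)          # e.g. ("open", [2]) or None
    if intent is not None:
        operation, zone_ids = intent
        for zone_id in zone_ids:
            controller.execute(zone_id, operation)
    return intent
```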
In one implementation of the present embodiment, parsing the control intent category of the text information includes:
s11, judging whether the text format of the text information is a preset explicit format or not;
the preset explicit format of the present embodiment may be defined by a key field, where the key field may be any text field such as "voice zone", "open", "close", etc.
S12, if the text format of the text information is a preset explicit format, determining that the text information is an explicit voice zone control command; if the text format of the text information is not the preset explicit format, judging whether the text format of the text information is the implicit format or not;
explicit and implicit in this embodiment are the degrees of understanding of machine language instructions by the instruction execution body, explicit (Explicit) refers to the apparent and clear implementation of instructions, and relative to interfaces, explicit and clear specifies the implementation of interfaces, and Explicit and clear specifies the implementation of other logic. Implicit (materialit) refers to an implementation of course, and is regarded as an implementation of an interface as long as the method signature and return value of the implementation class agree with the interface definition, and is not explicitly (clearly, explicitly) specified, and can be further explicitly specified.
In some examples, explicit voice zone control commands are as follows: open the 2nd voice zone; close the 3rd voice zone; open the 3rd and 4th voice zones; etc.
In some examples, implicit voice zone control commands are as follows: open the front passenger voice zone; close the rear-row voice zones; no one is in the rear row; enter safe mode; etc. "Open the front passenger voice zone" indicates opening the one or more voice zones corresponding to the front passenger position; "close the rear-row voice zones" indicates closing the one or more voice zones corresponding to the rear-row positions; "no one is in the rear row" likewise indicates closing the one or more voice zones corresponding to the rear-row positions; "enter safe mode" indicates closing all voice zones except the driver's, so that only the driver can conduct voice interaction, preventing passengers in other voice zones from obtaining the driver's private information through voice interaction.
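Each implicit command above reduces to one or more explicit zone operations. A minimal sketch of such a mapping table follows; the zone numbering (1 = driver, 2 = front passenger, 3/4 = rear row) and the English phrasings standing in for the recognized text are assumptions for illustration, not fixed by this embodiment:

```python
# Hypothetical mapping from implicit voice zone commands to explicit
# (operation, target zones) pairs. Zone numbering is assumed:
# 1 = driver, 2 = front passenger, 3 and 4 = rear row.
IMPLICIT_RULES = {
    "open the front passenger voice zone": ("open", [2]),
    "close the rear-row voice zones": ("close", [3, 4]),
    "no one is in the rear row": ("close", [3, 4]),
    "enter safe mode": ("close", [2, 3, 4]),  # keep only the driver zone open
}

def map_implicit(command: str):
    """Return the (operation, target zones) pair for an implicit command,
    or None if the text is not a known implicit command."""
    return IMPLICIT_RULES.get(command.strip().lower())
```

A real system would back this table with the preset implicit command set or an algorithm model described below, rather than exact string matching.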
In one example, determining whether the text format of the text information is an implicit format includes: judging whether the text format is a preset implicit format; and/or judging, through a machine learning model, whether the text format is an implicit format; and/or judging, based on a deep neural network model, whether the text format is an implicit format.
If the text information contains a specific implicit field, it is judged to be in the preset implicit format. For text not in a preset implicit format, whether it is an implicit voice zone control command is judged by an algorithm model, which can be expressed as: the text content is input into the algorithm model, the model performs its computation, and the model outputs the judgment result. Alternatively, semantic analysis may be performed on the text information to map it from an implicit format to an explicit format; if the analysis or mapping succeeds, the text information is in an implicit format.
S13, if the text format of the text information is an implicit format, determining that the text information is an implicit voice zone control command; if the text format of the text information is not the implicit format, determining that the text information is not a voice zone control command.
FIG. 3 is a flow chart of classifying the text resulting from speech recognition, according to an embodiment of the invention, comprising: S21, inputting the recognition result text; S23, judging whether it is an explicit voice zone control command; S25, judging whether it is an implicit voice zone control command; and S26, executing the voice zone control operation. When classifying the speech recognition result, determining whether it is an explicit voice zone control command may be implemented as judging whether it matches a preset explicit voice zone control command, with two possible outcomes: it is an explicit voice zone control command, or it is not. Determining whether it is an implicit voice zone control command may be implemented as judging whether it matches a preset implicit voice zone control command, or as judging through an algorithm model. Judging against the preset implicit commands follows the same process as judging against the preset explicit commands, i.e., matching the text against the preset command set. Judging through an algorithm model may be performed by a machine learning model, a deep neural network model, or the like. This judgment likewise has two possible outcomes: it is an implicit voice zone control command, or it is not a voice zone control command.
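The two-stage judgment of FIG. 3 (explicit check first, then implicit check, else not a zone command) can be sketched as follows. The regular expression and keyword table are illustrative assumptions standing in for the preset command sets or algorithm model, and English phrasings stand in for the recognized text:

```python
import re

# Assumed explicit pattern: an open/close verb followed by a numbered zone.
EXPLICIT_PATTERN = re.compile(r"\b(open|close)\b.*\bzone\s*(\d+)", re.IGNORECASE)

# Assumed implicit keywords, standing in for the preset implicit command set
# or an algorithm-model judgment.
IMPLICIT_KEYWORDS = ("front passenger voice zone", "rear-row voice zones",
                     "safe mode")

def classify(text: str) -> str:
    """Classify recognized text as 'explicit', 'implicit', or
    'not_zone_command', mirroring steps S23 and S25 of FIG. 3."""
    if EXPLICIT_PATTERN.search(text):        # S23: explicit check first
        return "explicit"
    lowered = text.lower()
    if any(k in lowered for k in IMPLICIT_KEYWORDS):  # S25: implicit check
        return "implicit"
    return "not_zone_command"
```

Only the first two outcomes trigger the zone control operation of S26; a non-zone command is passed through unchanged.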
In one example, zone control of a target vehicle according to a control intent category includes: identifying a target voice zone and a target operation in the text information according to the control intention type, wherein the target operation comprises an opening operation or a closing operation; and executing the target operation on the target voice zone.
In this embodiment, in addition to controlling the opening and closing of voice zones, the state holding time of a voice zone may be configured, for example, closed for one hour, or temporarily closed (closed for a default preset time, for example 3 minutes). The priorities of voice zones may also be configured, for example, designating primary and secondary voice zones, such as configuring the priority of voice zone A to be higher than that of voice zone B. Then, if the same control instruction (for example, controlling the sunroof) is collected from voice zone A and voice zone B at the same time, the vehicle's voice interaction service can select which to execute according to zone priority. Likewise, when denoising mixed audio collected while multiple people are speaking, audio from the lower-priority voice zone can be filtered out first and audio from the higher-priority zone retained, or the microphone collection sensitivity of the higher-priority zone can be raised and that of the lower-priority zone lowered, thereby improving the intelligence and flexibility of multi-zone voice control.
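The priority-based arbitration described above can be sketched as below. The priority values, the command record layout, and the rule that the highest-priority zone's instruction wins are assumptions for illustration:

```python
def resolve_by_priority(commands, priorities):
    """When several zones issue the same instruction simultaneously,
    select the command from the highest-priority zone.
    Zones missing from the priority table default to priority 0."""
    return max(commands, key=lambda c: priorities.get(c["zone"], 0))

# Zone A is configured with higher priority than zone B.
priorities = {"A": 2, "B": 1}

# The same instruction arrives from both zones at the same time.
commands = [
    {"zone": "B", "instruction": "open sunroof"},
    {"zone": "A", "instruction": "open sunroof"},
]

winner = resolve_by_priority(commands, priorities)
```

The same priority table could also drive the denoising and microphone-sensitivity policies mentioned above.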
The voice zone control operations include control operations on one or more voice zones. Voice zone control includes two elements: the controlled target zone(s), for example the 1st voice zone, or the 1st and 3rd voice zones; and the specific control operation, for example opening a zone, closing a zone, or configuring priority. A voice zone control operation may be represented as opening or closing the nth voice zone, n <= N, where N is the total number of voice zones in the cabin.
Optionally, performing the target operation on the target voice zone includes: if the target operation is an opening operation, connecting the data channel between the target voice zone and the voice interaction service; if the target operation is a closing operation, disconnecting the data channel between the target voice zone and the voice interaction service.
The opening operation for a voice zone may be realized as: connecting the separated audio result of that zone to the voice interaction service, so that persons in the operated zone can use the in-cabin voice interaction service. It may also be realized as: setting, in the zone separation module, a group of position information and control information corresponding to the physical space of the zone, so that the separation module performs sound source separation or sound source enhancement for that physical space, with the same effect. Correspondingly, the closing operation for a voice zone may be realized as: disconnecting the separated audio result of that zone from the voice interaction service, so that persons in the operated zone cannot use the in-cabin voice interaction service. It may also be realized as: setting, in the zone separation module, position information and control information for that zone's physical space so that the separation module suppresses the sound source in that space, with the same effect.
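The two realizations of opening and closing (channel switching, and enhancement/suppression configuration in the separation module) can be sketched in one toy controller. All class and attribute names are illustrative; a real system would typically use only one of the two realizations:

```python
class ZoneController:
    """Toy sketch of the open/close operations: track both the data-channel
    state to the voice interaction service and the separation module's
    per-zone configuration (enhance vs. suppress the sound source)."""

    def __init__(self, n_zones: int):
        # All zones start open, with source enhancement configured.
        self.channel_open = {z: True for z in range(1, n_zones + 1)}
        self.separation_mode = {z: "enhance" for z in range(1, n_zones + 1)}

    def open_zone(self, zone: int):
        self.channel_open[zone] = True          # reconnect the data channel
        self.separation_mode[zone] = "enhance"  # or: configure enhancement

    def close_zone(self, zone: int):
        self.channel_open[zone] = False          # disconnect the data channel
        self.separation_mode[zone] = "suppress"  # or: configure suppression

ctl = ZoneController(4)
ctl.close_zone(2)   # e.g. the front passenger zone is closed
```

Either mechanism alone prevents the closed zone's occupants from reaching the voice interaction service.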
In this embodiment, the in-cabin voice interaction system of the vehicle may consist of a voice zone separation service and a voice interaction service. The voice zone separation service converts the input audio data of the multiple microphone channels into per-zone audio data; the converted audio data of each zone is connected to the voice interaction service through a corresponding data channel, realizing voice interaction for the persons in each zone. The zone separation may be based on an acoustic signal processing algorithm, a neural network algorithm, or the like, and the voice interaction service may be an offline voice interaction service, an online voice interaction service, or the like.
In one implementation scenario of this embodiment, the first voice zone is a zone in the on state and the second voice zone is a zone in the off state. After performing voice zone control on the target vehicle according to the control intention category, the method further includes: collecting a second voice signal in the vehicle cabin of the target vehicle; performing sound separation on the second voice signal to obtain first audio data from the first voice zone and second audio data from the second voice zone; and, having determined the first voice zone in the on state and the second voice zone in the off state in the cabin, outputting the first audio data and prohibiting output of the second audio data.
In another implementation scenario of this embodiment, after performing voice zone control on the target vehicle according to the control intention category, the method further includes: collecting a third voice signal in the vehicle cabin of the target vehicle; performing sound separation on the third voice signal to obtain third audio data from a third voice zone and fourth audio data from a fourth voice zone; and, having determined the third voice zone in the on state and the fourth voice zone in the off state in the cabin, inputting the third audio data to the voice interaction service of the target vehicle and prohibiting input of the fourth audio data to the voice interaction service.
Fig. 4 is a voice interaction flow chart before voice zone control in the embodiment of the invention, and Fig. 5 is a voice interaction flow chart after voice zone control. The vehicle includes N voice zones, namely voice zone 1, voice zone 2, ..., voice zone N, and the voice interaction service includes N voice interaction threads, one for the audio data of each zone. Before voice zone control, all zones are connected to the vehicle's voice interaction service: the zone separation service separates the audio data from all acquisition channels (channel 1 to channel M) into N audio streams (zone 1 audio data to zone N audio data), which are delivered to the corresponding threads of the interaction service. After voice zone control sets zone 2 to the closed state while the other zones remain unchanged, the separation service again processes the audio from channels 1 to M into N streams, but only N-1 streams (zone 1 and zones 3 to N) are delivered to their corresponding threads; zone 2's audio data does not reach its voice interaction thread. This may be realized either by the separation service no longer outputting zone 2's audio data, or by the separation service still outputting it while the data channel between zone 2 and the voice interaction service is disconnected, so that the data is not input to the service.
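Whichever realization is used, the post-control behavior of Figs. 4 and 5 reduces to filtering which separated streams reach their interaction threads. A minimal sketch, with hypothetical byte strings standing in for the separated audio data:

```python
def dispatch_after_control(zone_audio: dict, zone_open: dict) -> dict:
    """Deliver each open zone's separated audio to its interaction thread;
    a closed zone's audio is dropped before it reaches the service."""
    return {z: a for z, a in zone_audio.items() if zone_open.get(z, False)}

# Zone 2 has been closed by a zone control command; zones 1 and 3 stay open.
zone_open = {1: True, 2: False, 3: True}

# Output of the zone separation service for one frame (placeholder bytes).
zone_audio = {1: b"zone1-audio", 2: b"zone2-audio", 3: b"zone3-audio"}

delivered = dispatch_after_control(zone_audio, zone_open)
```

With N zones and zone 2 closed, exactly N-1 streams are delivered, matching the flow of Fig. 5.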
Optionally, the voice interaction service of the vehicle in this embodiment may generate a matching vehicle control instruction based on the audio data input by the user, so as to control the power system, head unit system, audio-visual system, and other vehicle components, for example controlling start and stop of the vehicle, adjusting the air conditioner, raising and lowering windows, opening and closing doors, adjusting the speakers, braking, accelerating, shifting gears, changing lanes, and the like.
FIG. 6 is a flow chart of an implementation of an embodiment of the present invention, comprising: S61, inputting audio; S62, performing voice recognition on the input audio; S63, classifying the text of the voice recognition result; and S64, controlling the voice zones according to the classification result. As shown in FIG. 6, this embodiment provides a method of in-cabin voice zone control by which a person in the cabin can control the state of one or more voice zones by speech. In the implementation process, a person in the cabin inputs voice data; voice recognition is performed on the input voice to obtain the recognition text; the recognition text is classified; and one or more voice zones are controlled according to the classification result.
By this voice zone control method, the controllability of in-cabin voice zones can be improved, improving the quality of in-cabin voice interaction. After a person in the cabin inputs audio data, the speech recognition process converts the spoken content in the audio data into text. The content classification process classifies the recognition text; the classification result is an explicit voice zone control command, an implicit voice zone control command, or a non-zone-control command. When text enters the classification process, it is first judged whether it is an explicit voice zone control command; if so, the corresponding zone control operation is executed; if not, it is next judged whether it is an implicit voice zone control command. If it is, the corresponding operation is executed; if not, it is a non-zone-control command, and no zone control operation is performed. During zone control, the corresponding voice zones are opened or closed according to the explicit or implicit command.
From the description of the above embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferable. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present invention.
Example 2
This embodiment also provides a control device for a vehicle voice zone, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a control apparatus for a vehicle audio zone according to an embodiment of the present invention, as shown in fig. 7, the apparatus including:
a first acquisition module 70, configured to acquire a first voice signal in a vehicle cabin of a target vehicle, where the vehicle cabin includes a plurality of sound zones, each sound zone corresponding to a physical space of the vehicle cabin;
a conversion module 72 for converting the first speech signal into text information;
a parsing module 74, configured to parse a control intention category of the text information, where the control intention category is used to indicate intention definition of a control command corresponding to the text information;
a first control module 76 for performing zone control on the target vehicle according to the control intention category.
Optionally, the parsing module includes: the judging unit is used for judging whether the text format of the text information is a preset explicit format or not; the first processing unit is used for determining that the text information is an explicit voice zone control command if the text format of the text information is a preset explicit format; if the text format of the text information is not a preset explicit format, judging whether the text format of the text information is an implicit format or not; the second processing unit is used for determining that the text information is an implicit voice zone control command if the text format of the text information is an implicit format; if the text format of the text information is not the implicit format, determining that the text information is not a voice zone control command.
Optionally, the first processing unit includes: the first judging subunit is used for judging whether the text format of the text information is a preset implicit format or not; and/or a second judging subunit, configured to judge whether the text format of the text information is an implicit format through a machine learning model; and/or a third judging subunit, configured to judge whether the text format of the text information is an implicit format based on the deep neural network model.
Optionally, the first control module includes: the identification unit is used for identifying a target voice zone and a target operation in the text information according to the control intention type, wherein the target operation comprises an opening operation or a closing operation; and the execution unit is used for executing the target operation on the target voice zone.
Optionally, the execution unit includes: a connecting subunit, configured to connect the data channel between the target voice zone and the voice interaction service if the target operation is an opening operation; and a disconnecting subunit, configured to disconnect the data channel between the target voice zone and the voice interaction service if the target operation is a closing operation.
Optionally, the execution unit includes: the enhancement subunit is used for setting position information and sound source enhancement control information corresponding to the physical space of the target sound zone in a sound zone separation module if the target operation is an opening operation; and the suppression subunit is used for setting the position information and the sound source suppression control information corresponding to the physical space of the target sound zone in the sound zone separation module if the target operation is a closing operation.
Optionally, the apparatus further comprises: the second acquisition module is used for acquiring a second voice signal in a vehicle seat cabin of the target vehicle after the first control module performs voice zone control on the target vehicle according to the control intention type; the first separation module is used for carrying out sound separation on the second voice signal to obtain first audio data from a first sound zone and second audio data from a second sound zone; and the second control module is used for determining the first sound zone in an on state and the second sound zone in an off state in the vehicle cabin, outputting the first audio data and prohibiting outputting the second audio data.
Optionally, the apparatus further comprises: the third acquisition module is used for acquiring a third voice signal in a vehicle seat cabin of the target vehicle after the first control module performs voice zone control on the target vehicle according to the control intention type; the second separation module is used for carrying out sound separation on the third voice signal to obtain third audio data from a third sound zone and fourth audio data from a fourth sound zone; and the third control module is used for determining the third voice zone in the on state and the fourth voice zone in the off state in the vehicle cabin, inputting the third audio data to the voice interaction service of the target vehicle, and prohibiting the fourth audio data from being input to the voice interaction service.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, collecting a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of sound areas, and each sound area corresponds to one physical space of the vehicle cabin;
s2, converting the first voice signal into text information;
s3, analyzing the control intention type of the text information, wherein the control intention type is used for indicating the intention definition of a control command corresponding to the text information;
and S4, performing voice zone control on the target vehicle according to the control intention type.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, collecting a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of sound areas, and each sound area corresponds to one physical space of the vehicle cabin;
s2, converting the first voice signal into text information;
S3, analyzing the control intention type of the text information, wherein the control intention type is used for indicating the intention definition of a control command corresponding to the text information;
and S4, performing voice zone control on the target vehicle according to the control intention type.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary; the division of units is merely a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connections shown or discussed may be through certain interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (11)

1. A method of controlling a vehicle audio zone, comprising:
collecting a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of sound areas, and each sound area corresponds to one physical space of the vehicle cabin;
converting the first voice signal into text information;
analyzing a control intention category of the text information, wherein the control intention category is used for indicating intention definition of a control command corresponding to the text information;
and performing voice zone control on the target vehicle according to the control intention type.
2. The method of claim 1, wherein parsing the control intent category of the text information comprises:
judging whether the text format of the text information is a preset explicit format or not;
if the text format of the text information is a preset explicit format, determining that the text information is an explicit voice zone control command; if the text format of the text information is not a preset explicit format, judging whether the text format of the text information is an implicit format or not;
If the text format of the text information is an implicit format, determining that the text information is an implicit voice zone control command; if the text format of the text information is not the implicit format, determining that the text information is not a voice zone control command.
3. The method of claim 2, wherein determining whether the text format of the text information is an implicit format comprises:
judging whether the text format of the text information is a preset implicit format; and/or,
judging whether the text format of the text information is an implicit format through a machine learning model; and/or,
and judging whether the text format of the text information is an implicit format or not based on the deep neural network model.
4. The method of claim 1, wherein performing zone control on the target vehicle according to the control intent category comprises:
identifying a target voice zone and a target operation in the text information according to the control intention type, wherein the target operation comprises an opening operation or a closing operation;
and executing the target operation on the target voice zone.
5. The method of claim 4, wherein performing the target operation on the target voice zone comprises:
if the target operation is an opening operation, connecting a data channel between the target voice zone and a voice interaction service;
and if the target operation is a closing operation, disconnecting a data channel between the target voice zone and the voice interaction service.
6. The method of claim 4, wherein performing the target operation on the target voice zone comprises:
if the target operation is an opening operation, setting position information and sound source enhancement control information corresponding to the physical space of the target sound zone in a sound zone separation module;
and if the target operation is a closing operation, setting position information and sound source suppression control information corresponding to the physical space of the target sound zone in a sound zone separation module.
7. The method according to claim 1, characterized in that after the target vehicle is subjected to the zone control according to the control intention category, the method further comprises:
collecting a second voice signal in a vehicle cabin of the target vehicle;
performing sound separation on the second voice signal to obtain first audio data from a first sound zone and second audio data from a second sound zone;
and determining the first sound zone in an opening state and the second sound zone in a closing state in the vehicle seat cabin, outputting the first audio data, and prohibiting outputting the second audio data.
8. The method according to claim 1, characterized in that after the target vehicle is subjected to the zone control according to the control intention category, the method further comprises:
collecting a third voice signal in a vehicle cabin of the target vehicle;
performing sound separation on the third voice signal to obtain third audio data from a third sound zone and fourth audio data from a fourth sound zone;
and determining the third voice zone in an on state and the fourth voice zone in an off state in the vehicle cabin, inputting the third audio data to the voice interaction service of the target vehicle, and prohibiting the fourth audio data from being input to the voice interaction service.
9. A control device for a vehicle voice zone, comprising:
a first acquisition module, configured to collect a first voice signal in a vehicle cabin of a target vehicle, wherein the vehicle cabin comprises a plurality of voice zones, and each voice zone corresponds to one physical space of the vehicle cabin;
a conversion module, configured to convert the first voice signal into text information;
a parsing module, configured to parse a control intention category from the text information, wherein the control intention category indicates the intent of the control command corresponding to the text information;
and a first control module, configured to perform voice zone control on the target vehicle according to the control intention category.
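The four modules of claim 9 form a pipeline: acquire signal, convert to text, parse intent, apply control. The end-to-end sketch below uses trivial stand-ins (a UTF-8 decode in place of speech recognition, keyword matching in place of intent parsing); the class and method names are illustrative, not the patent's implementation:

```python
class ZoneControlDevice:
    """Pipeline sketch of claim 9's four modules."""

    def __init__(self):
        self.zone_states = {}

    def acquire(self, raw):
        # First acquisition module: collect the first voice signal.
        return raw

    def convert(self, signal):
        # Conversion module: speech-to-text (stand-in: UTF-8 decode).
        return signal.decode("utf-8")

    def parse(self, text):
        # Parsing module: map text to a control intention category.
        for op in ("open", "close"):
            if text.startswith(op):
                return op, text.split()[-1]
        return "none", None

    def control(self, intent):
        # First control module: apply the intent to the vehicle's zones.
        op, zone = intent
        if zone is not None:
            self.zone_states[zone] = op


device = ZoneControlDevice()
signal = device.acquire(b"open rear_left")
intent = device.parse(device.convert(signal))
device.control(intent)
print(device.zone_states)   # prints {'rear_left': 'open'}
```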
10. A storage medium having a computer program stored therein, wherein the computer program is configured to perform the method of any one of claims 1 to 8 when executed.
11. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is configured to run the computer program to perform the method of any one of claims 1 to 8.
CN202310622735.7A 2023-05-29 2023-05-29 Control method and device for vehicle voice zone, storage medium and electronic device Pending CN116645964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310622735.7A CN116645964A (en) 2023-05-29 2023-05-29 Control method and device for vehicle voice zone, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116645964A true CN116645964A (en) 2023-08-25

Family

ID=87622561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310622735.7A Pending CN116645964A (en) 2023-05-29 2023-05-29 Control method and device for vehicle voice zone, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116645964A (en)

Similar Documents

Publication Publication Date Title
DE102018128006B4 (en) METHOD OF PRODUCING OUTPUTS OF NATURAL LANGUAGE GENERATION BASED ON USER LANGUAGE STYLE
CN104332159B (en) Vehicular voice-operated system man-machine interaction method and device
DE102019105269B4 (en) METHOD OF SPEECH RECOGNITION USING SPEECH RECOGNITION ARBITRATION LOGIC
EP0852051B1 (en) Process for automatic control of one or more devices by voice commands or by real-time voice dialog and apparatus for carrying out this process
CN110070868A (en) Voice interactive method, device, automobile and the machine readable media of onboard system
US10255913B2 (en) Automatic speech recognition for disfluent speech
DE102017121059A1 (en) IDENTIFICATION AND PREPARATION OF PREFERRED EMOJI
DE102018103188B4 (en) METHOD OF VOICE RECOGNITION IN A VEHICLE TO IMPROVE TASKS
CN105390136A (en) Vehicle control device and method used for user-adaptable service
US20200160861A1 (en) Apparatus and method for processing voice commands of multiple talkers
DE102019111529A1 (en) AUTOMATED LANGUAGE IDENTIFICATION USING A DYNAMICALLY ADJUSTABLE TIME-OUT
CN109920410B (en) Apparatus and method for determining reliability of recommendation based on environment of vehicle
CN107554456A (en) Vehicle-mounted voice control system and its control method
DE102017121054A1 (en) REMOTE LANGUAGE RECOGNITION IN A VEHICLE
CN210489237U (en) Vehicle-mounted intelligent terminal voice control system
DE102015105876A1 (en) A method of providing operator assistance using a telematics service system of a vehicle
CN114445888A (en) Vehicle-mounted interaction system based on emotion perception and voice interaction
CN109080567A (en) Control method for vehicle and cloud server based on Application on Voiceprint Recognition
US9830925B2 (en) Selective noise suppression during automatic speech recognition
CN110286745A (en) Dialog process system, the vehicle with dialog process system and dialog process method
CN110232924A (en) Vehicle-mounted voice management method, device, vehicle and storage medium
DE102018125564A1 (en) RESPONSE RAPID ACTIVATION OF A VEHICLE FEATURE
DE102016217026A1 (en) Voice control of a motor vehicle
CN101645716A (en) Vehicle-borne communication system having voice recognition function and recognition method thereof
CN116645964A (en) Control method and device for vehicle voice zone, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination