CN114842847A - Vehicle-mounted voice control method and device - Google Patents


Info

Publication number
CN114842847A
CN114842847A
Authority
CN
China
Prior art keywords
information
command
slot value
semantic group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210456056.2A
Other languages
Chinese (zh)
Inventor
赵晓朝 (Zhao Xiaochao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210456056.2A priority Critical patent/CN114842847A/en
Publication of CN114842847A publication Critical patent/CN114842847A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a vehicle-mounted voice control method and device. The vehicle-mounted voice control method comprises the following steps: step 1: acquiring a structured semantic group; step 2: judging whether the command to be executed can be executed according to the structured semantic group; if not, step 3: generating guidance information according to the structured semantic group and sending the guidance information to a man-machine interaction device; step 4: acquiring command information fed back by the user according to the guidance information; and step 5: transmitting the command information to the corresponding execution mechanism so that the corresponding execution mechanism works according to the command information. With this vehicle-mounted voice control method, when the related command cannot be executed from the structured semantic group, guidance information related to the structured semantic group is generated; by guiding the user through this guidance information toward a standard mode of expression, the user can interact with the vehicle in that standard mode, so that the execution the user requires is finally achieved.

Description

Vehicle-mounted voice control method and device
Technical Field
The application relates to the technical field of automobile interaction control, in particular to a vehicle-mounted voice control method and a vehicle-mounted voice control device.
Background
As intelligent vehicles gradually enter thousands of households, people experience the convenient control that intelligent interaction brings to driving, and at the same time place higher requirements on the in-vehicle dialogue experience. A task-oriented dialogue system, divided by processing flow, mainly comprises processing modules for speech recognition, semantic parsing, dialogue management, reply generation and speech synthesis. Speech recognition converts the user's voice signal into a text query. Semantic parsing converts the text into structured information (domain information, intent information, and slot value pair information slots); for example, the query 'air conditioner blow foot' parses to (domain = 'vehicle control', intent = 'air conditioner control', slots = {wind direction = <blow foot, FOOT>}). Dialogue management processes the structured semantic information, maintains the current dialogue state (current turn number, intent, slot values and the like) through dialogue state tracking and dialogue policy modules, and outputs the action (execute, inquire, guide and the like) the system should take next. For the semantic result of the query above, the action is 'execute', indicating that the user instruction can be executed directly; for the query 'I want to navigate', the action is 'inquire', because the system needs to ask the user for a destination. Reply generation produces a reply tts based on the result of the dialogue management module; for the query 'air conditioner blow foot' the tts is 'OK', and for 'I want to navigate' the tts is 'Where do you want to navigate to?'. Finally, the speech synthesis module converts the reply into a voice signal and broadcasts it to the user through a loudspeaker.
After the broadcast finishes, the dialogue system waits for the user's next-turn instruction.
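The structured semantic representation produced by the parsing stage can be sketched as follows (a minimal illustration; the class and field names are assumptions for this sketch, not the patent's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class SemanticGroup:
    domain: str   # e.g. 'vehicle control'
    intent: str   # e.g. 'air conditioner control'
    slots: dict = field(default_factory=dict)  # slot name -> (raw expression, normalized value)

# The query 'air conditioner blow foot' from the example above parses to:
group = SemanticGroup(
    domain='vehicle control',
    intent='air conditioner control',
    slots={'wind direction': ('blow foot', 'FOOT')},
)
print(group.slots['wind direction'][1])  # FOOT
```

Keeping both the raw surface form and the normalized value in each slot pair is what later allows the failed raw form to be associated with the user's clarified choice.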
In a vehicle-mounted scenario, user commands are usually spoken, with considerable randomness in wording and syntactic structure, so the same command can be expressed in very different ways. For example, an instruction for the air conditioner to blow toward the face (wind direction = FACE) may be expressed as 'air conditioner blow me', 'air conditioner blow face', 'air conditioner blow head' or 'air conditioner blow above'; an instruction for the air conditioner to blow toward the feet (wind direction = FOOT) may be expressed as 'air conditioner blow foot', 'air conditioner blow below', 'air conditioner blow leg' or 'air conditioner blow brake' (a rare colloquial expression for the feet). The diversity and sparsity of such spoken expressions easily cause the semantic understanding algorithm to misinterpret a user's rare spoken instruction, so the dialogue proceeds in the wrong direction and the dialogue system has difficulty giving a correct response.
The existing semantic parsing technical scheme is as follows:
1. one is template matching. And compiling a template rule according to a task to be completed based on information such as grammar, syntax, a slot position value dictionary and the like, and judging the current structured semantics through template matching. The advantage of this type of solution is the fast and accurate matching, the disadvantage being that the regular coverage is usually not sufficient. The method is mainly used for high-frequency instruction interpretation of users, and the coverage of spoken language expressions is insufficient.
2. The other is a deep semantic model. The user instruction text is encoded with a model such as BERT or an LSTM, and the encoded representation is then fed to a fully connected layer or a CRF layer to perform domain classification, intent classification and slot filling. The advantage of this kind of scheme is better generalization over user expressions. Such a model is trained by supervised learning and requires a large amount of training data; rare spoken slot values, however, appear rarely or not at all in the training data, so the model is insufficiently trained on them and cannot handle this problem.
Accordingly, a solution is desired to solve or at least mitigate the above-mentioned deficiencies of the prior art.
Disclosure of Invention
The present invention is directed to a vehicle-mounted voice control method to solve at least one of the above problems.
In one aspect of the present invention, a vehicle-mounted voice control method is provided, including:
step 1: acquiring a structured semantic group;
step 2: judging whether the command to be executed can be executed according to the structured semantic group; if not,
step 3: generating guidance information according to the structured semantic group and sending the guidance information to a man-machine interaction device;
step 4: acquiring command information fed back by the user according to the guidance information; and
step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information.
Optionally, the structured semantic group includes slot value pair information;
the step 2: judging whether the command required to be executed can be executed according to the structured semantic group comprises the following steps:
acquiring a preset slot position value pair database, wherein the preset slot position value pair database comprises at least one piece of preset slot position value pair information;
judging whether a preset slot position value in any piece of preset slot position value pair information corresponds to the slot position value in the slot position value pair information; if none corresponds,
judging that the command to be executed cannot be executed according to the structured semantic group;
the step 3: generating the guidance information according to the structured semantic group comprises:
acquiring a guide language database, wherein the guide language database comprises at least one guide condition and a guide group corresponding to each guide condition;
judging whether the structured semantic group satisfies a guide condition in the guide language database; if so,
Acquiring a guide group corresponding to the guide condition;
and generating guide information according to the guide group.
Optionally, before the obtaining the structured semantic group, the vehicle-mounted voice control method further includes:
acquiring voice information of a user;
and analyzing the voice information so as to obtain the structured semantic group.
Optionally, the command information fed back by the user according to the guidance information is voice information and/or interaction instruction information based on the guidance information.
Optionally, when the command information is voice information, the step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information comprises the following steps:
analyzing the voice information to obtain a fed-back structured semantic group;
judging whether the command to be executed can be executed according to the fed-back structured semantic group; if so,
generating an execution command according to the fed-back structured semantic group;
and transmitting the execution command to a corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
Optionally, the step 5: transmitting the command information to the corresponding executing mechanism so that the corresponding executing mechanism works according to the command information further comprises:
judging whether the command to be executed can be executed according to the structured semantic group; if not,
repeating steps 2 to 4.
Optionally, if it is judged that the command to be executed can be executed according to the fed-back structured semantic group, the vehicle-mounted voice control method further includes:
updating the preset slot position value pair database according to the fed-back structured semantic group, so that a preset slot position value in one piece of preset slot position value pair information in the preset slot position value pair database corresponds to the slot position value pair information.
Optionally, when the command information fed back by the user according to the guidance information is interaction instruction information based on the guidance information, the step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information comprises the following steps:
generating an execution command according to the interaction instruction information;
and transmitting the execution command to a corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
Optionally, after the execution command is generated according to the interaction instruction information, the vehicle-mounted voice control method further includes:
updating the preset slot position value pair database according to the interaction instruction information, so that a preset slot position value in preset slot position value pair information in the preset slot position value pair database corresponds to the slot position value pair information.
The application also provides a vehicle-mounted voice control device. The vehicle-mounted voice control device includes:
the structured semantic group acquisition module is used for acquiring a structured semantic group;
the judging module is used for judging whether the command required to be executed can be executed according to the structured semantic group;
the guidance information generating module is used for generating guidance information according to the structured semantic group and sending the guidance information to the man-machine interaction device when the judging module judges that the command cannot be executed;
the feedback acquisition module is used for acquiring command information fed back by a user according to the guide information;
and the sending module is used for transmitting the command information to the corresponding executing mechanism so as to enable the corresponding executing mechanism to work according to the command information.
Advantageous effects
According to the vehicle-mounted voice control method, when the related command cannot be executed from the structured semantic group, guidance information related to the structured semantic group is generated; by guiding the user through this guidance information toward a standard mode of expression, the user can interact with the vehicle in that standard mode, so that the execution the user requires is finally achieved.
Drawings
Fig. 1 is a schematic flow chart of a vehicle-mounted voice control method according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of a system device for implementing the vehicle-mounted voice control method shown in fig. 1.
Fig. 3 is a flowchart illustrating a vehicle-mounted voice control method according to a second embodiment of the present application.
Fig. 4 is a flowchart illustrating a vehicle-mounted voice control method according to a third embodiment of the present application.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described in more detail below with reference to the drawings in the embodiments of the present application. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are a subset of the embodiments in the present application and not all embodiments in the present application. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a vehicle-mounted voice control method according to an embodiment of the present application.
The vehicle-mounted voice control method shown in fig. 1 includes:
step 1: acquiring a structured semantic group;
step 2: judging whether the command to be executed can be executed according to the structured semantic group; if not,
step 3: generating guidance information according to the structured semantic group and sending the guidance information to a man-machine interaction device;
step 4: acquiring command information fed back by the user according to the guidance information; and
step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information.
According to the vehicle-mounted voice control method, when the related command cannot be executed from the structured semantic group, guidance information related to the structured semantic group is generated; by guiding the user through this guidance information toward a standard mode of expression, the user can interact with the vehicle in that standard mode, so that the execution the user requires is finally achieved.
In this embodiment, the structured semantic group includes slot value pair information; it can be understood that the structured semantic group may further include domain information and intention information.
For example, a structured semantic group may be the following tuple: (domain information = 'vehicle control', intention information = 'air conditioner control', slot value pair information slots = {wind direction = <blow foot, FOOT>}).
In this embodiment, before obtaining the structured semantic group, the vehicle-mounted voice control method further includes:
acquiring voice information of a user;
and analyzing the voice information so as to obtain a structured semantic group.
For example, the user's voice message is: 'air conditioner blow brake' (it can be understood that the voice information is first recognized as text information). The voice information is processed to obtain a structured semantic group (domain = air conditioner, intent = air conditioner control, slots = {wind direction = <brake, ?>}), where '?' indicates that the spoken slot value 'brake' could not be normalized.
In this embodiment, step 2: judging whether the command required to be executed can be executed according to the structured semantic group comprises the following steps:
acquiring a preset slot position value pair database, wherein the preset slot position value pair database comprises at least one piece of preset slot position value pair information;
judging whether a preset slot position value in any piece of preset slot position value pair information corresponds to the slot position value in the slot position value pair information; if none corresponds,
judging that the command required to be executed cannot be executed according to the structured semantic group.
Taking the above as an example, the structured semantic group is (domain = air conditioner, intent = air conditioner control, slots = {wind direction = <brake, ?>}).
We obtain the preset slot position value pair database and find that the only slot value pair it contains for the wind direction slot is <blow foot, FOOT>. The spoken value 'brake' does not correspond to 'blow foot', so it is judged that no preset slot position value in the preset slot position value pair information corresponds to the slot position value in the slot position value pair information.
In this embodiment, step 3: generating the guidance information according to the structured semantic group includes:
acquiring a guide language database, wherein the guide language database comprises at least one guide condition and a guide group corresponding to each guide condition;
judging whether the structured semantic group satisfies a guide condition in the guide language database; if so,
Acquiring a guide group corresponding to the guide condition;
and generating the guide information according to the guide group.
For example, the structured semantic group (domain = air conditioner, intent = air conditioner control, slots = {wind direction = <brake, ?>}) satisfies the guide condition 'the wind direction slot value failed to parse', so the corresponding guide group, the candidate wind direction options [face, foot, window, windshield], is acquired.
In this embodiment, generating the guidance information according to the guidance group includes:
converting the guide group into voice information and/or converting the guide group into interaction selection information;
specifically, converting the guidance group into the voice message, that is, composing the above guidance message into the answer tts through the predefined template, for example, converting the above guidance group into: ' find four air conditioner wind direction options, face, foot, window, windshield, ask you which one to choose? '.
It can be understood that the guide group can also be converted into interactive selection information; for example, the four options are displayed through a man-machine interaction device (for example, the car screen): [air conditioner blow foot, air conditioner blow face, air conditioner blow window, air conditioner blow windshield], for the user to select.
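The conversion of a guide group into both a TTS prompt and on-screen options can be sketched as follows (the guide condition table, prompt template, and option list are illustrative assumptions):

```python
# Guide language database (illustrative): maps a guide condition (an
# unresolved slot) to its guide group of candidate normalized options.
GUIDE_DB = {
    'wind direction': ['face', 'foot', 'window', 'windshield'],
}

def make_guidance(slots):
    for name, (raw, norm) in slots.items():
        if norm is None and name in GUIDE_DB:   # guide condition satisfied
            options = GUIDE_DB[name]
            tts = ('Found %d air conditioner wind direction options: %s. '
                   'Which one would you like to choose?'
                   % (len(options), ', '.join(options)))
            screen = ['air conditioner blow ' + o for o in options]
            return tts, screen
    return None  # nothing to guide on

tts, screen = make_guidance({'wind direction': ('brake', None)})
print(tts)
print(screen)
```

Returning both forms matches the method's two feedback channels: the user may answer by voice or by tapping one of the displayed options.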
In this embodiment, the command information fed back by the user according to the guidance information is acquired as voice information and/or as interaction instruction information based on the guidance information.
For example, when the guidance information given by the present application is voice information, the user can answer the guidance through voice; if the guidance information given is interactive selection information, the user can generate interaction instruction information through an interactive operation, and can also still answer through voice.
In this embodiment, when the command information is voice information, step 5: transmitting the command information to the corresponding executing mechanism so that the corresponding executing mechanism works according to the command information comprises the following steps:
analyzing the voice information to obtain a fed-back structured semantic group;
judging whether the command required to be executed can be executed according to the fed-back structured semantic group; if so, generating an execution command according to the fed-back structured semantic group;
and transmitting the execution command to the corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
For example, if the user answers 'air conditioner blow foot', the answer is converted into a structured semantic group (domain = air conditioner, intent = air conditioner control, slots = {wind direction = <blow foot, FOOT>}). If the preset slot position value pair database of the present application contains corresponding slot position value pair information, i.e. a preset slot position value in one piece of preset slot position value pair information corresponds to the slot position value in the slot position value pair information, an execution command is generated according to the fed-back structured semantic group;
and transmitting the execution command to the corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
In this embodiment, step 5: transmitting the command information to the corresponding executing mechanism so that the corresponding executing mechanism works according to the command information further comprises:
judging whether the command required to be executed can be executed according to the structured semantic group; if not,
repeating steps 2 to 4.
In this embodiment, if it is determined that the command to be executed can be executed according to the fed-back structured semantic group, the vehicle-mounted voice control method further includes:
updating the preset slot position value pair database according to the fed-back structured semantic group, so that a preset slot position value in the preset slot position value pair information in the preset slot position value pair database corresponds to the slot position value pair information.
Specifically, the structured semantic group fed back by the user is first recorded into the log system through event tracking.
From the log platform, records that meet the rule 'slot parsing failed and guidance succeeded' are screened out of the large number of user logs. For example, from the example above, the following log may be screened out:
turn 1: query = air conditioner blow brake
intent = air conditioner control
slots = {wind direction = <brake, ?>}
action = guide(<wind direction, candidates = [foot, face, window, windshield]>)
turn 2: query = blow foot
action = choice
slots = {wind direction = <brake, FOOT>}
Based on the screened logs, slot value pair data is mined and converted into the training data required for semantic understanding. First, from the slots of turn 1 it can be known that parsing of the current wind direction slot value 'brake' failed and the user was further guided for clarification; second, from turn 2 it can be seen that the user selected 'foot' among the guide options; finally, the wind direction slot value 'brake' is associated with the standardized option 'FOOT', yielding the slot value pair: wind direction = [<brake, FOOT>].
The slot value pairs mined in the previous step are merged, together with the original query and the intent, and expanded into the guide language database.
It can be understood that, in another embodiment, the mined data can instead be expanded into a training library: the training data is not directly added to the guide language database, but after the training data grows to a certain scale, or a predetermined model update time is reached, a semantic understanding model training task is triggered; after the trained model passes testing, it is deployed together with the guide language database.
After this data mining and model iteration process, the updated semantic model has stronger parsing capability. For example, the next time a user issues a similar query ('air conditioner blow brake'), the dialogue system can successfully parse the wind direction slot as FOOT; at that point dialogue guidance is no longer needed and the user instruction can be executed directly.
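The log-mining step described above can be sketched as follows, under the assumption that each dialogue turn is logged as a flat record (the field names are illustrative, not the patent's actual log schema):

```python
# Mine slot value pairs from dialogues where slot parsing failed and
# guidance succeeded: pair the failed raw value with the chosen option.
def mine_slot_value_pairs(log_turns):
    """log_turns: list of per-turn log records for one dialogue, in order."""
    mined = {}
    failed = {}  # slot name -> raw value that failed to parse
    for turn in log_turns:
        if turn['action'] == 'guide':
            failed[turn['slot']] = turn['raw_value']
        elif turn['action'] == 'choice' and turn['slot'] in failed:
            # associate the failed raw value with the user's standardized choice
            mined[turn['slot']] = (failed[turn['slot']], turn['chosen'])
    return mined

logs = [
    {'action': 'guide',  'slot': 'wind direction', 'raw_value': 'brake'},
    {'action': 'choice', 'slot': 'wind direction', 'chosen': 'FOOT'},
]
print(mine_slot_value_pairs(logs))  # {'wind direction': ('brake', 'FOOT')}
```

The mined pair <brake, FOOT> is what gets merged back into the guide language database (or training library), so the next 'air conditioner blow brake' parses directly.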
In this embodiment, when the command information fed back by the user according to the guidance information is the interaction instruction information based on the guidance information, step 5: transmitting the command information to the corresponding executing mechanism so that the corresponding executing mechanism works according to the command information comprises the following steps:
generating an execution command according to the interactive instruction information;
and transmitting the execution command to the corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
In this embodiment, after generating the execution command according to the interactive instruction information, the vehicle-mounted voice control method further includes:
and updating the preset slot position value pair database according to the interaction instruction information, so that a preset slot position value in the preset slot position value pair information corresponds to the slot position value pair information in the preset slot position value pair database.
Specifically, the interaction instruction information fed back by the user is first recorded into the log system through event tracking.
Screening out records which meet the rule and are failed in slot position analysis and successful in guiding from a large number of user logs from a log platform: for example, from the example in the above example, the following logs may be screened out:
turn 1: air-conditioning blowing brake
Air conditioner control
slot ═ wind direction? < CHEM > }
action ═ guide (< wind direction, candidate ═ foot, face, window, windshield ] >)
turn 2: query is a foot-blowing
Choice of intervention
Wind direction, brake, FOOT >
And mining slot position value pair data based on the screened logs, and converting the slot position value pair data into training data required by semantic understanding. Firstly, the slot of turn 1 can know that the current wind direction slot position 'brake' analysis fails, and further guide clarification is carried out on a user; second, as seen by turn 2, the user has selected 'foot' in the boot option; finally, the wind direction slot 'brake' is associated with the standardization option 'FOOT', and then a slot position value pair is obtained: wind direction is [ < brake, FOOT > ].
The slot value pairs mined in the previous step are merged with the original query and the intent, and the result is expanded into the guide language database.
It can be understood that, in another embodiment, the mined data can instead be expanded into a training library rather than being added directly to the guide language database: after the training data grows to a certain scale or a predetermined model-update time is reached, a semantic-understanding-model training task is triggered, and the retrained model is deployed after testing.
After this data-mining and model-iteration process, the updated semantic model has stronger parsing capability: the next time a user issues a similar query ('air conditioner blow brake'), the dialog system can successfully parse the wind-direction slot as FOOT; dialog guidance is no longer needed, and the user instruction can be executed directly.
Using the interaction-instruction-information mode can solve the problem that, because of dialect expressions, the correct slot value pair can never be parsed.
The present application is described in further detail below by way of examples, it being understood that the examples do not constitute any limitation to the present application.
A typical dialog system comprises five modules: speech recognition, semantic understanding, dialog management, reply generation, and speech synthesis (TTS). Speech recognition converts the user's voice signal into a text query. Semantic parsing converts the text into structured information (domain, intent, slot value pairs (slots)); for example, for the query 'air conditioner blow foot': domain = vehicle control, intent = air-conditioning control, slots = { wind direction = <foot, FOOT> }. Dialog management processes the structured semantic information, maintains the current dialog state (current turn number, intent, slot values, and so on) through dialog-state-tracking and dialog-policy modules, and outputs the action the system should take next (including execute, inquire, and guide). For the semantic result of the query above, the action is 'execute', meaning the user instruction can be executed directly; for the query 'I want to navigate', the action is 'inquire', and the system needs to ask the user for the destination. Reply generation produces a reply text (tts) based on the dialog-management result: for the query 'air conditioner blow foot', tts = 'OK'; for 'I want to navigate', tts = 'Where do you want to navigate to?'. Finally, the speech-synthesis module converts the reply into a voice signal and broadcasts it to the user through a loudspeaker. After broadcasting, the dialog system waits for the user's next-round instruction.
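The structured semantic result described above can be sketched in Python as follows. All names (`SemanticGroup`, `parse_query`, the candidate table) are illustrative assumptions; the patent does not disclose an implementation.

```python
# Minimal sketch of the structured semantic group produced by semantic parsing.
# All names here are illustrative assumptions, not the patent's actual code.
from dataclasses import dataclass, field

@dataclass
class SemanticGroup:
    domain: str        # e.g. "vehicle control"
    intent: str        # e.g. "air-conditioning control"
    slots: dict = field(default_factory=dict)  # slot name -> (raw text, standardized value)

# Known standardized values for the wind-direction slot.
WIND_DIRECTION_VALUES = {"foot": "FOOT", "face": "FACE",
                         "window": "WINDOW", "windshield": "WINDSHIELD"}

def parse_query(query):
    """Toy parser for air-conditioning queries."""
    group = SemanticGroup("vehicle control", "air-conditioning control")
    tokens = query.split()
    for word, value in WIND_DIRECTION_VALUES.items():
        if word in tokens:
            group.slots["wind direction"] = (word, value)
            return group
    # Slot text present but no standardized value found: the '?' case
    # that triggers multi-turn guidance.
    group.slots["wind direction"] = (tokens[-1], None)
    return group

print(parse_query("air conditioner blow foot").slots)   # → {'wind direction': ('foot', 'FOOT')}
print(parse_query("air conditioner blow brake").slots)  # → {'wind direction': ('brake', None)}
```

A `None` (the '?' in the text's notation) in the standardized-value position is what distinguishes a parse failure from a directly executable instruction.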
Referring to fig. 3, consider a colloquial slot expression that cannot be parsed, for example 'air conditioner blow brake': semantic parsing can only determine that the current intent is air-conditioning control and that the target slot to set is the WIND direction, but it cannot determine the slot value corresponding to 'brake' (assume the selectable settings of the wind-direction slot are FOOT, FACE, WINDOW, and WINDSHIELD; the system cannot determine which setting the user's 'brake' corresponds to). In a traditional dialog system, this semantic-parsing failure means the current round's task cannot be executed, and the user's need is not met.
According to the invention, when spoken-language slot parsing fails, the dialog-management module adds multiple rounds of guidance for the user: it lists the selectable standardized expressions, guides the user to use a standardized expression in the next round of interaction, and thereby completes the current dialog task.
First round: the user query is 'air conditioner blow brake'. Semantic parsing yields: domain = air conditioner, intent = air-conditioning control, slots = { wind direction = <brake, ?> }, where 'wind direction' is the slot to be set, 'brake' is the parsed slot text, and '?' indicates that no corresponding slot value was found. This triggers the unparseable-spoken-slot case and enters the multi-turn guidance flow of dialog management. In dialog management, the dialog state becomes: turn = 1, state = { intent = air-conditioning control, slots = { wind direction = <brake, ?> } }, action = guide(<wind direction, candidates = [foot, face, window, windshield]>), indicating that the current round should take the guidance action and give candidates for the air-conditioner wind-direction value. According to the intent and action, the reply-generation module produces the answer tts through a predefined template: 'Found four air-conditioner wind-direction options; which one do you choose?' At the same time, the screen displays the four options [air conditioner blows foot, air conditioner blows face, air conditioner blows window, air conditioner blows windshield] for the user to select.
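The guidance step of this first round could be sketched like this. The function name, signature, and template wording are assumptions modeled on the example above:

```python
# Sketch of the guidance action: when a slot value cannot be parsed, build the
# spoken prompt (tts) and the on-screen option list from the slot's candidate
# values. Names and the template wording are illustrative assumptions.
def make_guidance(slot_name, candidates):
    tts = (f"Found {len(candidates)} {slot_name} options, "
           "which one do you choose?")
    display = [f"air conditioner blows {c}" for c in candidates]
    return tts, display

tts, display = make_guidance("air-conditioner wind-direction",
                             ["foot", "face", "window", "windshield"])
print(tts)
print(display)
```

The `display` list is what the second round later text-matches the user's reply against.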
Second round: the user query is 'blow the foot'. Semantic parsing performs text matching between the query and the options displayed on the screen; after a successful match it yields the semantic result: domain = instruction, intent = selection, slots = { match = <blow foot, FOOT> }, indicating that the 'air conditioner blows foot' option was matched successfully. The dialog-management module then updates the dialog state: turn = 2, state = { intent = air-conditioning control, slots = { wind direction = <brake, FOOT> } }, action = execute. Because the parsed intent is a selection, the dialog intent remains that of the previous round, and the user's selection result is assigned to the corresponding slot of the dialog state (wind direction = <brake, FOOT>). The slot has now been parsed successfully, so the action is 'execute'. The reply-generation module generates the reply tts 'OK' from the current intent and action through a predefined template.
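The second-round handling can be sketched as follows. The matching heuristic, names, and state layout are assumptions; the patent only states that a text match against the displayed options is performed:

```python
# Sketch of the second round: text-match the user's reply against the slot's
# candidate words and, on success, write the standardized value back into the
# dialog state. All names are illustrative assumptions.
WIND_CANDIDATES = {"foot": "FOOT", "face": "FACE",
                   "window": "WINDOW", "windshield": "WINDSHIELD"}

def match_selection(query, candidates):
    tokens = query.split()
    for word, std in candidates.items():
        if word in tokens:
            return std
    return None                      # no option matched -> guide again

state = {"turn": 1, "intent": "air-conditioning control",
         "slots": {"wind direction": ("brake", None)}}
chosen = match_selection("blow the foot", WIND_CANDIDATES)
if chosen is not None:
    # Selection intent: keep the previous round's intent, fill the slot value.
    raw, _ = state["slots"]["wind direction"]
    state["slots"]["wind direction"] = (raw, chosen)
    state.update(turn=2, action="execute")   # slot resolved -> execute
print(state["slots"])  # → {'wind direction': ('brake', 'FOOT')}
```

Note the pair `<brake, FOOT>` keeps the user's raw wording alongside the standardized value, which is exactly what the mining step later harvests.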
Referring to fig. 4, on top of the dialog system above, the selection behavior data in user dialogs is used, through statistical analysis, to generate slot value pair annotation data for feedback training and iteration of the semantic-understanding model. This forms a data closed loop of 'parse failure - user feedback - data mining - model training - redeployment', continuously improving spoken-slot parsing and achieving a better dialog experience. The specific process is as follows:
The dialog system records the user's feedback behavior into the log system through event-tracking instrumentation.
From the log platform, records that satisfy the rule 'slot parsing failed, guidance succeeded' are screened out of the large volume of user logs: the first-round dialog action is guidance, and the second-round dialog intent is selection. For example, as in the example above, the following logs may be screened out:
turn 1: air-conditioning blowing brake
Air conditioner control
slot ═ wind direction? < CHEM > }
action ═ guide (< wind direction, candidate ═ foot, face, window, windshield ] >)
turn 2: query is a foot-blowing
Choice of intervention
Wind direction, brake, FOOT >
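The screening rule can be sketched as a simple filter over session logs. The session/turn field names are assumptions; the patent does not define a log schema:

```python
# Sketch of the log-screening rule: keep only sessions whose first-turn action
# was "guide" and whose second turn parsed as the "selection" intent.
# The dict structure is an illustrative assumption.
def screen_sessions(sessions):
    hits = []
    for session in sessions:
        turns = session["turns"]
        if (len(turns) >= 2
                and turns[0].get("action") == "guide"
                and turns[1].get("intent") == "selection"):
            hits.append(session)
    return hits

logs = [
    {"turns": [
        {"query": "air conditioner blow brake", "action": "guide",
         "slots": {"wind direction": ("brake", None)}},
        {"query": "blow the foot", "intent": "selection",
         "match": ("blow foot", "FOOT")},
    ]},
    # A session that succeeded immediately -- filtered out.
    {"turns": [{"query": "air conditioner blow foot", "action": "execute"}]},
]
print(len(screen_sessions(logs)))  # → 1
```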
Slot value pair data is then mined from the screened logs and converted into the training data required for semantic understanding. First, the slots of turn 1 show that parsing of the wind-direction slot text 'brake' failed, so the user was guided for clarification; second, turn 2 shows that the user selected 'foot' among the guidance options; finally, the wind-direction slot text 'brake' is associated with the standardized option 'FOOT', yielding the slot value pair: wind direction = [<brake, FOOT>].
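The mining step above can be sketched as follows: for each failed slot in turn 1, associate its raw text with the standardized option chosen in turn 2. Field names are illustrative assumptions:

```python
# Sketch of slot-value-pair mining from one screened session: the raw text
# that failed to parse in turn 1 ("brake") is paired with the standardized
# option the user chose in turn 2 ("FOOT"). The dict layout is an assumption.
def mine_slot_value_pairs(session):
    turn1, turn2 = session["turns"][0], session["turns"][1]
    pairs = {}
    for slot_name, (raw, std) in turn1["slots"].items():
        if std is None:                 # slot parsing failed in turn 1
            _, chosen = turn2["match"]  # standardized value selected in turn 2
            pairs[slot_name] = (raw, chosen)
    return pairs

session = {"turns": [
    {"query": "air conditioner blow brake", "action": "guide",
     "slots": {"wind direction": ("brake", None)}},
    {"query": "blow the foot", "intent": "selection",
     "match": ("blow foot", "FOOT")},
]}
print(mine_slot_value_pairs(session))  # → {'wind direction': ('brake', 'FOOT')}
```

Each mined pair, together with the original query and intent, becomes one training record for the semantic-understanding module.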
The slot value pairs mined in the previous step are merged with the original query and the intent, and the result is expanded into the training data of the semantic-understanding module.
When the training data has grown to a certain scale or a preset model-update time is reached, a semantic-understanding-model training task is triggered; after testing, the retrained model is deployed to the online dialog system.
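The trigger condition is a simple disjunction, which can be sketched as below; the concrete thresholds (1000 pairs, 7 days) are invented for illustration, since the patent does not specify values:

```python
# Sketch of the retraining trigger: start a semantic-understanding training
# task once mined data reaches a certain scale OR the scheduled update time
# has arrived. Thresholds are illustrative assumptions.
from datetime import datetime, timedelta

def should_retrain(new_pairs, last_train, now,
                   min_pairs=1000, min_interval=timedelta(days=7)):
    return new_pairs >= min_pairs or (now - last_train) >= min_interval

t0 = datetime(2022, 4, 1)
print(should_retrain(1200, t0, datetime(2022, 4, 2)))  # → True (scale reached)
print(should_retrain(10, t0, datetime(2022, 4, 9)))    # → True (time reached)
print(should_retrain(10, t0, datetime(2022, 4, 3)))    # → False
```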
After this data-mining and model-iteration process, the updated semantic model has stronger parsing capability: the next time a user issues a similar query ('air conditioner blow brake'), the dialog system can successfully parse the wind-direction slot as FOOT; dialog guidance is no longer needed, and the user instruction can be executed directly.
The vehicle-mounted voice control method can solve the dialog problem that arises when spoken-language slot parsing fails in existing semantic parsing, helping the user complete an instruction through dialog. This also facilitates the cold start of new dialog skills.
The application also provides a feedback-training process of the semantic-understanding model for sparse spoken slots, forming a data-driven iterative closed loop, solving the long-tail problem of sparse spoken slots in dialog systems and comprehensively improving the in-vehicle voice-interaction experience.
The application also provides a vehicle-mounted voice control device. The vehicle-mounted voice control device comprises a structured-semantic-group acquisition module, a judging module, a guide-information generation module, a feedback acquisition module and a sending module. The structured-semantic-group acquisition module is used for acquiring a structured semantic group; the judging module is used for judging whether the command required to be executed can be executed according to the structured semantic group; the guide-information generation module is used for generating guide information according to the structured semantic group and sending it to the man-machine interaction device after the judging module determines that the command cannot be executed; the feedback acquisition module is used for acquiring command information fed back by the user according to the guide information; and the sending module is used for transmitting the command information to the corresponding executing mechanism so that it works according to the command information.
It should be noted that the foregoing explanations of the method embodiments also apply to the apparatus of this embodiment, and are not repeated herein.
The application also provides an electronic device, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the vehicle-mounted voice control method when executing the computer program.
The application also provides a computer readable storage medium, which stores a computer program, and the computer program can realize the vehicle-mounted voice control method when being executed by a processor.
Fig. 2 is an exemplary block diagram of an electronic device capable of implementing the in-vehicle voice control method according to an embodiment of the present application.
As shown in fig. 2, the electronic device includes an input device 501, an input interface 502, a central processor 503, a memory 504, an output interface 505, and an output device 506. The input interface 502, the central processor 503, the memory 504 and the output interface 505 are connected to each other through a bus 507, and the input device 501 and the output device 506 are connected to the bus 507 through the input interface 502 and the output interface 505, respectively, and further to the other components of the electronic device. Specifically, the input device 501 receives input information from the outside and transmits it to the central processor 503 through the input interface 502; the central processor 503 processes the input information based on computer-executable instructions stored in the memory 504 to generate output information, stores the output information temporarily or permanently in the memory 504, and then transmits it to the output device 506 through the output interface 505; the output device 506 outputs the output information to the outside of the electronic device for use by the user.
That is, the electronic device shown in fig. 2 may also be implemented to include: a memory storing computer-executable instructions; and one or more processors that, when executing the computer-executable instructions, may implement the in-vehicle voice control method described in conjunction with fig. 1.
In one embodiment, the electronic device shown in fig. 2 may be implemented to include: a memory 504 configured to store executable program code; one or more processors 503 configured to run executable program code stored in the memory 504 to perform the in-vehicle voice control method in the above embodiments.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media that implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps. A plurality of units, modules or devices recited in the device claims may also be implemented by one unit or overall device by software or hardware.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks identified in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The processor in this embodiment may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the apparatus/terminal device by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound-playing function or an image-playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random-access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
In this embodiment, if the modules/units integrated in the apparatus/terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program instructing the related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction. Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application, and those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application.
Although the invention has been described in detail hereinabove with respect to a general description and specific embodiments thereof, it will be apparent to those skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. A vehicle-mounted voice control method, characterized by comprising:
step 1: acquiring a structured semantic group;
step 2: judging whether the command required to be executed can be executed according to the structured semantic group, and if not, proceeding to
step 3: generating guide information according to the structured semantic group and sending the guide information to a man-machine interaction device;
step 4: acquiring command information fed back by a user according to the guide information;
step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information.
2. The vehicle-mounted voice control method according to claim 1,
the structured semantic group comprises slot position value pair information;
the step 2: judging whether the command required to be executed can be executed according to the structured semantic group comprises the following steps:
acquiring a preset slot position value pair database, wherein the preset slot position value pair database comprises at least one piece of preset slot position value pair information;
judging whether a preset slot position value in the preset slot position value pair information corresponds to a slot position value in the slot position value pair information, and if not,
judging that the command required to be executed cannot be executed according to the structured semantic group;
the step 3: generating the guide information according to the structured semantic group comprises:
acquiring a guide language database, wherein the guide language database comprises at least one guide condition and a guide group corresponding to each guide condition;
judging whether the structured semantic group meets a guide condition in the guide language database, and if so,
Acquiring a guide group corresponding to the guide condition;
and generating guide information according to the guide group.
3. The vehicle-mounted voice control method according to claim 2, wherein before the obtaining the structured semantic group, the vehicle-mounted voice control method further comprises:
acquiring voice information of a user;
and analyzing the voice information so as to obtain the structured semantic group.
4. The vehicle-mounted voice control method according to claim 2, wherein the command information fed back by the user according to the guidance information is voice information and/or interaction instruction information based on the guidance information.
5. The in-vehicle voice control method according to claim 4, wherein, when the command information is voice information, the step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information comprises the following steps:
analyzing the voice information to obtain a fed-back structured semantic group;
judging whether the command to be executed can be executed according to the fed-back structured semantic group, and if so,
generating an execution command according to the fed-back structured semantic group;
and transmitting the execution command to the corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
6. The vehicle-mounted voice control method according to claim 5, wherein the step 5: transmitting the command information to the corresponding executing mechanism so that the corresponding executing mechanism works according to the command information further comprises:
judging whether the command required to be executed can be executed according to the fed-back structured semantic group, and if not,
repeating the steps 2 to 4.
7. The vehicle-mounted voice control method according to claim 6, wherein, when it is determined that the command to be executed can be executed according to the fed-back structured semantic group, the vehicle-mounted voice control method further comprises:
updating the preset slot position value pair database according to the fed-back structured semantic group, so that a preset slot position value in the preset slot position value pair database corresponds to the slot position value in the slot position value pair information.
8. The vehicle-mounted voice control method according to claim 4, wherein when the command information fed back by the user according to the guidance information is interactive instruction information based on the guidance information, the step 5: transmitting the command information to a corresponding execution mechanism so that the corresponding execution mechanism works according to the command information comprises the following steps:
generating an execution command according to the interaction instruction information;
and transmitting the execution command to a corresponding execution mechanism so that the corresponding execution mechanism works according to the execution command.
9. The vehicle-mounted voice control method according to claim 8, wherein after the generating of the execution command according to the interactive instruction information, the vehicle-mounted voice control method further comprises:
and updating the preset slot position value pair database according to the interaction instruction information, so that a preset slot position value in the preset slot position value pair database corresponds to the slot position value in the slot position value pair information.
10. An in-vehicle voice control device, characterized in that the in-vehicle voice control device includes:
the structured semantic group acquisition module is used for acquiring a structured semantic group;
the judging module is used for judging whether the command required to be executed can be executed according to the structural semantic group;
the guiding information generating module is used for generating guiding information according to the structured semantic group and sending the guiding information to the man-machine interaction device after the judging module determines that the command required to be executed cannot be executed;
the feedback acquisition module is used for acquiring command information fed back by a user according to the guide information;
and the sending module is used for transmitting the command information to the corresponding executing mechanism so as to enable the corresponding executing mechanism to work according to the command information.
CN202210456056.2A 2022-04-27 2022-04-27 Vehicle-mounted voice control method and device Pending CN114842847A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210456056.2A CN114842847A (en) 2022-04-27 2022-04-27 Vehicle-mounted voice control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210456056.2A CN114842847A (en) 2022-04-27 2022-04-27 Vehicle-mounted voice control method and device

Publications (1)

Publication Number Publication Date
CN114842847A true CN114842847A (en) 2022-08-02

Family

ID=82568444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210456056.2A Pending CN114842847A (en) 2022-04-27 2022-04-27 Vehicle-mounted voice control method and device

Country Status (1)

Country Link
CN (1) CN114842847A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115565532A (en) * 2022-12-02 2023-01-03 广州小鹏汽车科技有限公司 Voice interaction method, server and computer readable storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5685000A (en) * 1995-01-04 1997-11-04 U S West Technologies, Inc. Method for providing a linguistically competent dialogue with a computerized service representative
JP2004045900A (en) * 2002-07-12 2004-02-12 Toyota Central Res & Dev Lab Inc Voice interaction device and program
US20160225370A1 (en) * 2015-01-30 2016-08-04 Microsoft Technology Licensing, Llc Updating language understanding classifier models for a digital personal assistant based on crowd-sourcing
DE102018113034A1 (en) * 2017-11-28 2019-05-29 Hyundai Motor Company VOICE RECOGNITION SYSTEM AND VOICE RECOGNITION METHOD FOR ANALYZING A COMMAND WHICH HAS MULTIPLE INTENTIONS
CN110111787A (en) * 2019-04-30 2019-08-09 华为技术有限公司 A kind of semanteme analytic method and server
US20200120395A1 (en) * 2018-10-16 2020-04-16 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof
CN111399629A (en) * 2018-12-29 2020-07-10 TCL Corp. Operation guiding method of terminal equipment, terminal equipment and storage medium
CN111415656A (en) * 2019-01-04 2020-07-14 Shanghai Qinggan Intelligent Technology Co., Ltd. Voice semantic recognition method and device and vehicle
CN112349283A (en) * 2019-08-09 2021-02-09 Hangzhou Joyoung Small Household Appliances Co., Ltd. Household appliance control method based on user intention and intelligent household appliance
US20210065685A1 (en) * 2019-09-02 2021-03-04 Samsung Electronics Co., Ltd. Apparatus and method for providing voice assistant service
CN112530428A (en) * 2020-11-26 2021-03-19 Shenzhen TCL New Technology Co., Ltd. Voice interaction method and device, terminal equipment and computer readable storage medium
WO2022001013A1 (en) * 2020-06-28 2022-01-06 Guangzhou Chengxing Zhidong Automotive Technology Co., Ltd. Voice interaction method, vehicle, server, system, and storage medium
WO2022059979A1 (en) * 2020-09-21 2022-03-24 Samsung Electronics Co., Ltd. Electronic device and control method thereof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115565532A (en) * 2022-12-02 2023-01-03 Guangzhou Xiaopeng Motors Technology Co., Ltd. Voice interaction method, server and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110807332B (en) Training method, semantic processing method, device and storage medium for semantic understanding model
CN108305634B (en) Decoding method, decoder and storage medium
KR102447513B1 (en) Self-learning based dialogue apparatus for incremental dialogue knowledge, and method thereof
CN110795945B (en) Semantic understanding model training method, semantic understanding device and storage medium
EP3201770B1 (en) Methods and apparatus for module arbitration
CN110807333B (en) Semantic processing method, device and storage medium of semantic understanding model
CN104538024A (en) Speech synthesis method, apparatus and equipment
CN113539242A (en) Speech recognition method, speech recognition device, computer equipment and storage medium
US11069351B1 (en) Vehicle voice user interface
EP4086893A1 (en) Natural language understanding method and device, vehicle and medium
CN113421561B (en) Voice control method, voice control device, server, and storage medium
CN114842847A (en) Vehicle-mounted voice control method and device
CN113409757A (en) Audio generation method, device, equipment and storage medium based on artificial intelligence
CN115148212A (en) Voice interaction method, intelligent device and system
CN112017642A (en) Method, device and equipment for speech recognition and computer readable storage medium
US20240046931A1 (en) Voice interaction method and apparatus
CN114327185A (en) Vehicle screen control method and device, medium and electronic equipment
US11211056B1 (en) Natural language understanding model generation
CN112863496A (en) Voice endpoint detection method and device
CN111261149A (en) Voice information recognition method and device
CN107967308B (en) Intelligent interaction processing method, device, equipment and computer storage medium
CN117316159B (en) Vehicle voice control method, device, equipment and storage medium
CN117496972B (en) Audio identification method, audio identification device, vehicle and computer equipment
CN116168704B (en) Voice interaction guiding method, device, equipment, medium and vehicle
CN111797636B (en) Offline semantic analysis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination