CN112581955B - Voice control method, server, voice control system, and readable storage medium


Info

Publication number
CN112581955B
Authority
CN
China
Prior art keywords
information
template
voice
control
round
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011380283.9A
Other languages
Chinese (zh)
Other versions
CN112581955A (en)
Inventor
赵耀
易晖
申众
翁志伟
张又亮
张崇宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Guangzhou Chengxingzhidong Automotive Technology Co., Ltd
Original Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Guangzhou Chengxingzhidong Automotive Technology Co., Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Motors Technology Co Ltd, Guangzhou Chengxingzhidong Automotive Technology Co., Ltd
Priority to CN202011380283.9A
Publication of CN112581955A
Application granted
Publication of CN112581955B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26: Speech to text systems
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a voice control method, a server, a voice control system, and a readable storage medium. The voice control method comprises the following steps: acquiring the nth round of voice information, and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number; acquiring the (n+1)th round of voice information, and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph; generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template; and sending a corresponding control instruction to the vehicle according to the nth round of voice information and the (n+1)th round of voice information. In this voice control method, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.

Description

Voice control method, server, voice control system, and readable storage medium
Technical Field
The present invention relates to the field of intelligent voice control, and in particular, to a voice control method, a server, a voice control system, and a readable storage medium.
Background
In the related art, task-oriented dialogues can be conducted through an in-vehicle interactive system to control vehicle systems accordingly. Because whole-vehicle control involves many different controls, and in multi-turn dialogues users often omit information already given in an earlier turn, the interactive system may be unable to determine the control information corresponding to the current turn, so the fluency of multi-turn dialogue is insufficient.
Disclosure of Invention
Embodiments of the present invention provide a voice control method, a server, a voice control system, and a readable storage medium.
The voice control method provided by the embodiment of the invention is used for controlling the vehicle, and comprises the following steps:
acquiring the nth round of voice information, and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number;
acquiring the (n+1)th round of voice information, and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph;
generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template;
and sending a corresponding control instruction to the vehicle according to the nth round of voice information and the (n+1)th round of voice information.
In this voice control method, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.
In some embodiments, the voice control method includes:
acquiring entity information of a control of the vehicle and mode information of the control;
determining a corresponding relation according to the entity information and the mode information;
generating a template fragment corresponding to the entity information according to the mode information;
and establishing the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
In some embodiments, the vehicle includes a first control corresponding to the nth round of voice information,
and acquiring the nth round of voice information and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph comprises:
determining first text information according to the nth round of voice information, wherein the first text information comprises entity information of the first control;
determining a template fragment corresponding to the first text information according to the knowledge graph and the entity information of the first control, and generating a first mode template;
And generating the first rewrite template according to the first text information and the first mode template.
In some embodiments, the entity information includes control information, action information, and attribute information, the mode information includes control class information, action class information, and attribute class information,
generating the first rewrite template according to the first text information and the first mode template, including:
replacing control class information in the first mode template with control information of the first control, and/or
Replacing the action class information in the first mode template with the action information of the first control, and/or
And replacing the attribute type information in the first mode template with the attribute information of the first control.
In some embodiments, the vehicle includes a second control corresponding to the (n+1)th round of voice information,
and acquiring the (n+1)th round of voice information and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph comprises:
determining second text information according to the n+1th round of voice information, wherein the second text information comprises entity information and matching information of the second control;
Determining a template segment corresponding to the second text information according to the knowledge graph, the entity information and the matching information of the second control, and generating a second mode template;
and generating the second rewrite template according to the second text information and the second mode template.
In some embodiments, the entity information includes control information, action information, and attribute information, the mode information includes control class information, action class information, and attribute class information,
generating the second rewrite template according to the second text information and the second pattern template, including:
replacing control class information in the second mode template with control information of the second control, and/or
Replacing the action class information in the second mode template with the action information of the second control, and/or
And replacing the attribute type information in the second mode template with the attribute information of the second control.
In some embodiments, generating response information corresponding to the n+1th round of voice information according to the first rewrite template and the second rewrite template includes:
matching the second rewrite template with the first rewrite template, removing matching information in the second rewrite template, and determining a missing portion in the second rewrite template;
Generating a fragment to be filled according to the first rewrite template;
filling the missing part by the fragment to be filled;
and generating an entity template and corresponding response information under the condition that the missing part is detected to be completely filled.
In some embodiments, the voice control method includes:
in the case where the missing portion is detected to be not completely filled, filling of the missing portion is canceled.
In some embodiments, the voice control method includes:
and sending out a corresponding control instruction to the second control according to the entity information in the entity template.
The embodiment of the invention provides a server for controlling a vehicle, which comprises a control module and a voice acquisition module, wherein the voice acquisition module is used for acquiring the nth round of voice information and the (n+1)th round of voice information;
the control module is used for determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number; determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph; generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template; and sending a corresponding control instruction to the vehicle according to the nth round of voice information and the (n+1)th round of voice information.
In the server, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.
The voice control system provided by the embodiment of the invention comprises:
the vehicle is used for collecting the nth round of voice information and the (n+1)th round of voice information;
the server is used for acquiring the nth round of voice information and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number; acquiring the (n+1)th round of voice information and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph; generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template; and sending a corresponding control instruction to the vehicle according to the nth round of voice information and the (n+1)th round of voice information.
In the voice control system, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.
In some embodiments, the vehicle is further configured to prompt the response message.
An embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the voice control method according to any of the above embodiments.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a voice control method of an embodiment of the present invention;
FIG. 2 is a block diagram of a speech control system according to an embodiment of the present invention;
FIG. 3 is another flow chart of a voice control method of an embodiment of the present invention;
FIG. 4 is a schematic diagram of a knowledge graph of an embodiment of the invention;
FIG. 5 is another schematic diagram of a knowledge-graph in accordance with an embodiment of the invention;
FIG. 6 is a further flowchart of a voice control method of an embodiment of the present invention;
FIG. 7 is a further flowchart of a voice control method according to an embodiment of the present invention;
FIG. 8 is a further flowchart of a voice control method according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a voice control system according to an embodiment of the present invention.
Description of main reference numerals:
a voice control system 100;
a vehicle 10, a reminder 11;
server 20, control module 21, voice acquisition module 23.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: the connection may be, for example, fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication or interaction between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The disclosure of the present invention provides many different embodiments or examples for implementing different structures of the invention. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials.
Referring to fig. 1 and 2, a voice control method according to an embodiment of the present invention is used for a vehicle 10, and includes:
step S110: acquiring the nth round of voice information, and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number;
step S130: acquiring the (n+1)th round of voice information, and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph;
step S150: generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template;
step S170: issuing a corresponding control instruction to the vehicle 10 according to the nth round of voice information and the (n+1)th round of voice information.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the server 20 is used to control the vehicle 10. The server 20 includes a control module 21 and a voice acquisition module 23. The voice acquisition module 23 is configured to acquire the nth round of voice information and the (n+1)th round of voice information. The control module 21 is configured to determine a first rewrite template according to the nth round of voice information and a preset knowledge graph, where n is a natural number; to determine a second rewrite template according to the (n+1)th round of voice information and the knowledge graph; to generate response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template; and to issue a corresponding control instruction to the vehicle 10 according to the nth round of voice information and the (n+1)th round of voice information.
In the voice control method and the server 20, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.
In the related art, the vehicle can recognize the utterances issued by the user through voice interaction and control the various controls on the vehicle according to the relevant information in those utterances, thereby achieving voice control. In practical applications, a user may need to operate several different controls of the vehicle, or perform several operations on the same control; when conducting multiple rounds of conversation, the user often omits part of the information in a later round for the sake of convenience.
In view of the foregoing, in one embodiment of the present invention, the nth round of voice information and the (n+1)th round of voice information may be obtained by recognizing multiple rounds of a conversation. A first rewrite template containing the relevant information of the vehicle control corresponding to the nth round of voice information may be generated according to the nth round of voice information and a preset knowledge graph, and a second rewrite template containing the relevant information of the control of the vehicle 10 corresponding to the (n+1)th round of voice information may be generated according to the (n+1)th round of voice information and the preset knowledge graph. The information omitted from the (n+1)th round of voice information can then be determined from the first rewrite template and the second rewrite template, and the omitted information in the second rewrite template can be supplemented according to the first rewrite template, so that response information corresponding to the (n+1)th round of voice information is obtained. After the response information is determined, a control instruction corresponding to the (n+1)th round of voice information can be determined according to the response information and issued to the vehicle 10. In other embodiments, the response information corresponding to the (n+1)th round of voice information may instead be matched directly according to the first rewrite template and the second rewrite template.
In summary, when multiple rounds of conversation are conducted, the relevant information in the current round and the relevant information in the previous round are determined, and the current round is rewritten by supplementing the missing information, so that corresponding response information is generated. This avoids the situation in which the user has to repeat an utterance because the control instruction corresponding to the current round is difficult or impossible to identify, thereby improving both the fluency of the conversation and the accuracy of voice control.
n is a natural number. In one embodiment, n may be equal to 0, 1, 2, and so on; it should be understood that round 0 may be regarded as the first round of the entire human-machine conversation. In another embodiment, n may be another natural number not equal to 0, adjusted according to the specific situation. The vehicle 10 includes, but is not limited to, a battery electric vehicle, a hybrid vehicle, an extended-range electric vehicle, a hydrogen-powered vehicle, a fuel vehicle, and the like.
In addition, in the embodiment shown in fig. 2, the voice acquisition module 23 is provided at the server 20, and the vehicle 10 transmits the acquired voice information to the voice acquisition module 23, so that the control module 21 can determine the corresponding rewriting template according to the acquired voice information. In other embodiments, the voice obtaining module 23 may be provided in the vehicle 10 to directly obtain the voice information, and send the voice information to the server through wireless transmission, so that the control module 21 may determine the corresponding rewrite template according to the obtained voice information.
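By way of a non-limiting illustration that is not part of the original disclosure, the multi-round flow of steps S110 to S170 may be sketched roughly as follows in Python; the names VoiceControlServer and toy_builder, and the toy lexicon, are hypothetical and are used only for this example.

    class VoiceControlServer:
        def __init__(self, build_rewrite_template):
            self.build_rewrite_template = build_rewrite_template
            self.first_template = None                 # rewrite template of round n

        def handle_round(self, text):
            slots, is_matching = self.build_rewrite_template(text)   # S110 / S130
            if is_matching and self.first_template:
                # S150: supplement the slots omitted in round n+1 from round n
                for slot, value in self.first_template.items():
                    slots.setdefault(slot, value)
            self.first_template = dict(slots)
            response = "OK: " + str(slots)             # response information (placeholder wording)
            command = dict(slots)                      # S170: control instruction for the vehicle
            return response, command

    def toy_builder(text):
        # Stand-in for the knowledge-graph lookup described later in the description.
        slots = {}
        if "window" in text:
            slots.update({"control": "window", "action": "open", "attribute value": "half"})
        if "secondary driving" in text:
            slots["position"] = "secondary driving"
        if "main driving" in text:
            slots["position"] = "main driving"
        return slots, ("too" in text or "also" in text)

    server = VoiceControlServer(toy_builder)
    print(server.handle_round("open the secondary driving window halfway"))
    print(server.handle_round("the main driving too"))   # inherits control/action/value from round n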
Referring to fig. 3, in some embodiments, the voice control method includes:
step S210: acquiring entity information of a control of a vehicle;
step S230: acquiring mode information of a control;
step S250: determining a corresponding relation according to the entity information and the mode information;
step S270: generating a template fragment corresponding to the entity information according to the mode information;
step S290: and establishing a knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to obtain entity information of a control of the vehicle 10; to obtain mode information of the control; to determine a corresponding relation according to the entity information and the mode information; to generate a template fragment corresponding to the entity information according to the mode information; and to establish the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
Therefore, the corresponding template fragments can be conveniently and rapidly generated according to different controls.
Referring to fig. 4, in the embodiment shown in fig. 4, the corresponding vehicle control is a window, and the entity information of the window includes the window body, the specific position of the window, the switching action performed on the window, the opening/closing width of the window, and the adjustment of that opening/closing width. The mode information includes control class information, action class information and attribute class information of the corresponding control; the attribute class information includes the position attribute and the adjustable attribute of the control, the action class information includes the switching action and the adjusting action on the control, and the control class information is used to determine whether the corresponding control is a switchable control, a control with a position attribute, or an adjustable control.
When the entity information of the window is determined, the links between the pieces of information in the entity information of the window are established according to specific language logic, forming a first corresponding relation (corresponding to the solid arrowed lines between the pieces of entity information in fig. 4). Similarly, the relations between the related pieces of information in the mode information are established, forming a second corresponding relation (corresponding to the solid arrowed lines between the pieces of mode information in fig. 4). A third corresponding relation is formed according to the correspondence between the entity information and the mode information (for example, the window belongs to the switchable controls; corresponding to the dashed arrowed lines between the entity information and the mode information in fig. 4). The corresponding relation is then determined from the first, second and third corresponding relations.
According to the second corresponding relation in the mode information, a plurality of sub-mode information can be generated. Specifically, the embodiment shown in fig. 4 includes first sub-mode information, second sub-mode information, and third sub-mode information. The first sub-mode information comprises control class information and a switching action on the control, the second sub-mode information comprises control class information and a position attribute of the control, and the third sub-mode information comprises control class information, an adjusting action on the control and an adjustable attribute of the control.
More specifically, in the embodiment shown in fig. 4, the template fragments determined for the corresponding entity information according to the first sub-mode information may include "[action][control]" and "[action]". According to the second sub-mode information, the template fragment determined for the corresponding entity information may include "[attribute][control]". According to the third sub-mode information, the template fragments determined for the corresponding entity information may include "[control][action][attribute value]", "[action][control][attribute value]", "[control][action][attribute]", "[action][control][attribute]", "[attribute][control][action][attribute value]", and "[action][control][attribute value]".
In addition, in other embodiments, new entity information and mode information may be added or existing entity information and mode information may be adjusted as the case may be.
Presetting the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragments means establishing the knowledge graph from the entity information, the mode information, the corresponding relation and the template fragments, so that the template fragment corresponding to the entity information of a control can be determined directly through the knowledge graph.
It can be understood that, in practice, a new control can be added to the vehicle according to specific requirements. Once the entity information corresponding to the new control is determined, that entity information can be mapped onto the mode information in the knowledge graph, so that the connections among the controls (including control characteristics, control modes, control ranges and the like) can be established, the template fragments related to the new control can be generated quickly, and the expansion to new services can be supported quickly and efficiently.
Referring to fig. 4 and fig. 5, in the embodiment shown in fig. 5, the corresponding vehicle control is an air conditioner. Once the entity information of the air conditioner is determined, a corresponding relation can quickly be established between the entity information of the air conditioner and the mode information, so that an indirect relation is formed between the window and the air conditioner. In the course of multiple rounds of conversation, even if the controls corresponding to the individual rounds differ, the corresponding control can be obtained and the corresponding template fragment generated from each round. Controls of a vehicle include, but are not limited to, the windows, the air conditioner, the tailgate, the lights, and the seat positions.
Further, in other embodiments, the knowledge-graph includes an entity layer in which entity information is stored and a schema layer in which schema information is stored.
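As a rough, non-limiting sketch that is not prescribed by the disclosure (the dictionary layout and all key names are assumptions), such a knowledge graph could be held in memory as an entity layer, a mode (schema) layer, their correspondences, and the template fragments derived from the sub-mode information:

    knowledge_graph = {
        "entity_layer": {                       # entity information of each control
            "window": {
                "control_info": "window",
                "action_info": ["open/close", "adjust"],
                "attribute_info": ["position", "opening width"],
            },
        },
        "mode_layer": {                         # mode (schema) information: slot patterns
            "switchable_control": ["action", "control"],
            "positional_control": ["attribute", "control"],
            "adjustable_control": ["attribute", "control", "action", "attribute value"],
        },
        "correspondence": {                     # third corresponding relation: entity -> modes
            "window": ["switchable_control", "positional_control", "adjustable_control"],
        },
        "template_fragments": {                 # fragments derived from each sub-mode
            "switchable_control": ["[action][control]"],
            "positional_control": ["[attribute][control]"],
            "adjustable_control": ["[attribute][control][action][attribute value]",
                                   "[action][control][attribute value]"],
        },
    }

    # Extending to a new control (e.g. the air conditioner of fig. 5) only requires a new
    # entity entry plus a correspondence to existing modes; the fragments are reused.
    knowledge_graph["entity_layer"]["air conditioner"] = {
        "control_info": "air conditioner",
        "action_info": ["switch", "adjust to"],
        "attribute_info": ["position", "temperature"],
    }
    knowledge_graph["correspondence"]["air conditioner"] = [
        "switchable_control", "positional_control", "adjustable_control"]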
It should be noted that, in other embodiments, step S210 and step S230 may be performed independently, may be performed synchronously, or may be performed sequentially. In one embodiment, the voice control method may first perform step S210 and then perform step S230, so that entity information and mode information of the vehicle control may be sequentially acquired.
Referring to fig. 6, in some embodiments, the vehicle includes a first control corresponding to an nth round of voice information. Step S110 includes:
Step S111: determining first text information according to the nth round of voice information, wherein the first text information comprises entity information of a first control;
step S113: determining a template fragment corresponding to the first text information according to the knowledge graph and the entity information of the first control, and generating a first mode template;
step S115: and generating a first rewrite template according to the first text information and the first mode template.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to determine first text information according to the nth round of voice information, where the first text information includes entity information of the first control; to determine the template fragment corresponding to the first text information according to the knowledge graph and the entity information of the first control and generate a first mode template; and to generate the first rewrite template according to the first text information and the first mode template.
Thus, the entity information of the corresponding control in the nth round of voice information can be directly determined.
Specifically, referring to fig. 5, in such an embodiment, the nth round of voice information is "open the secondary driving window halfway". It can therefore be determined that the entity information of the first control in the first text information includes "secondary driving", "window", "open/close" and "half", and the corresponding template fragment can then be determined through the knowledge graph to be "[attribute][control][action][attribute value]". This template fragment is used as the first mode template, and according to the first mode template and the first text information, the obtained first rewrite template is "[position: secondary driving][control: window][action: adjust][attribute value: half]". The specific principles of other embodiments may refer to the principles of the embodiment described above. The first control may be one of a window, an air conditioner, a tailgate, a light, and a seat position.
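A minimal sketch of this step, under the assumption of a small hand-written lexicon and the fragment list above (the helper names are hypothetical and not part of the disclosure), might look like:

    import re

    LEXICON = {                                   # entity word -> (slot class, normalized value)
        "secondary driving": ("attribute", "secondary driving"),
        "window": ("control", "window"),
        "open": ("action", "adjust"),
        "half": ("attribute value", "half"),
    }

    FRAGMENTS = [                                 # template fragments taken from the knowledge graph
        "[attribute][control][action][attribute value]",
        "[action][control]",
    ]

    def build_first_rewrite_template(text):
        # 1. Extract the entity information present in the recognized text.
        slots = {slot: value for word, (slot, value) in LEXICON.items() if word in text}
        # 2. Select the fragment whose slot classes match the extracted entities (first mode template).
        for fragment in FRAGMENTS:
            needed = re.findall(r"\[([^\]]+)\]", fragment)
            if set(needed) == set(slots):
                # 3. Replace each slot class with the concrete entity information (first rewrite template).
                return "".join("[{}: {}]".format(s, slots[s]) for s in needed)
        return None

    print(build_first_rewrite_template("open the secondary driving window half"))
    # [attribute: secondary driving][control: window][action: adjust][attribute value: half]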
In some implementations, the entity information includes control information, action information, and attribute information, and the mode information includes control class information, action class information, and attribute class information. Step S115, including:
replacing the control class information in the first mode template with the control information of the first control, and/or
Replacing the action class information in the first mode template with the action information of the first control, and/or
And replacing the attribute type information in the first mode template with the attribute information of the first control.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to replace control class information in the first mode template with control information of the first control, and/or replace action class information in the first mode template with action information of the first control, and/or replace attribute class information in the first mode template with attribute information of the first control.
Thus, the first rewrite template can be simply obtained.
Specifically, in one embodiment, when the control information of the first control is determined from the first text information, the control class information in the first mode template is replaced with the control information of the first control to generate the first rewrite template (e.g., [control] in the first mode template is replaced with [window]). In another embodiment, when the action information of the first control is determined from the first text information, the action class information in the first mode template is replaced with the action information of the first control (e.g., [action] in the first mode template is replaced with [adjust]). In yet another embodiment, when the attribute information of the first control is determined from the first text information, the attribute class information in the first mode template is replaced with the attribute information of the first control (e.g., [attribute] in the first mode template is replaced with [secondary driving]). The specific principles of other embodiments are similar to those of the above embodiments and will not be described in detail herein.
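The "and/or" wording suggests that only those slots for which the text supplies concrete entity information are replaced, while the remaining class placeholders are kept. A small illustrative helper (an assumption, not the disclosed implementation) could be:

    def replace_slots(mode_template_slots, entity_info):
        # mode_template_slots: e.g. ["attribute", "control", "action"]
        # entity_info:         e.g. {"control": "window", "action": "adjust"}
        rewritten = []
        for slot in mode_template_slots:
            if slot in entity_info:                       # replace class info with entity info
                rewritten.append("[{}: {}]".format(slot, entity_info[slot]))
            else:                                         # keep the unreplaced class placeholder
                rewritten.append("[{}]".format(slot))
        return "".join(rewritten)

    print(replace_slots(["attribute", "control", "action"],
                        {"control": "window", "action": "adjust"}))
    # [attribute][control: window][action: adjust]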
Referring to FIG. 7, in some embodiments, the vehicle includes a second control corresponding to the n+1st round of voice information. Step S130 includes:
step S131: determining second text information according to the n+1th round of voice information, wherein the second text information comprises entity information and matching information of a second control;
step S133: determining a template segment corresponding to the second text information according to the knowledge graph, the entity information of the second control and the matching information, and generating a second mode template;
step S135: and generating a second rewrite template according to the second text information and the second mode template.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to determine second text information according to the (n+1)th round of voice information, where the second text information includes entity information and matching information of the second control; to splice at least one corresponding template fragment according to the corresponding relation, the entity information and the matching information of the second control and generate a second mode template; and to generate the second rewrite template according to the second text information and the second mode template.
In this way, the entity information of the corresponding control in the (n+1)th round of voice information can be determined directly, and service coverage and rapid deployment for the second control are achieved.
Specifically, referring to fig. 5, in one such embodiment, the nth round of voice information is "open the secondary driving window halfway" and the (n+1)th round of voice information is "the main driving too". From the (n+1)th round of voice information it can be determined that the entity information of the second control in the second text information includes "main driving" and that the matching information is "too", so the corresponding template fragment can be determined through the knowledge graph and the matching information to be "[attribute value][same]". This template fragment is used as the second mode template, and according to the second mode template and the second text information, the obtained second rewrite template is "[position: main driving][same]". The second control may be one of a window, an air conditioner, a tailgate, a light, and a seat position.
In addition, in other embodiments, when several template fragments corresponding to the second text information are determined according to the knowledge graph, the entity information of the second control and the matching information, those template fragments may be fused through specific language logic to form the second mode template. In one embodiment, the template fragments corresponding to the second text information are "[attribute value][same]" and "[action]", and the template fragment formed by splicing them is "[attribute value][same][action]", which is used as the second mode template. In other embodiments, the second mode template may also be obtained by adding, deleting or modifying template fragments accordingly, which will not be described in detail herein.
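A rough sketch of how the matching information and the entity information of the follow-up round might be spliced into the second rewrite template (hypothetical names; the matching-word list is an assumption) is:

    MATCHING_WORDS = ("too", "also", "as well", "the same")

    def build_second_rewrite_template(text, lexicon):
        slots = {slot: value for word, (slot, value) in lexicon.items() if word in text}
        has_matching = any(word in text for word in MATCHING_WORDS)
        fragments = []
        if slots:                                          # fragment from the entity information
            fragments.append("".join("[{}: {}]".format(s, v) for s, v in slots.items()))
        if has_matching:                                   # fragment contributed by the matching info
            fragments.append("[same]")
        return "".join(fragments), has_matching            # spliced second rewrite template

    lexicon = {"main driving": ("position", "main driving")}
    print(build_second_rewrite_template("the main driving too", lexicon))
    # ('[position: main driving][same]', True)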
In some implementations, the entity information includes control information, action information, and attribute information, and the mode information includes control class information, action class information, and attribute class information. Step S135, including:
replacing the control class information in the second mode template with control information of the second control, and/or
Replacing the action class information in the second mode template with the action information of the second control, and/or
And replacing the attribute type information in the second mode template with the attribute information of the second control.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to replace control class information in the second mode template with control information of the second control, and/or replace action class information in the second mode template with action information of the second control, and/or replace attribute class information in the second mode template with attribute information of the second control.
Thus, the second rewrite template can be simply obtained.
Specifically, in one embodiment, when the control information of the second control is determined from the second text information, the control class information in the second mode template is replaced with the control information of the second control to generate the second rewrite template (e.g., [control] in the second mode template is replaced with [window]). In another embodiment, when the action information of the second control is determined from the second text information, the action class information in the second mode template is replaced with the action information of the second control (e.g., [action] in the second mode template is replaced with [adjust]). In yet another embodiment, when the attribute information of the second control is determined from the second text information, the attribute class information in the second mode template is replaced with the attribute information of the second control (e.g., [attribute] in the second mode template is replaced with [main driving]). The specific principles of other embodiments are similar to those of the above embodiments and will not be described in detail herein.
Referring to fig. 8, in some embodiments, step S150 includes:
step S151: matching the second rewritten template with the first rewritten template, removing matching information in the second rewritten template, and determining a missing part in the second rewritten template;
step S153: generating a fragment to be filled according to the first rewriting template;
step S155: filling the missing part with the segment to be filled;
step S157: and generating an entity template and corresponding response information when the missing part is detected to be completely filled.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to match the second rewrite template with the first rewrite template, remove the matching information in the second rewrite template, and determine the missing portion in the second rewrite template; to generate a fragment to be filled according to the first rewrite template; to fill the missing portion with the fragment to be filled; and to generate an entity template and corresponding response information when the missing portion is detected to be completely filled.
Thus, the second text information can be rewritten, and further a control instruction corresponding to the (n+1) th round of voice information can be obtained.
Specifically, in such an embodiment, the first rewrite template is "[position: main driving][control: air conditioner][action: adjust to][attribute value: eighty degrees]" and the second rewrite template is "[position: secondary driving][same]", so the missing portion in the second rewrite template is determined to be "[control][action][attribute value]". According to the first rewrite template, the fragment to be filled "[control: air conditioner][action: adjust to][attribute value: eighty degrees]" is generated and filled into the missing portion of the second rewrite template until the missing portion is completely filled, thereby obtaining "[position: secondary driving][control: air conditioner][action: adjust to][attribute value: eighty degrees]", which is used as the entity template, and corresponding response information is generated; the response information may be "OK" or "adjusting the secondary driving air conditioner to eighty degrees". The response information can be selected according to the specific situation and can be preset through actual tests.
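A minimal sketch of steps S151 to S157, assuming the rewrite templates are represented as slot dictionaries (the function and variable names are hypothetical), could be:

    def fill_missing(first_slots, second_slots, has_matching):
        # first_slots / second_slots are slot dictionaries such as
        # {"position": "main driving", "control": "air conditioner",
        #  "action": "adjust to", "attribute value": "eighty degrees"}.
        if not has_matching:
            return None, None                              # no elliptical follow-up detected
        missing = [s for s in first_slots if s not in second_slots]       # S151
        fragment_to_fill = {s: first_slots[s] for s in missing}           # S153
        entity_template = dict(second_slots)
        entity_template.update(fragment_to_fill)                          # S155
        if any(v is None for v in entity_template.values()):              # S157 guard
            return None, None                              # cancel filling; re-confirm with the user
        response = "Adjusting the {} {} to {}".format(
            entity_template.get("position", ""), entity_template.get("control", ""),
            entity_template.get("attribute value", ""))
        return entity_template, response

    first = {"position": "main driving", "control": "air conditioner",
             "action": "adjust to", "attribute value": "eighty degrees"}
    second = {"position": "secondary driving"}
    print(fill_missing(first, second, has_matching=True))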
In addition, in other embodiments, the entity templates may be generated by template parsing. In one embodiment, the template resolution may be implemented using a tree-based node matching algorithm.
In some embodiments, the voice control method includes:
in the case where the missing portion is detected to be not completely filled, filling of the missing portion is canceled.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to cancel filling of the missing portion if it is detected that the missing portion is not completely filled.
In this manner, it may be determined that the session information is incomplete.
Specifically, in one embodiment, when it is detected that the missing portion is not completely filled, it can be determined that some entity information is still missing, so the filling of the missing portion is stopped and a voice prompt is issued to reconfirm the nth round of voice information and/or the (n+1)th round of voice information.
In some embodiments, the voice control method includes:
and sending out a corresponding control instruction to the second control according to the entity information in the entity template.
The voice control method of the embodiment of the present invention may be implemented by the server 20 of the embodiment of the present invention. Referring to fig. 2, the control module 21 is configured to issue a corresponding control instruction to the second control according to the entity information in the entity template.
Thus, the corresponding control of the second control can be realized under the condition of ensuring the fluency of the conversation.
Specifically, in one embodiment, the second control is the air conditioner located at the secondary driving position, and the entity template is "[position: secondary driving][control: air conditioner][action: adjust to][attribute value: eighty degrees]", so a control instruction can be sent to the secondary driving air conditioner, causing it to be turned on and its temperature to be adjusted to eighty degrees.
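As a non-limiting illustration (the command schema and the send_to_vehicle callback are assumptions, not part of the disclosure), the entity template might be mapped onto a control instruction as follows:

    def issue_control_instruction(entity_template, send_to_vehicle):
        command = {
            "target": entity_template["control"],             # e.g. "air conditioner"
            "position": entity_template.get("position"),      # e.g. "secondary driving"
            "action": entity_template["action"],              # e.g. "adjust to"
            "value": entity_template.get("attribute value"),  # e.g. "eighty degrees"
        }
        send_to_vehicle(command)                              # e.g. forward the command to the vehicle
        return command

    entity_template = {"position": "secondary driving", "control": "air conditioner",
                       "action": "adjust to", "attribute value": "eighty degrees"}
    issue_control_instruction(entity_template, send_to_vehicle=print)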
Referring to fig. 9, a voice control system 100 according to an embodiment of the present invention includes:
a vehicle 10 for collecting an nth round of voice information and an n+1th round of voice information;
the server 20 is configured to acquire the nth round of voice information and determine a first rewrite template according to the nth round of voice information and a preset knowledge graph, where n is a natural number; to acquire the (n+1)th round of voice information and determine a second rewrite template according to the (n+1)th round of voice information and the knowledge graph; to generate response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template; and to issue a corresponding control instruction to the vehicle 10 according to the nth round of voice information and the (n+1)th round of voice information.
In the above-mentioned voice control system 100, when the user issues voice commands over multiple rounds of conversation, a later round can be rewritten according to the relevant information of the earlier round and corresponding response information can be generated, so that the accuracy of voice control is improved and the conversation remains fluent.
In particular, the knowledge graph may be stored in the server 20. In one embodiment, when multiple rounds of voice information are collected by the vehicle 10, they may be uploaded to the server 20, so that the server 20 obtains, through the knowledge graph, the first rewrite template and the second rewrite template corresponding to the nth round of voice information and the (n+1)th round of voice information, and then generates the response information corresponding to the (n+1)th round of voice information. A corresponding control instruction is generated according to the nth round of voice information and the (n+1)th round of voice information and sent to the vehicle 10, so that the vehicle 10 controls the corresponding control according to the control instruction. In this way, even if part of the information in the (n+1)th round of voice information is missing, the purpose of that round can be obtained through the nth round of voice information, the user does not need to confirm the (n+1)th round of voice information again, the user can conveniently issue shorter and more colloquial voice instructions, and the fluency of the conversation is ensured. In one embodiment, the server 20 is a cloud server.
In addition, in other embodiments, the speech control system 100 may store the rewrite template corresponding to the nth round of speech information, so that the first rewrite template may be conveniently read directly in the same or similar dialogue. The vehicle 10 includes, but is not limited to, an electric-only vehicle, a hybrid vehicle, an extended range electric vehicle, a hydrogen-powered vehicle, and the like.
In some embodiments, the vehicle 10 is also used to prompt for response messages. In this manner, the user may be alerted that the vehicle 10 has been correspondingly controlled in accordance with the n+1st round of voice information.
Specifically, in some embodiments, the vehicle 10 includes a reminder 11. After the vehicle 10 receives the response message, the response message may be prompted to the user by the prompting element 11. The prompting element 11 can comprise a buzzer, an LED lamp and a display screen, and the response information can comprise voice, alarm prompting sound, light with specific change rules and characters on the display screen.
The embodiment of the invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the voice control method of any of the above embodiments.
For example, in the case where the computer program is executed, the following steps may be implemented:
step S110: acquiring the nth round of voice information, and determining a first rewrite template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number;
step S130: acquiring the (n+1)th round of voice information, and determining a second rewrite template according to the (n+1)th round of voice information and the knowledge graph;
step S150: generating response information corresponding to the (n+1)th round of voice information according to the first rewrite template and the second rewrite template;
step S170: sending a corresponding control instruction to the vehicle according to the nth round of voice information and the (n+1)th round of voice information.
The computer-readable storage medium may be provided in a vehicle or in a terminal such as a server, and the vehicle can communicate with the terminal to acquire a corresponding program.
It is understood that the computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and so forth. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form, among others.
In some embodiments of the present invention, the control module may be a single-chip microcomputer chip, integrated with a processor, a memory, a communication module, etc. The processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device, such as a computer-based system, a system including a processing module, or another system that can fetch the instructions from the instruction execution system, apparatus or device and execute them.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "certain embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (12)

1. A voice control method for controlling a vehicle, the voice control method comprising:
acquiring the nth round of voice information, and determining a first rewriting template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number;
acquiring the n+1th round of voice information, and determining a second rewriting template according to the n+1th round of voice information and the knowledge graph;
generating response information corresponding to the n+1th round of voice information according to the first rewriting template and the second rewriting template;
sending a corresponding control instruction to the vehicle according to the nth round of voice information and the n+1th round of voice information;
the voice control method further comprises the following steps:
acquiring entity information of a control of the vehicle and mode information of the control;
determining a corresponding relation according to the entity information and the mode information;
generating a template fragment corresponding to the entity information according to the mode information;
and establishing the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
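To make the knowledge-graph construction recited in claim 1 concrete, the following is a minimal Python sketch, not the patented implementation; the class name, the example entities ("sunroof", "open", "half"), and the fragment format are assumptions made purely for illustration.

```python
# Minimal sketch of the knowledge graph described in claim 1 (illustration only,
# not the patented implementation). Entity names, mode labels and the fragment
# format are hypothetical.

class KnowledgeGraph:
    """Stores entity information of vehicle controls, the mode (class) information
    each entity belongs to, their correspondence, and a template fragment per entity."""

    def __init__(self):
        self.correspondence = {}   # entity information -> mode information
        self.fragments = {}        # entity information -> template fragment

    def add_entity(self, entity: str, mode: str) -> None:
        # Determine the correspondence between entity information and mode information.
        self.correspondence[entity] = mode
        # Generate a template fragment for the entity according to its mode information.
        self.fragments[entity] = f"[{mode}={entity}]"


# Establish the knowledge graph from a control's entity information and mode information.
graph = KnowledgeGraph()
graph.add_entity("sunroof", "control")    # control information -> control class
graph.add_entity("open", "action")        # action information  -> action class
graph.add_entity("half", "attribute")     # attribute information -> attribute class

print(graph.fragments["open"])            # [action=open]
```

In this reading, the graph is little more than two lookup tables; a production system would likely hold richer relations, but the claim only requires that entity information, mode information, their correspondence, and the generated template fragments be stored together.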
2. The voice control method of claim 1, wherein the vehicle includes a first control corresponding to the nth round of voice information,
and acquiring the nth round of voice information and determining the first rewriting template according to the nth round of voice information and the preset knowledge graph comprises:
determining first text information according to the nth round of voice information, wherein the first text information comprises entity information of the first control;
determining a template fragment corresponding to the first text information according to the knowledge graph and the entity information of the first control, and generating a first mode template;
and generating the first rewriting template according to the first text information and the first mode template.
3. The voice control method of claim 2, wherein the entity information includes control information, action information, and attribute information, and the mode information includes control class information, action class information, and attribute class information,
and generating the first rewriting template according to the first text information and the first mode template comprises:
replacing the control class information in the first mode template with the control information of the first control; and/or
replacing the action class information in the first mode template with the action information of the first control; and/or
replacing the attribute class information in the first mode template with the attribute information of the first control.
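Claims 2 and 3 describe filling class-level slots of a mode template with the concrete entity information recognized in the round-n utterance. Below is a hedged sketch of that substitution step; the slot syntax ("<control>", "<action>", "<attribute>"), the helper name, and the example utterance are assumptions rather than the patent's notation.

```python
# Hedged sketch of the slot substitution in claims 2-3 (illustrative only).
# The "<control>/<action>/<attribute>" slot syntax is an assumption.

def build_rewrite_template(mode_template: str, entities: dict) -> str:
    """Replace control-class, action-class and attribute-class slots in the mode
    template with the entity information extracted from the current utterance;
    slots with no recognized entity are left unfilled."""
    rewrite = mode_template
    for slot, value in entities.items():
        if value is not None:
            rewrite = rewrite.replace(slot, value)
    return rewrite


# Round n: "open the sunroof" -> first text information contains control + action only.
first_mode_template = "<action> <control> <attribute>"
first_entities = {"<control>": "sunroof", "<action>": "open", "<attribute>": None}
first_rewrite_template = build_rewrite_template(first_mode_template, first_entities)
print(first_rewrite_template)   # open sunroof <attribute>
```

The unfilled "<attribute>" slot is kept on purpose: it is exactly the kind of residual class information that the later rounds can fill in or be matched against.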
4. The voice control method of claim 2, wherein the vehicle includes a second control corresponding to the n+1th round of voice information,
and acquiring the n+1th round of voice information and determining the second rewriting template according to the n+1th round of voice information and the knowledge graph comprises:
determining second text information according to the n+1th round of voice information, wherein the second text information comprises entity information and matching information of the second control;
determining a template fragment corresponding to the second text information according to the knowledge graph, the entity information and the matching information of the second control, and generating a second mode template;
and generating the second rewriting template according to the second text information and the second mode template.
5. The voice control method of claim 4, wherein the entity information includes control information, action information, and attribute information, and the mode information includes control class information, action class information, and attribute class information,
and generating the second rewriting template according to the second text information and the second mode template comprises:
replacing the control class information in the second mode template with the control information of the second control; and/or
replacing the action class information in the second mode template with the action information of the second control; and/or
replacing the attribute class information in the second mode template with the attribute information of the second control.
6. The voice control method according to claim 4, wherein generating response information corresponding to the n+1th round of voice information according to the first rewriting template and the second rewriting template comprises:
matching the second rewriting template with the first rewriting template, removing the matching information in the second rewriting template, and determining a missing portion in the second rewriting template;
generating a fragment to be filled according to the first rewriting template;
filling the missing portion with the fragment to be filled;
and generating an entity template and corresponding response information when it is detected that the missing portion is completely filled.
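As a rough illustration of the matching-and-filling flow in claims 6 and 7, the sketch below treats both rewrite templates as token lists and fills any class slot left in the round-(n+1) template from the same position of the round-n template. The token representation, all names, and the example utterances are assumptions, and the handling of matching information (for example, a pronoun such as "it") is simplified away.

```python
# Rough sketch of claims 6-7 (illustration only): fill slots left in the second
# rewrite template with fragments taken from the first rewrite template.
# Token representation, names and the example utterances are assumptions.

def complete_second_template(first_tokens, second_tokens):
    """Return the filled entity template and whether every missing portion
    could be filled from the first rewrite template."""
    filled, fully_filled = [], True
    for first_tok, second_tok in zip(first_tokens, second_tokens):
        is_slot = second_tok.startswith("<") and second_tok.endswith(">")
        if not is_slot:
            filled.append(second_tok)              # already concrete in round n+1
        elif not (first_tok.startswith("<") and first_tok.endswith(">")):
            filled.append(first_tok)               # fill the missing portion from round n
        else:
            filled.append(second_tok)              # nothing usable to fill with
            fully_filled = False
    return filled, fully_filled


first_rewrite  = ["open", "sunroof", "<attribute>"]    # round n:   "open the sunroof"
second_rewrite = ["<action>", "<control>", "halfway"]  # round n+1: "make it halfway"
entity_template, ok = complete_second_template(first_rewrite, second_rewrite)
if ok:
    print("response:", " ".join(entity_template))      # response: open sunroof halfway
else:
    print("filling cancelled")                         # claim 7: cancel the filling
```

A real system would align slots by class (control, action, attribute) rather than by position; positional alignment is used here only to keep the example short.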
7. The voice control method according to claim 6, characterized in that the voice control method comprises:
canceling the filling of the missing portion when it is detected that the missing portion is not completely filled.
8. The voice control method according to claim 6, characterized in that the voice control method comprises:
sending a corresponding control instruction to the second control according to the entity information in the entity template.
9. A server for controlling a vehicle, characterized in that the server comprises a control module and a voice acquisition module for acquiring the nth round of voice information and the n+1th round of voice information,
the control module is used for determining a first rewriting template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number; and
for determining a second rewriting template according to the n+1th round of voice information and the knowledge graph; and
for generating response information corresponding to the n+1th round of voice information according to the first rewriting template and the second rewriting template; and
for sending a corresponding control instruction to the vehicle according to the nth round of voice information and the n+1th round of voice information;
the control module is further configured to:
acquiring entity information of a control of the vehicle and mode information of the control;
determining a corresponding relation according to the entity information and the mode information;
generating a template fragment corresponding to the entity information according to the mode information;
and establishing the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
10. A voice control system, comprising:
a vehicle for collecting the nth round of voice information and the n+1th round of voice information; and
a server for acquiring the nth round of voice information and determining a first rewriting template according to the nth round of voice information and a preset knowledge graph, wherein n is a natural number; and
for acquiring the n+1th round of voice information and determining a second rewriting template according to the n+1th round of voice information and the knowledge graph; and
for generating response information corresponding to the n+1th round of voice information according to the first rewriting template and the second rewriting template; and
for sending a corresponding control instruction to the vehicle according to the nth round of voice information and the n+1th round of voice information;
wherein the server is further configured to:
acquiring entity information of a control of the vehicle and mode information of the control;
determining a corresponding relation according to the entity information and the mode information;
generating a template fragment corresponding to the entity information according to the mode information;
and establishing the knowledge graph according to the entity information, the mode information, the corresponding relation and the template fragment.
11. The voice control system of claim 10, wherein the vehicle is further configured to prompt the response information.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the voice control method according to any one of claims 1-8.
CN202011380283.9A 2020-11-30 2020-11-30 Voice control method, server, voice control system, and readable storage medium Active CN112581955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011380283.9A CN112581955B (en) 2020-11-30 2020-11-30 Voice control method, server, voice control system, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011380283.9A CN112581955B (en) 2020-11-30 2020-11-30 Voice control method, server, voice control system, and readable storage medium

Publications (2)

Publication Number Publication Date
CN112581955A CN112581955A (en) 2021-03-30
CN112581955B true CN112581955B (en) 2024-03-08

Family

ID=75128067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011380283.9A Active CN112581955B (en) 2020-11-30 2020-11-30 Voice control method, server, voice control system, and readable storage medium

Country Status (1)

Country Link
CN (1) CN112581955B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113611297A (en) * 2021-06-25 2021-11-05 北京智芯微电子科技有限公司 Intelligent control method and device and intelligent product
CN113239178A (en) * 2021-07-09 2021-08-10 肇庆小鹏新能源投资有限公司 Intention generation method, server, voice control system and readable storage medium
CN113990299B (en) * 2021-12-24 2022-05-13 广州小鹏汽车科技有限公司 Voice interaction method and device, server and readable storage medium thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513593A (en) * 2015-11-24 2016-04-20 南京师范大学 Intelligent human-computer interaction method drove by voice
CN109033063A (en) * 2017-06-09 2018-12-18 微软技术许可有限责任公司 The machine inference of knowledge based map
CN109616108A (en) * 2018-11-29 2019-04-12 北京羽扇智信息科技有限公司 More wheel dialogue interaction processing methods, device, electronic equipment and storage medium
CN110313153A (en) * 2017-02-14 2019-10-08 微软技术许可有限责任公司 Intelligent digital assistance system
CN111143525A (en) * 2019-12-17 2020-05-12 广东广信通信服务有限公司 Vehicle information acquisition method and device and intelligent vehicle moving system
CN111339246A (en) * 2020-02-10 2020-06-26 腾讯云计算(北京)有限责任公司 Query statement template generation method, device, equipment and medium
CN111640432A (en) * 2020-05-27 2020-09-08 北京声智科技有限公司 Voice control method and device, electronic equipment and storage medium
CN111930913A (en) * 2020-08-14 2020-11-13 上海茂声智能科技有限公司 Knowledge graph-based question and answer method, system, device, equipment and medium

Also Published As

Publication number Publication date
CN112581955A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN112581955B (en) Voice control method, server, voice control system, and readable storage medium
US8964995B2 (en) Acoustic diagnosis and correction system
CN106990948B (en) Application upgrading processing method and device
US11260828B2 (en) Method and apparatus for controlling vehicle, and vehicle
CN112634888A (en) Voice interaction method, server, voice interaction system and readable storage medium
CA3095590A1 (en) Diagnostic system and method for processing data of a motor vehicle
WO2023125002A1 (en) Voice interaction method and apparatus, model training method, vehicle and storage medium
CN114868113A (en) Decentralized cluster federation in a computer network node management system
CN115618567A (en) Drive-by-wire function test method for vehicle, electronic device, medium, and program product
CN113535225B (en) Environment configuration file processing method, device, equipment and medium of application software
EP3806012A1 (en) Identity verification purogram, management apparatus, and method for identity verification
GB2577488A (en) Improvements to system controllers
CN112242909B (en) Method and device for generating management template, electronic equipment and storage medium
CN109343874B (en) Unmanned vehicle upgrading method, device, equipment and computer readable storage medium
US11936532B2 (en) Dynamic IoT device definition and visualization
US11924037B2 (en) IoT deployment configuration template
EP3806005A1 (en) Identity verification program, control apparatus, and method for identity verification
CN114299929A (en) Voice interaction method and device, server and storage medium
US11345367B2 (en) Method and device for generating control signals to assist occupants in a vehicle
KR102064519B1 (en) Method for updating software of electronic control unit of vehicle, apparatus and system thereof
CN113775415B (en) Driving state determining method and device for indicator lamp
CN108663882A (en) Light-source system and the method for generating the light beam of light combination with target brightness value
JP6609235B2 (en) Electronic control unit
CN113163249B (en) Method, device and application for optimizing recommended code value
CN116620331B (en) Vehicle control method, apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant