CN111597808B - Instrument panel drawing processing method and device, electronic equipment and storage medium - Google Patents

Instrument panel drawing processing method and device, electronic equipment and storage medium

Info

Publication number
CN111597808B
CN111597808B (application CN202010334802.1A)
Authority
CN
China
Prior art keywords
dialogue
word slot
intention
instrument panel
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010334802.1A
Other languages
Chinese (zh)
Other versions
CN111597808A (en)
Inventor
张雪婷
刘畅
张阳
谢奕
杨双全
郑灿祥
季昆鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010334802.1A priority Critical patent/CN111597808B/en
Publication of CN111597808A publication Critical patent/CN111597808A/en
Application granted granted Critical
Publication of CN111597808B publication Critical patent/CN111597808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application discloses an instrument panel drawing processing method and device, electronic equipment and a storage medium, relating to the field of big data. The specific implementation scheme is as follows: receiving input voice information and parsing it to obtain the corresponding dialogue text; extracting a plurality of pieces of word-slot information from the dialogue text, determining the word-slot field set corresponding to that information, and determining the dialogue intention of the dialogue text according to the field set; generating a meter control instruction according to the dialogue intention and the word-slot information; and drawing the target instrument panel according to the meter control instruction. Interaction with the instrument panel thus takes place in a smoother, more convenient and more natural voice mode, improving both the efficiency of instrument panel drawing and the user interaction experience.

Description

Instrument panel drawing processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of big data in the field of data processing, and in particular, to a method and apparatus for drawing and processing a dashboard, an electronic device, and a storage medium.
Background
In general, when data is visualized and displayed through a dashboard, the display layout, the display content and the display form often need to be changed or the dimension and the filtering mode of the displayed data need to be changed due to different requirements and attention points.
In the related art, a fixed sentence is recognized through voice and, based on a preset command mapped to that sentence, data processing is performed on the fixed content of a fixed chart. This mode depends heavily on the accuracy and completeness of voice recognition, frequently loses control, and offers a low degree of freedom in customizing instrument panel control.
Disclosure of Invention
Provided are a dashboard drawing processing method, a dashboard drawing processing device, electronic equipment and a storage medium.
According to a first aspect, there is provided a dashboard drawing processing method, including:
receiving input voice information, and analyzing the voice information to obtain a corresponding dialogue text;
extracting a plurality of word slot information from the dialogue text, determining word slot field sets corresponding to the word slot information, and determining dialogue intention of the dialogue text according to the word slot field sets;
generating instrument control instructions according to the dialogue intention of the dialogue text and the word slot information;
and drawing the target instrument panel according to the instrument control instruction.
According to a second aspect, there is provided an instrument panel drawing processing apparatus including:
the receiving and analyzing module is used for receiving the input voice information and analyzing the voice information to obtain a corresponding dialogue text;
the extraction and determination module is used for extracting a plurality of word slot information from the dialogue text, determining word slot field sets corresponding to the word slot information, and determining the dialogue intention of the dialogue text according to the word slot field sets;
the generation module is used for generating instrument control instructions according to the dialogue intention of the dialogue text and the word slot information;
and the processing module is used for drawing the target instrument panel according to the instrument control instruction.
An embodiment of a third aspect of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the instrument panel drawing processing method according to the embodiment of the first aspect.
An embodiment of a fourth aspect of the present application proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the instrument panel drawing processing method according to the embodiment of the first aspect.
One embodiment of the above application has the following advantages or benefits:
receiving input voice information and parsing it to obtain the corresponding dialogue text; extracting a plurality of pieces of word-slot information from the dialogue text, determining the word-slot field set corresponding to that information, and determining the dialogue intention of the dialogue text according to the field set; generating a meter control instruction according to the dialogue intention and the word-slot information; and drawing the target instrument panel according to the meter control instruction. Interaction with the instrument panel thus takes place in a smoother, more convenient and more natural voice mode, improving both the efficiency of instrument panel drawing and the user interaction experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application, wherein:
fig. 1 is a flowchart of a dashboard drawing processing method according to a first embodiment of the present application;
fig. 2 is a flowchart of a dashboard drawing processing method according to a second embodiment of the present application;
fig. 3 is a flowchart of a dashboard drawing processing method according to a third embodiment of the present application;
FIG. 4 is an exemplary diagram of a dashboard rendering processing method according to an embodiment of the present application;
fig. 5 is a schematic structural view of an instrument panel drawing processing apparatus provided according to a fourth embodiment of the present application;
fig. 6 is a schematic structural view of an instrument panel drawing processing apparatus provided according to a fifth embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a method of dashboard rendering processing according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The instrument panel drawing processing method, the instrument panel drawing processing device, the electronic equipment and the storage medium according to the embodiment of the application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a dashboard drawing processing method according to a first embodiment of the present application.
Specifically, in the existing mode, a fixed sentence is recognized through voice and, based on a preset command mapped to that sentence, data processing is performed on the fixed content of a fixed chart. This mode depends heavily on the accuracy and completeness of voice recognition; the control accuracy of instrument panel drawing is low, as is the degree of freedom in customizing instrument panel control.
The application provides an instrument panel drawing processing method: receiving input voice information and parsing it to obtain the corresponding dialogue text; extracting a plurality of pieces of word-slot information from the dialogue text, determining the word-slot field set corresponding to that information, and determining the dialogue intention of the dialogue text according to the field set; generating a meter control instruction according to the dialogue intention and the word-slot information; and drawing the target instrument panel according to the meter control instruction. Interaction with the instrument panel thus takes place in a smoother, more convenient and more natural voice mode, improving both the efficiency of instrument panel drawing and the user interaction experience.
As shown in fig. 1, the dashboard drawing processing method may include the steps of:
step 101, receiving input voice information, analyzing the voice information and obtaining corresponding dialogue text.
In the embodiment of the application, the user interacts with the instrument panel through voice: the user inputs voice information through a related device as needed, and the system receives that voice information and parses it to obtain the corresponding dialogue text.
It can be understood that there are various ways of receiving the input voice information, which can be selected according to the actual application requirements; for example:
As an example, an artificial-intelligence speech recognition application program interface built into an applet on the mobile terminal monitors the speech spoken by the user and converts it into dialogue text by means of a speech-to-text algorithm or the like.
As another example, a voice receiving device such as a microphone provided on the instrument panel terminal receives the voice uttered by the user and converts it into dialogue text by means of a speech-to-text algorithm or the like.
Step 102, extracting a plurality of word slot information from the dialogue text, determining word slot field sets corresponding to the word slot information, and determining the dialogue intention of the dialogue text according to the word slot field sets.
Specifically, each dialogue intention presets several word-slot fields. For example, dialogue intention S includes word-slot fields a, b and c, so the field set {a, b, c} in turn determines the unique intention S; for the intention of checking the weather, the corresponding word-slot fields are place, time, weather, and so on.
The dialogue intention of the dialogue text can be determined in a number of ways, selected as needed: for example, by processing the dialogue text with a pre-trained dialogue template set or dialogue model, or by semantic parsing of the dialogue text.
As one possible implementation, the dialogue text is segmented to generate a plurality of word segments; the segments are matched against the dictionary values in the dialogue template set, and the successfully matched segments are taken as word-slot information. The word-slot field set corresponding to that information is then determined from the correspondence between dialogue intentions and word-slot fields in the dialogue template set, and the dialogue intention of the dialogue text is determined from the field set.
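The matching step above can be sketched as follows. This is a minimal illustration, not the application's actual implementation: the segmenter output, the slot dictionaries and all names are placeholders.

```python
def extract_word_slots(segments, slot_dictionaries):
    """Return {slot_field: matched_segment} for every segment that appears
    in some field's dictionary; unmatched segments are ignored."""
    slots = {}
    for field, dictionary in slot_dictionaries.items():
        for seg in segments:
            if seg in dictionary:
                slots[field] = seg
    return slots

# Illustrative dictionaries for three word-slot fields.
slot_dictionaries = {
    "chart": {"line graph", "bar chart", "pie chart"},
    "metric": {"sales volume", "sales ratio"},
    "period": {"first quarter", "second quarter"},
}

# Hypothetical segmenter output for "show sales volume line graph for first quarter".
segments = ["show", "sales volume", "line graph", "for", "first quarter"]
slots = extract_word_slots(segments, slot_dictionaries)
```

The segments "show" and "for" match no dictionary and are dropped; the remaining three become word-slot information keyed by field.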
And step 103, generating instrument control instructions according to the dialogue intention of the dialogue text and the word slot information.
And 104, drawing the target instrument panel according to the instrument control instruction.
Specifically, having determined the dialogue intention of the dialogue text and the plurality of pieces of word-slot information in it, the meter control instruction is generated from both. For example, for the dialogue intention of drawing a chart with word-slot information such as "first quarter", "sales volume" and "line graph", a meter control instruction to generate a first-quarter sales-volume line graph is produced.
There are various ways of generating the meter control instruction from the dialogue intention and the word-slot information, for example:
In a first example, the dialogue intention of the dialogue text is converted into a control type according to a preset transmission protocol, the word-slot information is converted into control parameters, and a meter control instruction matched with the transmission protocol is generated from the control type and the control parameters.
In a second example, a control manner is determined according to the dialogue intention, and the word-slot information is assembled into the meter control instruction.
Further, the target instrument panel is drawn according to the meter control instruction. It can be understood that, if the voice information is received at the instrument panel terminal, drawing can proceed directly from the instruction; if it is received through an applet or the like, the corresponding client draws the target instrument panel according to the instruction.
As one possible implementation manner, the device identifier of the target instrument panel is obtained, and a broadcast message carrying the device identifier and the instrument control instruction is sent, so that the client corresponding to the device identifier draws the target instrument panel according to the instrument control instruction.
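A minimal sketch of this broadcast step, assuming a JSON message format; the field names and the device identifier are hypothetical, not the application's actual format:

```python
import json

def build_broadcast(device_id, instruction):
    """Wrap a meter control instruction with the target device identifier so
    that only the client whose identifier matches acts on the message."""
    return json.dumps({"device_id": device_id, "instruction": instruction})

message = build_broadcast(
    "dashboard-01",  # hypothetical device identifier of the target instrument panel
    {"type": "draw_chart",
     "params": {"chart": "line graph", "metric": "sales volume",
                "period": "first quarter"}},
)
```

Every client receiving the broadcast compares its own identifier with `device_id` and only the matching one draws.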
According to the instrument panel drawing processing method, input voice information is received and parsed to obtain the corresponding dialogue text; a plurality of pieces of word-slot information are extracted from the dialogue text, the corresponding word-slot field set is determined, and the dialogue intention of the dialogue text is determined according to the field set; a meter control instruction is generated according to the dialogue intention and the word-slot information; and the target instrument panel is drawn according to the meter control instruction. Interaction with the instrument panel thus takes place in a smoother, more convenient and more natural voice mode, improving both the efficiency of instrument panel drawing and the user interaction experience.
Based on the above embodiment, the dialogue text may be processed with a pre-trained dialogue template set to determine its dialogue intention. To make this clear to those skilled in the art, the specific process of training the dialogue template set from a dialogue sample set is described in detail below with reference to fig. 2.
Specifically, as shown in fig. 2, includes:
step 201, obtaining a dialogue sample set, wherein each dialogue sample includes: a dialog intention, a plurality of word slot fields corresponding to the dialog intention, and a plurality of dictionary values corresponding to each word slot field.
Step 202, training a dialogue template set containing dialogue intention and word slot field corresponding relation according to the dialogue sample set.
In the embodiment of the present application, the dialogue sample set consists of dialogue samples drawn from the instrument panel drawing scenario, such as "start a 3 x 3 layout", "show the first-quarter sales-volume line graph in the first grid", and "show the sales ratio of product A and product B in the second grid".
Each dialogue sample includes a dialogue intention, the word-slot fields corresponding to that intention, and the dictionary values corresponding to each field. For the sample "show the first-quarter sales-volume line graph in the first grid", for example, the dialogue intention is "chart drawing", a word-slot field of that intention is "chart", and a dictionary value of the "chart" field is "line graph".
A dialogue template set containing the correspondence between dialogue intentions and word-slot fields is then trained from the dialogue sample set. Each template in the set is composed of several word-slot fields and defines one dialogue intention. For a weather-checking template, for example, the word-slot fields are place, time, weather, and so on, and each field has its dictionary values: the "place" field may take values such as "Beijing", "Shanghai" and "Guangzhou". Training this template set in advance speeds up subsequent voice interaction and so improves the user interaction experience.
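The structure of such a dialogue template set, and the way a word-slot field set determines a unique intention, can be sketched as follows; the intentions, fields and dictionary values are illustrative only:

```python
# A dialogue template set: each intention maps to its word-slot fields, and
# each field to its dictionary values.
TEMPLATES = {
    "check_weather": {
        "place": {"Beijing", "Shanghai", "Guangzhou"},
        "time": {"today", "tomorrow"},
        "weather": {"weather", "temperature"},
    },
    "draw_chart": {
        "chart": {"line graph", "pie chart"},
        "metric": {"sales volume"},
        "period": {"first quarter"},
    },
}

def intent_for_fields(fields, templates):
    """Return the first intention whose word-slot fields cover the observed
    field set, mirroring how a field set determines a unique intention."""
    for intent, slot_fields in templates.items():
        if fields and fields <= set(slot_fields):
            return intent
    return None
```

For instance, the observed field set {"place", "time"} resolves to "check_weather", while a field set matching no template yields no intention.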
Specifically, to make extraction concrete, take the dialogue text "buy the train ticket from Shanghai to Beijing" as an example. The extracted word-slot information includes "Shanghai", "Beijing" and "train ticket"; the corresponding word-slot field set is "departure place", "destination" and the like; and from that field set the dialogue intention of the dialogue text is determined to be buying a train ticket.
Thus a dialogue sample set is obtained, each sample comprising a dialogue intention, the word-slot fields corresponding to it, and the dictionary values corresponding to each field. The dialogue template set containing the correspondence between intentions and word-slot fields is trained from this sample set, so that the dialogue intention of subsequent dialogue texts can be recognized directly, improving voice-interaction efficiency and thereby the user interaction experience.
Fig. 3 is a flowchart of a dashboard drawing processing method according to a third embodiment of the present application.
Step 301, receiving input voice information, and analyzing the voice information to obtain corresponding dialogue text.
In the embodiment of the application, the user interacts with the instrument panel through voice: the user inputs voice information through a related device as needed, and the system receives that voice information and parses it to obtain the corresponding dialogue text.
It can be understood that there are various ways of receiving the input voice information, which can be selected according to the actual application requirements; for example:
As an example, an artificial-intelligence speech recognition application program interface built into an applet on the mobile terminal monitors the speech spoken by the user and converts it into dialogue text by means of a speech-to-text algorithm or the like.
As another example, a voice receiving device such as a microphone provided on the instrument panel terminal receives the voice uttered by the user and converts it into dialogue text by means of a speech-to-text algorithm or the like.
Step 302, word segmentation is performed on the dialogue text to generate a plurality of word segments, matching is performed on the plurality of word segments according to a plurality of dictionary values in the dialogue template set, and the word segments successfully matched are determined to be word slot information.
Step 303, determining a word slot field set corresponding to the word slot information according to the dialogue intent and word slot field correspondence relation in the dialogue template set, and determining the dialogue intent of the dialogue text according to the word slot field set.
Specifically, the dialogue text is segmented by a text segmentation algorithm or the like to generate a plurality of word segments. For example, the dialogue text "buy the train ticket from Shanghai to Beijing on March 27, 2020" is segmented into "buy", "March 27, 2020", "Shanghai", "to", "Beijing", "train ticket", and so on.
Further, the segments are matched against the dictionary values in the dialogue template set, and the successfully matched segments, such as "March 27, 2020", "Shanghai", "Beijing" and "train ticket", are taken as word-slot information. From the correspondence between dialogue intentions and word-slot fields in the template set, the corresponding word-slot field set is determined to be "time", "departure place", "destination" and the like, and from that field set the dialogue intention of the dialogue text is determined to be buying a train ticket.
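An end-to-end sketch of steps 302 and 303 for this train-ticket example; the dictionaries, field names and segmenter output are illustrative assumptions, not the application's actual data:

```python
DICTS = {
    "time": {"March 27, 2020"},
    "departure": {"Shanghai"},
    "destination": {"Beijing"},
    "ticket": {"train ticket"},
}
INTENTS = {"buy_train_ticket": {"time", "departure", "destination", "ticket"}}

def parse(segments):
    """Match segments to word-slot fields, then pick the intention whose
    field set covers the observed fields (None when no template fits)."""
    slots = {field: seg for field, values in DICTS.items()
             for seg in segments if seg in values}
    fields = set(slots)
    intent = next((i for i, fs in INTENTS.items() if fields and fields <= fs),
                  None)
    return intent, slots

intent, slots = parse(
    ["buy", "March 27, 2020", "Shanghai", "to", "Beijing", "train ticket"])
```

The filler segments "buy" and "to" match no dictionary, so only the four slot-bearing segments drive the intention lookup.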
Therefore, the dialogue intention of the dialogue text can be rapidly and accurately determined through pre-training the dialogue template set, so that the drawing control efficiency of the instrument panel is improved, and the user interaction experience is improved.
Step 304, converting the dialogue intention of the dialogue text into a control type and converting the word slot information into control parameters according to a preset transmission protocol.
And 305, generating a meter control instruction matched with the transmission protocol according to the control type and the control parameters.
It can be understood that, to better perform the instrument panel drawing process, the dialogue intention needs to be converted, according to a preset transmission protocol, into different control types such as generation, adjustment, zoom-in and zoom-out, and the word-slot information into control parameters such as "show the sales-volume line graph in the first grid", so that a meter control instruction matched with the transmission protocol is generated from the control type and the control parameters.
For example, if the meter control instruction needs to be pushed to the relevant server through WebSocket (a protocol for full-duplex communication over a single TCP connection), the dialogue intention of the dialogue text is converted into a control type matched with that transmission protocol and the word-slot information into matching control parameters, so that the generated meter control instruction conforms to the protocol, further improving interaction efficiency.
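A hedged sketch of the conversion in steps 304 and 305; the intention-to-control-type mapping and the payload shape are assumptions rather than the patent's actual protocol:

```python
import json

# Hypothetical mapping from dialogue intentions to control types.
INTENT_TO_CONTROL_TYPE = {
    "draw_chart": "generate",
    "adjust_chart": "adjust",
    "zoom_chart": "zoom",
}

def to_control_instruction(intent, slots):
    """Serialize a control type and control parameters into a JSON payload
    suitable for pushing over a WebSocket connection."""
    return json.dumps({
        "control_type": INTENT_TO_CONTROL_TYPE[intent],
        "control_params": slots,
    })

instruction = to_control_instruction(
    "draw_chart",
    {"grid": "first", "metric": "sales volume", "chart": "line graph"})
```

The resulting string can then be sent as one WebSocket text frame; the receiving server dispatches on `control_type` and reads the parameters.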
Step 306, obtaining the device identifier of the target instrument panel, and sending a broadcast message carrying the device identifier and the instrument control instruction, so that the client corresponding to the device identifier draws the target instrument panel according to the instrument control instruction.
Specifically, in the instrument panel drawing process, the voice information may be acquired through an interface such as an applet and processed into a meter control instruction; the client of the target instrument panel corresponding to that instruction must then be determined. The device identifier of the target instrument panel is therefore acquired, and a broadcast message carrying the device identifier and the meter control instruction is sent, so that the client corresponding to the identifier draws the target instrument panel according to the instruction.
According to the instrument panel drawing processing method, input voice information is received and parsed into the corresponding dialogue text; the text is segmented into a plurality of word segments, which are matched against the dictionary values in the dialogue template set, the successfully matched segments being taken as word-slot information; the word-slot field set corresponding to that information is determined from the correspondence between intentions and fields in the template set, and the dialogue intention of the dialogue text is determined from the field set; the intention is converted into a control type and the word-slot information into control parameters according to a preset transmission protocol, and a meter control instruction matched with the protocol is generated from them; finally, the device identifier of the target instrument panel is acquired and a broadcast message carrying the identifier and the instruction is sent, so that the corresponding client draws the target instrument panel. Interaction with the instrument panel thus takes place in a smoother, more convenient and more natural voice mode, improving both the efficiency of instrument panel drawing and the user interaction experience.
To make the process described in the above embodiments clearer to those skilled in the art, a detailed description follows with reference to specific examples. It will be understood that when data is visualized in a dashboard, a grid layout is often used to divide the page; multiple charts of different types may coexist in the layout, revealing the actual meaning behind the data through various dimensions and filtering manners.
As shown in fig. 4, the mobile terminal is supported as the receiver of voice information and controls the presentation of the dashboard. Fig. 4 shows a dashboard in which the recent sales situation needs to be presented in a 3*3 grid layout. For such a requirement, the ideally smooth and natural voice information may be: "start a 3*3 layout", "show the sales line graph for the first quarter in the first grid", and "show the sales share of product A and product B in the second grid".
Specifically, the artificial intelligence speech recognition application program interface built into the applet listens for the voice information spoken by the user and converts it into a dialogue text. The dialogue text has a corresponding dialogue intention, such as "dashboard layout" or "chart drawing". More specifically, "chart" is a word slot field of the dialogue intention "chart drawing", and "line graph" is a dictionary value of the "chart" word slot field.
Thus, a dialogue template set containing the correspondence between dialogue intentions and word slot fields can be trained from the dialogue sample set to identify the dialogue intention of a dialogue text. It can be understood that a dialogue template is composed of a plurality of word slot fields and defines one dialogue intention; for example, a dialogue template for checking the weather may be defined whose word slot fields are location, time, and weather.
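As a rough illustration only, the relationship between a dialogue intention, its word slot fields, and the dictionary values of each field can be pictured as a small data structure. The class and dictionary values below are hypothetical, not from the patent; they merely model the weather-checking template described above:

```python
from dataclasses import dataclass, field

# Hypothetical representation of one dialogue template: a dialogue
# intention plus the word slot fields it is composed of, each word slot
# field carrying the dictionary values that can fill it.
@dataclass
class DialogueTemplate:
    intention: str
    # word slot field name -> set of dictionary values for that field
    slot_fields: dict[str, set[str]] = field(default_factory=dict)

# The weather-checking template from the text: its word slot fields are
# location, time, and weather (dictionary values are invented examples).
check_weather = DialogueTemplate(
    intention="check weather",
    slot_fields={
        "location": {"Beijing", "Shanghai"},
        "time": {"today", "tomorrow"},
        "weather": {"weather", "rain", "temperature"},
    },
)

print(sorted(check_weather.slot_fields))  # the fields composing the template
```

A dialogue template set would then simply be a collection of such templates, one per supported dialogue intention.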
It should be noted that which word slot fields compose the weather-checking dialogue template, whether each field must be filled, and so on, are determined by the dialogue intention: it is necessary to decide which word slot fields of key information the dialogue intention needs to contain and how they are combined. One dialogue template may therefore be configured with multiple rules, i.e., multiple syntaxes may all express the same meaning and the same dialogue intention.
For example, for the dialogue intention "chart drawing", four word slot fields, "grid", "time", "chart", and "show", are defined and form a rule under that intention. Given the dialogue text "show the sales line graph of last week in the first grid", the "show" word slot field is hit as an intention hit word and mapped to the "chart drawing" dialogue intention. According to the configured rule, the "grid" word slot field parses out "the first grid", the "time" word slot field parses out "last week", and the "chart" word slot field parses out "line graph". The actual content parsed out of the word slot fields, that is, the word slot information, is combined with the dialogue intention to generate the instrument control instruction.
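As a minimal sketch of this word-slot parse (the segmentation shortcut, dictionary contents, and function names below are assumptions for illustration, not the patent's implementation), matching dictionary values against the example sentence and mapping the intention hit word to a dialogue intention could look like:

```python
# Hypothetical dictionaries for the "chart drawing" rule: word slot
# field -> dictionary values; "show" serves as the intention hit word.
SLOT_DICTIONARIES = {
    "grid": {"first grid", "second grid"},
    "time": {"last week", "first quarter"},
    "chart": {"line graph", "pie chart"},
    "show": {"show"},
}
INTENT_HIT = {"show": "chart drawing"}  # hit field -> dialogue intention

def parse(dialog_text: str):
    """Match dictionary values against the text: matched values become
    word slot information, and a hit on the intention word maps the
    text to its dialogue intention."""
    slots, intention = {}, None
    for slot_field, values in SLOT_DICTIONARIES.items():
        for value in values:
            if value in dialog_text:  # stand-in for real word segmentation
                slots[slot_field] = value
                intention = INTENT_HIT.get(slot_field, intention)
    return intention, slots

intention, slots = parse(
    "show the sales line graph of last week in the first grid")
print(intention, slots)
```

The substring test stands in for the word segmentation step described earlier; a real implementation would match the segmented words rather than raw substrings.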
In the embodiment of the application, many dialogue intentions need to be supported, and chart drawing is only one of them, itself involving many drawing rules. Further supporting dialogue intentions around dashboard interaction, such as adjusting chart positions, enlarging and reducing charts, modifying chart data, and deleting charts, further improves the user interaction experience.
Further, based on the WebSocket transmission protocol, the instrument control instruction is pushed to a message center server, which broadcasts it to the target dashboard client for consumption and distribution. After the target dashboard client receives an instrument control instruction, such as layout or drawing, sent by the message center, it updates the dashboard display based on the instruction. Interaction with the dashboard is thus achieved in a smoother, more convenient, and more natural voice mode, improving both the dashboard drawing efficiency and the user interaction experience.
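The message-center step can be pictured with a small in-process stand-in for the WebSocket broadcast. The class below is a hypothetical model of the flow, not the actual server: the broadcast message carries the device identifier, and only the client registered under that identifier consumes the instruction and redraws its dashboard:

```python
# Hypothetical in-process model of the message center: clients register
# under a device identifier, and a broadcast carrying that identifier is
# consumed only by the matching client, which then updates its dashboard.
class MessageCenter:
    def __init__(self):
        self.clients = {}  # device identifier -> client callback

    def register(self, device_id, on_instruction):
        self.clients[device_id] = on_instruction

    def broadcast(self, device_id, instruction):
        client = self.clients.get(device_id)
        if client is not None:
            client(instruction)  # the client redraws the target dashboard

received = []
center = MessageCenter()
center.register("dashboard-42", received.append)
center.broadcast("dashboard-42", {"type": "layout", "params": {"grid": "3*3"}})
print(received)
```

In the described system the same dispatch would happen over WebSocket connections between the message center server and dashboard clients rather than through in-process callbacks.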
In order to implement the above embodiments, the present application further provides an instrument panel drawing processing apparatus.
Fig. 5 is a schematic structural diagram of an instrument panel drawing processing device according to a fourth embodiment of the present application.
As shown in fig. 5, the instrument panel drawing processing apparatus 500 may include: a receive parsing module 501, an extract determination module 502, a generation module 503, and a processing module 504.
The receiving and analyzing module 501 is configured to receive input voice information, analyze the voice information, and obtain a corresponding dialog text.
The extraction determining module 502 is configured to extract a plurality of word slot information from the dialog text, determine a word slot field set corresponding to the plurality of word slot information, and determine a dialog intention of the dialog text according to the word slot field set.
A generating module 503, configured to generate a meter control instruction according to the dialogue intent of the dialogue text and the word slot information.
And the processing module 504 is used for drawing the target instrument panel according to the instrument control instruction.
As a possible case, as shown in fig. 6, the apparatus further includes, on the basis of fig. 5: an acquisition module 505 and a training module 506.
An obtaining module 505, configured to obtain a dialog sample set, where each dialog sample includes: a dialog intention, a plurality of word slot fields corresponding to the dialog intention, and a plurality of dictionary values corresponding to each word slot field.
Training module 506 is configured to train a dialog template set including a dialog intention and a word slot field corresponding relationship according to the dialog sample set.
As a possible scenario, the extraction determining module 502 is specifically configured to: perform word segmentation on the dialogue text to generate a plurality of segmented words; match the dictionary values in the dialogue template set against the segmented words, and determine the successfully matched words as word slot information; and determine the word slot field set corresponding to the word slot information according to the correspondence between dialogue intentions and word slot fields in the dialogue template set, and determine the dialogue intention of the dialogue text according to the word slot field set.
As a possible scenario, the generating module 503 is specifically configured to: converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the word slot information into control parameters; and generating an instrument control instruction matched with the transmission protocol according to the control type and the control parameter.
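The conversion performed by the generating module can be sketched as follows. The mapping table, field names, and JSON shape are assumptions for illustration, since the patent only requires the instruction to be "matched with the transmission protocol" without fixing a concrete wire format:

```python
import json

# Hypothetical mapping from dialogue intention to control type, as a
# preset transmission protocol might define it.
CONTROL_TYPES = {"dashboard layout": "layout", "chart drawing": "draw"}

def to_instruction(intention: str, slots: dict) -> str:
    """Convert the dialogue intention into a control type and the word
    slot information into control parameters, then serialize the
    instrument control instruction for transmission."""
    return json.dumps({
        "type": CONTROL_TYPES[intention],
        "params": slots,  # word slot information as control parameters
    }, sort_keys=True)

msg = to_instruction("chart drawing",
                     {"grid": "first grid", "chart": "line graph"})
print(msg)
```

The serialized string would then be what is pushed to the message center and broadcast to the client identified by the target dashboard's device identifier.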
As a possible scenario, the processing module 504 is specifically configured to: acquiring a device identifier of a target instrument panel; and sending a broadcast message carrying the equipment identifier and the instrument control instruction, so that the client corresponding to the equipment identifier draws the target instrument panel according to the instrument control instruction.
According to the instrument panel drawing processing apparatus, input voice information is received and parsed to obtain a corresponding dialogue text; a plurality of word slot information is extracted from the dialogue text, the word slot field set corresponding to the word slot information is determined, and the dialogue intention of the dialogue text is determined according to the word slot field set; an instrument control instruction is generated according to the dialogue intention and the plurality of word slot information; and the target instrument panel is drawn according to the instrument control instruction. Interaction with the instrument panel is thus achieved in a smoother, more convenient, and more natural voice mode, improving both the drawing processing efficiency of the instrument panel and the user interaction experience.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 7, a block diagram of an electronic device according to a method of dashboard drawing processing according to an embodiment of the present application is shown. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 701 is illustrated in fig. 7.
Memory 702 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for dashboard rendering provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of dashboard rendering processing provided herein.
The memory 702, as a non-transitory computer-readable storage medium, is used for storing non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the dashboard drawing processing method in the embodiments of the present application (e.g., the receiving parsing module 501, the extraction determining module 502, the generating module 503, and the processing module 504 shown in fig. 5). By running the non-transitory software programs, instructions, and modules stored in the memory 702, the processor 701 executes the various functional applications and data processing of the server, that is, implements the dashboard drawing processing method in the above method embodiment.
Memory 702 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device of the dashboard drawing process, and the like. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 702 may optionally include memory located remotely from processor 701, which may be connected to the dashboard rendering processing electronics via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of dashboard drawing processing may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or otherwise, in fig. 7 by way of example.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for dashboard rendering processing, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, trackball, joystick, and the like. The output device 704 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibration motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, input voice information is received and parsed to obtain a corresponding dialogue text; a plurality of word slot information is extracted from the dialogue text, the word slot field set corresponding to the word slot information is determined, and the dialogue intention of the dialogue text is determined according to the word slot field set; an instrument control instruction is generated according to the dialogue intention and the plurality of word slot information; and the target instrument panel is drawn according to the instrument control instruction. Interaction with the instrument panel is thus achieved in a smoother, more convenient, and more natural voice mode, improving both the drawing processing efficiency of the instrument panel and the user interaction experience.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. An instrument panel drawing processing method, characterized by comprising the following steps:
receiving input voice information, and analyzing the voice information to obtain a corresponding dialogue text;
extracting a plurality of word slot information from the dialogue text, determining word slot field sets corresponding to the word slot information, and determining dialogue intention of the dialogue text according to the word slot field sets;
generating an instrument control instruction according to the dialogue intention of the dialogue text and the plurality of word slot information, which comprises: determining a control mode according to the dialogue intention, and composing the instrument control instruction from the plurality of word slot information;
drawing the target instrument panel according to the instrument control instruction;
the method for extracting a plurality of word slot information from the dialogue text, determining a word slot field set corresponding to the word slot information, and determining the dialogue intention of the dialogue text according to the word slot field set comprises the following steps:
performing word segmentation on the dialogue text to generate a plurality of segmented words;
matching the dictionary values in the dialogue template set against the segmented words, and determining the successfully matched words as word slot information,
and determining a word slot field set corresponding to the word slot information according to the conversation intention and word slot field corresponding relation in the conversation template set, and determining the conversation intention of the conversation text according to the word slot field set.
2. The instrument panel drawing processing method according to claim 1, further comprising:
obtaining a set of dialog samples, wherein each dialog sample comprises: a dialog intention, a plurality of word slot fields corresponding to the dialog intention, and a plurality of dictionary values corresponding to each word slot field;
and training a dialogue template set containing dialogue intention and word slot field corresponding relation according to the dialogue sample set.
3. The dashboard drawing processing method according to claim 1, wherein the generating of the dashboard control instruction from the dialog intention of the dialog text and the plurality of word slot information includes:
converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the word slot information into control parameters;
and generating an instrument control instruction matched with the transmission protocol according to the control type and the control parameter.
4. The instrument panel drawing processing method according to claim 1, wherein the drawing processing of the target instrument panel according to the instrument control instruction includes:
acquiring a device identifier of a target instrument panel;
and sending a broadcast message carrying the equipment identifier and the instrument control instruction, so that the client corresponding to the equipment identifier draws the target instrument panel according to the instrument control instruction.
5. An instrument panel drawing processing device, characterized by comprising:
the receiving and analyzing module is used for receiving the input voice information and analyzing the voice information to obtain a corresponding dialogue text;
the extraction and determination module is used for extracting a plurality of word slot information from the dialogue text, determining word slot field sets corresponding to the word slot information, and determining the dialogue intention of the dialogue text according to the word slot field sets;
the generation module is used for generating instrument control instructions according to the dialogue intention of the dialogue text and the word slot information;
the processing module is used for drawing the target instrument panel according to the instrument control instruction;
the extraction determining module is specifically configured to:
performing word segmentation on the dialogue text to generate a plurality of segmented words;
matching the dictionary values in the dialogue template set against the segmented words, and determining the successfully matched words as word slot information;
and determining a word slot field set corresponding to the word slot information according to the conversation intention and word slot field corresponding relation in the conversation template set, and determining the conversation intention of the conversation text according to the word slot field set.
6. The instrument panel drawing processing apparatus according to claim 5, further comprising:
an obtaining module, configured to obtain a dialog sample set, where each dialog sample includes: a dialog intention, a plurality of word slot fields corresponding to the dialog intention, and a plurality of dictionary values corresponding to each word slot field;
and the training module is used for training a dialogue template set containing the dialogue intention and word slot field corresponding relation according to the dialogue sample set.
7. The instrument panel drawing processing apparatus according to claim 5, wherein the generating module is specifically configured to:
converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the word slot information into control parameters;
and generating an instrument control instruction matched with the transmission protocol according to the control type and the control parameter.
8. The instrument panel drawing processing apparatus according to claim 5, wherein the processing module is specifically configured to:
acquiring a device identifier of a target instrument panel;
and sending a broadcast message carrying the equipment identifier and the instrument control instruction, so that the client corresponding to the equipment identifier draws the target instrument panel according to the instrument control instruction.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the instrument panel drawing processing method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the instrument panel drawing processing method according to any one of claims 1 to 4.
CN202010334802.1A 2020-04-24 2020-04-24 Instrument panel drawing processing method and device, electronic equipment and storage medium Active CN111597808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334802.1A CN111597808B (en) 2020-04-24 2020-04-24 Instrument panel drawing processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111597808A CN111597808A (en) 2020-08-28
CN111597808B true CN111597808B (en) 2023-07-25

Family

ID=72190558


Country Status (1)

Country Link
CN (1) CN111597808B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330120A (en) * 2017-07-14 2017-11-07 三角兽(北京)科技有限公司 Inquire answer method, inquiry answering device and computer-readable recording medium
CN110544959A (en) * 2019-08-12 2019-12-06 国电南瑞科技股份有限公司 method, device and system for adjusting automatic power generation control parameters of power grid




CN113066498B (en) Information processing method, apparatus and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant