CN111597808A - Instrument panel drawing processing method and device, electronic equipment and storage medium - Google Patents

Instrument panel drawing processing method and device, electronic equipment and storage medium

Info

Publication number
CN111597808A
CN111597808A (Application CN202010334802.1A)
Authority
CN
China
Prior art keywords
word slot
dialog
conversation
text
instrument panel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010334802.1A
Other languages
Chinese (zh)
Other versions
CN111597808B (en)
Inventor
张雪婷
刘畅
张阳
谢奕
杨双全
郑灿祥
季昆鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010334802.1A priority Critical patent/CN111597808B/en
Publication of CN111597808A publication Critical patent/CN111597808A/en
Application granted granted Critical
Publication of CN111597808B publication Critical patent/CN111597808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a dashboard drawing processing method and device, an electronic device, and a storage medium, and relates to the field of big data. The specific implementation scheme is as follows: receiving input voice information, and parsing the voice information to obtain a corresponding dialog text; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the dialog intention of the dialog text and the plurality of word slot information; and drawing the target instrument panel according to the meter control instruction. Therefore, interaction with the instrument panel is achieved in a smoother, more convenient, and more natural voice mode, the dashboard drawing processing efficiency is improved, and the user interaction experience is improved.

Description

Instrument panel drawing processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of big data in the field of data processing, and in particular, to a method and an apparatus for drawing and processing a dashboard, an electronic device, and a storage medium.
Background
Generally, when data visualization presentation is performed through a dashboard, due to different requirements and concerns, it is often necessary to transform presentation layout, presentation content and presentation form, or to transform the dimension and filtering manner of the presented data.
In the related art, fixed sentences are recognized by voice and data processing is performed on the fixed content of a fixed chart based on preset commands mapped from those sentences. This approach depends heavily on speech recognition accuracy and completeness, often fails when recognition is imperfect, and offers very little freedom to customize dashboard control.
Disclosure of Invention
Provided are a dashboard drawing processing method and device, an electronic device and a storage medium.
According to a first aspect, a dashboard drawing processing method is provided, including:
receiving input voice information, and analyzing the voice information to obtain a corresponding dialog text;
extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the word slot information, and determining a dialog intention of the dialog text according to the word slot field set;
generating a meter control instruction according to the conversation intention of the conversation text and the plurality of word slot information;
and drawing the target instrument panel according to the instrument control instruction.
According to a second aspect, there is provided an instrument panel rendering processing apparatus including:
the receiving and analyzing module is used for receiving input voice information and analyzing the voice information to obtain a corresponding dialog text;
the extraction determining module is used for extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining the dialog intention of the dialog text according to the word slot field set;
the generating module is used for generating a meter control instruction according to the conversation intention of the conversation text and the plurality of word slot information;
and the processing module is used for drawing the target instrument panel according to the instrument control instruction.
An embodiment of a third aspect of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the dashboard drawing processing method according to the first aspect.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the dashboard drawing processing method according to the first aspect.
One embodiment in the above application has the following advantages or benefits:
receiving input voice information, and analyzing the voice information to obtain a corresponding dialog text; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots; and drawing the target instrument panel according to the instrument control instruction. Therefore, interaction with the instrument panel in a smoother, more convenient and more natural voice mode is achieved, the instrument panel drawing processing efficiency is improved, and the user interaction experience is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a dashboard rendering processing method according to a first embodiment of the present application;
fig. 2 is a schematic flowchart of a dashboard drawing processing method according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a dashboard rendering processing method according to a third embodiment of the present application;
fig. 4 is an exemplary diagram of a dashboard drawing processing method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an instrument panel drawing processing apparatus according to a fourth embodiment of the present application;
fig. 6 is a schematic structural diagram of an instrument panel drawing processing apparatus according to a fifth embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a method of dashboard rendering processing according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A dashboard drawing processing method, device, electronic apparatus, and storage medium according to embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flow chart of a dashboard drawing processing method according to a first embodiment of the present application.
Specifically, in the existing method, fixed sentences are recognized by voice, and data processing is performed on the fixed content of a fixed chart based on preset commands mapped from preset sentences; this approach depends heavily on speech recognition accuracy and completeness, gives low control accuracy for dashboard drawing processing, and provides little freedom to customize dashboard control.
The application provides a dashboard drawing processing method, which comprises the steps of receiving input voice information, analyzing the voice information and obtaining corresponding dialogue texts; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots; and drawing the target instrument panel according to the instrument control instruction. Therefore, interaction with the instrument panel in a smoother, more convenient and more natural voice mode is achieved, the instrument panel drawing processing efficiency is improved, and the user interaction experience is improved.
As shown in fig. 1, the dashboard drawing processing method may include the following steps:
step 101, receiving input voice information, and analyzing the voice information to obtain a corresponding dialog text.
In the embodiment of the application, a scene that a user interacts with a dashboard through voice is described, so that the user can input voice information through related equipment as required, and thus the voice information input by the user can be received and analyzed to obtain a corresponding dialog text.
It will be understood that there are many ways to receive the input voice information, and the setting can be selected according to the actual application requirement, for example, as follows:
as an example, the voice information spoken by the user is monitored through an artificial intelligence voice recognition application program interface built in an applet in the mobile terminal, and the voice information is converted into a dialog text through a relevant voice conversion text algorithm or the like.
As another example, voice information spoken by a user is received by a voice receiving device such as a microphone provided in a dashboard terminal, and the voice information is converted into a dialog text by an associated voice-to-text algorithm or the like.
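By way of a non-limiting illustration of step 101, the following Python sketch shows how captured voice input might be turned into a dialog text; the capture_audio and recognize_speech helpers are hypothetical placeholders for the microphone driver and the speech recognition interface, which the embodiments above only require to exist in some form.

    # Minimal sketch of step 101: receive voice information and obtain the dialog text.
    # capture_audio and recognize_speech are hypothetical stand-ins for the terminal's
    # microphone driver and the artificial intelligence speech recognition interface.
    def capture_audio(seconds: int = 5) -> bytes:
        """Pretend to record a few seconds of audio from the dashboard terminal microphone."""
        return b"\x00" * (16000 * seconds)  # placeholder PCM samples

    def recognize_speech(audio: bytes) -> str:
        """Placeholder for a speech-to-text call; a real system would invoke an ASR service."""
        return "show the first quarter sales line graph in the first grid"

    def receive_dialog_text() -> str:
        audio = capture_audio()
        return recognize_speech(audio)

    if __name__ == "__main__":
        print(receive_dialog_text())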
Step 102, extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining the dialog intention of the dialog text according to the word slot field set.
Specifically, several word slot fields are preset for each dialog intention. For example, if the dialog intention S includes the word slot fields a, b, and c, then the word slot fields a, b, and c also uniquely determine the dialog intention S. For instance, when the dialog intention is looking up weather, the corresponding word slot fields are location, time, weather, and the like.
The dialog intention of the dialog text can be determined in a variety of ways and can be set as needed. For example, the dialog intention of the dialog text can be determined by processing the dialog text with a pre-trained dialog template set or a dialog model; as another example, it can be determined by performing semantic parsing on the dialog text.
As a possible implementation manner, the dialog text is segmented to generate a plurality of segmented words, the segmented words are matched against a plurality of dictionary values in a dialog template set, and the successfully matched segmented words are determined as word slot information; then a word slot field set corresponding to the plurality of word slot information is determined according to the correspondence between dialog intentions and word slot fields in the dialog template set, and the dialog intention of the dialog text is determined according to the word slot field set.
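A minimal Python sketch of this possible implementation is given below. The toy template table, dictionary, and substring-based matching are assumptions made only for illustration; the actual embodiment would use a trained word segmentation algorithm and the dialog template set described in the later embodiments.

    # Sketch of step 102: extract word slot information and infer the dialog intention.
    # The template and dictionary contents below are illustrative examples only.
    TEMPLATES = {
        "chart drawing": {"grid", "time", "chart", "presentation"},
        "look up weather": {"location", "time", "weather"},
    }
    DICTIONARY = {  # dictionary value -> word slot field it belongs to
        "first grid": "grid",
        "first quarter": "time",
        "line graph": "chart",
        "show": "presentation",
    }

    def extract_word_slots(dialog_text: str) -> dict:
        """Match dictionary values against the dialog text; matches become word slot information."""
        return {field: value for value, field in DICTIONARY.items() if value in dialog_text}

    def determine_intention(slot_fields: set) -> str:
        """Pick the dialog intention whose preset word slot fields best cover the extracted fields."""
        coverage, intention = max((len(fields & slot_fields), name) for name, fields in TEMPLATES.items())
        return intention if coverage > 0 else "unknown"

    slots = extract_word_slots("show the first quarter sales line graph in the first grid")
    print(slots, determine_intention(set(slots)))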
And 103, generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots.
And 104, drawing the target instrument panel according to the instrument control instruction.
Specifically, after the dialog intention of the dialog text and the multiple word slot information in the dialog text are determined, a meter control instruction is further generated from them. For example, if the dialog intention is chart drawing and the word slot information is "first quarter", "sales volume", and "line graph", the generated meter control instruction is to draw a first-quarter sales volume line graph.
There are various ways of generating the meter control command according to the dialog intention of the dialog text and the information of the plurality of word slots, which are illustrated as follows:
in a first example, a dialog intention of a dialog text is converted into a control type according to a preset transmission protocol, and a plurality of word slot information is converted into a control parameter, and a meter control instruction matched with the transmission protocol is generated according to the control type and the control parameter.
In a second example, the control mode is determined according to the dialog intention, and the meter control instruction is assembled from the plurality of word slot information.
Further, the target instrument panel is drawn according to the meter control instruction. It can be understood that if the voice information is received at the dashboard terminal, the relevant drawing processing can be performed directly according to the meter control instruction; if the voice information is received through an applet or another channel, the meter control instruction can be broadcast, so that the corresponding client performs the drawing processing on the target instrument panel according to the instruction.
As a possible implementation manner, the device identifier of the target instrument panel is obtained, and the broadcast message carrying the device identifier and the instrument control instruction is sent, so that the client corresponding to the device identifier performs drawing processing on the target instrument panel according to the instrument control instruction.
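A sketch of how such a broadcast message might be assembled is shown below; the JSON field names and example values are assumptions for illustration and not part of the claimed protocol.

    import json

    def build_broadcast_message(device_id: str, meter_instruction: dict) -> str:
        """Assemble a broadcast payload carrying the device identifier and the meter control
        instruction, so a message center can route it to the matching dashboard client."""
        return json.dumps({"device_id": device_id, "instruction": meter_instruction})

    message = build_broadcast_message(
        "dashboard-001",  # hypothetical device identifier of the target instrument panel
        {"control_type": "draw", "control_params": {"grid": 1, "time": "first quarter", "chart": "line graph"}},
    )
    print(message)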
According to the instrument panel drawing processing method, the input voice information is received, and the voice information is analyzed to obtain the corresponding dialog text; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots; and drawing the target instrument panel according to the instrument control instruction. Therefore, interaction with the instrument panel in a smoother, more convenient and more natural voice mode is achieved, the instrument panel drawing processing efficiency is improved, and the user interaction experience is improved.
Based on the above description of the embodiments, the dialog intention of the dialog text may be determined by processing the dialog text through a pre-training dialog template set, and in order to make it more clear for those skilled in the art how to perform training to generate the dialog template set according to the dialog sample set, the following description will be made in detail with reference to fig. 2.
Specifically, as shown in fig. 2, the method includes:
step 201, obtaining a set of dialog samples, wherein each dialog sample comprises: a dialog intent, a plurality of word slot fields corresponding to the dialog intent, and a plurality of dictionary values corresponding to each of the word slot fields.
Step 202, training a dialog template set containing the corresponding relationship between the dialog intention and the word slot field according to the dialog sample set.
In the embodiment of the present application, the dashboard drawing process is performed, and therefore, the dialog sample set is a plurality of dialog samples during the dashboard drawing process, such as "start a 3 x 3 layout", "show a first quarter sales line graph in the first grid", and "show the sales ratios of the product a and the product B in the second grid".
It will be appreciated that each dialog sample includes a dialog intention, a plurality of word slot fields corresponding to the dialog intention, and a plurality of dictionary values corresponding to each word slot field. For example, the dialog sample "show the first-quarter sales line graph in the first grid" corresponds to the dialog intention "chart drawing"; one word slot field of that intention is "chart", and one dictionary value of that word slot field is "line graph".
Therefore, a dialog template set containing the correspondence between dialog intentions and word slot fields is trained from the dialog sample set. Each dialog template in the set is composed of a plurality of word slot fields and defines one dialog intention. For example, for a template whose dialog intention is looking up weather, the corresponding word slot fields are location, time, weather, and the like, and each word slot field corresponds to a plurality of dictionary values; the word slot field "location", for instance, may correspond to dictionary values such as Beijing, Shanghai, and Guangzhou. Training such a dialog template set in advance from the dialog sample set improves subsequent voice interaction efficiency and the user interaction experience.
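One possible in-memory representation of such a dialog template is sketched below; the class name, field names, and example dictionary values are assumptions made for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class DialogTemplate:
        """One entry of the dialog template set: a dialog intention, its word slot fields,
        and the dictionary values each word slot field may take (example data only)."""
        intention: str
        slot_fields: dict = field(default_factory=dict)  # word slot field -> list of dictionary values

    weather_template = DialogTemplate(
        intention="look up weather",
        slot_fields={
            "location": ["Beijing", "Shanghai", "Guangzhou"],
            "time": ["today", "tomorrow"],
            "weather": ["weather", "temperature"],
        },
    )
    print(weather_template.intention, sorted(weather_template.slot_fields))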
Specifically, to better illustrate how a plurality of word slot information is extracted from a dialog text, how the corresponding word slot field set is determined, and how the dialog intention is determined from that set, take the dialog text "buy a train ticket from Shanghai to Beijing on March 27, 2020" as an example. The extracted word slot information is "March 27, 2020", "Shanghai", "Beijing", and the like; the corresponding word slot field set is "time", "departure place", "destination", and the like; and from these word slot fields the dialog intention of the dialog text is determined to be ordering a train ticket.
Thereby, a dialog sample set is obtained in which each dialog sample includes a dialog intention, the word slot fields corresponding to that intention, and the dictionary values corresponding to each word slot field, and a dialog template set containing the correspondence between dialog intentions and word slot fields is trained from the sample set, so that the dialog intention of a dialog text can subsequently be recognized directly, improving voice interaction efficiency and thus the user interaction experience.
Fig. 3 is a schematic flowchart of a dashboard rendering processing method according to a third embodiment of the present application.
Step 301, receiving the input voice information, and analyzing the voice information to obtain a corresponding dialog text.
In the embodiment of the application, a scene that a user interacts with a dashboard through voice is described, so that the user can input voice information through related equipment as required, and thus the voice information input by the user can be received and analyzed to obtain a corresponding dialog text.
It is understood that there are many ways to receive the input voice information, and the setting can be selected according to the actual application requirement, for example, as follows:
as an example, the voice information spoken by the user is monitored through an artificial intelligence voice recognition application program interface built in an applet in the mobile terminal, and the voice information is converted into a dialog text through a relevant voice conversion text algorithm or the like.
As another example, voice information spoken by a user is received by a voice receiving device such as a microphone provided in a dashboard terminal, and the voice information is converted into a dialog text by an associated voice-to-text algorithm or the like.
Step 302, segmenting the dialog text to generate a plurality of segmented words, matching the segmented words against the plurality of dictionary values in the dialog template set, and determining the successfully matched segmented words as word slot information.
Step 303, determining a word slot field set corresponding to the plurality of word slot information according to the correspondence between the dialog intention and the word slot field in the dialog template set, and determining the dialog intention of the dialog text according to the word slot field set.
Specifically, the dialog text is segmented by a text word segmentation algorithm or the like to generate a plurality of segmented words. For example, the dialog text "buy a train ticket from Shanghai to Beijing on March 27, 2020" is segmented into "buy", "March 27, 2020", "from", "Shanghai", "to", "Beijing", and "train ticket".
Further, the segmented words are matched against the plurality of dictionary values in the dialog template set, and the successfully matched segmented words are determined as word slot information; for example, "March 27, 2020", "Shanghai", "Beijing", and "train ticket" are matched successfully. According to the correspondence between dialog intentions and word slot fields in the dialog template set (for example, the intention of buying a train ticket corresponds to the word slot fields "time", "departure place", "destination", and the like), the word slot field set corresponding to the word slot information is determined to be "time", "departure place", "destination", and the like, and the dialog intention of the dialog text is then determined from this word slot field set to be buying a train ticket.
Therefore, the dialogue intention of the dialogue text can be quickly and accurately determined by training the dialogue template set in advance, so that the drawing control efficiency of the instrument panel is improved, and the user interaction experience is improved.
Step 304, converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the plurality of word slot information into control parameters.
And 305, generating a meter control instruction matched with the transmission protocol according to the control type and the control parameter.
It can be understood that, to better perform dashboard drawing processing, the dialog intention needs to be converted, according to a preset transmission protocol, into a control type such as generate, adjust, enlarge, or reduce, and the plurality of word slot information needs to be converted into control parameters such as "show a sales line graph in the first grid", so that a meter control instruction matching the transmission protocol can be generated from the control type and the control parameters.
For example, if a meter control command needs to be pushed to a relevant server through a WebSocket (a protocol for performing full duplex communication on a single TCP connection), a dialog intention of a dialog text needs to be converted into a control type matched with a transmission protocol according to a WebSocket transmission protocol, and multiple word slot information needs to be converted into a control parameter matched with the transmission protocol, so that the meter control command matched with the transmission protocol is generated according to the control type and the control parameter, and the interaction efficiency is further improved.
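The following Python sketch shows one way such a conversion and serialization could look; the intention-to-control-type mapping and the payload fields are assumptions, not the actual transmission protocol of the embodiment, and the network send over an established WebSocket connection is left out.

    import json

    INTENTION_TO_CONTROL_TYPE = {  # illustrative mapping, not the real protocol table
        "dashboard layout": "layout",
        "chart drawing": "draw",
        "chart enlargement": "enlarge",
    }

    def build_meter_instruction(intention: str, word_slots: dict) -> str:
        """Convert the dialog intention into a control type and the word slot information
        into control parameters, serialized for pushing over a WebSocket connection."""
        payload = {
            "control_type": INTENTION_TO_CONTROL_TYPE.get(intention, "unknown"),
            "control_params": word_slots,
        }
        return json.dumps(payload)

    instruction = build_meter_instruction(
        "chart drawing",
        {"grid": "first grid", "time": "first quarter", "chart": "line graph"},
    )
    print(instruction)  # a real system would now push this to the message center server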
And step 306, acquiring the equipment identifier of the target instrument panel, and sending a broadcast message carrying the equipment identifier and the instrument control instruction so that the client corresponding to the equipment identifier performs drawing processing on the target instrument panel according to the instrument control instruction.
Specifically, during dashboard drawing, the voice information may be acquired through an interface such as an applet; after the voice information is processed to obtain the meter control instruction, the client of the target instrument panel to which the instruction applies needs to be determined, which is why the broadcast message carries the device identifier.
In the instrument panel drawing processing method of this embodiment, input voice information is received and parsed to obtain a corresponding dialog text. The dialog text is segmented to generate a plurality of segmented words, the segmented words are matched against a plurality of dictionary values in the dialog template set, and the successfully matched segmented words are determined as word slot information. A word slot field set corresponding to the word slot information is determined according to the correspondence between dialog intentions and word slot fields in the dialog template set, and the dialog intention of the dialog text is determined from the word slot field set. The dialog intention is converted into a control type and the word slot information into control parameters according to a preset transmission protocol, and a meter control instruction matching the transmission protocol is generated from the control type and the control parameters. The device identifier of the target instrument panel is obtained, and a broadcast message carrying the device identifier and the meter control instruction is sent, so that the client corresponding to the device identifier draws the target instrument panel according to the instruction. Therefore, interaction with the instrument panel in a smoother, more convenient, and more natural voice mode is achieved, the dashboard drawing processing efficiency is improved, and the user interaction experience is improved.
To make the processes described in the above embodiments clearer to those skilled in the art, a detailed description is given below with reference to a specific example. It can be understood that when data is presented visually in a dashboard, the page is often divided using a grid layout, and within the grid there may be a variety of chart types that present the actual meaning behind the data in various dimensions and filtering manners.
As shown in fig. 4, controlling the dashboard display with a mobile terminal as the receiver of the voice messages is supported. Fig. 4 is a dashboard that is required to display the recent sales situation in a 3 × 3 grid layout; for such a requirement, the smooth and natural voice messages would ideally be: "start a 3 x 3 layout", "show the sales line plot for the first quarter in the first grid", and "show the sales ratios for product A and product B in the second grid".
Specifically, by utilizing an artificial intelligence speech recognition application program interface built in the applet, speech information spoken by a user is monitored and converted into a dialog text, the dialog text has corresponding dialog intentions such as 'dashboard layout', 'charting', and the like, more specifically, the 'chart' is a word slot field of the dialog intentions 'charting', and the 'line graph' is a dictionary value of the word slot field of the 'chart'.
Therefore, a dialog template set containing the correspondence between dialog intentions and word slot fields can be trained from the dialog sample set to identify the dialog intention of a dialog text. It can be understood that a dialog template is composed of a plurality of word slot fields and defines one dialog intention; for example, a dialog template for looking up weather is composed of the word slot fields location, time, and weather.
It should be noted that which word slot fields a dialog template such as the weather-lookup template is composed of, and how those word slot fields are combined, is determined by which key pieces of information the dialog intention requires. One dialog template can therefore be configured with a plurality of rules, that is, a plurality of syntaxes can express the same meaning and the same dialog intention.
For example, for the dialog intention "chart drawing", four word slot fields such as "grid", "time", "chart", and "presentation" are defined and combined into a rule under that intention. If the dialog text is "show the sales line graph of the last week in the first grid", the "presentation" word slot field is hit as an intention hit word and mapped to the "chart drawing" dialog intention; according to the configured rule, the "grid" word slot field resolves to the first grid, the "time" word slot field resolves to the last week, and the "chart" word slot field resolves to the line graph. The actual content resolved for each word slot field, that is, the word slot information, is then combined with the dialog intention to generate a meter control instruction.
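A possible configuration of the "chart drawing" dialog template with two rules is sketched below; the bracketed placeholder syntax is an assumption made for this sketch and is not the application's actual rule format.

    # Illustrative configuration of the "chart drawing" dialog template.
    # The "[slot]" placeholder syntax is invented for this sketch only.
    chart_drawing_template = {
        "intention": "chart drawing",
        "slot_fields": ["grid", "time", "chart", "presentation"],
        "rules": [
            "[presentation] the [time] sales [chart] in the [grid]",
            "in the [grid], [presentation] a [chart] of sales for [time]",
        ],
    }
    # Both rules express the same dialog intention, so different phrasings of the
    # same request can be mapped to "chart drawing" with the same word slot fields.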
In the embodiment of the application, many dialog intentions need to be supported, and chart drawing is only one of them, already involving many drawing rules. Supporting further dialog intentions around dashboard interaction, such as chart position adjustment, enlargement, reduction, chart data modification, and chart deletion, further improves the user interaction experience.
Furthermore, based on the WebSocket transmission protocol, the meter control instruction is pushed to the message center server. After receiving it, the message center server digests and distributes the instruction and broadcasts it to the target dashboard client. After the target dashboard client receives the layout, drawing, and other meter control instructions sent by the message center, it updates the dashboard display based on those instructions. In this way, interaction with the instrument panel in a smoother, more convenient, and more natural voice mode is achieved, the dashboard drawing processing efficiency is improved, and the user interaction experience is improved.
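A sketch of how the target dashboard client might filter the broadcast messages by device identifier and dispatch the meter control instruction to its drawing logic is given below; the handler names and message fields are assumptions matching the earlier sketches, not the actual client implementation.

    import json

    MY_DEVICE_ID = "dashboard-001"  # hypothetical identifier of this dashboard client

    def apply_layout(params: dict) -> None:
        print("updating layout:", params)   # placeholder for the real layout rendering

    def draw_chart(params: dict) -> None:
        print("drawing chart:", params)     # placeholder for the real chart rendering

    def handle_broadcast(raw_message: str) -> None:
        """Ignore messages meant for other devices; dispatch matching control instructions."""
        message = json.loads(raw_message)
        if message.get("device_id") != MY_DEVICE_ID:
            return
        instruction = message["instruction"]
        if instruction["control_type"] == "layout":
            apply_layout(instruction["control_params"])
        elif instruction["control_type"] == "draw":
            draw_chart(instruction["control_params"])

    handle_broadcast('{"device_id": "dashboard-001", "instruction": '
                     '{"control_type": "draw", "control_params": {"grid": 1, "chart": "line graph"}}}')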
In order to implement the above embodiments, the present application provides an instrument panel drawing processing apparatus.
Fig. 5 is a schematic structural diagram of an instrument panel drawing processing apparatus according to a fourth embodiment of the present application.
As shown in fig. 5, the dashboard drawing processing apparatus 500 may include: a receiving and analyzing module 501, an extraction determining module 502, a generating module 503 and a processing module 504.
The receiving and analyzing module 501 is configured to receive input voice information, and analyze the voice information to obtain a corresponding dialog text.
An extraction determining module 502, configured to extract multiple pieces of word slot information from the dialog text, determine a set of word slot fields corresponding to the multiple pieces of word slot information, and determine a dialog intention of the dialog text according to the set of word slot fields.
And a generating module 503, configured to generate a meter control instruction according to the dialog intention of the dialog text and the multiple word slot information.
And the processing module 504 is configured to perform drawing processing on the target instrument panel according to the instrument control instruction.
As a possible case, as shown in fig. 6, on the basis of fig. 5, the method further includes: an acquisition module 505 and a training module 506.
An obtaining module 505, configured to obtain a set of conversation samples, where each conversation sample includes: a dialog intent, a plurality of word slot fields corresponding to the dialog intent, and a plurality of dictionary values corresponding to each word slot field.
And the training module 506 is used for training a dialog template set containing the corresponding relationship between the dialog intention and the word slot field according to the dialog sample set.
As a possible scenario, the extraction determining module 502 is specifically configured to: segment the dialog text to generate a plurality of segmented words; match the segmented words against the plurality of dictionary values in the dialog template set, and determine the successfully matched segmented words as word slot information; and determine a word slot field set corresponding to the plurality of word slot information according to the correspondence between dialog intentions and word slot fields in the dialog template set, and determine the dialog intention of the dialog text according to the word slot field set.
As a possible scenario, the generating module 503 is specifically configured to: converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the plurality of word slot information into control parameters; and generating an instrument control instruction matched with the transmission protocol according to the control type and the control parameter.
As a possible scenario, the processing module 504 is specifically configured to: acquiring a device identifier of a target instrument panel; and sending a broadcast message carrying the equipment identifier and the instrument control instruction so that the client corresponding to the equipment identifier draws the target instrument panel according to the instrument control instruction.
According to the instrument panel drawing processing device, the input voice information is received, and the voice information is analyzed to obtain the corresponding dialog text; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots; and drawing the target instrument panel according to the instrument control instruction. Therefore, interaction with the instrument panel in a smoother, more convenient and more natural voice mode is achieved, the instrument panel drawing processing efficiency is improved, and the user interaction experience is improved.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of dashboard mapping processing provided herein. A non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method of dashboard drawing processing provided herein.
Memory 702, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the dashboard drawing processing method in the embodiments of the present application (e.g., the receiving and analyzing module 501, extraction determining module 502, generating module 503, and processing module 504 shown in fig. 5). The processor 701 executes various functional applications of the server and performs data processing, i.e., implements the dashboard drawing processing method in the above-described method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 702.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the electronic device for dashboard drawing processing, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, and these remote memories may be connected to the dashboard rendering processing electronics via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for drawing and processing the instrument panel may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for dashboard drawing processing; examples of such input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, and the like. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the input voice information is received, and the voice information is analyzed to obtain the corresponding dialog text; extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining a dialog intention of the dialog text according to the word slot field set; generating a meter control instruction according to the conversation intention of the conversation text and the information of the plurality of word slots; and drawing the target instrument panel according to the instrument control instruction. Therefore, interaction with the instrument panel in a smoother, more convenient and more natural voice mode is achieved, the instrument panel drawing processing efficiency is improved, and the user interaction experience is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A dashboard drawing processing method is characterized by comprising the following steps:
receiving input voice information, and analyzing the voice information to obtain a corresponding dialog text;
extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the word slot information, and determining a dialog intention of the dialog text according to the word slot field set;
generating a meter control instruction according to the conversation intention of the conversation text and the plurality of word slot information;
and drawing the target instrument panel according to the instrument control instruction.
2. The instrument panel rendering processing method according to claim 1, further comprising:
obtaining a set of conversation samples, wherein each conversation sample comprises: a dialog intent, a plurality of word slot fields corresponding to the dialog intent, and a plurality of dictionary values corresponding to each word slot field;
and training a dialog template set containing the corresponding relation between the dialog intention and the word slot field according to the dialog sample set.
3. The dashboard rendering processing method of claim 2, wherein said extracting a plurality of word slot information from the dialog text and determining a set of word slot fields corresponding to the plurality of word slot information and determining a dialog intent of the dialog text from the set of word slot fields comprises:
performing word segmentation on the dialog text to generate a plurality of segmented words;
matching the segmented words against the plurality of dictionary values in the dialog template set, and determining the successfully matched segmented words as word slot information;
and determining a word slot field set corresponding to the plurality of word slot information according to the corresponding relation between the conversation intention and the word slot field in the conversation template set, and determining the conversation intention of the conversation text according to the word slot field set.
4. The dashboard rendering processing method according to claim 1, wherein said generating a meter control command from the dialogue intent of the dialogue text and the plurality of word slot information comprises:
converting the dialogue intention of the dialogue text into a control type according to a preset transmission protocol, and converting the plurality of word slot information into control parameters;
and generating an instrument control instruction matched with the transmission protocol according to the control type and the control parameter.
5. The instrument panel drawing processing method according to claim 1, wherein the drawing processing of the target instrument panel according to the instrument control instruction includes:
acquiring a device identifier of a target instrument panel;
and sending a broadcast message carrying the equipment identifier and the instrument control instruction so that the client corresponding to the equipment identifier draws the target instrument panel according to the instrument control instruction.
6. An instrument panel drawing processing apparatus, characterized by comprising:
the receiving and analyzing module is used for receiving input voice information and analyzing the voice information to obtain a corresponding dialog text;
the extraction determining module is used for extracting a plurality of word slot information from the dialog text, determining a word slot field set corresponding to the plurality of word slot information, and determining the dialog intention of the dialog text according to the word slot field set;
the generating module is used for generating a meter control instruction according to the conversation intention of the conversation text and the plurality of word slot information;
and the processing module is used for drawing the target instrument panel according to the instrument control instruction.
7. The instrument panel rendering processing apparatus according to claim 6, further comprising:
an obtaining module, configured to obtain a set of conversation samples, where each conversation sample includes: a dialog intent, a plurality of word slot fields corresponding to the dialog intent, and a plurality of dictionary values corresponding to each word slot field;
and the training module is used for training a dialogue template set containing the corresponding relation between the dialogue intention and the word slot field according to the dialogue sample set.
8. The instrument panel rendering processing apparatus of claim 7, wherein the extraction determination module is specifically configured to:
performing word segmentation on the dialog text to generate a plurality of segmented words;
matching the segmented words against the plurality of dictionary values in the dialog template set, and determining the successfully matched segmented words as word slot information;
and determining a word slot field set corresponding to the plurality of word slot information according to the corresponding relation between the conversation intention and the word slot field in the conversation template set, and determining the conversation intention of the conversation text according to the word slot field set.
9. The instrument panel drawing processing apparatus according to claim 6, wherein the generation module is specifically configured to:
convert the dialog intent of the dialog text into a control type according to a preset transmission protocol, and convert the plurality of pieces of word slot information into control parameters;
and generate an instrument control instruction matching the transmission protocol according to the control type and the control parameters.
10. The instrument panel drawing processing apparatus according to claim 6, wherein the processing module is specifically configured to:
acquire a device identifier of the target instrument panel;
and send a broadcast message carrying the device identifier and the instrument control instruction, so that a client corresponding to the device identifier draws the target instrument panel according to the instrument control instruction.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the instrument panel drawing processing method according to any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the instrument panel drawing processing method according to any one of claims 1 to 5.
CN202010334802.1A 2020-04-24 2020-04-24 Instrument panel drawing processing method and device, electronic equipment and storage medium Active CN111597808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334802.1A CN111597808B (en) 2020-04-24 2020-04-24 Instrument panel drawing processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111597808A 2020-08-28
CN111597808B (en) 2023-07-25

Family

ID=72190558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334802.1A Active CN111597808B (en) 2020-04-24 2020-04-24 Instrument panel drawing processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111597808B (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW409226B (en) * 1997-07-24 2000-10-21 Knowles Electronics Llc Universal voice operated command and control engine
WO2010131013A1 (en) * 2009-05-15 2010-11-18 British Telecommunications Public Limited Company Collaborative search engine optimisation
CN201623746U (en) * 2009-09-18 2010-11-03 北京大秦兴宇电子有限公司 Telephone set with data transmission function
CN103516711A (en) * 2012-06-27 2014-01-15 三星电子株式会社 Display apparatus, method for controlling display apparatus, and interactive system
CN203151689U (en) * 2012-10-26 2013-08-21 三星电子株式会社 Image processing apparatus and image processing system
CN108572810A (en) * 2013-12-27 2018-09-25 三星电子株式会社 The method of the content information of electronic equipment and offer electronic equipment
CN109117068A (en) * 2016-05-18 2019-01-01 苹果公司 Equipment, method and graphic user interface for messaging
CN107491284A (en) * 2016-06-10 2017-12-19 苹果公司 The digital assistants of automation state report are provided
CN107608998A (en) * 2016-06-11 2018-01-19 苹果公司 Application integration with digital assistants
WO2017218243A2 (en) * 2016-06-13 2017-12-21 Microsoft Technology Licensing, Llc Intent recognition and emotional text-to-speech learning system
CN107330120A (en) * 2017-07-14 2017-11-07 三角兽(北京)科技有限公司 Inquire answer method, inquiry answering device and computer-readable recording medium
CN107967825A (en) * 2017-12-11 2018-04-27 大连高马艺术设计工程有限公司 A kind of learning aids system that the corresponding figure of display is described according to language
CN110874859A (en) * 2018-08-30 2020-03-10 三星电子(中国)研发中心 Method and equipment for generating animation
CN109308178A (en) * 2018-08-31 2019-02-05 维沃移动通信有限公司 A kind of voice drafting method and its terminal device
CN109739605A (en) * 2018-12-29 2019-05-10 北京百度网讯科技有限公司 The method and apparatus for generating information
CN110008319A (en) * 2019-02-27 2019-07-12 百度在线网络技术(北京)有限公司 Model training method and device based on dialog template
CN110188361A (en) * 2019-06-10 2019-08-30 北京智合大方科技有限公司 Speech intention recognition methods and device in conjunction with text, voice and emotional characteristics
CN110377716A (en) * 2019-07-23 2019-10-25 百度在线网络技术(北京)有限公司 Exchange method, device and the computer readable storage medium of dialogue
CN110413756A (en) * 2019-07-29 2019-11-05 北京小米智能科技有限公司 The method, device and equipment of natural language processing
CN110544959A (en) * 2019-08-12 2019-12-06 国电南瑞科技股份有限公司 method, device and system for adjusting automatic power generation control parameters of power grid
CN110674259A (en) * 2019-09-27 2020-01-10 北京百度网讯科技有限公司 Intention understanding method and device
CN110705267A (en) * 2019-09-29 2020-01-17 百度在线网络技术(北京)有限公司 Semantic parsing method, semantic parsing device and storage medium
CN110909137A (en) * 2019-10-12 2020-03-24 平安科技(深圳)有限公司 Information pushing method and device based on man-machine interaction and computer equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
B. VAMSHI et al.: "Wireless voice-controlled multi-functional secure ehome", 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI) *
姜超: "Analysis of semantics-based multi-class classification algorithms for user intent domains", China Master's Theses Full-text Database, Information Science and Technology Series *
邝展鹏: "Voice interaction design and research: a case study of voice interaction design for financial self-service terminal devices", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417115A (en) * 2020-11-17 2021-02-26 华东理工大学 Network-based session state optimization method, device, server and storage medium
CN112417115B (en) * 2020-11-17 2024-04-05 华东理工大学 Network-based dialogue state optimization method, device, server and storage medium
CN112650844A (en) * 2020-12-24 2021-04-13 北京百度网讯科技有限公司 Tracking method and device of conversation state, electronic equipment and storage medium
CN113205569A (en) * 2021-04-25 2021-08-03 Oppo广东移动通信有限公司 Image drawing method and device, computer readable medium and electronic device
CN113869046A (en) * 2021-09-29 2021-12-31 阿波罗智联(北京)科技有限公司 Method, device and equipment for processing natural language text and storage medium
CN114168243A (en) * 2021-11-23 2022-03-11 广西电网有限责任公司 Dashbird multi-chart-based system and method for dynamically merging data
CN114168243B (en) * 2021-11-23 2024-04-02 广西电网有限责任公司 Data system and method based on dashboard multi-chart dynamic merging
CN114462364A (en) * 2022-02-07 2022-05-10 北京百度网讯科技有限公司 Method and device for inputting information
CN114462364B (en) * 2022-02-07 2023-01-31 北京百度网讯科技有限公司 Method and device for inputting information
CN117111879A (en) * 2023-10-25 2023-11-24 深圳市微克科技有限公司 Dial generation method and device, intelligent wearable device and storage medium
CN117111879B (en) * 2023-10-25 2024-05-03 深圳市微克科技股份有限公司 Dial generation method and device, intelligent wearable device and storage medium

Also Published As

Publication number Publication date
CN111597808B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111597808A (en) Instrument panel drawing processing method and device, electronic equipment and storage medium
CN111324727B (en) User intention recognition method, device, equipment and readable storage medium
CN114841274B (en) Language model training method and device, electronic equipment and storage medium
CN110738997B (en) Information correction method and device, electronic equipment and storage medium
JP7246437B2 (en) Dialogue emotion style prediction method, device, electronic device, storage medium and program
CN112269862B (en) Text role labeling method, device, electronic equipment and storage medium
CN112434139A (en) Information interaction method and device, electronic equipment and storage medium
CN111241234A (en) Text classification method and device
CN112382294A (en) Voice recognition method and device, electronic equipment and storage medium
CN110781657A (en) Management method, device and equipment for navigation broadcasting
CN116257690A (en) Resource recommendation method and device, electronic equipment and storage medium
CN113808572B (en) Speech synthesis method, speech synthesis device, electronic equipment and storage medium
CN114490967B (en) Training method of dialogue model, dialogue method and device of dialogue robot and electronic equipment
CN110633357A (en) Voice interaction method, device, equipment and medium
CN114490969B (en) Question and answer method and device based on table and electronic equipment
CN112581933B (en) Speech synthesis model acquisition method and device, electronic equipment and storage medium
CN114549695A (en) Image generation method and device, electronic equipment and readable storage medium
CN114020918A (en) Classification model training method, translation device and electronic equipment
CN114118937A (en) Information recommendation method and device based on task, electronic equipment and storage medium
KR20220024227A (en) Method and related apparatus for data annotation, computer program
CN114049875A (en) TTS (text to speech) broadcasting method, device, equipment and storage medium
CN112817463A (en) Method, equipment and storage medium for acquiring audio data by input method
CN113066498B (en) Information processing method, apparatus and medium
CN114398130B (en) Page display method, device, equipment and storage medium
CN114281981B (en) News brief report generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant