CN114047900A - Service processing method and device, electronic equipment and computer readable storage medium - Google Patents

Service processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN114047900A
Authority
CN
China
Prior art keywords
voice
module
functional module
voice data
target
Prior art date
Legal status
Pending
Application number
CN202111187654.6A
Other languages
Chinese (zh)
Inventor
唐德顺
王勇
Current Assignee
Zhongdian Jinxin Software Co Ltd
Original Assignee
Zhongdian Jinxin Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongdian Jinxin Software Co Ltd filed Critical Zhongdian Jinxin Software Co Ltd
Priority to CN202111187654.6A
Publication of CN114047900A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/16Speech classification or search using artificial neural networks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025Phonemes, fenemes or fenones being the recognition units

Abstract

An embodiment of the present application provides a service processing method and apparatus, an electronic device, and a storage medium, relating to the technical field of data processing. The method comprises the following steps: modularly encapsulating front-end interface interaction code to obtain at least one functional module, wherein the functional module comprises a user interface component; performing voice configuration on the functional module to generate a voice function module; and performing service processing based on the user interface component corresponding to the voice function module. The embodiment realizes a user interaction function that combines an interactive interface with voice control, effectively improving service processing efficiency.

Description

Service processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a service processing method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
Interaction design is the field concerned with defining and designing the behavior of man-made systems. It defines the content and structure of the communication between two or more interacting parties so that they can cooperate to achieve a given purpose. With the development of computer technology and the wide application of multimedia technology, computer interaction brings ever greater convenience to users.
In the prior art, an interactive interface is usually adopted to realize business communication between a user and a machine. The interactive interface is the channel for information exchange between a person and a computer: the user inputs information and operates the computer through the interface, and the computer presents information through the interface for the user to read, analyze, and judge. Generally, to use a service function of an interactive interface, the user must manually operate an input device such as a mouse or a touch screen, so service processing is inefficient and the operation is cumbersome.
Disclosure of Invention
The embodiment of the application provides a service processing method and device, electronic equipment and a computer readable storage medium, which can solve the problem of low service processing efficiency.
According to an aspect of an embodiment of the present application, there is provided a service processing method, including:
modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
carrying out voice configuration aiming at the functional module to generate a voice functional module;
and performing service processing based on the user interface component corresponding to the voice function module.
Optionally, the performing voice configuration on the function module to generate a voice function module includes:
determining interactive information corresponding to the functional module;
screening out at least one piece of target voice data from a preset voice library based on the interactive information;
and performing function matching on the target voice data and the functional module to obtain a voice function module.
Optionally, the functional module further comprises an interface for communication of the user interface component with an exterior of the functional module; the determining of the interactive information corresponding to the functional module includes:
extracting title text from a user interface component of the functional module, and extracting function address information from an interface corresponding to the user interface component;
the interactive information is determined based on at least one of the title text and the function address information.
Optionally, the screening out at least one piece of target voice data from a preset voice library based on the interaction information includes:
determining an interaction category based on the interaction information, and extracting first voice data from a voice library based on the interaction category; and
extracting keywords from the interactive information, and matching second voice data from the voice library based on the keywords;
at least one of the first voice data and the second voice data is set as target voice data.
Optionally, the performing the function matching on the target voice data and the function module includes:
configuring event information for the functional module based on the target voice data; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data; the playing priority of the first voice data is higher than that of the second voice data.
Optionally, the performing service processing based on the user interface component corresponding to the voice function module includes:
determining a keyword message based on the received voice message;
determining a target voice function module matched with the keyword message, and target voice data corresponding to the target voice function module;
and displaying the user interface component corresponding to the target voice function module in the preset interactive interface, and playing the target voice data corresponding to the target voice function module.
Optionally, the determining a keyword message based on the received voice message includes:
carrying out voice recognition on the voice message to obtain a text message;
keyword messages are extracted from the text messages.
According to another aspect of the embodiments of the present application, there is provided a service processing apparatus, including:
the packaging module is used for modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
the configuration module is used for carrying out voice configuration on the function module and generating a voice function module;
and the processing module is used for processing the service based on the user interface component corresponding to the voice function module.
Optionally, the configuration module includes:
the determining unit is used for determining the interactive information corresponding to the functional module;
the screening unit is used for screening out at least one piece of target voice data from a preset voice library based on the interactive information;
and the matching unit is used for performing function matching on the target voice data and the functional module to obtain a voice function module.
Optionally, the functional module further comprises an interface for communication of the user interface component with an exterior of the functional module; the determining unit is configured to:
extracting title text from a user interface component of the functional module, and extracting function address information from an interface corresponding to the user interface component;
the interactive information is determined based on at least one of the title text and the function address information.
Optionally, the screening unit is configured to:
determining an interaction category based on the interaction information, and extracting first voice data from a voice library based on the interaction category; and
extracting keywords from the interactive information, and matching second voice data from the voice library based on the keywords;
at least one of the first voice data and the second voice data is set as target voice data.
Optionally, the matching unit is configured to:
configuring event information for the functional module based on the target voice data; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data; the playing priority of the first voice data is higher than that of the second voice data.
Optionally, the processing module is configured to:
determining a keyword message based on the received voice message;
determining a target voice function module matched with the keyword message, and target voice data corresponding to the target voice function module;
and displaying the user interface component corresponding to the target voice function module in the preset interactive interface, and playing the target voice data corresponding to the target voice function module.
Optionally, the processing module is further configured to:
carrying out voice recognition on the voice message to obtain a text message;
keyword messages are extracted from the text messages.
According to another aspect of an embodiment of the present application, there is provided an electronic apparatus including:
the device comprises a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to realize the steps of the method shown in the first aspect of the embodiment of the application.
According to a further aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as set forth in the first aspect of embodiments of the present application.
According to an aspect of embodiments of the present application, there is provided a computer program product comprising a computer program that, when executed by a processor, performs the steps of the method illustrated in the first aspect of embodiments of the present application.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the method and the device, the front-end interface interaction codes are packaged into the functional modules, the functional modules are subjected to voice configuration, the voice functional modules are generated, and the user interaction function based on the combination of the interaction interface and the voice control is realized. Under the condition that a user is inconvenient to click a preset interactive interface, a voice instruction can be adopted to perform service processing based on a user interface component corresponding to the voice function module, so that the flexibility of interaction when the user performs a target service is effectively improved; meanwhile, the complex interactive operation aiming at the interactive interface in the prior art is avoided, the service processing efficiency is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of a service processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a service processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a functional module for generating a speech function in a service processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a process of determining a keyword message in a service processing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of an interactive interface in a service processing method according to an embodiment of the present application;
fig. 6 is a flowchart illustrating an exemplary service processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a service processing electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification in connection with embodiments of the present application, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items it joins; for example, "A and/or B" indicates an implementation as "A", or an implementation as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Human-computer interaction is the study of the interaction between a system and its users. The system may be any of a variety of machines, or a computerized system and its software. The human-computer interaction interface generally refers to the portion visible to the user; the user communicates with the system and operates it through this interface. Examples include the play button of a radio, the instrument panel of an airplane, and the control room of a power plant. The human-machine interface is designed to reflect the user's understanding of the system (the mental model), for the sake of the system's usability and user-friendliness.
One important issue in human-computer interaction is that different computer users have different styles of use: different educational backgrounds, different ways of understanding, and different learning methods and skills. For example, the usage habits of a left-handed person are completely different from those of the general population; cultural and national factors must also be considered. Second, researching and designing human-computer interaction requires accounting for the rapid evolution of user interface technology: newly introduced interaction techniques may not fit earlier research, and as users master a new interface they may raise new requirements.
The interactive interface is the channel for information exchange between a person and a computer: the user inputs information and operates the computer through the interface, and the computer presents information through the interface for the user to read, analyze, and judge. For example, when banking business is handled, the user is required to fill in an electronic form to confirm the user data. In actual filling, the user must perform page turning, input, and search operations unaided; when the interactive interface contains too many menus, it is difficult, especially for special groups such as the elderly, to find the target service accurately and quickly, so the interactive interface suffers from low service processing efficiency and cumbersome operation.
The application provides a service processing method, a service processing device, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The embodiment of the application provides a service processing method, which can be realized by a terminal or a server. The terminal or the server related to the embodiment of the application can perform voice configuration on the functional module to generate the voice functional module, so that the technical scheme of the embodiment of the application can perform service processing based on a voice mode by adopting the user interface component corresponding to the voice functional module, thereby effectively improving the service processing efficiency and improving the user experience.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments. It should be noted that the following embodiments may be referred to, referred to or combined with each other, and the description of the same terms, similar features, similar implementation steps and the like in different embodiments is not repeated.
As shown in fig. 1, the service processing method of the present application may be applied to the scenario shown in fig. 1, specifically, the server 101 may first obtain a front-end interface interaction code from the client 102, perform modular encapsulation on the front-end interface interaction code to obtain a functional module, and then perform voice configuration on the functional module to generate a voice functional module; the service processing is carried out based on the voice function module, so that the user interaction function based on the combination of the interaction interface and the voice control is realized, and the efficiency of the service processing is improved.
In the scenario shown in fig. 1, the service processing method may be performed in the server, or in another scenario, may be performed in the terminal.
Those skilled in the art will understand that the "terminal" used herein may be a Mobile phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), etc.; a "server" may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
An embodiment of the present application provides a service processing method, and as shown in fig. 2, the method includes:
s201, modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module includes a user interface component.
The user interface component can be obtained by encapsulating the front-end interface interaction code. A user interface component may be an interface control, that is, a visual graphic "element" that can be placed on a form, such as a button or a file edit box. Most controls respond to an "event" by executing a function or running code to complete a response.
In particular, a user interface component includes attributes, methods, and events. The attributes determine the appearance of the component and typically include its name, shape, display style, and font color. The methods are class methods prefabricated for the component; they are provided for programmers and can be used to set and process the component's own characteristics. An event is the component's response to keyboard, mouse, and other operations. Every component has its own event set, and once an event occurs, the corresponding event procedure is executed. Each event object has a specific name, and the event procedure code is written by the programmer according to the requirements of the problem, for example, a button's Click event.
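To make the attribute/method/event model above concrete, here is a minimal Python sketch (hypothetical, not part of the patent) of a button control with an event set; the class and handler names are illustrative only:

```python
# Hypothetical sketch of a user interface component with attributes,
# prefabricated methods, and an event set, as described above.
class Button:
    def __init__(self, name, caption):
        # Attributes determine the component's appearance (name, caption, ...).
        self.name = name
        self.caption = caption
        self._handlers = {}  # event name -> event procedure written by the programmer

    def on(self, event, handler):
        # Prefabricated method: register an event procedure for an event name.
        self._handlers[event] = handler

    def fire(self, event):
        # Once an event occurs, the corresponding event procedure is executed.
        handler = self._handlers.get(event)
        return handler() if handler else None

submitted = []
btn = Button("submit_btn", "Submit")
btn.on("click", lambda: submitted.append(btn.name))
btn.fire("click")  # runs the Click event procedure
```

Firing an event for which no procedure is registered simply does nothing, mirroring the "event set" behavior described above.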
Specifically, the target service may be decomposed into at least one functional module based on a processing flow of the target service; the functional module can comprise a general component, a user interface component, namely an interface control and an interface; the general-purpose component or the user interface component communicates with the outside of the functional module based on the interface. For example, the interface may be an application program interface, which may be a convention for linking different components of the software system, or may be some predefined function for system call.
In the embodiment of the present application, taking bank account opening as the target service, the account opening service may be decomposed, based on the processing flow of the bank account opening service, into a user data submitting module, an account opening information confirming module, and a user feedback module. The user data submitting module comprises user interface components such as a name input box, a gender radio box, a mobile phone number input box, a certificate type radio box, a certificate number input box, and a submit button. The user data submitting module also comprises an action listener interface; through this interface, the submit button writes the information the user entered in the input boxes and radio boxes into a database, realizing the logical function of the user data submitting module.
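As a hedged illustration of the modular decomposition above, the user data submitting module and its listener-style submission could be sketched as follows; all identifiers (`UserProfileModule`, `database`, the field keys) are hypothetical, not from the patent:

```python
# Hypothetical sketch: an account-opening service decomposed into a
# user data submitting module whose submit button persists the
# collected inputs through a listener-style interface.
database = []  # stand-in for the database behind the action listener interface

class UserProfileModule:
    def __init__(self):
        # User interface components modeled as named fields
        # (input boxes and radio boxes).
        self.fields = {"name": None, "gender": None, "phone": None,
                       "id_type": None, "id_number": None}

    def set_field(self, key, value):
        # Record what the user entered in an input box or radio box.
        self.fields[key] = value

    def submit(self):
        # Action-listener interface: write the entered information
        # into the database.
        database.append(dict(self.fields))

m = UserProfileModule()
m.set_field("name", "Zhang San")
m.set_field("id_type", "resident identity card")
m.submit()
```

The other modules (account opening information confirmation, user feedback) would follow the same pattern of components plus an interface for external communication.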
S202, voice configuration is carried out on the functional module, and a voice functional module is generated.
Specifically, the terminal or the server for performing the service processing may determine the voice data to be configured based on the function module, and then configure event information for the function module based on the voice data, thereby generating the voice function module.
In the embodiment of the present application, taking the example that the target service is bank account opening, the account opening service may be decomposed into a user data submitting module, an account opening information confirming module, and a user feedback module based on the processing flow of the bank account opening service. The user data submitting module comprises a name input box, a gender radio box, a mobile phone number input box, a certificate type radio box, a certificate number input box, a submitting button and other user interface components.
Specifically, determining the voice data to be configured based on the user data submitting module includes: voice data 1 "May I ask your name?", voice data 2 "May I ask your gender?", voice data 3 "What is your mobile phone number?", voice data 4 "Is your certificate a resident identity card, a driver's license, a passport, or a Hong Kong and Macau travel permit?", and so on. Voice data 1 is then configured with the name input box, voice data 2 with the gender radio box, voice data 3 with the mobile phone number input box, voice data 4 with the certificate type radio box, and so on. Meanwhile, the user data submitting module also comprises an action listener interface, through which the submit button writes the information the user entered in the input boxes and radio boxes into a database. The voice data configured for the submit button is "Please confirm whether to submit". The logical function of the user data submitting module can thus be realized through the voice data, by combining voice with manual input. The specific steps for generating the voice function module will be described in detail below.
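The voice configuration step described above, associating each user interface component with a piece of voice data, might be sketched as a simple mapping; the component names and the prompt wording below are illustrative assumptions:

```python
# Hypothetical sketch: configuring voice data (prompts) for each user
# interface component of the user data submitting module.
voice_config = {}

def configure_voice(component_name, prompt):
    # Associate a voice prompt with a named component.
    voice_config[component_name] = prompt

configure_voice("name_input", "May I ask your name?")
configure_voice("gender_radio", "May I ask your gender?")
configure_voice("phone_input", "What is your mobile phone number?")
configure_voice("id_type_radio",
                "Is your certificate a resident identity card, a driver's "
                "license, a passport, or a Hong Kong and Macau travel permit?")
configure_voice("submit_btn", "Please confirm whether to submit.")
```

In a real system the prompts would come from the preset voice library rather than being hard-coded; this mapping only illustrates the component-to-voice association.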
And S203, performing service processing based on the user interface component corresponding to the voice function module.
The voice function module is a functional module configured with voice event information; based on this event information, voice control of and interaction with the user interface component can be realized.
In the embodiment of the present application, taking the example that the target service is bank account opening, the account opening service may be decomposed into a user data submitting module, an account opening information confirming module, and a user feedback module based on the processing flow of the bank account opening service. The user data submitting module comprises a name input box, a gender radio box, a mobile phone number input box, a certificate type radio box, a certificate number input box, a submitting button and other user interface components. The voice data to be configured can be determined based on the user data submitting module, and each user interface component can be configured based on the voice data. Meanwhile, the user data submission module also comprises an action listener interface, and the submission button needs to write information input by a user in the user interface components such as the input frame, the radio frame and the like into a database through the action listener interface; and the submission button is used for confirming whether the submission is carried out or not along with the configured voice data, and the logic function of the user data submission module can be realized through the voice data in a voice or manual input mode.
Specifically, in response to the user's voice instruction "I want to open an account", the terminal or server performing the service processing may display the user interface components included in the user data submitting module in the preset interactive interface. The module then plays the voice data in sequence: after voice data 1 is played and the user supplies voice input for the name input box, voice data 2 is played, and so on until all the user data have been recorded. Finally, in response to the user's confirmation voice instruction, the submit button writes the user profile into the database through the action listener interface.
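The voice-driven flow just described (display the components, play each prompt in turn, record the reply, then submit) can be sketched as follows; every identifier here is hypothetical:

```python
# Hypothetical sketch of the voice-driven service flow: play each
# configured prompt in order, record the user's (simulated) spoken
# reply, and persist the profile on confirmation.
def run_voice_flow(prompts, replies, store):
    # prompts: ordered (component, prompt) pairs for the module.
    # replies: the user's simulated voice answers, one per component.
    profile = {}
    for (component, prompt), reply in zip(prompts, replies):
        # "Play" the prompt for this component, then record the reply.
        profile[component] = reply
    # Confirmation step: the submit action writes the profile to the store.
    store.append(profile)
    return profile

db = []
prompts = [("name_input", "May I ask your name?"),
           ("phone_input", "What is your mobile phone number?")]
result = run_voice_flow(prompts, ["Li Si", "13800000000"], db)
```

A production flow would additionally perform speech recognition on each reply and re-prompt on failure; that error handling is omitted here for brevity.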
According to the method and the device, the front-end interface interaction codes are packaged into the functional modules, the functional modules are subjected to voice configuration, the voice functional modules are generated, and the user interaction function based on the combination of the interaction interface and the voice control is realized. Under the condition that a user is inconvenient to click a preset interactive interface, a voice instruction can be adopted to perform service processing based on a user interface component corresponding to the voice function module, so that the flexibility of interaction when the user performs a target service is effectively improved; meanwhile, the complex interactive operation aiming at the interactive interface in the prior art is avoided, the service processing efficiency is improved, and the user experience is improved.
In the embodiment of the present application, a possible implementation manner is provided, as shown in fig. 3, the performing voice configuration on the function module in step S202 to generate a voice function module includes:
(1) and determining the interactive information corresponding to the functional module.
Specifically, the terminal or the server for performing the service processing may analyze the components and the interfaces included in the functional module to obtain the corresponding interaction information.
The interaction information may indicate the interactive voice data corresponding to the functional module. In practical applications, target voice data can be screened from a preset voice library based on the interaction information for voice matching; the specific screening process is described in detail below.
The embodiment of the application provides a possible implementation manner, in which the functional module further comprises an interface used for communication between the user interface component and the outside of the functional module; determining the interaction information corresponding to the functional module includes:
a. Extract title text from the user interface components of the functional module, and extract function address information from the interfaces corresponding to the user interface components.
Continuing the bank account opening example, the user data submission module includes user interface components such as a name input box, a gender radio box, a mobile phone number input box, a certificate type radio box, a certificate number input box, and a submit button.
In particular, the title text may be extracted from the attribute information in the user interface component. For example, when the user interface component is a name input box, the title text included in its attribute information may be "name"; when the user interface component is a certificate type radio box, the title text included in its attribute information may be "certificate type".
Meanwhile, the title text "submit" can be extracted from the attribute information of the submit button, and the function address information, which indicates the storage path of the user data, can be extracted from the action listener interface corresponding to the submit button.
b. Determine the interaction information based on at least one of the title text and the function address information.
Continuing the example, when the user interface component is the certificate type radio box, the title text "certificate type" extracted from its attribute information may be used as the interaction information. Likewise, the title text "submit" may be extracted from the attribute information of the submit button, and the function address information may be extracted from its action listener interface; that information indicates the storage path of the user data, namely a database form named "user profile". "Submit" and "user profile" may then be used together as the interaction information.
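A minimal Python sketch of this extraction step follows. The dictionary shapes and the key names (`title`, `action_listener`, `storage_path`) are invented for illustration; the patent does not prescribe a concrete data format for components or interfaces.

```python
def extract_interaction_info(component: dict) -> list:
    """Collect title text and any function address info from a component (illustrative)."""
    info = []
    if component.get("title"):
        info.append(component["title"])           # title text from attribute information
    listener = component.get("action_listener") or {}
    if "storage_path" in listener:
        info.append(listener["storage_path"])     # function address info: where data is stored
    return info

name_box = {"kind": "input_box", "title": "name"}
submit_button = {
    "kind": "button",
    "title": "submit",
    # function address info indicating a database form named "user profile"
    "action_listener": {"storage_path": "user profile"},
}
```

For a plain input box only the title text is returned; for the submit button both "submit" and "user profile" become the interaction information, matching the example above.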
(2) screening out at least one piece of target voice data from a preset voice library based on the interaction information.
The embodiment of the present application provides a possible implementation manner, where the screening out at least one piece of target voice data from a preset voice library based on the interaction information includes:
a. Determine an interaction category based on the interaction information, and extract first voice data from the voice library based on the interaction category; extract keywords from the interaction information, and match second voice data from the voice library based on the keywords.
Specifically, the terminal or server performing the service processing may determine in advance the correspondence between interaction categories and voice data and between keywords and voice data, and then construct the voice library based on those correspondences. The interaction category may be determined based on the input rule of the user interface component to which the interaction information corresponds; meanwhile, keywords may be extracted from the interaction information based on a keyword extraction algorithm.
The interaction category can include a fixed category and a custom category. The fixed category indicates that the user interface component corresponding to the interaction information interacts with the user based on a fixed input rule; the custom category indicates that the component interacts with the user based on a custom input rule.
Continuing the bank account opening example: the name input box, the mobile phone number input box, and the certificate number input box correspond to the user's personalized information, i.e. custom input rules, so the interaction category determined from the interaction information extracted from these user interface components is the custom category; the gender radio box, the certificate type radio box, and the submit button correspond to fixed input rules, so the interaction category determined from their interaction information is the fixed category.
Specifically, when the interaction category is the custom category, taking the mobile phone number input box as an example, the corresponding voice data may include: "May I have your mobile phone number?", "Please confirm whether the mobile phone number xxx is correct", and "The mobile phone number you entered is invalid; please enter it again". The mobile phone number is the customer's personalized information, so the voice data needs to be obtained and filled in based on the user's input instruction.
When the interaction category is the fixed category, taking the certificate type radio box as an example, the corresponding voice data may include: "Is your certificate type a resident identity card, driver's license, passport, or Hong Kong and Macao pass?", "OK, we will process it with your resident identity card", "Understood, we will process it with your driver's license", "Received, we will process it with your passport", and the like. The certificate type is determined by the user from fixed options, so only matching the voice data to the user's input is required.
Meanwhile, when the functional module is the user feedback module, it includes user interface components such as a text box and a text input box. The title corresponding to the text box's attributes is: "Dear customer, to ensure our service quality, please evaluate the bank account opening service; your valuable suggestions are welcome". This title can be used as the interaction information, and the keywords "evaluation" and "advice" can be extracted from it. Based on these keywords, the voice data "Thank you for your support; we will continuously optimize our service quality based on your advice, and welcome you back next time" can be determined from the preset voice library.
b. Use at least one of the first voice data and the second voice data as the target voice data.
When there are multiple pieces of target voice data, a priority order can be set for them, and the functional module can be voice-configured based on that priority order; the specific configuration steps are described in detail below.
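The two-path screening above (first voice data by category, second voice data by keyword) can be sketched in Python. The voice library contents and the lookup-by-dictionary structure are assumptions made for illustration; the patent only requires that the library encode the two correspondences.

```python
# hypothetical voice library: entries indexed by interaction category and by keyword
VOICE_LIBRARY = {
    "fixed": {
        "certificate type": ["Is your certificate type a resident identity card, "
                             "driver's license, passport, or Hong Kong and Macao pass?"],
    },
    "custom": {
        "mobile phone number": ["May I have your mobile phone number?"],
    },
    "keywords": {
        "evaluation": ["Thank you for your support; we will keep optimizing our service."],
    },
}

def screen_target_voice_data(interaction_info: list, category: str) -> list:
    """Return first voice data (matched by category) before second voice data
    (matched by keyword), reflecting their relative playing priority."""
    first, second = [], []
    for item in interaction_info:
        first.extend(VOICE_LIBRARY.get(category, {}).get(item, []))
        for keyword, voices in VOICE_LIBRARY["keywords"].items():
            if keyword in item:
                second.extend(voices)
    return first + second

by_category = screen_target_voice_data(["mobile phone number"], "custom")
by_keyword = screen_target_voice_data(["please give an evaluation"], "fixed")
```

The returned list order encodes the priority rule stated later: first voice data outranks second voice data.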
In the embodiment of the application, the interaction information can be determined based on the user interface components included in the functional module, and the voice data corresponding to those components can then be determined based on the interaction information. This strengthens the match between the functional module and the voice data, further improves user experience, and lays a good foundation for subsequently generating the voice function module.
(3) performing function matching between the target voice data and the functional module to obtain a voice function module.
The embodiment of the present application provides a possible implementation manner, where performing function matching between the target voice data and the functional module includes:
configuring event information for the functional module based on the target voice data; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data; the playing priority of the first voice data is higher than that of the second voice data.
Continuing the bank account opening example, the target voice data determined for the mobile phone number input box may include: voice A "May I have your mobile phone number?", voice B "Please confirm whether the mobile phone number xxx is correct", voice C "The mobile phone number you entered is invalid; please enter it again", and the like.
Specifically, a trigger event may be set for the mobile phone number input box according to these voices. When a user instruction indicates entering a mobile phone number, playing voice A may be triggered; the mobile phone number is then determined from the user's manual operation on the interactive interface or from a voice instruction, and voice B is triggered, with the number in voice B filled in from the user's input. Format detection is then performed on the entered number, for example confirming it is 11 digits; when the number is detected not to conform to the preset format, voice C is played.
In this embodiment, taking bank transfer as the target service, the functional modules include a transfer application module, which includes a payee name input box. The target voice data corresponding to this user interface component may include voice D "May I have the name of the payee you want to transfer to?" and voice E "Would you like to choose a payee from your history records?". Specifically, the trigger event may be set so that clicking the user interface component triggers playing voice D. Meanwhile, based on the actual situation, when the user performs a transfer for the first time, the priority of voice D can be set higher than that of voice E; when it is not the user's first transfer, the priority of voice E can be set higher than that of voice D.
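The event-information configuration for the transfer example can be sketched as follows. The function name, the `"trigger"`/`"playlist"` keys, and the exact voice wording are illustrative assumptions; only the priority rule (voice D first for a first-time transfer, voice E first otherwise) comes from the description above.

```python
def configure_event_info(first_time: bool) -> dict:
    """Configure trigger-event info for the payee name input box (sketch of the
    transfer example; the event structure is invented for illustration)."""
    voice_d = "May I have the name of the payee you want to transfer to?"
    voice_e = "Would you like to choose a payee from your history records?"
    # a first-time transfer prioritizes voice D; otherwise voice E comes first
    playlist = [voice_d, voice_e] if first_time else [voice_e, voice_d]
    return {"trigger": "click", "playlist": playlist}

first_event = configure_event_info(first_time=True)
repeat_event = configure_event_info(first_time=False)
```

The event info thus records both the play operation (triggered on click) and the play priority, as the implementation manner above requires.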
A possible implementation manner is provided in this embodiment of the present application, where the performing service processing based on the user interface component corresponding to the voice function module in step S203 includes:
(1) determining a keyword message based on the received voice message.
Specifically, the terminal or the server for performing the service Processing may process the voice message by using NLP (Natural Language Processing) to obtain the keyword message.
Natural language processing (NLP) analyzes, understands, and processes natural language using computer technology: the computer serves as a powerful tool for language research, supporting quantitative study of language information and providing language descriptions that humans and computers can use in common. The specific voice message processing steps are shown below.
A possible implementation manner is provided in the embodiment of the present application, as shown in fig. 4, the determining a keyword message based on a received voice message includes:
a. Perform voice recognition on the voice message to obtain a text message.
Specifically, the terminal or the server for performing the service processing may perform voice recognition on the voice message based on the voice recognition network to obtain a text message corresponding to the content of the voice message, thereby implementing conversion from voice to text. The voice recognition network is composed of an acoustic model and a language model.
Firstly, the audio features of the voice message can be extracted, the probability that each phoneme in a preset training set generates each frame of audio features can be calculated based on a trained acoustic model, and the phoneme sequence with the maximum probability can be determined, thereby realizing the conversion from audio features to a phoneme sequence. The acoustic model may be a GMM (Gaussian Mixture Model), an HMM (Hidden Markov Model), or the like.
Secondly, the text message is determined from the phoneme sequence based on the trained language model, such that the probability of the phoneme sequence producing that text is the maximum, realizing the conversion from phoneme sequence to text message. The language model is used to calculate the probability that the phoneme sequence constitutes each complete text; it may be a statistics-based N-gram model, a neural network language model, or a model based on the Transformer architecture.
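A toy Python sketch of the two-stage decoding can make the acoustic-model/language-model combination concrete. The probabilities and phoneme strings below are invented, and real recognizers search over lattices rather than enumerating dictionaries; this only illustrates picking the text that maximizes the combined probability.

```python
import math

def decode(acoustic_scores: dict, language_scores: dict) -> str:
    """Pick the text maximizing P(phonemes | audio) * P(text | phonemes) --
    a toy stand-in for the acoustic-model plus language-model search above."""
    best_text, best_logp = "", -math.inf
    for phonemes, p_acoustic in acoustic_scores.items():
        for (ph, text), p_language in language_scores.items():
            if ph != phonemes:
                continue
            # work in log space, as real decoders do, to avoid underflow
            logp = math.log(p_acoustic) + math.log(p_language)
            if logp > best_logp:
                best_text, best_logp = text, logp
    return best_text

# invented probabilities for two competing phoneme sequences
acoustic = {"m ay f ow n": 0.7, "m ay f aa n": 0.3}
language = {("m ay f ow n", "my phone"): 0.8, ("m ay f aa n", "my fawn"): 0.2}
recognized = decode(acoustic, language)
```

Here the acoustic scores play the role of the GMM/HMM stage and the language scores the role of the N-gram or Transformer stage.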
b. Extract the keyword message from the text message.
Specifically, the terminal or server performing the service processing may determine the keyword message from the text message based on a keyword extraction network, which may use a supervised or an unsupervised algorithm. Supervised keyword extraction is mainly performed by classification: a rich and complete word list is constructed, the matching degree between each document and each word in the list is judged, and keywords are extracted in a labeling-like manner. Unsupervised keyword extraction needs neither a manually maintained word list nor manually annotated training corpora; it usually adopts the TF-IDF (term frequency-inverse document frequency) algorithm, the TextRank algorithm (a graph-based ranking algorithm for text), or a topic model algorithm.
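Of the unsupervised options, TF-IDF is the simplest to sketch. The corpus and document below are invented stand-ins for the user feedback text; this is a bare-bones implementation assuming whitespace tokenization, not the patent's actual keyword extraction network.

```python
import math
from collections import Counter

def tfidf_keywords(doc: str, corpus: list, top_k: int = 2) -> list:
    """Rank words of `doc` by TF-IDF against a background corpus (unsupervised:
    no maintained word list and no annotated training data needed)."""
    words = doc.split()
    tf = Counter(words)
    n_docs = len(corpus) + 1                     # +1 counts `doc` itself
    scores = {}
    for word, count in tf.items():
        # document frequency: background docs containing the word, plus `doc`
        df = 1 + sum(1 for d in corpus if word in d.split())
        scores[word] = (count / len(words)) * math.log(n_docs / df)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

corpus = [
    "please rate our service and tell us your thoughts",
    "thank you for banking with us",
    "we value your feedback",
]
doc = ("please share your evaluation of the service and any advice "
       "your evaluation and advice matter")
keywords = tfidf_keywords(doc, corpus)
```

Words frequent in the feedback text but rare in the background corpus ("evaluation", "advice") score highest, matching the keywords extracted in the user feedback example above.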
(2) determining the target voice function module matched with the keyword message and the target voice data corresponding to the target voice function module.
Specifically, the terminal or server performing the service processing may determine in advance the correspondence between keyword messages, target voice function modules, and target voice data, and then determine the target voice function module and target voice data based on that correspondence. The target voice function module can use the received keyword message as the trigger condition for the target voice data, and set trigger event information for the user interface components it contains according to that condition.
(3) displaying the user interface component corresponding to the target voice function module in the preset interactive interface, and playing the target voice data corresponding to the target voice function module.
Continuing the bank account opening example, the target voice data determined for the mobile phone number input box includes: voice A "May I have your mobile phone number?", voice B "Please confirm whether the mobile phone number xxx is correct", voice C "The mobile phone number you entered is invalid; please enter it again", and the like.
Specifically, the user data submission module may be displayed in the preset interactive interface, as shown in fig. 5. The keyword message "mobile phone number" can be determined from the user's voice message "my mobile phone number is xxx"; based on the keyword message, the number indicated by the voice message is displayed in the mobile phone number input box of the interactive interface, and playing voice B is triggered. Format detection is then performed on the entered number, for example confirming it is 11 digits; when the number does not conform to the preset format, voice C is played and the number shown in the input box is cleared so that the user can enter it again.
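The routing-and-validation flow just described can be sketched in Python. The module registry shape, the 11-digit check, and the voice texts are illustrative assumptions; "voice B" and "voice C" here correspond to the confirmation and retry voices of the example.

```python
import re

def handle_voice_message(keyword: str, voice_text: str, modules: dict):
    """Route a keyword message to its matching voice function module and
    validate the input (names and structures here are illustrative)."""
    module = modules.get(keyword)
    if module is None:
        return None
    digits = re.sub(r"\D", "", voice_text)       # keep only the spoken digits
    if keyword == "mobile phone number" and not re.fullmatch(r"\d{11}", digits):
        module["component"]["value"] = None      # clear the input box
        return module["retry"]                   # play voice C
    module["component"]["value"] = digits        # display the number in the input box
    return module["confirm"].replace("xxx", digits)  # play voice B

modules = {
    "mobile phone number": {
        "component": {"kind": "input_box", "title": "mobile phone number", "value": None},
        "confirm": "Please confirm whether the mobile phone number xxx is correct.",
        "retry": "The mobile phone number you entered is invalid; please enter it again.",
    }
}
reply = handle_voice_message(
    "mobile phone number", "my mobile phone number is 13812345678", modules)
```

A valid 11-digit number is written into the component and echoed back in the confirmation voice; an invalid one clears the box and returns the retry voice instead.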
In order to better understand the above service processing method, an example of the service processing method of the present application is described in detail below with reference to fig. 6, which includes the following steps:
S601, modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module includes a user interface component.
Wherein, the user interface component can be packaged based on the front-end interface interaction code.
Specifically, a terminal or a server for performing service processing may decompose a target service into at least one functional module based on a processing flow of the target service; the functional modules may include, among other things, generic components, user interface components, and interfaces.
S602, determining the interactive information corresponding to the functional module.
Specifically, the terminal or the server for performing the service processing may analyze the components and the interfaces included in the functional module to obtain the corresponding interaction information.
The interactive information may indicate interactive voice data corresponding to the functional module.
S603, screening out at least one piece of target voice data from a preset voice library based on the interactive information.
Specifically, a terminal or a server for performing service processing determines an interaction category based on interaction information, and extracts first voice data from a voice library based on the interaction category; extracting keywords from the interactive information, and matching second voice data from the voice library based on the keywords; at least one of the first voice data and the second voice data is set as target voice data.
S604, configuring event information for the functional module based on the target voice data to obtain a voice function module; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data, and the play priority of the first voice data is higher than that of the second voice data.
And S605, performing service processing based on the user interface component corresponding to the voice function module.
Specifically, the terminal or server performing the service processing determines a keyword message based on the received voice message, determines the target voice function module matched with the keyword message and the target voice data corresponding to it, displays the user interface component corresponding to the target voice function module in the preset interactive interface, and plays the target voice data.
As described above, encapsulating the front-end interface interaction code into functional modules and voice-configuring them to generate voice function modules realizes user interaction that combines the interactive interface with voice control. When it is inconvenient for a user to click the preset interactive interface, service processing can be driven by voice instructions through the corresponding user interface components, improving interaction flexibility, service processing efficiency, and user experience.
An embodiment of the present application provides a service processing apparatus, as shown in fig. 7, the service processing apparatus 70 may include: an encapsulation module 701, a configuration module 702 and a processing module 703;
the packaging module 701 is used for modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
a configuration module 702, configured to perform voice configuration for the functional module, and generate a voice functional module;
and the processing module 703 is configured to perform service processing based on the user interface component corresponding to the speech function module.
In an embodiment of the present application, a possible implementation manner is provided, and the configuration module 702 may include:
the determining unit is used for determining the interactive information corresponding to the functional module;
the screening unit is used for screening out at least one piece of target voice data from a preset voice library based on the interactive information;
and the matching unit is used for performing function matching between the target voice data and the functional module to obtain a voice function module.
The embodiment of the application provides a possible implementation manner, and the functional module further comprises an interface, wherein the interface is used for communication between the user interface component and the outside of the functional module; the determining unit may be configured to:
extracting title characters from a user interface component of the functional module, and extracting function address information from an interface corresponding to the user interface component;
the interactive information is determined based on at least one of the title text and the function address information.
The embodiment of the present application provides a possible implementation manner, and the screening unit may be configured to:
determining an interaction category based on the interaction information, and extracting first voice data from a voice library based on the interaction category; and
extracting keywords from the interaction information, and matching second voice data from the voice library based on the keywords;
at least one of the first voice data and the second voice data is set as target voice data.
In an embodiment of the present application, a possible implementation manner is provided, and the matching unit may be configured to:
configuring event information for the functional module based on the target voice data; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data; the playing priority of the first voice data is higher than that of the second voice data.
In an embodiment of the present application, a possible implementation manner is provided, and the processing module 703 may be configured to:
determining a keyword message based on the received voice message;
determining a target voice function module matched with the keyword message and target voice data corresponding to the target voice function module;
and displaying the user interface component corresponding to the target voice function module in the preset interactive interface, and playing the target voice data corresponding to the target voice function module.
A possible implementation manner is provided in this embodiment of the present application, and the processing module 703 may be further configured to:
carrying out voice recognition on the voice message to obtain a text message;
keyword messages are extracted from the text messages.
As described above, encapsulating the front-end interface interaction code into functional modules and voice-configuring them to generate voice function modules realizes user interaction that combines the interactive interface with voice control, improving interaction flexibility, service processing efficiency, and user experience.
The apparatus of the embodiment of the present application may execute the method provided by the embodiment of the present application, and the implementation principle is similar, the actions executed by the modules in the apparatus of the embodiments of the present application correspond to the steps in the method of the embodiments of the present application, and for the detailed functional description of the modules of the apparatus, reference may be specifically made to the description in the corresponding method shown in the foregoing, and details are not repeated here.
The embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory; the processor executes the computer program to implement the steps of the service processing method. Compared with the prior art, encapsulating the front-end interface interaction code into functional modules and voice-configuring them to generate voice function modules realizes user interaction combining the interactive interface with voice control: when it is inconvenient for a user to click the preset interactive interface, service processing can be driven by voice instructions through the corresponding user interface components, improving interaction flexibility, service processing efficiency, and user experience.
In an alternative embodiment, an electronic device is provided. As shown in fig. 8, the electronic device 80 includes a processor 801 and a memory 803, wherein the processor 801 is connected to the memory 803, for example via a bus 802. Optionally, the electronic device 80 may further include a transceiver 804, which may be used for data interaction between this electronic device and other electronic devices, such as sending and/or receiving data. It should be noted that, in practical applications, the transceiver 804 is not limited to one, and the structure of the electronic device 80 does not limit the embodiment of the present application.
The Processor 801 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 801 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 802 may include a path that transfers information between the above components. The bus 802 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 802 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
The memory 803 may be a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store a computer program and can be read by a computer, without limitation.
The memory 803 stores a computer program for executing the embodiments of the present application, and its execution is controlled by the processor 801. The processor 801 executes the computer program stored in the memory 803 to implement the steps shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, and tablets (PADs), and fixed terminals such as digital TVs and desktop computers.
Embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program can implement the steps and corresponding contents of the foregoing method embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements the following:
modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
performing voice configuration for the functional module to generate a voice function module;
and performing service processing based on the user interface component corresponding to the voice function module.
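As an illustrative, non-authoritative sketch of the three steps above (all type names, fields, and sample data are assumptions for illustration, not taken from this application):

```typescript
interface FunctionalModule {
  title: string;    // title text shown on the user interface component
  endpoint: string; // function address information of the module's interface
}

interface VoiceFunctionModule extends FunctionalModule {
  voiceClips: string[]; // target voice data configured for the module
}

// Step 1: modular packaging of the front-end interface interaction code
function packageModule(title: string, endpoint: string): FunctionalModule {
  return { title, endpoint };
}

// Step 2: voice configuration for the functional module
function configureVoice(
  mod: FunctionalModule,
  voiceLibrary: Record<string, string[]>
): VoiceFunctionModule {
  // screen target voice data from a preset voice library by the module title
  return { ...mod, voiceClips: voiceLibrary[mod.title] ?? [] };
}

// Step 3: service processing based on the module's user interface component
function processService(mod: VoiceFunctionModule): string {
  return `render ${mod.title} -> play ${mod.voiceClips[0] ?? "none"}`;
}

const library: Record<string, string[]> = { Transfer: ["transfer_prompt.wav"] };
const voiceModule = configureVoice(packageModule("Transfer", "/api/transfer"), library);
```

In this sketch the voice library is keyed by module title only; the later claims refine the screening into category-based and keyword-based matching.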
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as desired, unless explicitly stated otherwise herein. In addition, some or all of the steps in each flowchart may include multiple sub-steps or multiple stages based on an actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times, respectively. In a scenario where execution times are different, an execution sequence of the sub-steps or the phases may be flexibly configured according to requirements, which is not limited in the embodiment of the present application.
The foregoing describes only optional implementations of some implementation scenarios of this application. It should be noted that other similar implementation means based on the technical idea of this application, devised by those skilled in the art without departing from that technical idea, also fall within the protection scope of the embodiments of this application.

Claims (10)

1. A method for processing a service, the method comprising:
modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
performing voice configuration for the functional module to generate a voice function module;
and performing service processing based on the user interface component corresponding to the voice function module.
2. The service processing method according to claim 1, wherein the performing voice configuration for the functional module to generate a voice function module comprises:
determining interaction information corresponding to the functional module;
screening out at least one piece of target voice data from a preset voice library based on the interaction information;
and performing function matching on the target voice data and the functional module to obtain the voice function module.
3. The service processing method according to claim 2, wherein the functional module further comprises an interface for communication between the user interface component and the outside of the functional module, and the determining the interaction information corresponding to the functional module comprises:
extracting title text from the user interface component of the functional module, and extracting function address information from the interface corresponding to the user interface component;
and determining the interaction information based on at least one of the title text and the function address information.
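A minimal sketch of how claim 3 could be realized (the component shape, field names, and sample values are assumptions for illustration):

```typescript
interface UIComponent { title: string; }
interface ModuleInterface { url: string; }
interface FunctionalModule { component: UIComponent; iface: ModuleInterface; }

function determineInteractionInfo(mod: FunctionalModule): string[] {
  const titleText = mod.component.title; // title text from the UI component
  const functionAddress = mod.iface.url; // function address from the interface
  // interaction information based on at least one of the two
  return [titleText, functionAddress].filter(s => s.length > 0);
}

const info = determineInteractionInfo({
  component: { title: "Account Transfer" },
  iface: { url: "/api/transfer" },
});
```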
4. The service processing method according to claim 2, wherein the screening out at least one piece of target voice data from a preset voice library based on the interaction information comprises:
determining an interaction category based on the interaction information, and extracting first voice data from the voice library based on the interaction category; and
extracting keywords from the interaction information, and matching second voice data from the voice library based on the keywords;
and setting at least one of the first voice data and the second voice data as the target voice data.
5. The service processing method according to claim 4, wherein the performing function matching on the target voice data and the functional module comprises:
configuring event information for the functional module based on the target voice data; wherein the event information indicates a play operation on the target voice data and/or a play priority of the target voice data; the playing priority of the first voice data is higher than that of the second voice data.
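A minimal sketch of claims 4 and 5 (the library entries, field names, and the numeric priority encoding are assumptions): first voice data is extracted by interaction category, second voice data is matched by keywords, and the first is given the higher play priority.

```typescript
interface VoiceClip {
  file: string;
  priority: number; // lower number = played earlier
}

const voiceLibrary = [
  { category: "transfer", keywords: ["amount", "payee"], file: "transfer_intro.wav" },
  { category: "query", keywords: ["balance"], file: "balance_help.wav" },
];

function screenTargetVoice(category: string, keywords: string[]): VoiceClip[] {
  const clips: VoiceClip[] = [];
  // first voice data: extracted by interaction category (higher play priority)
  for (const entry of voiceLibrary) {
    if (entry.category === category) clips.push({ file: entry.file, priority: 1 });
  }
  // second voice data: matched by keywords from the interaction information
  for (const entry of voiceLibrary) {
    const hit = keywords.some(k => entry.keywords.includes(k));
    if (hit && !clips.some(c => c.file === entry.file)) {
      clips.push({ file: entry.file, priority: 2 });
    }
  }
  // category matches precede keyword matches when played in order
  return clips.sort((a, b) => a.priority - b.priority);
}

const target = screenTargetVoice("transfer", ["balance"]);
```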
6. The service processing method according to claim 2, wherein the performing service processing based on the user interface component corresponding to the voice function module comprises:
determining a keyword message based on the received voice message;
determining a target voice function module matched with the keyword message and target voice data corresponding to the target voice function module;
and displaying the user interface component corresponding to the target voice function module in a preset interactive interface, and playing the target voice data corresponding to the target voice function module.
7. The service processing method according to claim 6, wherein the determining a keyword message based on the received voice message comprises:
carrying out voice recognition on the voice message to obtain a text message;
and extracting the keyword message from the text message.
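A minimal sketch of claims 6 and 7 (the module table, keyword list, and the recognize stand-in are assumptions; a real implementation would call an actual speech recognizer):

```typescript
interface VoiceFunctionModule {
  keyword: string;
  component: string; // user interface component to display
  voice: string;     // target voice data to play
}

const voiceModules: VoiceFunctionModule[] = [
  { keyword: "transfer", component: "TransferForm", voice: "transfer_prompt.wav" },
  { keyword: "balance", component: "BalancePanel", voice: "balance_prompt.wav" },
];

// stand-in for speech recognition: this sketch assumes the voice message
// has already been transcribed to a text message
function recognize(voiceMessage: string): string {
  return voiceMessage;
}

// extract the keyword message from the recognized text message
function extractKeyword(text: string): string | undefined {
  return voiceModules.map(m => m.keyword).find(k => text.toLowerCase().includes(k));
}

function handleVoiceMessage(voiceMessage: string): string {
  const keyword = extractKeyword(recognize(voiceMessage));
  const target = voiceModules.find(m => m.keyword === keyword);
  if (!target) return "no matching voice function module";
  // display the component in the interactive interface and play the voice data
  return `show ${target.component}, play ${target.voice}`;
}

const result = handleVoiceMessage("I want to check my balance");
```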
8. A service processing apparatus, comprising:
the packaging module is used for modularly packaging the front-end interface interaction code to obtain at least one functional module; wherein the functional module comprises a user interface component;
the configuration module is used for performing voice configuration for the functional module to generate a voice function module;
and the processing module is used for processing the service based on the user interface component corresponding to the voice function module.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the service processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the service processing method according to any one of claims 1 to 7.
CN202111187654.6A 2021-10-12 2021-10-12 Service processing method and device, electronic equipment and computer readable storage medium Pending CN114047900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111187654.6A CN114047900A (en) 2021-10-12 2021-10-12 Service processing method and device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114047900A true CN114047900A (en) 2022-02-15

Family

ID=80205261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111187654.6A Pending CN114047900A (en) 2021-10-12 2021-10-12 Service processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114047900A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023230902A1 (en) * 2022-05-31 2023-12-07 西门子股份公司 Human-machine interaction method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424601A (en) * 2017-09-11 2017-12-01 深圳怡化电脑股份有限公司 A kind of information interaction system based on speech recognition, method and its device
WO2018157499A1 (en) * 2017-02-28 2018-09-07 华为技术有限公司 Method for voice input and related device
CN108877791A (en) * 2018-05-23 2018-11-23 百度在线网络技术(北京)有限公司 Voice interactive method, device, server, terminal and medium based on view
CN109960537A (en) * 2019-03-29 2019-07-02 北京金山安全软件有限公司 Interaction method and device and electronic equipment
CN111026355A (en) * 2019-12-09 2020-04-17 珠海市魅族科技有限公司 Information interaction method and device, computer equipment and computer readable storage medium
CN111354360A (en) * 2020-03-17 2020-06-30 北京百度网讯科技有限公司 Voice interaction processing method and device and electronic equipment
CN113191621A (en) * 2021-04-26 2021-07-30 北京亚讯鸿达技术有限责任公司 Intelligent integrated voice service management platform based on data and service fusion
CN113409805A (en) * 2020-11-02 2021-09-17 腾讯科技(深圳)有限公司 Man-machine interaction method and device, storage medium and terminal equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination