CN111930919B - Enterprise online education APP voice interaction implementation method - Google Patents


Info

Publication number
CN111930919B
Authority
CN
China
Prior art keywords
voice interaction
app
voice
information
product
Prior art date
Legal status
Active
Application number
CN202011059543.2A
Other languages
Chinese (zh)
Other versions
CN111930919A (en)
Inventor
赵隽隽
赵剑飞
欧阳禄萍
Current Assignee
Zhixueyun (Beijing) Technology Co., Ltd.
Original Assignee
Zhixueyun Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhixueyun Beijing Technology Co ltd
Priority to CN202011059543.2A
Publication of CN111930919A
Application granted
Publication of CN111930919B

Classifications

    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/313 Selection or weighting of terms for indexing
    • G06F16/328 Indexing structures; management therefor
    • G06F16/3343 Query execution using phonetics
    • G06F16/3344 Query execution using natural language analysis
    • G06Q30/0185 Certifying business or products: product, service or business identity fraud
    • G06Q30/0625 Electronic shopping: item investigation, directed with specific intent or strategy
    • G06Q30/0641 Electronic shopping: shopping interfaces
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 Speech to text systems

Abstract

A method for implementing voice interaction in an enterprise online education APP comprises the following steps: acquiring a voice wake-up instruction from a target user and waking up the voice interaction APP; collecting the target user's voice information through the voice interaction APP, converting it to text by speech recognition, and displaying the text; recognizing the text according to a pre-made dialogue template and determining the target user's intention; requesting the corresponding product service based on the user intention and a pre-made mode; performing a global search for the product service while crawling product resources related to it from multiple websites; and synchronously transmitting the global search result and the corresponding product resources to the voice interaction APP, where they are rendered and displayed to the target user. The method takes voice input, extracts the keywords from the voice information, achieves global resource search, and improves the user experience.

Description

Enterprise online education APP voice interaction implementation method
Technical Field
The invention relates to the technical field of online education, and in particular to a method for implementing voice interaction in an enterprise-oriented online education APP.
Background
In existing approaches, a click event on an APP page triggers an HTTP request to a background service. The background service performs a fuzzy query for the corresponding resources according to the fields returned by the front end and returns them for front-end rendering; if a single resource matches, its detail page opens directly, and if multiple resources match, a list is displayed for the user to choose from.
Moreover, such a search cannot drill down across multiple pages and can only query a single resource type at a time. Resource queries support only single-resource fuzzy matching and cannot perform NLP (natural language processing) based word segmentation and matching, so the resources the user actually wants cannot be located accurately, resulting in a poor user experience.
Disclosure of Invention
The invention provides a method for implementing voice interaction in an enterprise-oriented online education APP. The method takes voice input, extracts the keywords from the voice information, achieves global resource search, and improves the user experience.
The enterprise-oriented online education APP voice interaction implementation method provided by the invention comprises the following steps:
acquiring a voice wake-up instruction from a target user, and waking up the voice interaction APP;
collecting the target user's voice information through the voice interaction APP, converting it to text by speech recognition, and displaying the text;
recognizing the text according to a pre-made dialogue template and determining the target user's intention;
requesting the corresponding product service based on the user intention and a pre-made mode;
performing a global search for the product service while crawling product resources related to it from multiple websites;
and synchronously transmitting the global search result and the corresponding product resources to the voice interaction APP, where they are rendered and displayed to the target user.
In a possible implementation manner, the step of acquiring a voice wake-up instruction from a target user and waking up the voice interaction APP comprises:
checking whether the voice interaction APP is in a network-connected state;
if the voice interaction APP is not in a network-connected state, starting the voice interaction APP according to a waiting time interval when it receives a sound signal related to a voice wake-up instruction, and restoring it to the sleep state if no wake-up instruction is received within a preset time period after start-up;
if the voice interaction APP is in a network-connected state, splitting the received wake-up instruction into frame sections;
determining whether blank frame sections exist among the split frame sections, and if so, removing them and keeping the remaining frame sections;
performing noise reduction on the remaining frame sections, and acquiring the frame-section energy of each remaining frame section after noise reduction;
screening out the N (even) highest-energy frame sections by frame-section energy, determining the frame-section information of the first N/2 and of the last N/2 of these N frame sections, and obtaining a first correlation value between the frame-section information of the first N/2 frame sections and that of the last N/2 frame sections, a second correlation value between the frame-section information of the first N/2 frame sections and preset wake-up information, and a third correlation value between the frame-section information of the last N/2 frame sections and the preset wake-up information;
calculating an overall correlation value from the first, second and third correlation values together with the weight values of the frame-section information of the first N/2 frame sections, of the last N/2 frame sections, and of the preset wake-up information;
waking up the voice interaction APP when the overall correlation value is larger than a preset value;
otherwise, not waking up the voice interaction APP.
In a possible implementation manner, collecting the target user's voice information based on the voice interaction APP includes:
collecting the single-channel information uttered by the target user from each port of a multi-channel acquisition array based on the voice interaction APP;
determining the acquisition frequency of each channel acquisition port, and from it the frequency-adjustment parameter;
calibrating, in the single-channel information collected by each channel acquisition port, a number of high-frequency points and the low-frequency points in one-to-one correspondence with them, while calculating the signal difference between each high-frequency point and its corresponding low-frequency point;
adjusting the signal fluctuation line of the single-channel information collected by each channel acquisition port according to that port's set of signal differences and its frequency-adjustment parameter;
and reconstructing the adjusted fluctuation lines to obtain the voice information.
In one possible implementation manner, the process of recognizing the text information according to a pre-made dialogue template and determining the target user's intention includes:
acquiring the text image corresponding to the text information;
determining the writing outline, writing fluency and writing emphasis of each stroke of the target text, and labeling the anchor points of each stroke;
meanwhile, constructing a text recognition model from the text writing rules and the labeling results;
taking a primary screenshot of the text image to obtain an image to be recognized, inputting it into the text recognition model for text recognition, and outputting a text recognition result;
acquiring the pre-made dialogue template, and extracting from it the dialogue result related to the text recognition result;
extracting the first keywords in the text recognition result and the second keywords in the dialogue result, and determining the first positions of the first keywords in the text recognition result and the second positions of the second keywords in the dialogue result;
establishing a matching relation between the first and second keywords, while determining the position weights of the first and second positions;
determining a first implication relation between the first keywords and the dialogue result, and a second implication relation between the second keywords and the text recognition result;
position-sorting the first and second keywords according to the matching relation and the position weights;
constructing a first feature sequence from the first keywords, the first implication relation and the position-sorting result of the first keywords, and establishing an odd index for each feature in the first feature sequence according to the feature-importance mechanism; constructing a second feature sequence from the second keywords, the second implication relation and the position-sorting result of the second keywords, and establishing an even index for each feature in the second feature sequence according to the feature-importance mechanism;
and, according to the voice interaction database and the screened odd and even indexes, retrieving the corresponding feature sequences, and reconstructing the target user's primary intent set and secondary intent set from all retrieved feature sequences and the intent database.
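The keyword matching, position weighting and odd/even indexing steps above can be sketched as follows; the position-weight rule (earlier keywords weigh more) and the vocabulary are assumptions for illustration:

```python
def keywords(text, vocab):
    """Extract known keywords and their positions (word index)."""
    words = text.split()
    return [(w, i) for i, w in enumerate(words) if w in vocab]

def match_and_rank(recognized, dialogue, vocab):
    first = keywords(recognized, vocab)   # first keywords + first positions
    second = keywords(dialogue, vocab)    # second keywords + second positions
    matches = []
    for w1, p1 in first:
        for w2, p2 in second:
            if w1 == w2:
                # Position weight: keywords appearing earlier weigh more.
                weight = 1.0 / (1 + p1) + 1.0 / (1 + p2)
                matches.append((w1, weight))
    # Position-sort the matched keywords by descending weight.
    matches.sort(key=lambda kw: kw[1], reverse=True)
    # Odd indexes for the first sequence, even indexes for the second.
    odd = {2 * k + 1: kw for k, (kw, _) in enumerate(matches)}
    even = {2 * k + 2: kw for k, (kw, _) in enumerate(matches)}
    return matches, odd, even
```

The odd/even split keeps the two feature sequences interleaved but separately addressable when later retrieving them from the intent database.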
In one possible implementation manner, the requesting the corresponding product service based on the user intention and the pre-made manner includes:
splitting the user intent into a primary intent set and a secondary intent set;
calculating a first proportion A1 of the primary intent set based on the user intention, and a second proportion A2 of the secondary intent set based on the user intention:

$$A_1=\frac{1}{n_1}\sum_{i=1}^{n_1} s_i w_i,\qquad A_2=\frac{1}{n_2}\sum_{j=1}^{n_2} s'_j w'_j$$

wherein $n_1$ represents the number of primary intents in the primary intent set; $n_2$ represents the number of secondary intents in the secondary intent set; $s_i$ represents the information value of the i-th primary intent; $w_i$ represents the weight value of the i-th primary intent; $s'_j$ represents the information value of the j-th secondary intent; $w'_j$ represents the weight value of the j-th secondary intent;

calling the pre-made mode matched with the user intention;

calculating a matching value P between the first proportion A1, the second proportion A2 and the pre-made mode:

$$P=\mathrm{rand}\cdot\frac{\lambda_1 A_1+\lambda_2 A_2}{\lambda_3}$$

wherein $\lambda_1$ represents the pre-made value associated with the pre-made mode and based on the first proportion, with value range [3,5]; $\lambda_2$ represents the pre-made value associated with the pre-made mode and based on the second proportion, with value range [1,2]; $\lambda_3$ represents the pre-made value associated with the pre-made mode and based on both the first and second proportions, with value range [10,15]; and rand represents a random function with value range [1,2];
When the matching value P is larger than a preset value, requesting to acquire product services related to the first proportion, the second proportion and a prefabrication mode from a service database;
otherwise, performing intention adjustment on the split primary and secondary intent sets to obtain a new primary intent set and a new secondary intent set, and repeating the determination of whether the related product service can be acquired until acquisition succeeds.
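A minimal sketch of this scoring step. The exact formulas appear only as images in the original, so the weighted-average form of A1/A2 and the normalised combination rule for P are assumptions, and all names are illustrative:

```python
import random

def proportion(intents):
    """Weighted average of (information value, weight) pairs for an intent set."""
    if not intents:
        return 0.0
    return sum(s * w for s, w in intents) / len(intents)

def matching_value(a1, a2, lam1=4.0, lam2=1.5, lam3=12.0, rand=None):
    # lam1 in [3,5], lam2 in [1,2], lam3 in [10,15] play the role of the
    # pre-made values associated with the pre-made mode; rand is in [1,2].
    if rand is None:
        rand = random.uniform(1.0, 2.0)
    return rand * (lam1 * a1 + lam2 * a2) / lam3

def request_service(primary, secondary, preset=1.0):
    """Request the product service when P exceeds the preset value,
    otherwise signal that the intent sets must be adjusted and retried."""
    a1 = proportion(primary)
    a2 = proportion(secondary)
    p = matching_value(a1, a2, rand=1.5)  # rand fixed here for repeatability
    if p > preset:
        return "request product service", p
    return "adjust intent sets and retry", p
```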
In one possible implementation, the process of performing a global search for the product service while crawling product resources related to it from multiple websites comprises:
acquiring the service code of each product service, extracting the feature code in each service code, obtaining the feature binary code corresponding to each feature code from a code-binary mapping table, and constructing the binary vectors;
meanwhile, obtaining the current binary vector corresponding to the current service code, together with the residual binary vectors, i.e. all binary vectors other than the current one;
establishing a binary matrix from the residual binary vectors;
acquiring the column-item comparison vector and the column-item difference vector between the current binary vector and each residual binary vector of the binary matrix, and constructing the corresponding column-item comparison matrix and column-item difference matrix;
accumulating the column-item comparison matrix and the column-item difference matrix to obtain a final matrix;
acquiring the current eigenvalue of the current binary vector, the matrix eigenvalue of the corresponding final matrix, and the correlation value between the current binary vector and the final matrix;
searching the related service network with a global search model according to the current eigenvalue, the matrix eigenvalue and the correlation value;
and, based on that service network, crawling product resources related to the product service from multiple websites simultaneously.
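A hedged sketch of the binary-vector comparison above: a CRC hash stands in for the code-binary mapping table, the comparison/difference matrices are bitwise agreement and XOR, and the agreement-minus-difference score is an illustrative accumulation rule:

```python
import zlib

def to_binary_vector(service_code: str, nbits: int = 16):
    """Map a service code to a fixed-length binary feature vector
    (CRC32 stands in for the code-binary mapping table)."""
    h = zlib.crc32(service_code.encode()) & ((1 << nbits) - 1)
    return [(h >> i) & 1 for i in range(nbits)]

def compare_vectors(current, others):
    # Column-item comparison matrix (1 where bits agree) and
    # column-item difference matrix (XOR) against each residual vector.
    cmp_matrix = [[1 - (a ^ b) for a, b in zip(current, o)] for o in others]
    diff_matrix = [[a ^ b for a, b in zip(current, o)] for o in others]
    return cmp_matrix, diff_matrix

def correlation_value(current, others):
    cmp_matrix, diff_matrix = compare_vectors(current, others)
    # Accumulate both matrices into one score: agreements minus
    # differences, normalised by the total bit count.
    agree = sum(map(sum, cmp_matrix))
    differ = sum(map(sum, diff_matrix))
    total = len(current) * max(len(others), 1)
    return (agree - differ) / total
```

The resulting score (1.0 for identical vectors, -1.0 for complements) can then feed a global search model as one of its inputs.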
In one possible implementation manner, after crawling product resources related to the product service from multiple websites based on the service network, the method further includes:
acquiring the source address of each product resource;
counting, for the website corresponding to the source address, the total number of accesses, the total number of times it was reported, the total number of times it was attacked, the total number of evaluations, and its resource information;
determining the access success probability of the source address from the total number of accesses;
determining the trust degree of the source address from the total number of reports;
determining the safety degree of the source address from the total number of attacks;
extracting the keywords of each of the evaluations, and determining the reliability of the source address according to a scoring rule;
extracting the effective resources related to the product resource from the resource information, and determining the effective ratio of effective resources;
determining from the access success probability, trust degree, safety degree, reliability and effective ratio whether the source address is qualified, and if so, judging the obtained product resource qualified;
otherwise, temporarily storing the product resource in an area to be evaluated, and evaluating it there with a resource evaluation model to obtain an evaluation result;
and judging from the evaluation result whether the product resource is qualified.
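The qualification check can be sketched as a weighted score over the counted statistics; every individual scoring rule, weight and threshold below is an assumption introduced for illustration:

```python
def source_qualified(stats, weights=None, threshold=0.6):
    """Score a source address from its counted statistics.

    stats: dict with accesses, reports, attacks, good_evals, total_evals,
    valid_resources, total_resources. All scoring rules are illustrative.
    """
    if weights is None:
        weights = {"access": 0.2, "trust": 0.2, "safety": 0.2,
                   "reliability": 0.2, "valid": 0.2}
    accesses = stats["accesses"]
    # Access success probability grows with the total number of accesses.
    access_p = accesses / (accesses + 1.0)
    # Trust falls with reports, safety with attacks (both per access).
    trust = 1.0 - min(stats["reports"] / max(accesses, 1), 1.0)
    safety = 1.0 - min(stats["attacks"] / max(accesses, 1), 1.0)
    # Reliability: share of positive evaluations.
    reliability = stats["good_evals"] / max(stats["total_evals"], 1)
    # Effective ratio of valid resources.
    valid = stats["valid_resources"] / max(stats["total_resources"], 1)
    score = (weights["access"] * access_p + weights["trust"] * trust
             + weights["safety"] * safety
             + weights["reliability"] * reliability
             + weights["valid"] * valid)
    return score >= threshold
```

An unqualified source would then route the resource to the to-be-evaluated area for model-based evaluation.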
In a possible implementation manner, the process of synchronously transmitting the global search result and the corresponding product resources to the voice interaction APP for rendering and display to the target user includes:
establishing sub-channels comprising a first sub-channel, a second sub-channel and a candidate channel, the sub-channels being built on a number of network transmission nodes;
acquiring the parent node among the network transmission nodes and taking it as the initial sending point of the first and second sub-channels;
acquiring from the network transmission nodes the child node related to the voice interaction APP and taking it as the final receiving point of the first and second sub-channels;
checking the parent node and child node with a structure-tree detection model; when both are qualified, acquiring and checking the intermediate nodes of the first and second sub-channels among the network transmission nodes, and calibrating any unqualified intermediate node;
meanwhile, displaying the calibration result on the structure tree formed by the parent node, intermediate nodes and child node, and planning the link to be transmitted according to the channel transmission rules;
meanwhile, based on the link to be transmitted, determining the position of the candidate channel relative to the first and second sub-channels, establishing the connection relation between the unqualified intermediate nodes and the candidate channel, and adjusting the link to be transmitted according to that connection relation until a qualified transmission link is constructed;
and synchronously transmitting the global search result and the corresponding product resources over the qualified transmission link.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of an implementation method of APP voice interaction for enterprise-oriented online education in an embodiment of the present invention;
FIG. 2 is a related block diagram of a voice assistant service in an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The invention provides an enterprise-oriented online education APP voice interaction implementation method, as shown in figure 1, comprising the following steps:
Step 1: acquiring a voice wake-up instruction from a target user, and waking up the voice interaction APP;
Step 2: collecting the target user's voice information through the voice interaction APP, converting it to text by speech recognition, and displaying the text;
Step 3: recognizing the text according to a pre-made dialogue template and determining the target user's intention;
Step 4: requesting the corresponding product service based on the user intention and a pre-made mode;
Step 5: performing a global search for the product service while crawling product resources related to it from multiple websites;
Step 6: synchronously transmitting the global search result and the corresponding product resources to the voice interaction APP, where they are rendered and displayed to the target user.
In this embodiment, a concrete run of the above steps proceeds, for example, as follows: the user wakes the voice assistant by speaking the wake phrase "little known, little known"; the APP recognizes and collects the speech, and the user speaks the content of interest, such as "I want to focus on course queries" or "my exam". The APP calls Baidu UNIT to recognize the voice input, converts it into text and displays it. The APP then requests the voice assistant service with the converted text as a parameter, and the assistant service asks Baidu UNIT to recognize the user intention according to the pre-made dialogue template, e.g. whether the user wants to query a course, query an exam, or chat. After Baidu UNIT identifies the intention, the pre-made function is returned, and the voice assistant service acquires user resources according to the different product services of the returned request and returns them to the APP, which renders and displays them to the user.
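A minimal sketch of this flow, with local stand-ins for the Baidu UNIT calls (all function names, site URLs and the keyword table are hypothetical):

```python
# Pre-made dialogue template: keyword -> (intent, product service).
DIALOGUE_TEMPLATE = {
    "course": ("course_query", "course-service"),
    "exam": ("exam_query", "exam-service"),
}

def recognize_speech(audio: str) -> str:
    """Stand-in for speech-to-text; here the 'audio' is already text."""
    return audio.lower()

def determine_intent(text: str):
    """Match the recognized text against the pre-made dialogue template."""
    for keyword, (intent, service) in DIALOGUE_TEMPLATE.items():
        if keyword in text:
            return intent, service
    return "chat", "chat-service"           # fall back to chatting

def global_search(service: str, sites):
    """Crawl every site for resources related to the product service."""
    return [f"{site}/{service}" for site in sites]

def handle_utterance(audio: str, sites):
    text = recognize_speech(audio)            # step 2: speech -> text
    intent, service = determine_intent(text)  # step 3: intent recognition
    results = global_search(service, sites)   # step 5: global search
    return {"text": text, "intent": intent, "results": results}
```

In the actual embodiment, `recognize_speech` and `determine_intent` correspond to the Baidu UNIT calls and `global_search` to the voice assistant service's crawling back end.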
In this embodiment, the pre-fabricated function is one of the pre-fabricated modes, and the pre-fabricated mode can be implemented by the pre-fabricated function.
For the voice assistant service, as shown in fig. 2, the method specifically includes:
and the voice assistant service WEB side provides a port address for APP calling and provides a service function.
And the voice assistant asynchronously services and asynchronously maintains the resource change and training model in the Baidu UNIT.
RabbitMQ message middleware.
Baidu UNIT: providing voice conversion and interactive functionality.
In the embodiment, the user intention can be efficiently and accurately identified by adopting Baidu voice through APP and user interaction.
In the embodiment, the Intelligent-assistant service comprises asynchronous service and WEB service to maintain the voice recognition intention corresponding to the product resource, is a basis for interaction between the user and the APP, and can realize search of global resources.
The beneficial effects of the above technical scheme are: the method is used for voice input and extracting the keywords in the voice information, the global property of resource search is achieved, and the experience effect of a user is improved.
The invention provides a method for implementing enterprise-oriented online education APP voice interaction in which the process of acquiring a voice wake-up instruction from a target user and waking up the voice interaction APP comprises:
checking whether the voice interaction APP is in a network-connected state;
if the voice interaction APP is not in a network-connected state, starting the voice interaction APP according to a waiting time interval when it receives a sound signal related to a voice wake-up instruction, and restoring it to the sleep state if no wake-up instruction is received within a preset time period after start-up;
if the voice interaction APP is in a network-connected state, splitting the received wake-up instruction into frame sections;
determining whether blank frame sections exist among the split frame sections, and if so, removing them and keeping the remaining frame sections;
performing noise reduction on the remaining frame sections, and acquiring the frame-section energy of each remaining frame section after noise reduction;
screening out the N (even) highest-energy frame sections by frame-section energy, determining the frame-section information of the first N/2 and of the last N/2 of these N frame sections, and obtaining a first correlation value between the frame-section information of the first N/2 frame sections and that of the last N/2 frame sections, a second correlation value between the frame-section information of the first N/2 frame sections and preset wake-up information, and a third correlation value between the frame-section information of the last N/2 frame sections and the preset wake-up information;
calculating an overall correlation value from the first, second and third correlation values together with the weight values of the frame-section information of the first N/2 frame sections, of the last N/2 frame sections, and of the preset wake-up information;
waking up the voice interaction APP when the overall correlation value is larger than a preset value;
otherwise, not waking up the voice interaction APP.
In this embodiment, the sound signal may be any sound including or not including a wake-up instruction.
In this embodiment, the waiting time interval is, for example, 5s, and the preset time period is, for example, 10 s;
in this embodiment, the first correlation value, the second correlation value, and the third correlation value are all correlated with a sound signal or the like present in the voice information;
in this embodiment, for example, the first N/2 frame sections include a wake-up command, and the last N/2 frame sections do not include a wake-up command, at this time, the weight values corresponding to the first N/2 frame sections are greater than the weight values corresponding to the last N/2 frame sections.
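A minimal sketch of the wake-up decision described above, assuming cosine similarity as the correlation measure and a simple weighted sum (the correlation measure, weighting scheme and all names are illustrative):

```python
import math

def frame_energy(frame):
    return sum(s * s for s in frame)

def correlation(a, b):
    """Cosine similarity between two equal-length energy profiles."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def should_wake(frames, preset, weights, threshold):
    # Remove blank frame sections and keep the rest.
    frames = [f for f in frames if any(f)]
    # Keep the N highest-energy frame sections, with N forced even.
    frames.sort(key=frame_energy, reverse=True)
    n = len(frames) - (len(frames) % 2)
    frames = frames[:n]
    first, last = frames[: n // 2], frames[n // 2 :]
    e1 = [frame_energy(f) for f in first]
    e2 = [frame_energy(f) for f in last]
    c1 = correlation(e1, e2)                 # first half vs last half
    c2 = correlation(e1, preset[: len(e1)])  # first half vs preset info
    c3 = correlation(e2, preset[: len(e2)])  # last half vs preset info
    w1, w2, w3 = weights  # weights of first half, last half, preset info
    score = w1 * c2 + w2 * c3 + w3 * c1
    return score > threshold
```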
The beneficial effects of the above technical scheme are as follows. When there is no network connection, the method judges whether a sound signal exists and decides according to the waiting time interval whether a wake-up instruction was received, so that the voice interaction APP can still be used effectively even if the user forgets the exact voice wake-up instruction, improving the user experience. In the network-connected state, because the voice information contains keywords and similar information, deleting the blank frame sections keeps only the effective frame sections containing sound and improves processing efficiency; noise reduction on the frame sections makes the subsequent determination of frame-section energy reliable and provides an effective basis for screening the N frame sections; and computing the overall correlation value from the individual correlation values and weight values makes it possible to decide whether the sound signal is a valid wake-up, providing an effective verification basis for waking up the voice interaction APP and improving the effectiveness of the wake-up.
The invention provides a method for realizing enterprise-oriented online education APP voice interaction, wherein the process of collecting the voice information of a target user based on a voice interaction APP comprises the following steps:
collecting single-channel information sent by the target user from a multi-channel acquisition port based on the voice interaction APP;
determining the acquisition frequency of each channel acquisition port, and further determining frequency adjustment parameters;
marking, in the single-channel information collected by each channel acquisition port, a plurality of high frequency points and the low frequency points in one-to-one correspondence with them, and meanwhile calculating the signal difference value between each high frequency point and its corresponding low frequency point;
adjusting the signal fluctuation lines of the single-channel information collected by the corresponding channel acquisition ports according to the signal difference value set of each channel acquisition port and the corresponding frequency adjustment parameters;
and reconstructing the adjusted information fluctuation lines to obtain voice information.
In this embodiment, the multi-channel acquisition port, such as an array microphone, includes a plurality of acquisition channels.
In this embodiment, the high frequency point refers to a maximum value point in the sound fluctuation signal, and the low frequency point refers to a minimum value point in the sound fluctuation signal.
In this embodiment, the frequency adjustment parameter indicates, for example, whether the frequency is adjusted up or down relative to its original value.
In this embodiment, the signal fluctuation lines refer to audio energy and the like.
The beneficial effects of the above technical scheme are: collecting information through the multi-channel acquisition port improves the concentration and validity of the collected sound; adjusting the signal fluctuation lines with the signal difference value set and the frequency adjustment parameters helps guarantee the accuracy of the obtained voice information and provides an effective basis for the wake-up operation.
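The extrema calibration and fluctuation-line adjustment above can be sketched as follows; the pairing of high and low frequency points and the scaling rule are assumptions made for illustration.

```python
# Hypothetical per-channel adjustment sketch; pairing and scaling rules are assumed.

def extrema(signal):
    """Return (high frequency point indices, low frequency point indices):
    local maxima and minima of a 1-D fluctuation line."""
    highs, lows = [], []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]:
            highs.append(i)
        elif signal[i] < signal[i - 1] and signal[i] < signal[i + 1]:
            lows.append(i)
    return highs, lows

def adjust_channel(signal, freq_param):
    highs, lows = extrema(signal)
    # signal difference value between each high point and its paired low point
    diffs = [signal[h] - signal[l] for h, l in zip(highs, lows)]
    mean_diff = sum(diffs) / len(diffs) if diffs else 0.0
    # scale the fluctuation line by the frequency adjustment parameter,
    # normalised by the mean high/low difference (illustrative rule)
    scale = freq_param / mean_diff if mean_diff else 1.0
    return [s * scale for s in signal], diffs
```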
The invention provides an enterprise-oriented online education APP voice interaction implementation method, which is characterized in that the process of identifying text information and determining the user intention of a target user according to a prefabricated dialogue template comprises the following steps:
acquiring a text image corresponding to the text information;
determining the writing outline, the writing fluency and the writing emphasis of each stroke of the target text, and marking the positioning points of each stroke;
meanwhile, a text recognition model is constructed according to the text writing rule and the labeling result;
performing primary screenshot on the text image to obtain an image to be recognized, inputting the image to be recognized into the text recognition model for text recognition, and outputting a text recognition result;
acquiring a prefabricated dialogue template, and extracting a dialogue result related to the text recognition result in the prefabricated dialogue template;
extracting a first keyword in the text recognition result, simultaneously extracting a second keyword in the dialogue result, and determining a first position of the first keyword in the text recognition result and a second position of the second keyword in the dialogue result;
establishing a matching relation between the first keyword and the second keyword, and simultaneously determining a position weight of the first position and a position weight of the second position;
determining a first implication relation between the first keyword and a conversation result, and simultaneously determining a second implication relation between the second keyword and a text recognition result;
according to the matching relation and the position weight, position sequencing is carried out on the first key words and the second key words;
constructing a first feature sequence based on the first keyword, the first implication relation and the position sorting result of the first keyword, establishing an odd index of each feature sequence in the first feature sequence according to the important mechanism of the feature sequence, constructing a second feature sequence according to the second keyword, the second implication relation and the position sorting result of the second keyword, and establishing an even index of each feature sequence in the second feature sequence according to the important mechanism of the feature sequence;
and according to the voice interaction database, the screened odd indexes and even indexes, calling the corresponding feature sequences, and reconstructing and acquiring the main intention set and the secondary intention set of the target user according to all the called feature sequences and the intention database.
In this embodiment, the dialogue result may be output automatically, such as the automatic reply of an e-commerce customer-service client; the first keyword in the text recognition result is, for example, course query, examination, or chat;
the second keyword in the corresponding dialogue result is, for example, the pending examination or chat time;
in this case, the matching relationship between the first keyword and the second keyword covers items such as the pending examination or chat time, chatting, and examination, and the position weight corresponding to chatting is larger than the position weight of examination;
wherein the implication relationship is the containing/contained relationship.
In this embodiment, the feature sequences are constructed to make it easy to determine the corresponding effective interaction data, and the importance mechanism performs the screening used to build the subsequent primary intention set and secondary intention set.
The beneficial effects of the above technical scheme are: marking the positioning points enables accurate recognition during text recognition; the text writing rules and the labeling results make the text recognition model easy to construct, providing a recognition basis for subsequent text; establishing the various relations between the text recognition result and the dialogue result makes the feature sequences easy to construct, and the indexes make retrieval convenient; screening the feature sequences against the intention database then yields the user's primary and secondary intentions, which facilitates the subsequent global resource search.
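A toy illustration of the keyword-matching and odd/even indexing step above: keywords are extracted from both the recognition result and the dialogue result, matched, ranked by position, and the resulting feature sequences filed under odd indexes (recognition side) and even indexes (dialogue side). The vocabulary-lookup extraction and position-based ranking are assumptions.

```python
# Hypothetical sketch of first/second keyword matching and index construction.

def keywords(text, vocab):
    """Keyword extraction as plain vocabulary lookup: (keyword, position) pairs."""
    return [(w, text.index(w)) for w in vocab if w in text]

def build_feature_index(recognized, dialogue, vocab):
    first = keywords(recognized, vocab)    # first keywords + first positions
    second = keywords(dialogue, vocab)     # second keywords + second positions
    # matching relation: keywords present on both sides
    matches = [w for w, _ in first if w in dict(second)]
    # earlier combined position -> ranked first (illustrative position weighting)
    ranked = sorted(matches, key=lambda w: dict(first)[w] + dict(second)[w])
    index = {}
    for k, w in enumerate(ranked):
        index[2 * k + 1] = ("recognized", w)   # odd index: first feature sequence
        index[2 * k + 2] = ("dialogue", w)     # even index: second feature sequence
    return index
```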
The invention provides an enterprise-oriented online education APP voice interaction realization method, based on the user intention and a prefabrication mode, the process of requesting corresponding product service comprises the following steps:
splitting the user intent into a primary intent set and a secondary intent set;
calculating a first proportion A1 of the primary intent set based on the user intention and a second proportion A2 of the secondary intent set based on the user intention;
A1 = Σ_{i=1}^{n1} x_i · w_i ;    A2 = Σ_{j=1}^{n2} y_j · v_j ;
wherein n1 represents the number of primary intents in the primary intent set; n2 represents the number of secondary intents in the secondary intent set; x_i represents the information value of the ith primary intent; w_i represents the weight value of the ith primary intent; y_j represents the information value of the jth secondary intent; v_j represents the weight value of the jth secondary intent;
calling a prefabricated mode matched with the user intention;
calculating a matching value P between the first proportion A1, the second proportion A2 and the prefabrication mode;
P = rand · (α · A1 + β · A2 + γ) ;
wherein α represents the prefabrication value related to the prefabrication mode and based on the first proportion, with a value range of [3,5]; β represents the prefabrication value related to the prefabrication mode and based on the second proportion, with a value range of [1,2]; γ represents the prefabrication value related to the prefabrication mode and based on both the first proportion and the second proportion, with a value range of [10,15]; rand represents a random function with a value range of [1,2];
When the matching value P is larger than a preset value, requesting to acquire product services related to the first proportion, the second proportion and a prefabrication mode from a service database;
otherwise, performing intention adjustment on the split primary intention set and secondary intention set, acquiring a new primary intention set and a new secondary intention set, and determining whether the related product service can be acquired or not until acquisition is successful.
The beneficial effects of the above technical scheme are: obtaining the user's primary and secondary intention proportions provides a basis for the subsequent request for product services; calculating the matching value between the proportions and the prefabrication mode makes it easy to decide whether the product services should be acquired; and adjusting the intentions when necessary ensures that the finally acquired product services are targeted at the user, improving the user experience.
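The proportion and matching-value computation can be sketched as below, reading A1 and A2 as weighted sums of information values and weight values (an assumption, since the original formulas are given as images) and combining them with prefabrication values α ∈ [3,5], β ∈ [1,2], γ ∈ [10,15] and rand ∈ [1,2] as stated in the text. The combination rule and all defaults are assumptions.

```python
import random

def proportions(primary, secondary):
    """primary/secondary: lists of (information_value, weight_value) pairs."""
    a1 = sum(x * w for x, w in primary)      # first proportion A1
    a2 = sum(y * v for y, v in secondary)    # second proportion A2
    return a1, a2

def matching_value(a1, a2, alpha=4.0, beta=1.5, gamma=12.0, rng=random.random):
    # alpha in [3,5], beta in [1,2], gamma in [10,15] per the text;
    # rand in [1,2]; the combination rule itself is an assumption
    rand = 1.0 + rng()
    return rand * (alpha * a1 + beta * a2 + gamma)

def request_service(primary, secondary, preset=20.0):
    a1, a2 = proportions(primary, secondary)
    p = matching_value(a1, a2, rng=lambda: 0.0)   # deterministic for the example
    return p > preset   # request product services only when P exceeds the preset value
```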
The invention provides a realization method for enterprise-oriented online education APP voice interaction, which comprises the following steps of carrying out global search on product services and simultaneously crawling product resources related to the product services from a plurality of websites:
acquiring a service code of each product service, extracting a feature code in each service code, acquiring a feature binary code corresponding to the feature code based on a code-binary mapping table, and constructing a binary vector;
simultaneously, acquiring a current binary vector corresponding to a current service code, and simultaneously acquiring residual binary vectors in all the binary vectors except the current binary vector;
and a binary matrix is established according to the residual binary vectors;
acquiring a column item comparison vector and a column item difference vector between the current binary vector and each residual binary vector in the corresponding binary matrix, and constructing a corresponding column item comparison matrix and a corresponding column item difference matrix;
accumulating the column item comparison matrix and the column item difference matrix to obtain a final matrix;
acquiring a current characteristic value of the current binary vector, a matrix characteristic value of a corresponding final matrix and a correlation value between the current binary vector and the final matrix;
searching a related service network by using a global search model according to the current eigenvalue, the matrix eigenvalue and the correlation value;
simultaneously, based on the service network, crawling product resources related to the product service from multiple websites simultaneously.
In this embodiment, suppose A and B are vectors with the same number of columns; comparing the parameters at the same positions in A and B one by one yields the column item ratios, and once all comparisons are completed, a column item comparison vector is obtained, from which the column item comparison matrix can be built;
likewise, subtracting the parameters at the same positions in A and B one by one yields the column item differences, and once the subtraction is completed, a column item difference vector is obtained, from which the column item difference matrix can be built.
In this embodiment, the number of columns of the current binary vector and the remaining binary vectors is the same.
In this embodiment, the service network may be, for example, Wi-Fi, Bluetooth, or the like.
The beneficial effects of the above technical scheme are: the binary vectors are constructed by acquiring the service code of each product service and mapping it to a binary code; searching the related service network with the global search model, using the current characteristic value, the matrix characteristic value and the correlation value, provides a search basis for subsequently acquiring product resources and improves search efficiency.
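The column-item comparison and difference step above can be sketched for equal-length binary vectors as follows. The code-binary mapping table is replaced here by a simple character-sum hash, and the feature-value extraction and global search model are out of scope; all of these are assumptions.

```python
# Hypothetical sketch of binary-vector comparison; the hash mapping is an assumption.

def to_binary_vector(service_code, width=8):
    """Stand-in for the code-binary mapping table."""
    h = sum(ord(c) for c in service_code)
    return [(h >> k) & 1 for k in range(width)]

def column_compare(current, other):
    """Column item comparison vector: 1 where the entries agree, else 0."""
    return [int(a == b) for a, b in zip(current, other)]

def column_diff(current, other):
    """Column item difference vector: entrywise subtraction."""
    return [a - b for a, b in zip(current, other)]

def final_matrix(current, remaining):
    cmp_matrix = [column_compare(current, r) for r in remaining]
    diff_matrix = [column_diff(current, r) for r in remaining]
    # accumulate the two matrices entrywise into the final matrix
    return [[c + d for c, d in zip(crow, drow)]
            for crow, drow in zip(cmp_matrix, diff_matrix)]
```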
The invention provides a realization method for enterprise-oriented online education APP voice interaction, which is characterized in that based on a service network, after product resources related to product services are simultaneously crawled from a plurality of websites, the realization method also comprises the following steps:
acquiring a source address of the product resource;
counting the total number of times of accessing, the total number of times of being reported, the total number of times of being attacked, the total number of being evaluated and the resource information of the website corresponding to the source address;
determining the access success probability of the source address according to the total times of access;
determining the trust degree of the source address according to the total reported times;
determining the safety degree of the source address according to the total attacked times;
extracting keywords in each evaluation in the total number of the evaluated items, and determining the reliability of the source address according to a grading rule;
extracting effective resources related to the product resources in the resource information, and determining effective occupation ratios of the effective resources;
determining whether the source address is qualified or not according to the access success probability, the trust degree, the safety degree, the reliability degree and the effective ratio of effective resources of the source address, and if the source address is qualified, judging that the obtained product resources are qualified;
otherwise, temporarily storing the product resources in the area to be evaluated, and evaluating the temporarily stored product resources in the area to be evaluated based on a resource evaluation model to obtain an evaluation result;
and judging whether the product resources are qualified or not according to the evaluation result.
In this embodiment, the area to be evaluated refers to an area in which product resources need to be temporarily stored.
In this embodiment, the evaluation result may be presented in the form of a score.
The beneficial effects of the above technical scheme are: determining the access success probability, trust degree, safety degree, reliability and effective ratio makes it easy to judge whether the source address is qualified, ensuring the security and genuine validity of the acquired product resources and providing an effective basis for improving the user experience; evaluating the temporarily stored product resources then confirms their security once more.
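An illustrative scoring of a source address along the five dimensions listed above. The individual formulas, the equal weighting, and the threshold are assumptions; the patent only names the dimensions.

```python
# Hypothetical source-address qualification check; all formulas are assumptions.

def source_qualified(visits_ok, visits_total, reports, attacks,
                     good_reviews, reviews_total, valid_bytes, total_bytes,
                     threshold=0.6):
    success = visits_ok / visits_total if visits_total else 0.0  # access success probability
    trust = 1.0 / (1.0 + reports)                                # trust degree (fewer reports -> higher)
    safety = 1.0 / (1.0 + attacks)                               # safety degree (fewer attacks -> higher)
    reliability = good_reviews / reviews_total if reviews_total else 0.0
    valid_ratio = valid_bytes / total_bytes if total_bytes else 0.0  # effective ratio
    score = (success + trust + safety + reliability + valid_ratio) / 5.0
    return score >= threshold
```

An unqualified address would then route its product resources to the area to be evaluated for a second check.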
The invention provides an enterprise-oriented online education APP voice interaction realization method, which is used for synchronously transmitting a global search result and corresponding product resources to a voice interaction APP rendering and displaying process to a target user, and comprises the following steps:
establishing a subchannel, wherein the subchannel comprises a first subchannel, a second subchannel and a candidate channel, and the subchannel is established based on a plurality of network transmission nodes;
acquiring a father node in the network transmission node, and taking the father node as an initial sending point of the first sub-channel and the second sub-channel;
acquiring a sub-node related to the voice interaction APP from the network transmission node, and taking the sub-node as a tail receiving point of the first sub-channel and the second sub-channel;
detecting the father node and the child nodes based on a structure tree detection model, when the father node and the child nodes are qualified, acquiring intermediate nodes corresponding to the first child channel and the second child channel from the network transmission nodes and detecting, and if the intermediate nodes are unqualified, calibrating the unqualified intermediate nodes;
meanwhile, the calibration result is displayed on a structural tree formed by a father node, an intermediate node and a child node, and a link to be transmitted is planned according to a channel transmission rule;
meanwhile, based on the link to be transmitted, determining the position relation of the candidate channel based on the first sub-channel and the second sub-channel, meanwhile, establishing the connection relation between the unqualified intermediate node and the candidate channel, and adjusting the link to be transmitted according to the connection relation until a qualified transmission link is constructed;
and synchronously transmitting the global search result and the corresponding product resource based on the qualified transmission link.
In this embodiment, the first subchannel and the second subchannel together carry the synchronous data transmission.
In this embodiment, the channel transmission rule is, for example, a shortest path rule, a minimum energy loss rule, or the like.
In this embodiment, the position relation serves to locate the unqualified nodes so that the candidate channel can replace them, finally yielding a qualified transmission link.
In this embodiment, the link to be transmitted may be planned around the unqualified intermediate nodes, and the qualified transmission link is then obtained by adjusting the position relation on the basis of the link to be transmitted.
The beneficial effects of the above technical scheme are: during synchronous transmission, the unqualified nodes in the first and second subchannels are identified, the candidate channel is planned on the basis of the link to be transmitted, and adjustment then yields a qualified transmission link, ensuring effective data transmission and convenient rendering and presentation.
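The link-repair step above can be sketched as walking the planned link from the parent node to the APP's child node and splicing in a candidate-channel node wherever an intermediate node failed detection. Node naming and the detection bookkeeping are assumptions.

```python
# Hypothetical transmission-link repair; node ids and ordering are assumptions.

def build_qualified_link(planned_link, failed_nodes, candidate_nodes):
    """planned_link: ordered node ids, parent node first, child node last.
    failed_nodes: intermediate nodes calibrated as unqualified.
    candidate_nodes: spare nodes supplied by the candidate channel."""
    spares = list(candidate_nodes)
    qualified = []
    for node in planned_link:
        if node in failed_nodes:
            if not spares:
                raise ValueError("no candidate node left to replace " + node)
            qualified.append(spares.pop(0))   # candidate channel takes over
        else:
            qualified.append(node)
    return qualified
```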
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. An implementation method for APP voice interaction for enterprise online education is characterized by comprising the following steps:
acquiring a voice awakening instruction of a target user, and awakening a voice interaction APP;
collecting voice information of the target user based on the voice interaction APP, and performing voice recognition conversion on the voice information to obtain text information and displaying the text information;
according to a prefabricated dialogue template, recognizing the text information and determining the user intention of the target user;
requesting corresponding product service based on the user intention and the prefabricated mode;
carrying out global search on the product service, and simultaneously crawling product resources related to the product service from a plurality of websites;
synchronously transmitting the global search result and the corresponding product resource to the voice interaction APP to be rendered and displayed to the target user;
the method for obtaining the voice awakening instruction of the target user and awakening the voice interaction APP comprises the following steps:
checking whether the voice interaction APP is in a network connection state;
if the voice interaction APP is not in a network connection state, when the voice interaction APP receives a voice signal related to a voice awakening instruction, starting the voice interaction APP according to a waiting time interval, and if no voice signal is received within a preset time period after the voice interaction APP is started, restoring the voice interaction APP to a sleep state;
if the voice interaction APP is in a network connection state, the voice interaction APP carries out frame section splitting on the received awakening instruction;
determining whether a blank frame section exists in the split frame sections, if so, removing the blank frame section, and reserving the rest frame sections;
performing noise reduction processing on the residual frame sections, and acquiring the frame section energy of each frame section in the residual frame sections after the noise reduction processing;
screening the first N even frame sections according to the size of the frame section energy, determining frame section information of the first N/2 frame sections and frame section information of the last N/2 frame sections in the N even frame sections, obtaining a first correlation value between the frame section information of the first N/2 frame sections and preset awakening information, and obtaining a second correlation value between the frame section information of the first N/2 frame sections and the preset awakening information and a third correlation value between the frame section information of the last N/2 frame sections and the preset awakening information;
calculating to obtain a correlation value according to the first correlation value, the second correlation value, the third correlation value, the weight value of the frame section information of the first N/2 frame sections, the weight value of the frame section information of the last N/2 frame sections and the weight value of the preset awakening information;
when the correlation value is larger than a preset value, awakening the voice interaction APP;
otherwise, not waking up the voice interaction APP.
2. The method for implementing the APP voice interaction for the enterprise-oriented online education of claim 1, wherein the collecting the voice information of the target user based on the APP voice interaction comprises:
collecting single-channel information sent by the target user from a multi-channel acquisition port based on the voice interaction APP;
determining the acquisition frequency of each channel acquisition port, and further determining frequency adjustment parameters;
calibrating a plurality of high frequency points and low frequency points which are in one-to-one correspondence with the high frequency points in single channel information collected by each channel acquisition port, and meanwhile, calculating a signal difference value between the high frequency points and the corresponding low frequency points;
adjusting the signal fluctuation lines of the single-channel information collected by the corresponding channel acquisition ports according to the signal difference value set of each channel acquisition port and the corresponding frequency adjustment parameters;
and reconstructing the adjusted information fluctuation lines to obtain voice information.
3. The method for realizing APP voice interaction for enterprise-oriented online education as claimed in claim 1, wherein the process of requesting corresponding product services based on the user intention and the pre-made mode includes:
splitting the user intent into a primary intent set and a secondary intent set;
calculating a first proportion A1 of the primary intent set based on the user intention and a second proportion A2 of the secondary intent set based on the user intention;
A1 = Σ_{i=1}^{n1} x_i · w_i ;    A2 = Σ_{j=1}^{n2} y_j · v_j ;
wherein n1 represents the number of primary intents in the primary intent set; n2 represents the number of secondary intents in the secondary intent set; x_i represents the information value of the ith primary intent; w_i represents the weight value of the ith primary intent; y_j represents the information value of the jth secondary intent; v_j represents the weight value of the jth secondary intent;
calling a prefabricated mode matched with the user intention;
calculating a matching value P between the first proportion A1, the second proportion A2 and the prefabrication mode;
P = rand · (α · A1 + β · A2 + γ) ;
wherein α represents the prefabrication value related to the prefabrication mode and based on the first proportion, with a value range of [3,5]; β represents the prefabrication value related to the prefabrication mode and based on the second proportion, with a value range of [1,2]; γ represents the prefabrication value related to the prefabrication mode and based on both the first proportion and the second proportion, with a value range of [10,15]; rand represents a random function with a value range of [1,2];
When the matching value P is larger than a preset value, requesting to acquire product services related to the first proportion, the second proportion and a prefabrication mode from a service database;
otherwise, performing intention adjustment on the split primary intention set and secondary intention set, acquiring a new primary intention set and a new secondary intention set, and determining whether the related product service can be acquired or not until acquisition is successful.
4. The method for realizing APP voice interaction for enterprise-oriented online education as claimed in claim 1, wherein the process of performing global search on the product service and crawling product resources related to the product service from multiple websites simultaneously comprises:
acquiring a service code of each product service, extracting a feature code in each service code, acquiring a feature binary code corresponding to the feature code based on a code-binary mapping table, and constructing a binary vector;
simultaneously, acquiring a current binary vector corresponding to a current service code, and simultaneously acquiring residual binary vectors in all the binary vectors except the current binary vector;
and a binary matrix is established according to the residual binary vectors;
acquiring a column item comparison vector and a column item difference vector between the current binary vector and each residual binary vector in the corresponding binary matrix, and constructing a corresponding column item comparison matrix and a corresponding column item difference matrix;
accumulating the column item comparison matrix and the column item difference matrix to obtain a final matrix;
acquiring a current characteristic value of the current binary vector, a matrix characteristic value of a corresponding final matrix and a correlation value between the current binary vector and the final matrix;
searching a related service network by using a global search model according to the current eigenvalue, the matrix eigenvalue and the correlation value;
simultaneously, based on the service network, crawling product resources related to the product service from multiple websites simultaneously.
5. The method for implementing APP voice interaction for enterprise-oriented online education as recited in claim 4, wherein after crawling product resources related to the product service from multiple websites simultaneously based on the service network, the method further comprises:
acquiring a source address of the product resource;
counting the total number of times of accessing, the total number of times of being reported, the total number of times of being attacked, the total number of being evaluated and the resource information of the website corresponding to the source address;
determining the access success probability of the source address according to the total times of access;
determining the trust degree of the source address according to the total reported times;
determining the safety degree of the source address according to the total attacked times;
extracting keywords in each evaluation in the total number of the evaluated items, and determining the reliability of the source address according to a grading rule;
extracting effective resources related to the product resources in the resource information, and determining effective occupation ratios of the effective resources;
determining whether the source address is qualified or not according to the access success probability, the trust degree, the safety degree, the reliability degree and the effective ratio of effective resources of the source address, and if the source address is qualified, judging that the obtained product resources are qualified;
otherwise, temporarily storing the product resources in the area to be evaluated, and evaluating the temporarily stored product resources in the area to be evaluated based on a resource evaluation model to obtain an evaluation result;
and judging whether the product resources are qualified or not according to the evaluation result.
6. The method for realizing the APP voice interaction for the enterprise-oriented online education of the claim 1, wherein the step of synchronously transmitting the global search result and the corresponding product resource to the voice interaction APP rendering process for showing to the target user comprises:
establishing a subchannel, wherein the subchannel comprises a first subchannel, a second subchannel and a candidate channel, and the subchannel is established based on a plurality of network transmission nodes;
acquiring a father node in the network transmission node, and taking the father node as an initial sending point of the first sub-channel and the second sub-channel;
acquiring a sub-node related to the voice interaction APP from the network transmission node, and taking the sub-node as a tail receiving point of the first sub-channel and the second sub-channel;
detecting the father node and the child nodes based on a structure tree detection model, when the father node and the child nodes are qualified, acquiring intermediate nodes corresponding to the first child channel and the second child channel from the network transmission nodes and detecting, and if the intermediate nodes are unqualified, calibrating the unqualified intermediate nodes;
meanwhile, the calibration result is displayed on a structural tree formed by a father node, an intermediate node and a child node, and a link to be transmitted is planned according to a channel transmission rule;
meanwhile, based on the link to be transmitted, determining the position relation of the candidate channel based on the first sub-channel and the second sub-channel, meanwhile, establishing the connection relation between the unqualified intermediate node and the candidate channel, and adjusting the link to be transmitted according to the connection relation until a qualified transmission link is constructed;
and synchronously transmitting the global search result and the corresponding product resource based on the qualified transmission link.
CN202011059543.2A 2020-09-30 2020-09-30 Enterprise online education APP voice interaction implementation method Active CN111930919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011059543.2A CN111930919B (en) 2020-09-30 2020-09-30 Enterprise online education APP voice interaction implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011059543.2A CN111930919B (en) 2020-09-30 2020-09-30 Enterprise online education APP voice interaction implementation method

Publications (2)

Publication Number Publication Date
CN111930919A CN111930919A (en) 2020-11-13
CN111930919B 2021-01-05

Family

ID=73334728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011059543.2A Active CN111930919B (en) 2020-09-30 2020-09-30 Enterprise online education APP voice interaction implementation method

Country Status (1)

Country Link
CN (1) CN111930919B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206608A (en) * 2021-12-01 2023-06-02 中国电信股份有限公司 Network intention processing method and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224278A (en) * 2015-08-21 2016-01-06 百度在线网络技术(北京)有限公司 Interactive voice service processing method and device
CN107833574A (en) * 2017-11-16 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for providing voice service
CN110717026A (en) * 2019-10-08 2020-01-21 腾讯科技(深圳)有限公司 Text information identification method, man-machine conversation method and related device
CN111429903A (en) * 2020-03-19 2020-07-17 百度在线网络技术(北京)有限公司 Audio signal identification method, device, system, equipment and readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102096590B1 (en) * 2018-08-14 2020-04-06 주식회사 알티캐스트 Gui voice control apparatus using real time command pattern matching and method thereof

Also Published As

Publication number Publication date
CN111930919A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN108170859B (en) Voice query method, device, storage medium and terminal equipment
CN115238101B (en) Multi-engine intelligent question-answering system oriented to multi-type knowledge base
CN107240398B (en) Intelligent voice interaction method and device
CN107316643B (en) Voice interaction method and device
CN108829757B (en) Intelligent service method, server and storage medium for chat robot
CN108345690B (en) Intelligent question and answer method and system
US6574624B1 (en) Automatic topic identification and switch for natural language search of textual document collections
CN102163198B (en) A method and a system for providing new or popular terms
CN110795542B (en) Dialogue method, related device and equipment
CN110597962B (en) Search result display method and device, medium and electronic equipment
CN111858877A (en) Multi-type question intelligent question answering method, system, equipment and readable storage medium
CN112035599B (en) Query method and device based on vertical search, computer equipment and storage medium
CN105931633A (en) Speech recognition method and system
CN108304424B (en) Text keyword extraction method and text keyword extraction device
CN111477231B (en) Man-machine interaction method, device and storage medium
CN116881429B (en) Multi-tenant-based dialogue model interaction method, device and storage medium
CN111611358A (en) Information interaction method and device, electronic equipment and storage medium
CN111930919B (en) Enterprise online education APP voice interaction implementation method
CN115952770B (en) Data standardization processing method and device, electronic equipment and storage medium
US8478517B2 (en) Method and apparatus to provide location information
CN114528851B (en) Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium
CN110209804B (en) Target corpus determining method and device, storage medium and electronic device
CN115098655A (en) Common question answering method, system, equipment and medium
CN111261165B (en) Station name recognition method, device, equipment and storage medium
CN109446424B (en) Invalid address webpage filtering method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No.637, 6 / F, 101 West Fourth Ring Road South, Fengtai District, Beijing

Patentee after: Zhixueyun (Beijing) Technology Co.,Ltd.

Address before: No.637, 6 / F, 101 West Fourth Ring Road South, Fengtai District, Beijing

Patentee before: Zhixueyun (Beijing) Technology Co.,Ltd.