CN111581623B - Intelligent data interaction method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN111581623B CN111581623B CN202010389172.8A CN202010389172A CN111581623B CN 111581623 B CN111581623 B CN 111581623B CN 202010389172 A CN202010389172 A CN 202010389172A CN 111581623 B CN111581623 B CN 111581623B
- Authority
- CN
- China
- Prior art keywords
- data
- preset
- vector
- client
- sentence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Abstract
The invention relates to artificial intelligence and provides an intelligent data interaction method applied to an electronic device. The invention can supply information a user has forgotten while the user is filling in information, improving the user's experience.
Description
Technical Field
The present invention relates to artificial intelligence, and in particular, to an intelligent data interaction method, apparatus, electronic device, and storage medium.
Background
In self-service platform application scenarios that require users to fill in a lot of information, such as personal account numbers and passwords, the user's next operation generally depends on the previous one. Because human memory is limited, the information filled in at an earlier step is easily forgotten by the time it is needed at a later step, giving users a bad experience. How to supply forgotten information to users while they fill in information, thereby improving user experience, has therefore become a technical problem to be solved.
Disclosure of Invention
The invention mainly aims to provide an intelligent data interaction method, an intelligent data interaction device, electronic equipment and a storage medium, and aims to solve the problem of how to provide forgotten information to a user during information filling and thereby improve user experience.
In order to achieve the above object, the present invention provides an intelligent data interaction method, applied to an electronic device, the method comprising:
a creation step: pre-creating, in a database, a preset number of data pools capable of storing data for a short time;
an identification step: acquiring a face image of a user to be identified through a client, and inputting the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image;
a confirmation step: comparing the identity information to be confirmed with preset identity information in the database; if preset identity information matching the identity information to be confirmed exists in the database, displaying an information input interface on the client and establishing a data transmission channel between the client and the data pool;
an input step: acquiring the operation data entered in the corresponding columns of the information input interface and storing the operation data in the data pool, and extracting the title word corresponding to each single column together with preset words to construct query sentences, wherein each query sentence corresponds to one piece of operation data; and
a feedback step: receiving in real time, through the client, a query request containing a query sentence initiated by the user to be identified, querying the corresponding operation data from the data pool according to the query sentence, and feeding the operation data back to the client.
Preferably, the method further comprises the steps of:
and when the client identifies that the identity information to be confirmed is changed, closing the current information input interface of the client.
Preferably, the method further comprises the steps of:
and when the client identifies that the identity information to be confirmed is changed, the operation data in the data pool are cleared.
Preferably, the training process of the identity recognition model includes:
acquiring face image samples, and distributing unique identity information to be confirmed for each face image sample;
dividing the face image samples into a training set and a verification set according to a preset proportion, wherein the number of the face image samples in the training set is larger than that of the face image samples in the verification set;
inputting the face image samples in the training set into the identity recognition model for training, verifying the identity recognition model with the verification set every preset period, and using each face image sample in the verification set and its corresponding identity information to be confirmed to verify the accuracy of the identity recognition model; and
when the verification accuracy is greater than a preset threshold, finishing the training to obtain the trained identity recognition model.
Preferably, the method further comprises the steps of:
acquiring first voice data of a user to be confirmed through the client, and converting the first voice data by using a preset voice conversion algorithm to obtain text data;
inputting the obtained text data into a pre-trained vector extraction model, outputting text sentence vectors corresponding to the text data, inputting the text sentence vectors into a pre-trained intention recognition model, and outputting intention types corresponding to the text sentence vectors;
According to the intention type, a corresponding preset answering system is found from a mapping relation table which is pre-created in the database and is formed by the intention type and the preset answering system to serve as a target answering system, and text data corresponding to the intention type is issued to the target answering system; and
calculating, by using a preset similarity calculation rule, a similarity value between the text data and each preset answer pre-stored in the target answering system, taking the preset answer corresponding to the maximum similarity value as the target answer, converting the target answer into second voice data, and feeding the second voice data back to the client.
Preferably, the vector extraction model includes:
a vector conversion layer configured to add a representation vector of a special symbol to an input current sentence vector sequence, and to convert the representation vector and each sentence vector contained in the current sentence vector sequence according to their position information, so as to form a vector sequence;
the converter encoder layer is configured to encode the vector sequence to obtain special symbol encoded vectors and sentence encoded vectors respectively corresponding to each sentence vector contained in the current sentence vector sequence; and
a supervised attention layer based on a Transformer encoder, configured to determine the dot product between the query vector of the special-symbol encoding vector and the key vector of the special-symbol encoding vector as the supervised attention of the special-symbol encoding vector; to determine, for each sentence encoding vector, the dot product between the key vector of that sentence encoding vector and the query vector of the special-symbol encoding vector as the supervised attention of that sentence encoding vector; and to obtain the text sentence vector from the special-symbol encoding vector, each sentence encoding vector, and their supervised attentions.
To achieve the above object, the present invention further provides an intelligent data interaction device, including:
a creation module: pre-creating, in a database, a preset number of data pools capable of storing data for a short time;
an identification module: acquiring a face image of a user to be identified through a client, and inputting the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image;
a confirmation module: comparing the identity information to be confirmed with preset identity information in the database; if preset identity information matching the identity information to be confirmed exists in the database, displaying an information input interface on the client and establishing a data transmission channel between the client and the data pool;
an input module: acquiring the operation data entered in the corresponding columns of the information input interface and storing the operation data in the data pool, and extracting the title word corresponding to each single column together with preset words to construct query sentences, wherein each query sentence corresponds to one piece of operation data; and
a feedback module: receiving in real time, through the client, a query request containing a query sentence initiated by the user to be identified, querying the corresponding operation data from the data pool according to the query sentence, and feeding the operation data back to the client.
Preferably, the apparatus further comprises the following modules:
and when the client identifies that the identity information to be confirmed is changed, closing the current information input interface of the client.
To achieve the above object, the present invention further provides an electronic device including:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the intelligent data interaction method described above.
To achieve the above object, the present invention further provides a computer-readable storage medium having stored thereon an intelligent data interaction program executable by one or more processors to implement the steps of the intelligent data interaction method described above.
According to the intelligent data interaction method, device, electronic equipment and storage medium, a preset number of data pools capable of storing data for a short time are created in advance in a database; a face image of a user to be identified is obtained through a client and input into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image; the identity information to be confirmed is compared with the preset identity information in the database, and if matching preset identity information exists in the database, an information input interface is displayed on the client and a data transmission channel between the client and the data pool is established; the operation data entered in the corresponding columns of the information input interface are obtained and stored in the data pool, and the title word corresponding to each single column is extracted together with preset words to construct query sentences, each query sentence corresponding to one piece of operation data; a query request containing a query sentence initiated by the user to be identified is received in real time through the client, and the corresponding operation data is queried from the data pool according to the query sentence and fed back to the client. The invention can supply information a user has forgotten while the user is filling in information, improving the user's experience.
Drawings
Fig. 1 is a schematic diagram of an internal structure of an electronic device for implementing an intelligent data interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an intelligent data interaction device according to an embodiment of the present invention;
fig. 3 is a flow chart of an intelligent data interaction method according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
In order to make the objects, technical embodiments and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the descriptions "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical embodiments herein may be combined with each other, but when a combination is contradictory or cannot be realized by a person of ordinary skill in the art, that combination should be considered not to exist and not to fall within the scope of protection claimed by the present invention.
The invention provides an intelligent data interaction method. Referring to fig. 3, a flow chart of an intelligent data interaction method according to an embodiment of the invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the intelligent data interaction method includes:
s110, a preset number of data pools capable of storing data in a short time are created in the database in advance.
The scheme is applicable to scenarios in which the next operation depends on the previous one and the operating user does not need to store the data generated during the operation for a long time, such as filling in personal information, user registration, and business handling on a self-service platform. These scenarios often require the user to fill in a lot of information, such as personal account numbers and passwords. Because human memory is limited, the information filled in at an earlier step is easily forgotten by the time it is needed at a later step, giving the user a bad experience. Therefore, in this embodiment, a preset number of data pools capable of storing data for a short time (the specific number may be determined according to the actual situation) are created in advance in the database, so that data that do not need to be permanently stored can be held in these pools and automatically deleted after a certain time. This reduces the resource occupation of the system and suits application scenarios in which the data are frequently replaced.
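The short-time data pool described above can be sketched as a small keyed store whose entries expire after a time-to-live and can be wiped when the identified user changes. The class and method names here (`ShortTermDataPool`, `put`, `get`, `clear`) and the wall-clock TTL mechanism are illustrative assumptions, not details from the patent:

```python
import time

class ShortTermDataPool:
    """Minimal sketch of a data pool whose entries expire after `ttl` seconds."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, insertion timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.time())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, ts = entry
        if time.time() - ts > self.ttl:  # expired: delete on access
            del self._store[key]
            return None
        return value

    def clear(self):
        """Wipe all operation data, e.g. when the identified user changes."""
        self._store.clear()
```

In a production system the expiry would more likely be driven by the database's own TTL facility than by an in-process dictionary; this sketch only illustrates the short-time-storage behaviour.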
S120, acquiring a face image of a user to be identified through a client, and inputting the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image.
In this embodiment, a face image of the user to be identified is acquired through the client (e.g., a camera), and the face image is input into the pre-trained identity recognition model to identify the identity information to be confirmed of the face image (e.g., user A), in preparation for subsequently judging whether the user has permission to enter the operating system.
The identity recognition model can be obtained by training a convolutional neural network (CNN) model; the specific training process is as follows:
acquiring a preset number (for example, 100,000) of face image samples, and assigning unique identity information to be confirmed to each face image sample;
dividing the face image samples into a training set and a verification set according to a preset proportion (for example, 5:1), wherein the number of samples in the training set is larger than that in the verification set;
inputting the face image samples in the training set into the identity recognition model for training, verifying the identity recognition model with the verification set every preset period (for example, every 1,000 iterations), and using each face image sample in the verification set and its corresponding identity information to be confirmed to verify the accuracy of the model; and
when the verification accuracy is greater than a preset threshold (for example, 85%), finishing the training to obtain the trained identity recognition model.
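The train/verify control flow described above (split at a preset proportion, verify every preset period, stop once accuracy exceeds a threshold) can be sketched framework-agnostically. The `model` object and its `train_step`/`predict` interface are hypothetical stand-ins for the CNN, not an API from the patent:

```python
import random

def split_samples(samples, ratio=5):
    """Split labelled samples into training and verification sets at `ratio`:1."""
    random.shuffle(samples)
    cut = len(samples) * ratio // (ratio + 1)
    return samples[:cut], samples[cut:]

def train_until_accurate(model, samples, ratio=5, verify_every=1000,
                         threshold=0.85, max_iters=100_000):
    """Train `model` on the training set, verifying every `verify_every`
    iterations; stop once verification accuracy exceeds `threshold`.
    `model` is any object with train_step(sample) and predict(image) methods."""
    train_set, verify_set = split_samples(samples, ratio)
    for it in range(1, max_iters + 1):
        model.train_step(random.choice(train_set))
        if it % verify_every == 0:
            correct = sum(model.predict(img) == label for img, label in verify_set)
            if correct / len(verify_set) > threshold:
                return model, it  # training finished
    return model, max_iters
```

The patent's example values (5:1 split, every 1,000 iterations, 85% threshold) map directly onto `ratio`, `verify_every`, and `threshold`.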
S130, comparing the identity information to be confirmed with preset identity information of the database, if preset identity information matched and consistent with the identity information to be confirmed exists in the database, displaying an information input interface on the client, and establishing a data transmission channel between the client and the data pool.
In this embodiment, the identity information to be confirmed is compared with the preset identity information in the database. When the database is determined to contain preset identity information matching the identity information to be confirmed, the user is shown to have authority to enter the operating system: an information input interface pops up on the client (for example, a display screen), a data transmission channel between the client and the data pool is established, and the operation data generated by the user during the operation are stored in the data pool in place of the user's short-term memory.
S140, acquiring operation data input in corresponding columns on the information input interface, storing the operation data in the data pool, and extracting heading words corresponding to the individual columns and preset words to construct query sentences, wherein each query sentence corresponds to one piece of operation data.
In this embodiment, the operation data entered in the corresponding columns (for example, the account input column and the password input column) of the information entry interface are acquired and stored in the data pool, and the title words corresponding to each single column (for example, "account", "password") are combined with preset words (for example, "just", "input", "what is") to construct query sentences (for example, "what is the account just input"), wherein each query sentence corresponds to one piece of operation data.
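The pairing of each constructed query sentence with one piece of operation data can be sketched as a simple lookup table. The template string and function names are illustrative assumptions; the patent does not specify how title words and preset words are combined:

```python
def build_query_index(fields, preset_template="what is the {title} just input"):
    """For each entry field, build one query sentence from its title word and
    preset words; the sentence keys the operation data entered in that field."""
    return {preset_template.format(title=title): value
            for title, value in fields.items()}

def answer_query(query_index, query_sentence):
    """Look up the operation data matching a user's query sentence."""
    return query_index.get(query_sentence)
```

A real system would normalize or fuzzily match the spoken query against the constructed sentences rather than require an exact string match.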
S150, receiving a query request containing a query statement initiated by a user to be identified in real time through the client, and querying corresponding operation data from the data pool according to the query statement and feeding the operation data back to the client.
In this embodiment, when the user forgets earlier information during the operation, a query request containing a query sentence (for example, "what is the account just input") may be initiated to the electronic device 1 through the client (for example, a microphone), and the corresponding operation data is queried from the data pool according to the query sentence and fed back to the client.
In another embodiment, the intelligent data interaction method further comprises the steps of:
In order to prevent information from being tampered with or leaked by people nearby during the user's operation, in this embodiment, when the client (for example, a camera) recognizes that the current identity information to be confirmed has changed (for example, when another person faces the information input interface), the current information input interface of the client is closed, preventing the information from being tampered with or leaked.
In another embodiment, the intelligent data interaction method further comprises the steps of:
in order to further improve information security, in this embodiment, when the client (for example, a camera) recognizes that the current identity information to be confirmed is changed, operation data in the data pool is cleared, so that the system is prevented from being forced to invade, and the information is prevented from being tampered or leaked.
In another embodiment: at present, most intelligent answering terminals can only conduct a dialogue with the user in one way, in which the answering system matches the user's question against answers preset in the database and feeds back the answer with the highest similarity. However, as the business volume increases, the query efficiency of the database decreases and the feedback delay of the dialogue system increases, affecting user experience. Therefore, in order to improve the query efficiency of the database and reduce the feedback delay of the dialogue system, the intelligent data interaction method further comprises the steps of:
acquiring first voice data of a user to be confirmed through the client, and converting the first voice data by using a preset voice conversion algorithm to obtain text data;
inputting the obtained text data into a pre-trained vector extraction model, outputting text sentence vectors corresponding to the text data, inputting the text sentence vectors into a pre-trained intention recognition model, and outputting intention types corresponding to the text sentence vectors;
According to the intention type, a corresponding preset answering system is found from a mapping relation table which is pre-created in the database and is formed by the intention type and the preset answering system to serve as a target answering system, and text data corresponding to the intention type is issued to the target answering system; and
calculating, by using a preset similarity calculation rule, a similarity value between the text data and each preset answer pre-stored in the target answering system, taking the preset answer corresponding to the maximum similarity value as the target answer, converting the target answer into second voice data, and feeding the second voice data back to the client.
In this embodiment, after obtaining the first voice data uploaded by the client, the electronic device 1 converts the first voice data to obtain text data by using a preset voice conversion algorithm.
The conversion of the first voice data into text data may be realized with a dynamic time warping (DTW) model. In other embodiments, other speech recognition models, such as a BLSTM or LSTM model, may be used to obtain the text data. Since a DTW model usually requires a large number of training samples before it can convert the first voice data, in this embodiment the DTW model may be trained in advance. The specific training process includes: collecting in advance a preset number of first voice data samples and the text data sample corresponding to each; for each first voice data sample, inputting it into a preset DTW model to obtain its text data; comparing this text data with the corresponding text data sample; and adjusting the DTW model according to the comparison result. A DTW model trained with a large amount of voice sample information can accurately convert first voice data into corresponding text data.
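As a minimal illustration of the dynamic-programming core of DTW (not the patent's full speech pipeline), the warping distance between two feature sequences, and template-based recognition by nearest template, might look like this; `dist`, `recognize`, and the scalar features are simplifying assumptions:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping distance between two sequences of frame features."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def recognize(template_bank, utterance):
    """Pick the template (e.g. a word's reference feature sequence) whose
    DTW distance to the utterance is smallest."""
    return min(template_bank,
               key=lambda label: dtw_distance(template_bank[label], utterance))
```

Because warping absorbs differences in speaking rate, a stretched repetition of the same feature sequence still has zero distance to the original, which is why DTW suits speech of varying tempo.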
After the text data is obtained, the text data obtained by converting the first voice data is input into a pre-trained vector extraction model, a text sentence vector corresponding to the text data is output, the text sentence vector is input into a pre-trained intention recognition model, and an intention type corresponding to the text sentence vector is output.
The vector extraction model is trained from a BERT (Bidirectional Encoder Representations from Transformers) model, and the vector extraction model comprises:
a vector conversion layer configured to add a representation vector of a special symbol to an input current sentence vector sequence, and to convert the representation vector and each sentence vector contained in the current sentence vector sequence according to their position information, so as to form a vector sequence;
the converter encoder layer is configured to encode the vector sequence to obtain special symbol encoded vectors and sentence encoded vectors respectively corresponding to each sentence vector contained in the current sentence vector sequence; and
a supervised attention layer based on a Transformer encoder, configured to determine the dot product between the query vector of the special-symbol encoding vector and the key vector of the special-symbol encoding vector as the supervised attention of the special-symbol encoding vector; to determine, for each sentence encoding vector, the dot product between the key vector of that sentence encoding vector and the query vector of the special-symbol encoding vector as the supervised attention of that sentence encoding vector; and to obtain the text sentence vector from the special-symbol encoding vector, each sentence encoding vector, and their supervised attentions.
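A single-head, unbatched sketch of the dot-product attention just described: the special-symbol (CLS-style) query is dotted with every encoding's key, including its own, and the resulting weights pool the encodings into one text sentence vector. The projection matrices `W_q` and `W_k` and the softmax-weighted sum are simplifying assumptions, not the patent's exact formulation:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [dot(row, v) for row in M]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pool_with_special_symbol(cls_vec, sent_vecs, W_q, W_k):
    """Dot the special-symbol query against every encoding's key (its own
    included), softmax the scores, and return the weighted sum of encodings."""
    q_cls = matvec(W_q, cls_vec)
    all_vecs = [cls_vec] + sent_vecs
    keys = [matvec(W_k, v) for v in all_vecs]
    weights = softmax([dot(q_cls, k) for k in keys])
    dim = len(cls_vec)
    return [sum(w * v[d] for w, v in zip(weights, all_vecs)) for d in range(dim)]
```

A production BERT layer would also use value projections, scaling by the key dimension, and multiple heads; this sketch keeps only the query-key dot products the claim names.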
The intention recognition model is trained from a CNN (Convolutional Neural Network) model, which includes at least one convolutional layer, at least one pooling layer, and at least one fully-connected layer.
Through the BERT model and the CNN model, the intention type represented by the text data can be rapidly identified.
After the intention type represented by the text data is identified, the user's intention can be preliminarily judged: the preset answering system corresponding to the intention type is found, from the mapping relation table pre-created in the database between intention types and preset answering systems (such as a chitchat system, an intelligent answering system, and a business transacting system), to serve as the target answering system, and the text data corresponding to the intention type is issued to the target answering system for further resolution. For example, if the user asks a simple chitchat question such as "Who is Zhang So-and-so?" (the intention type is chitchat), the text data sent by that user's client is issued to the chitchat system, and a reply such as "Zhang So-and-so is xxxx" is fed back to the client. If the user asks a highly specialized question such as "What drug treats heart disease?" (the intention type is intelligent answering), with the corresponding keywords "heart disease" and "drug", the text data is issued to the intelligent answering system, and the reply "the drug for treating heart disease is xxxx" is fed back to the client. If the user wants to transact business, for example "set an alarm clock for the 8 a.m. meeting" (the intention type is business transacting), with the corresponding keywords "8 a.m." and "meeting", the text data is issued to the business transacting system, and the reply "the alarm clock for today's 8 a.m. meeting has been set" is fed back to the client. Because each target answering system corresponds to one intention type, targeted answers can be provided to the user's questions, which improves answering accuracy and the query efficiency of the database, reduces the feedback delay of the dialogue system, and improves user experience.
Using a predetermined similarity calculation rule, a similarity value is calculated between the text data and each preset answer stored in advance in the target answering system; the preset answer corresponding to the maximum similarity value is taken as the target answer, converted into second voice data, and fed back to the client.
Specifically, first, the score of each first word in the text data is calculated with a predetermined scoring algorithm, all the first words are sorted by score, and a preset number of first words are selected from the text data, in descending order of score, as first keywords;
then, the score of each second word in the preset answer is calculated with the same scoring algorithm, all the second words are sorted by score, and a preset number of second words are selected from the preset answer, in descending order of score, as second keywords;
wherein the scoring algorithm (the TextRank iteration) is:

$$S(V_i) = (1 - d) + d \times \sum_{V_j \in In(V_i)} \frac{w_{ji}}{\sum_{V_k \in Out(V_j)} w_{jk}} \, S(V_j)$$

where $V_i$ and $V_j$ represent word nodes extracted from the text data or a preset answer, $S(V_i)$ and $S(V_j)$ respectively represent the scores of word nodes $V_i$ and $V_j$, $w_{ji}$ represents the weight of the edge between the two word nodes $V_j$ and $V_i$, $w_{jk}$ represents the weight of the edge between the two word nodes $V_j$ and $V_k$, $In(V_i)$ represents the set of word nodes pointing to word node $V_i$, $Out(V_j)$ represents the set of nodes that word node $V_j$ points to, and $d$ represents the damping coefficient.
Specifically, each word in the text data is treated as a node in the formula. Each sentence in the text data is segmented and part-of-speech tagged, and only words with specified parts of speech (such as nouns, verbs, and adjectives) are retained. A candidate keyword graph G = (V, E) is constructed, where V consists of the retained words with specified parts of speech; edges are then constructed between words using the co-occurrence relation (Co-Occurrence), so that an edge exists between two words only when they co-occur within a window of length K, where K represents the window size. According to the formula, the initial weight of each edge between nodes is set to 1, the scores of the designated words are calculated by iterating the weight propagation, the calculated scores are ranked in descending order, and the words whose scores rank in the top ten may be selected as keywords. Alternatively, the voting principle may be used, with the words connected by edges voting for each other; the number of votes obtained by each word tends to stabilize after repeated iterations, the words are then sorted by vote count in descending order, and the words whose vote counts rank in the top six may be selected as keywords. The obtained keywords are marked in the original text data, and adjacent keywords are merged into multi-word keywords.
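The keyword scoring described above can be sketched compactly, assuming unit edge weights and an undirected co-occurrence graph; the window size, damping coefficient, iteration count, and top-n defaults are illustrative:

```python
from collections import defaultdict

def textrank_keywords(words, window=3, d=0.85, iters=50, top_n=3):
    """Rank words by iterating a TextRank-style score over a co-occurrence
    graph: an edge links two words that co-occur within `window` positions,
    and every edge starts with weight 1."""
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            if w != words[j]:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    scores = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        # Each node's new score collects a share of every neighbor's score,
        # split evenly because all edge weights are 1.
        scores = {w: (1 - d) + d * sum(scores[v] / len(neighbors[v])
                                       for v in neighbors[w])
                  for w in scores}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

print(textrank_keywords(["set", "alarm", "set", "meeting", "set", "clock"],
                        window=2, top_n=1))  # ['set']
```

The hub word "set" accumulates score from all of its neighbors, so it ranks first after the iteration stabilizes.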
Then, using the predetermined similarity calculation rule, a similarity value is calculated between the text data and each preset answer stored in advance in the target answering system.
The similarity value calculation rule adopts the Jaccard similarity coefficient:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

where $A$ represents the first word set composed of all first keywords in the text data, $B$ represents the second word set composed of all second keywords in each preset answer, $J(A, B)$ is the Jaccard similarity coefficient between the text data and the preset answer, $|A \cap B|$ represents the total number of identical keywords shared by the first word set and the second word set, and $|A \cup B|$ represents the total number of all keywords in the union of the first word set and the second word set.
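The coefficient and the pick-the-maximum rule can be sketched as follows; the example keyword sets and answer strings are illustrative, not from the patent:

```python
def jaccard(first_words: set, second_words: set) -> float:
    """Jaccard similarity coefficient: shared keywords over all keywords."""
    if not first_words and not second_words:
        return 0.0  # avoid 0/0 when both keyword sets are empty
    return len(first_words & second_words) / len(first_words | second_words)

def best_answer(question_keywords, preset_answers):
    """Pick the preset answer whose keyword set is most similar to the
    question's keyword set (the maximum-similarity rule above).
    `preset_answers` is a list of (keyword_set, answer_text) pairs."""
    return max(preset_answers,
               key=lambda pair: jaccard(question_keywords, pair[0]))[1]

answers = [({"heart disease", "drug"}, "the drug for treating heart disease is xxxx"),
           ({"alarm", "meeting"}, "the alarm clock has been set")]
print(best_answer({"heart disease", "drug"}, answers))
```

Because both sets contain only the extracted keywords rather than full sentences, the comparison stays cheap even for long answers.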
As shown in fig. 2, a functional block diagram of the intelligent data interaction device 100 of the present invention is shown.
The intelligent data interaction device 100 of the present invention may be installed in an electronic apparatus. Depending on the functions implemented, the intelligent data interaction device 100 may include a creation module 110, an identification module 120, a confirmation module 130, an entry module 140, and a feedback module 150. The modules of the invention, which may also be referred to as units, are series of computer program segments stored in the memory of the electronic device that can be executed by the processor of the electronic device and perform fixed functions.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the creation module 110 creates a preset number of data pools capable of storing data for a short time in advance in a database.
This scheme is applicable to scenarios in which each operation depends on the previous one and the operating user does not need to store the data generated during the operation for a long time, such as filling in personal information, user registration, and business handling on a self-service platform. Such scenarios often require the user to fill in a large amount of information, such as personal account numbers and passwords. Since human memory is limited, it is easy to forget information filled in at an earlier step by the time it is needed at a later step, which gives the user a poor experience. Therefore, in this embodiment, a preset number of data pools capable of storing data for a short time (the specific number may be determined according to the actual situation) are created in advance in the database. Data that does not need to be stored permanently is kept in these data pools and is automatically deleted after a certain time, which reduces the system's resource occupation and suits application scenarios where the data is frequently replaced.
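A minimal sketch of such a short-time data pool, assuming a per-entry time-to-live; the class name, TTL value, and eviction-on-read policy are illustrative assumptions:

```python
import time

class ShortTimePool:
    """Data pool whose entries expire after `ttl` seconds, standing in for
    the user's short-term memory during a multi-step operation."""

    def __init__(self, ttl: float = 600.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expiry = item
        if time.monotonic() > expiry:
            del self._store[key]  # expired entries delete themselves
            return default
        return value

    def clear(self):
        """Wipe all operation data, e.g. when the identity at the client changes."""
        self._store.clear()

pool = ShortTimePool(ttl=600.0)
pool.put("account", "user_001")
print(pool.get("account"))  # user_001
```

In a production database the same effect would typically come from a key-expiry mechanism rather than an in-process dictionary; the sketch only shows the lifecycle.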
The recognition module 120 acquires a face image of a user to be recognized through a client, and inputs the face image into a pre-trained identity recognition model to recognize identity information to be recognized of the face image.
In this embodiment, a face image of a user to be identified is acquired through a client (e.g., a camera), and the face image is input into a pre-trained identity recognition model to identify identity information to be identified of the face image, e.g., user a, so as to prepare for subsequently judging whether the user has permission to enter an operating system.
The identity recognition model can be obtained through convolutional neural network (Convolutional Neural Network, CNN) training, and the specific training process is as follows:
acquiring a preset number (for example, 100,000) of face image samples to be processed, and labeling each face image sample with its corresponding identity information;
dividing the face image samples into a training set and a verification set according to a preset ratio (for example, 5:1), wherein the number of samples in the training set is larger than that in the verification set;
inputting the samples in the training set into the identity recognition model for training, and verifying the model with the verification set every preset period (for example, every 1000 iterations), using each face image sample in the verification set and its labeled identity information to verify the accuracy of the identity recognition model; and
when the verification accuracy is greater than a preset threshold (for example, 85%), ending the training to obtain the trained identity recognition model.
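The split-and-validate loop above can be sketched generically; no real CNN is trained here, and the function names, the toy accuracy curve used in the example, and the callback shapes are all assumptions:

```python
def split_samples(samples, ratio=(5, 1)):
    """Divide samples into a training set and a verification set by a preset
    ratio, with the training set the larger of the two."""
    cut = len(samples) * ratio[0] // sum(ratio)
    return samples[:cut], samples[cut:]

def train_until_accurate(train_step, evaluate, check_every=1000,
                         max_iters=10000, threshold=0.85):
    """Run training steps, checking verification accuracy every
    `check_every` iterations and stopping once it exceeds `threshold`.
    Returns (iterations used, final accuracy)."""
    for it in range(1, max_iters + 1):
        train_step()
        if it % check_every == 0:
            acc = evaluate()
            if acc > threshold:
                return it, acc
    return max_iters, evaluate()
```

In practice `train_step` would run one optimizer update on a training batch and `evaluate` would compute accuracy over the whole verification set.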
And the confirmation module 130 compares the identity information to be confirmed with preset identity information of the database, if the preset identity information matched and consistent with the identity information to be confirmed exists in the database, an information input interface is displayed on the client, and a data transmission channel between the client and the data pool is established.
In this embodiment, by comparing the identity information to be confirmed with the preset identity information of the database, when it is determined that the database has the preset identity information matched with the identity information to be confirmed, the user is indicated to have authority to enter the operating system, an information input interface is popped up on a client (for example, a display screen), a data transmission channel between the client and the data pool is established, and operation data generated by the user in the operation process is stored in the data pool to replace short-time memory information of the user.
The input module 140 acquires the operation data input in the corresponding column on the information input interface, stores the operation data in the data pool, and extracts the title word corresponding to the single column and the preset word to construct a query sentence, wherein each query sentence corresponds to one piece of operation data.
In this embodiment, the operation data entered in the corresponding fields (for example, an account input field and a password input field) on the information entry interface is acquired and stored in the data pool, and the title words corresponding to the individual fields (for example, "account", "password") and the preset words (for example, "just", "entered", "what is") are extracted to construct query sentences such as "what is the account just entered", each query sentence corresponding to one piece of operation data.
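One way this construction could look in code; the sentence template, field names, and values are illustrative assumptions rather than the patent's exact wording:

```python
# Assumed sentence template combining the preset words with each
# field's title word.
TEMPLATE = "what is the {title} just entered"

def build_query_index(entered_fields: dict) -> dict:
    """Map one constructed query sentence to each piece of operation data,
    so a later query can be answered by direct lookup in the data pool."""
    return {TEMPLATE.format(title=title): data
            for title, data in entered_fields.items()}

index = build_query_index({"account": "user_001", "password": "s3cret"})
print(index["what is the account just entered"])  # user_001
```

Pre-building the sentences at entry time means the feedback step only has to match the incoming query against a small set of known sentences.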
And the feedback module 150 receives a query request containing a query statement initiated by a user to be identified in real time through the client, and queries corresponding operation data from the data pool according to the query statement and feeds the operation data back to the client.
In this embodiment, when the user forgets previously entered information during the operation, a query request containing a query sentence (for example, "what is the account just entered") may be initiated to the electronic device 1 through the client (for example, a microphone), and the corresponding operation data is queried from the data pool according to the query sentence and fed back to the client.
In another embodiment, the intelligent data interaction device 100 may further include the following modules:
And when the client identifies that the current identity information to be confirmed is changed, closing the current information input interface of the client.
In order to prevent information from being tampered with or leaked to bystanders while the user is operating, in this embodiment, when the client (for example, a camera) recognizes that the current identity information to be confirmed has changed, for example when another person's face appears in front of the information entry interface, the current information entry interface of the client is closed, so that the information is protected from tampering or leakage.
In another embodiment, the intelligent data interaction device 100 may further include the following modules:
and when the client identifies that the current identity information to be confirmed is changed, the operation data in the data pool are cleared.
In order to further improve information security, in this embodiment, when the client (for example, a camera) recognizes that the current identity information to be confirmed has changed, the operation data in the data pool is cleared, preventing the system from being forcibly intruded and the information from being tampered with or leaked.
In another embodiment, consider that most intelligent answering terminals currently can only hold a dialogue with the user by matching the user's question against answers preset in the database and feeding back the answer with the highest similarity. As the service volume increases, the query efficiency of the database drops, the feedback delay of the dialogue system grows, and the user experience suffers. Therefore, in order to increase the query efficiency of the database and reduce the feedback delay of the dialogue system, the intelligent data interaction device 100 may further include the following modules:
Acquiring first voice data of a user to be confirmed through the client, and converting the first voice data by using a preset voice conversion algorithm to obtain text data;
inputting the obtained text data into a pre-trained vector extraction model, outputting text sentence vectors corresponding to the text data, inputting the text sentence vectors into a pre-trained intention recognition model, and outputting intention types corresponding to the text sentence vectors;
according to the intention type, a corresponding preset answering system is found from a mapping relation table which is pre-created in the database and is formed by the intention type and the preset answering system to serve as a target answering system, and text data corresponding to the intention type is issued to the target answering system; and
And respectively calculating the text data and each preset answer stored in the target answer system in advance by using a preset similarity calculation rule to calculate a similarity value, finding out the preset answer corresponding to the maximum similarity value as a target answer, converting the target answer into second voice data, and feeding the second voice data back to the client.
In this embodiment, after obtaining the first voice data uploaded by the client, the electronic device 1 converts the first voice data into text data using a preset voice conversion algorithm.
The conversion of the first voice data into text data may be realized with a dynamic time warping (Dynamic Time Warping, DTW) model. In other embodiments, the text data may be obtained with other speech recognition models, such as a BLSTM model or an LSTM model. However, the DTW model usually requires a large number of training samples before it can convert the first voice data, so in this embodiment the DTW model may be trained in advance. The specific training process includes: collecting in advance a preset number of first voice data samples and the text data sample corresponding to each; for each first voice data sample, inputting it into the preset DTW model to obtain the corresponding text data; then comparing the obtained text data with the text data sample corresponding to that first voice data sample, and adjusting the DTW model according to the comparison result. A DTW model trained on a large amount of voice sample information can accurately convert the first voice data into the corresponding text data.
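The alignment at the heart of DTW can be illustrated with the classic dynamic-programming recurrence; this is a toy over 1-D feature sequences, not a full speech front end, and the function name and cost function are assumptions:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping cost between two feature sequences: the
    minimum total frame-to-frame distance over all monotonic alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of the three allowed predecessor cells.
            cost[i][j] = step + min(cost[i - 1][j],
                                    cost[i][j - 1],
                                    cost[i - 1][j - 1])
    return cost[n][m]

# A time-stretched copy of the same "utterance" aligns at zero cost.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This tolerance to stretching is why DTW can match the same words spoken at different speeds against a stored template.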
After the text data is obtained, the text data obtained by converting the first voice data is input into a pre-trained vector extraction model, a text sentence vector corresponding to the text data is output, the text sentence vector is input into a pre-trained intention recognition model, and an intention type corresponding to the text sentence vector is output.
The vector extraction model is trained from a BERT (Bidirectional Encoder Representations from Transformers) model and comprises:
a vector conversion layer, configured to add the representation vector of a special symbol to the input current sentence vector sequence, and to convert the representation vector and each sentence vector contained in the current sentence vector sequence according to their position information, to form a vector sequence;
a Transformer encoder layer, configured to encode the vector sequence to obtain a special-symbol encoded vector and the sentence encoded vectors respectively corresponding to each sentence vector contained in the current sentence vector sequence; and
a supervised attention layer based on the Transformer encoder, configured to determine the dot product between the query vector and the key vector of the special-symbol encoded vector as the supervised attention of the special-symbol encoded vector, to determine, for each sentence encoded vector, the dot product between the key vector of that sentence encoded vector and the query vector of the special-symbol encoded vector as the supervised attention of that sentence encoded vector, and to obtain the text sentence vector from the special-symbol encoded vector, each sentence encoded vector, and their supervised attentions.
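A bare-bones numeric sketch of the attention pooling just described, assuming identity query/key projections (real BERT layers learn these projections; the function name and example vectors are illustrative):

```python
import math

def attention_pool(cls_vec, sentence_vecs):
    """Weight the special-symbol encoding and each sentence encoding by the
    softmax of its dot product with the special-symbol encoding, and return
    the weighted sum as the text sentence vector."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    vecs = [cls_vec] + list(sentence_vecs)
    scores = [dot(cls_vec, v) for v in vecs]      # the "supervised attentions"
    peak = max(scores)                            # subtract max for stability
    weights = [math.exp(s - peak) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(cls_vec)
    return [sum(w * v[i] for w, v in zip(weights, vecs)) for i in range(dim)]

# Sentence vectors aligned with the special symbol receive higher weight.
pooled = attention_pool([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The output is a convex combination of the encodings, so it lives in the same space and can be fed directly to the downstream intent classifier.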
The intent recognition model is trained from a CNN (Convolutional Neural Network) model, which includes at least one convolutional layer, at least one pooling layer, and at least one fully-connected layer.
Through the BERT model and the CNN model, the intention type represented by the text data can be rapidly identified.
After the intention type represented by the text data is identified, the user's intention can be preliminarily judged: the preset answering system corresponding to the intention type is found, as the target answering system, from a mapping relation table between intention types and preset answering systems (such as a chat system, an intelligent answering system, and a business handling system) created in advance in the database, and the text data corresponding to the intention type is issued to the target answering system for further answering. For example, if the user simply wants to know the answer to a general question such as "Who is Zhang X?" (the intention type is chat), the text data sent by the user's client is routed to the chat system, and the reply "Zhang X is xxxx" is fed back to the client. For a highly specialized question such as "What drug treats heart disease?" (the intention type is intelligent answering), the corresponding keywords are "heart disease" and "drug", the text data sent by the user's client is routed to the intelligent answering system, and the answer "the drug for treating heart disease is xxxx" is fed back to the client. If the user wants to transact business, for example "set an alarm clock for the meeting at 8 am" (the intention type is business transacting), the corresponding keywords are "8 am" and "meeting", the text data sent by the user's client is routed to the business handling system, and the reply "the alarm clock for today's 8 am meeting has been set" is fed back to the client. Because each target answering system corresponds to one intention type, targeted answers can be provided for the questions posed by the user, which improves answering accuracy, improves the query efficiency of the database, reduces the feedback delay of the dialogue system, and improves the user experience.
Using a predetermined similarity calculation rule, a similarity value is calculated between the text data and each preset answer stored in advance in the target answering system; the preset answer corresponding to the maximum similarity value is taken as the target answer, converted into second voice data, and fed back to the client.
Specifically, first, the score of each first word in the text data is calculated with a predetermined scoring algorithm, all the first words are sorted by score, and a preset number of first words are selected from the text data, in descending order of score, as first keywords;
then, the score of each second word in the preset answer is calculated with the same scoring algorithm, all the second words are sorted by score, and a preset number of second words are selected from the preset answer, in descending order of score, as second keywords;
wherein the scoring algorithm (the TextRank iteration) is:

$$S(V_i) = (1 - d) + d \times \sum_{V_j \in In(V_i)} \frac{w_{ji}}{\sum_{V_k \in Out(V_j)} w_{jk}} \, S(V_j)$$

where $V_i$ and $V_j$ represent word nodes extracted from the text data or a preset answer, $S(V_i)$ and $S(V_j)$ respectively represent the scores of word nodes $V_i$ and $V_j$, $w_{ji}$ represents the weight of the edge between the two word nodes $V_j$ and $V_i$, $w_{jk}$ represents the weight of the edge between the two word nodes $V_j$ and $V_k$, $In(V_i)$ represents the set of word nodes pointing to word node $V_i$, $Out(V_j)$ represents the set of nodes that word node $V_j$ points to, and $d$ represents the damping coefficient.
Specifically, each word in the text data is treated as a node in the formula. Each sentence in the text data is segmented and part-of-speech tagged, and only words with specified parts of speech (such as nouns, verbs, and adjectives) are retained. A candidate keyword graph G = (V, E) is constructed, where V consists of the retained words with specified parts of speech; edges are then constructed between words using the co-occurrence relation (Co-Occurrence), so that an edge exists between two words only when they co-occur within a window of length K, where K represents the window size. According to the formula, the initial weight of each edge between nodes is set to 1, the scores of the designated words are calculated by iterating the weight propagation, the calculated scores are ranked in descending order, and the words whose scores rank in the top ten may be selected as keywords. Alternatively, the voting principle may be used, with the words connected by edges voting for each other; the number of votes obtained by each word tends to stabilize after repeated iterations, the words are then sorted by vote count in descending order, and the words whose vote counts rank in the top six may be selected as keywords. The obtained keywords are marked in the original text data, and adjacent keywords are merged into multi-word keywords.
Then, using the predetermined similarity calculation rule, a similarity value is calculated between the text data and each preset answer stored in advance in the target answering system.
The similarity value calculation rule adopts the Jaccard similarity coefficient:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

where $A$ represents the first word set composed of all first keywords in the text data, $B$ represents the second word set composed of all second keywords in each preset answer, $J(A, B)$ is the Jaccard similarity coefficient between the text data and the preset answer, $|A \cap B|$ represents the total number of identical keywords shared by the first word set and the second word set, and $|A \cup B|$ represents the total number of all keywords in the union of the first word set and the second word set.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the intelligent data interaction method according to the present invention.
The electronic device 1 may comprise a processor 12, a memory 11 and a bus, and may further comprise a computer program, such as an intelligent data interaction program 10, stored in the memory 11 and executable on the processor 12.
Wherein the memory 11 comprises at least one type of readable storage medium. The computer-usable storage medium may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application program required by a function, and the like; the data storage area may store data created from the use of blockchain nodes, and the like. The readable storage medium includes flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the intelligent data interaction program 10, but also for temporarily storing data that has been output or is to be output.
The processor 12 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 12 is a Control Unit (Control Unit) of the electronic device, connects respective components of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device 1 and processes data by running or executing programs or modules (e.g., intelligent data interaction programs, etc.) stored in the memory 11, and calling data stored in the memory 11.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 12 etc.
Fig. 3 shows only an electronic device with certain components; it should be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further comprise a power source (such as a battery) for powering the respective components, and the power source may be logically connected to the at least one processor 12 through a power management device, so as to perform functions of charge management, discharge management, and power consumption management through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may further comprise a network interface 13, optionally the network interface 13 may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The intelligent data interaction program 10 stored in the memory 11 of the electronic device 1 is a combination of instructions which, when executed in the processor 12, may implement:
the creation step: creating in advance in a database a preset number of data pools capable of storing data for a short time;
the identification step: acquiring a face image of a user to be identified through a client, and inputting the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image;
the confirmation step: comparing the identity information to be confirmed with preset identity information in the database, and if preset identity information matching the identity information to be confirmed exists in the database, displaying an information entry interface on the client and establishing a data transmission channel between the client and the data pool;
the entry step: acquiring the operation data entered in the corresponding fields on the information entry interface, storing the operation data in the data pool, and extracting the title words and preset words corresponding to the individual fields to construct query sentences, each query sentence corresponding to one piece of operation data; and
the feedback step: receiving in real time, through the client, a query request containing a query sentence initiated by the user to be identified, querying the corresponding operation data from the data pool according to the query sentence, and feeding the operation data back to the client.
Specifically, for the implementation of the above instructions by the processor 12, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated here.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names, not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (9)
1. An intelligent data interaction method applied to electronic equipment is characterized by comprising the following steps:
a creation step: pre-establishing, in a database, a preset number of data pools capable of storing data for a short time;
an identification step: acquiring a face image of a user to be identified through a client, and inputting the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image;
a confirmation step: comparing the identity information to be confirmed with the preset identity information in the database and, if the database contains preset identity information matching the identity information to be confirmed, displaying an information input interface on the client and establishing a data transmission channel between the client and the data pool;
an input step: acquiring the operation data entered in the corresponding columns of the information input interface, storing the operation data in the data pool, and extracting the title words and preset words corresponding to each column to construct query sentences, wherein each query sentence corresponds to one piece of operation data; and
a feedback step: receiving, in real time through the client, a query request containing a query sentence initiated by the user to be identified, querying the corresponding operation data from the data pool according to the query sentence, and feeding the operation data back to the client;
the method further comprises the steps of:
acquiring first voice data of the user to be confirmed through the client, and converting the first voice data into text data by using a preset voice conversion algorithm;
inputting the obtained text data into a pre-trained vector extraction model, outputting text sentence vectors corresponding to the text data, inputting the text sentence vectors into a pre-trained intention recognition model, and outputting intention types corresponding to the text sentence vectors;
finding, according to the intention type, the corresponding preset answering system as the target answering system from a mapping relation table of intention types and preset answering systems pre-created in the database, and issuing the text data corresponding to the intention type to the target answering system; and
calculating, by using a preset similarity calculation rule, a similarity value between the text data and each preset answer pre-stored in the target answering system, taking the preset answer corresponding to the maximum similarity value as the target answer, converting the target answer into second voice data, and feeding the second voice data back to the client.
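The similarity comparison in the feedback path above can be sketched as follows. The claim only requires "a preset similarity calculation rule"; the cosine measure, the vector representation of answers, and the `pick_target_answer` helper are illustrative assumptions, not the patented rule itself.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length numeric vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_target_answer(query_vec, preset_answers):
    # preset_answers: list of (answer_text, answer_vec) pairs pre-stored
    # in the answering system; returns the answer with the maximum
    # similarity value, mirroring the selection step of the claim.
    best_text, best_score = None, -1.0
    for text, vec in preset_answers:
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_text, best_score = text, score
    return best_text, best_score
```

The selected text would then be passed to a text-to-speech step to produce the "second voice data" the claim feeds back to the client.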
2. The intelligent data interaction method according to claim 1, further comprising the steps of:
when the client identifies that the identity information to be confirmed has changed, closing the current information input interface of the client.
3. The intelligent data interaction method according to claim 2, further comprising the steps of:
when the client identifies that the identity information to be confirmed has changed, clearing the operation data in the data pool.
4. The intelligent data interaction method of claim 1, wherein the training process of the identification model comprises:
acquiring face image samples and assigning unique identity information to be confirmed to each face image sample;
dividing the face image samples into a training set and a verification set according to a preset proportion, wherein the number of the face image samples in the training set is larger than that of the face image samples in the verification set;
inputting the face image samples in the training set into the identity recognition model for training, and verifying the identity recognition model with the verification set every preset period, using each face image sample in the verification set and its corresponding identity information to be confirmed to check the accuracy of the model; and
when the verification accuracy is greater than a preset threshold, ending the training to obtain the trained identity recognition model.
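The training procedure of claim 4 (proportional split, periodic verification, accuracy threshold) can be sketched roughly as below; the `fit_epoch`/`evaluate` model interface, the 80/20 ratio, and the stopping parameters are hypothetical stand-ins, since the claim does not specify them.

```python
import random

def split_samples(samples, train_ratio=0.8):
    # Divide labelled face-image samples into a training set and a smaller
    # verification set according to a preset proportion (train > verify).
    shuffled = samples[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def train_until_accurate(model, samples, threshold=0.95, period=5, max_epochs=100):
    # Train epoch by epoch; every `period` epochs, verify the model on the
    # verification set and stop once accuracy exceeds the preset threshold.
    train_set, val_set = split_samples(samples)
    for epoch in range(max_epochs):
        model.fit_epoch(train_set)
        if (epoch + 1) % period == 0:
            acc = model.evaluate(val_set)
            if acc > threshold:
                break
    return model
```

`max_epochs` is a safety cap the claim does not mention; without it, a model that never reaches the threshold would train forever.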
5. The intelligent data interaction method of claim 1, wherein the vector extraction model comprises:
a vector conversion layer configured to prepend a representation vector of a special symbol to the input current sentence vector sequence, and to convert the representation vector and each sentence vector in the current sentence vector sequence according to their position information to form a vector sequence;
a Transformer encoder layer configured to encode the vector sequence to obtain a special symbol encoded vector and sentence encoded vectors respectively corresponding to each sentence vector in the current sentence vector sequence; and
a supervised attention layer based on the Transformer encoder, configured to determine the dot product between the query vector and the key vector of the special symbol encoded vector as the supervised attention of the special symbol encoded vector, determine, for each sentence encoded vector, the dot product between the key vector of that sentence encoded vector and the query vector of the special symbol encoded vector as the supervised attention of that sentence encoded vector, and obtain the text sentence vector from the special symbol encoded vector, each sentence encoded vector, and their supervised attention.
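The supervised attention of claim 5 — dot products between the special symbol's query vector and the key vectors of the special symbol and of each sentence encoding — might be sketched as follows. The softmax normalisation, the callable query/key projections, and the weighted-sum pooling are assumptions beyond the claim wording.

```python
import math

def supervised_attention(cls_vec, sent_vecs, proj_q, proj_k):
    # cls_vec: encoded vector of the special symbol (a [CLS]-style token);
    # sent_vecs: list of sentence encoded vectors of the same dimension;
    # proj_q / proj_k: callables mapping an encoding to its query / key
    # vector (e.g. learned linear layers; identity works for a demo).
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    q_cls = proj_q(cls_vec)                      # query of the special symbol
    # Attention score of the special symbol, then of each sentence vector.
    scores = [dot(q_cls, proj_k(cls_vec))]
    scores += [dot(q_cls, proj_k(v)) for v in sent_vecs]
    # Softmax normalisation (numerically stabilised).
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted sum of the special-symbol and sentence encoded vectors
    # yields the pooled text sentence vector.
    vectors = [cls_vec] + list(sent_vecs)
    return [sum(w * v[i] for w, v in zip(weights, vectors))
            for i in range(len(cls_vec))]
```

With identity projections and unit vectors, the output is a convex combination of the inputs, weighted toward whichever encoding aligns best with the special symbol's query.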
6. An intelligent data interaction device, characterized in that the intelligent data interaction device comprises:
a creation module configured to pre-establish, in a database, a preset number of data pools capable of storing data for a short time;
an identification module configured to acquire a face image of a user to be identified through a client, and input the face image into a pre-trained identity recognition model to identify the identity information to be confirmed of the face image;
a confirmation module configured to compare the identity information to be confirmed with the preset identity information in the database and, if the database contains preset identity information matching the identity information to be confirmed, display an information input interface on the client and establish a data transmission channel between the client and the data pool;
an input module configured to acquire the operation data entered in the corresponding columns of the information input interface, store the operation data in the data pool, and extract the title words and preset words corresponding to each column to construct query sentences, wherein each query sentence corresponds to one piece of operation data; and
a feedback module configured to receive, in real time through the client, a query request containing a query sentence initiated by the user to be identified, query the corresponding operation data from the data pool according to the query sentence, and feed the operation data back to the client;
the device further comprises a conversion module configured to: acquire first voice data of the user to be confirmed through the client, and convert the first voice data into text data by using a preset voice conversion algorithm; input the obtained text data into a pre-trained vector extraction model, output the text sentence vector corresponding to the text data, input the text sentence vector into a pre-trained intention recognition model, and output the intention type corresponding to the text sentence vector; find, according to the intention type, the corresponding preset answering system as the target answering system from a mapping relation table of intention types and preset answering systems pre-created in the database, and issue the text data corresponding to the intention type to the target answering system; and calculate, by using a preset similarity calculation rule, a similarity value between the text data and each preset answer pre-stored in the target answering system, take the preset answer corresponding to the maximum similarity value as the target answer, convert the target answer into second voice data, and feed the second voice data back to the client.
7. The intelligent data interaction device of claim 6, further comprising a closing module configured to close the current information input interface of the client when the client identifies that the identity information to be confirmed has changed.
8. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the intelligent data interaction method of any of claims 1 to 5.
9. A computer readable storage medium having stored thereon an intelligent data interaction program executable by one or more processors to implement the steps of the intelligent data interaction method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010389172.8A CN111581623B (en) | 2020-05-09 | 2020-05-09 | Intelligent data interaction method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111581623A CN111581623A (en) | 2020-08-25 |
CN111581623B true CN111581623B (en) | 2023-12-19 |
Family
ID=72110785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010389172.8A Active CN111581623B (en) | 2020-05-09 | 2020-05-09 | Intelligent data interaction method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111581623B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112069230B (en) * | 2020-09-07 | 2023-10-27 | 中国平安财产保险股份有限公司 | Data analysis method, device, equipment and storage medium |
CN113609335B (en) * | 2021-08-12 | 2023-02-03 | 北京滴普科技有限公司 | Target object searching method, system, electronic equipment and storage medium |
CN113849795A (en) * | 2021-10-18 | 2021-12-28 | 深圳追一科技有限公司 | Digital human interaction method and device, electronic equipment and computer storage medium |
CN116405300B (en) * | 2023-04-18 | 2024-01-23 | 无锡锡商银行股份有限公司 | Scene-based online protocol signing security analysis system and method |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0628077A (en) * | 1992-07-07 | 1994-02-04 | Fuji Xerox Co Ltd | Input supporting system |
CN106250369A (en) * | 2016-07-28 | 2016-12-21 | 海信集团有限公司 | voice interactive method, device and terminal |
CN108427722A (en) * | 2018-02-09 | 2018-08-21 | 卫盈联信息技术(深圳)有限公司 | intelligent interactive method, electronic device and storage medium |
CN108491709A (en) * | 2018-03-21 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | The method and apparatus of permission for identification |
CN108564036A (en) * | 2018-04-13 | 2018-09-21 | 上海思依暄机器人科技股份有限公司 | A kind of method for judging identity, device and Cloud Server based on recognition of face |
CN109117801A (en) * | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Method, apparatus, terminal and the computer readable storage medium of recognition of face |
CN110020382A (en) * | 2018-03-29 | 2019-07-16 | 中国平安财产保险股份有限公司 | Intelligent input method, user equipment, storage medium and the device of information |
WO2020024455A1 (en) * | 2018-08-01 | 2020-02-06 | 平安科技(深圳)有限公司 | Context-based input method, apparatus, storage medium, and computer device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7647312B2 (en) * | 2005-05-12 | 2010-01-12 | Microsoft Corporation | System and method for automatic generation of suggested inline search terms |
CN107767869B (en) * | 2017-09-26 | 2021-03-12 | 百度在线网络技术(北京)有限公司 | Method and apparatus for providing voice service |
CN109036425B (en) * | 2018-09-10 | 2019-12-24 | 百度在线网络技术(北京)有限公司 | Method and device for operating intelligent terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111581623B (en) | Intelligent data interaction method and device, electronic equipment and storage medium | |
CN111708873A (en) | Intelligent question answering method and device, computer equipment and storage medium | |
CN113688221B (en) | Model-based conversation recommendation method, device, computer equipment and storage medium | |
WO2021218028A1 (en) | Artificial intelligence-based interview content refining method, apparatus and device, and medium | |
CN114461777B (en) | Intelligent question-answering method, device, equipment and storage medium | |
CN111695354A (en) | Text question-answering method and device based on named entity and readable storage medium | |
WO2021204017A1 (en) | Text intent recognition method and apparatus, and related device | |
CN112988963B (en) | User intention prediction method, device, equipment and medium based on multi-flow nodes | |
CN113627797B (en) | Method, device, computer equipment and storage medium for generating staff member portrait | |
CN113821622B (en) | Answer retrieval method and device based on artificial intelligence, electronic equipment and medium | |
CN111694937A (en) | Interviewing method and device based on artificial intelligence, computer equipment and storage medium | |
CN112395391B (en) | Concept graph construction method, device, computer equipment and storage medium | |
CN113111159A (en) | Question and answer record generation method and device, electronic equipment and storage medium | |
CN116450829A (en) | Medical text classification method, device, equipment and medium | |
CN117648982A (en) | Question-answer model-based answer generation method and device, electronic equipment and storage medium | |
CN113821587A (en) | Text relevance determination method, model training method, device and storage medium | |
CN113254814A (en) | Network course video labeling method and device, electronic equipment and medium | |
CN116881446A (en) | Semantic classification method, device, equipment and storage medium thereof | |
CN114528851B (en) | Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium | |
CN110765245A (en) | Emotion positive and negative judgment method, device and equipment based on big data and storage medium | |
CN115238077A (en) | Text analysis method, device and equipment based on artificial intelligence and storage medium | |
CN113268953A (en) | Text key word extraction method and device, computer equipment and storage medium | |
CN113870478A (en) | Rapid number-taking method and device, electronic equipment and storage medium | |
CN113010664A (en) | Data processing method and device and computer equipment | |
CN111680513B (en) | Feature information identification method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||