CN113724882A - Method, apparatus, device and medium for constructing user portrait based on inquiry session - Google Patents
- Publication number: CN113724882A
- Application number: CN202111005960.3A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for mining of medical data, e.g. analysing previous cases of other patients
- G16H50/30: ICT specially adapted for medical diagnosis, for calculating health indices or individual health risk assessment
- G06F16/35: Information retrieval of unstructured textual data; clustering and classification
- G06F40/216: Natural language analysis; parsing using statistical methods
- G06F40/289: Natural language analysis; phrasal analysis, e.g. finite state techniques or chunking
- G06N3/044: Neural network architectures; recurrent networks, e.g. Hopfield networks
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
Abstract
The application discloses a method, an apparatus, a device and a medium for constructing a user portrait based on an inquiry session, in the technical field of machine learning. The method comprises the following steps: acquiring chief complaint information input by a user; performing feature extraction on the chief complaint information to obtain a first feature vector matrix; inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, where the prediction network model is trained on a first data set comprising a plurality of medical inquiry samples, each containing an inquiry question and a corresponding disease; presenting the inquiry question to the user to obtain inquiry information input by the user; and constructing the user portrait from the inquiry information. By recognizing the chief complaint information input by the user and obtaining the matching inquiry questions through the prediction network model, the user is inquired quickly and automatically, which improves the efficiency of collecting inquiry information and reduces labor cost.
Description
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a method, an apparatus, a device, and a medium for constructing a user portrait based on an inquiry session.
Background
In related-art methods for generating a user portrait from medical data, inquiry information between doctor and patient is mostly collected through manual online inquiry; after word segmentation, screening and recognition of that information, a user portrait of the patient is constructed from the recognition result. However, collecting inquiry information by manual online inquiry and constructing the user portrait in this way is inefficient and has a high labor cost.
Disclosure of Invention
The present application aims to solve at least one of the problems in the related art. It therefore provides a method, an apparatus, a device and a medium for constructing a user portrait based on an inquiry session, which can improve the efficiency of constructing the user portrait and reduce labor cost.
A method of constructing a user portrait based on an inquiry session according to an embodiment of the first aspect of the application comprises: obtaining chief complaint information input by a user, the chief complaint information being the user's disease description information; performing feature extraction on the chief complaint information to obtain a first feature vector matrix; inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, where the prediction network model is trained on a first data set comprising a plurality of medical inquiry samples, each containing an inquiry question and a corresponding disease; presenting the inquiry question to the user to obtain inquiry information input by the user; and constructing a user portrait from the inquiry information.
The method of constructing a user portrait based on an inquiry session in the embodiments of the application has at least the following beneficial effects: the chief complaint information input by the user is recognized, feature extraction is performed on it, and the result is input into the prediction network model to obtain the inquiry questions matching the chief complaint information; the user is then inquired automatically and quickly with these questions to obtain the user's inquiry information, from which the user portrait is constructed. This improves the efficiency of constructing the user portrait, avoids manual questioning when collecting inquiry information, and saves labor cost. The constructed user portrait also makes it convenient to subsequently select a doctor in the treatment field matching the user's current disease for further inquiry, and items can be recommended to the user more accurately.
According to some embodiments of the present application, performing feature extraction on the chief complaint information to obtain a first feature vector matrix includes: performing natural language preprocessing on the chief complaint information to obtain a plurality of first word segments; inputting the first word segments into a pre-trained word vector model to obtain a plurality of first word vectors; and combining the plurality of first word vectors to obtain the first feature vector matrix.
According to some embodiments of the present application, performing natural language preprocessing on the chief complaint information to obtain a plurality of first word segments includes: performing traditional-to-simplified Chinese conversion on the chief complaint information to obtain simplified text; performing word segmentation on the simplified text to obtain pre-segmentation information; performing stop-word removal on the pre-segmentation information to obtain word segmentation information; and performing synonym normalization on the word segmentation information to obtain the plurality of first word segments.
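The four preprocessing steps above can be sketched as a small pipeline. This is a minimal illustration, not the patent's implementation: the character map, lexicon, stop-word list and synonym table below are all toy examples, and the segmenter is a simple greedy forward-maximum-matching routine standing in for a real tool such as jieba or OpenCC.

```python
# Hypothetical traditional->simplified character map (step 1).
T2S = {"兒": "儿", "歲": "岁"}

# Hypothetical segmentation lexicon, stop-word list and synonym table.
LEXICON = {"腹痛", "挑食", "孩子", "有点"}
STOPWORDS = {"的", "了", "啊"}
SYNONYMS = {"肚子疼": "腹痛"}  # normalize synonyms to a canonical form

def to_simplified(text: str) -> str:
    """Step 1: traditional-to-simplified character conversion."""
    return "".join(T2S.get(ch, ch) for ch in text)

def segment(text: str, lexicon) -> list:
    """Step 2: greedy forward maximum matching over the lexicon."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + 4), i, -1):  # longest match first
            if text[i:j] in lexicon:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit as-is
            i += 1
    return tokens

def preprocess(text: str) -> list:
    simp = to_simplified(text)
    tokens = segment(simp, LEXICON | set(SYNONYMS))
    tokens = [t for t in tokens if t not in STOPWORDS]   # step 3: stop words
    return [SYNONYMS.get(t, t) for t in tokens]          # step 4: synonyms
```

For example, `preprocess("孩子有点挑食的")` segments the text, drops the stop word, and keeps the three content segments, while `preprocess("肚子疼")` maps the colloquial phrase to its canonical synonym.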
According to some embodiments of the present application, inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information includes: inputting the first feature vector matrix into a bidirectional recurrent neural network for feature fusion to obtain a first fused feature vector matrix; inputting the first fused feature vector matrix into a highway network for deep processing to obtain a first depth feature vector matrix; performing feature extraction on the first depth feature vector matrix through a convolutional neural network to obtain a first vector; and obtaining the inquiry question matched with the chief complaint information from the first vector.
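The least familiar component in this pipeline is the highway network (rendered in places as "high-speed neural network"). A single highway layer can be sketched in numpy as follows; the weights here are random placeholders, not trained parameters, and the strongly negative gate bias in the demo is only an initialization trick, not part of the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = T * H(x) + (1 - T) * x.

    H is a plain nonlinear transform; T is the 'transform gate' deciding
    how much of H(x) to use versus carrying x through unchanged, which is
    what keeps gradients flowing when many such layers are stacked.
    """
    H = np.tanh(x @ W_h + b_h)       # candidate transform
    T = sigmoid(x @ W_t + b_t)       # transform gate in (0, 1)
    return T * H + (1.0 - T) * x     # gated mix; all shapes (n, d)

# With a strongly negative gate bias the layer starts out near-identity.
rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=(2, d))
W_h, W_t = rng.normal(size=(d, d)), rng.normal(size=(d, d))
y = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -20.0))
```

With the gate bias at -20 the transform gate is almost closed, so the output is numerically close to the input, illustrating the carry path that eases training of deep stacks.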
According to some embodiments of the present application, the presenting the inquiry questions to the user to obtain the inquiry information input by the user comprises: presenting the inquiry questions and structured answer options to the user; and obtaining the inquiry information input by the user according to the answer options input by the user.
According to some embodiments of the present application, the method of constructing a user portrait based on an inquiry session further comprises: acquiring session information from a manual inquiry of the user; performing feature extraction on the session information to obtain a second feature vector matrix; inputting the second feature vector matrix into a label extraction network model to obtain a health class label of the user, where the label extraction network model is trained on a second data set comprising a plurality of session information items and their corresponding health class labels; and updating the user portrait according to the health class label.
According to some embodiments of the present application, inputting the second feature vector matrix into a label extraction network model to obtain the health class label of the user includes: inputting the second feature vector matrix into a bidirectional long short-term memory network for sequence labeling to obtain label score probabilities; and performing label sequence correction on the label score probabilities through a conditional random field to obtain the health class label of the user.
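The CRF correction step above can be illustrated with Viterbi decoding: given per-token scores (as a BiLSTM would emit) and a tag-transition score matrix, it finds the globally best tag sequence. The tag set and all scores below are invented for illustration; a real system would use trained parameters.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most likely tag sequence given per-token emission scores and
    pairwise transition scores (the CRF's label-sequence correction).

    emissions:   (seq_len, n_tags)  score of tag t at position i
    transitions: (n_tags, n_tags)   score of moving from tag a to tag b
    """
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()                  # best score ending in each tag
    back = np.zeros((seq_len, n_tags), dtype=int)
    for i in range(1, seq_len):
        # score[a] + transitions[a, b] + emissions[i, b] for all pairs (a, b)
        total = score[:, None] + transitions + emissions[i][None, :]
        back[i] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Walk the backpointers from the best final tag.
    path = [int(score.argmax())]
    for i in range(seq_len - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

# Toy tags (0=O, 1=B-SYM, 2=I-SYM): the transition matrix heavily
# penalizes O -> I-SYM, so the CRF repairs a locally tempting but
# structurally invalid tag sequence.
emissions = np.array([[0.1, 2.0, 0.0],
                      [0.2, 0.0, 1.8],
                      [1.5, 0.1, 1.4]])
transitions = np.array([[0.0, 0.0, -10.0],   # from O
                        [0.0, 0.0,   1.0],   # from B-SYM
                        [0.0, 0.0,   1.0]])  # from I-SYM
path = viterbi_decode(emissions, transitions)
```

Taking each position's highest emission independently would end the sequence with O, but the transition scores make continuing the symptom span the better global path, which is exactly the "label sequence correction" the CRF contributes.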
An apparatus for constructing a user portrait based on an inquiry session according to the second aspect of embodiments of the present application comprises: an information acquisition module for acquiring the chief complaint information input by a user, the chief complaint information being the user's disease description information; a feature extraction module for performing feature extraction on the chief complaint information to obtain a first feature vector matrix; a prediction module for inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, the prediction network model being trained on a first data set comprising a plurality of medical inquiry samples, each containing an inquiry question and a corresponding disease; an inquiry module for presenting the inquiry question to the user to obtain inquiry information input by the user; and a portrait construction module for constructing a user portrait from the inquiry information.
An electronic device according to an embodiment of a third aspect of the present application includes: at least one memory; at least one processor; at least one program; the programs are stored in the memory, and the processor executes the at least one program to implement: a method as in the embodiment of the first aspect above.
A storage medium according to an embodiment of a fourth aspect of the present application is a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform: a method as in the embodiment of the first aspect above.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The present application is further described with reference to the following figures and examples, in which:
FIG. 1 is a flow chart of a method for constructing a user representation based on an interrogation session in an embodiment of the present application;
FIG. 2 is a flowchart of obtaining a first feature vector matrix in an embodiment of the present application;
FIG. 3 is a diagram illustrating a training word vector model according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating natural language preprocessing performed according to an embodiment of the present application;
FIG. 5 is a flow chart of an embodiment of the present application for obtaining an interrogation question;
FIG. 6 is a flow chart of obtaining interrogation information in an embodiment of the application;
FIG. 7 is a flow diagram of a method for constructing a user representation based on an interrogation session according to another embodiment of the present application;
FIG. 8 is a flowchart of obtaining a health class label of a user in an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
First, several terms referred to in the present application are explained:
Natural Language Processing (NLP): NLP uses computers to process, understand and use human languages (such as Chinese or English). It is a branch of artificial intelligence and an interdisciplinary field between computer science and linguistics, often called computational linguistics. Natural language processing includes syntactic parsing, semantic analysis and discourse understanding, and is commonly used in machine translation, character recognition of handwriting and print, speech recognition and text-to-speech conversion, information retrieval, information extraction and filtering, text classification and clustering, and public opinion analysis and opinion mining. It draws on data mining, machine learning, knowledge acquisition, knowledge engineering and artificial intelligence research, as well as linguistic research related to language computation.
Word2Vec: a tool for training word vectors. Word2Vec assumes that words frequently co-occurring in a sentence are relatively similar, i.e., for a central word it maximizes the probability of its surrounding context words. Word2Vec is trained with a three-layer network whose output layer uses a Huffman tree.
GloVe: another tool for training word vectors, based on co-occurrence counting. A word-word co-occurrence matrix is first constructed, counting how often each pair of words appears together within a context window across the corpus. Because this matrix is very high-dimensional and sparse, dimensionality reduction is applied, i.e., word vectors are fit to the co-occurrence statistics.
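The co-occurrence counting that GloVe-style training starts from can be sketched as follows. This shows only the counting step on a toy corpus; real GloVe additionally weights counts by inverse distance and fits vectors with a weighted least-squares objective, both omitted here.

```python
import numpy as np

def cooccurrence_matrix(sentences, window=2):
    """Count how often each word pair appears within `window` tokens of
    each other; GloVe fits word vectors to (the log of) these counts."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    X[index[w], index[s[j]]] += 1   # symmetric count
    return vocab, X

# Toy segmented corpus (English stand-ins for segmented inquiry text).
sentences = [["child", "picky", "eating"], ["picky", "eating", "anorexia"]]
vocab, X = cooccurrence_matrix(sentences, window=2)
```

Pairs that appear together often get larger counts ("picky"/"eating" co-occurs in both sentences), and the matrix is symmetric because the context window is symmetric.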
Long Short-Term Memory network (LSTM): can learn long-term dependencies. It preserves error signals at a relatively constant level as they are back-propagated through time and layers, so the recurrent network can learn across many time steps and establish long-range causal connections. This makes it well suited to modeling sequential data such as text.
Bidirectional Long Short-Term Memory network (BiLSTM): formed by combining a forward LSTM and a backward LSTM. It can use both past and future information, so its final predictions are more accurate than those of a unidirectional LSTM.
Conditional Random Field (CRF): a discriminative probabilistic model over random fields, commonly used for labeling or parsing sequence data, and often applied to lexical-analysis tasks such as Chinese word segmentation and part-of-speech tagging.
Bidirectional Recurrent Neural Network (BRNN): formed by stacking two unidirectional recurrent neural networks. The output is determined jointly by the states of both networks, so the output at the current moment depends on both earlier and later states.
Highway Network (also rendered here as "high-speed neural network"): adds a gate structure to the network, which alleviates the training difficulty and blocked gradient flow caused by increasing network depth.
Convolutional Neural Network (CNN): has feature-learning capability and can perform shift-invariant classification of its input according to its hierarchical structure. It can be applied in both supervised and unsupervised learning.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Based on this, embodiments of the present application provide a method, an apparatus, a device and a medium for constructing a user portrait based on an inquiry session, which can improve the efficiency and accuracy of constructing a user portrait. The method relates to the technical field of machine learning and can run on a terminal, on a server, or as software running in either. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, smart watch, or the like; the server may be an independent physical server, a server cluster or distributed system of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (content delivery network), and big data and artificial intelligence platforms; the software may be, for example, an application implementing the method, but is not limited to the above forms.
Referring to fig. 1, which shows an optional flowchart of a method for constructing a user portrait based on an inquiry session in an embodiment of the present application, the method in fig. 1 may include, but is not limited to, steps S100 to S500.
S100, obtaining the chief complaint information input by a user;
S200, performing feature extraction on the chief complaint information to obtain a first feature vector matrix;
S300, inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information;
S400, presenting the inquiry question to the user to obtain inquiry information input by the user;
and S500, constructing a user portrait according to the inquiry information.
In some embodiments, in S100, the chief complaint information obtained by the present application is the user's disease description information. Different ways of acquiring the chief complaint information can be chosen depending on the specific embodiment of the method. For example, when the method is implemented as an inquiry application (APP), the user may input chief complaint information into a dialog box through the inquiry APP; the chief complaint information may be text directly entered by the user or voice input. When the user inputs voice, the inquiry APP first performs speech recognition on it to obtain the recognized text before subsequent processing. For example, after opening the inquiry APP, the user types into the dialog box "the child is 7 years old, somewhat short, and does not eat well"; the inquiry APP automatically recognizes the text in the dialog box and uses it as the chief complaint information.
After the chief complaint information is obtained, natural language preprocessing is performed on it, such as traditional-to-simplified Chinese conversion, synonym normalization and word segmentation, followed by feature extraction to obtain the first feature vector matrix, which contains the word vector of each word segment in the chief complaint information.
The first feature vector matrix is then input into the prediction network model to obtain the inquiry question matched with the chief complaint information, where the prediction network model is trained on a first data set comprising a plurality of medical inquiry samples, each containing an inquiry question and a corresponding disease. In a specific example, the disease in one medical inquiry sample is "does not eat well", with the corresponding inquiry question "Picky or preferential eating means the child clearly chooses among foods and eats only some of them, while anorexia means dislike of all foods; which case does the child belong to?". In another medical inquiry sample the disease is "picky eating", with the corresponding inquiry question "Roughly how long has the child shown picky-eating symptoms?". The prediction network model is trained with a large number of such medical inquiry samples, so that inputting chief complaint information into the model yields the matching inquiry questions.
The predicted inquiry question is presented to the user to obtain the inquiry information the user inputs. It will be appreciated that different presentation modes may be chosen depending on the specific embodiment: the inquiry question may be presented as text through the dialog box of the inquiry APP, or converted to voice and played to the user through a speaker. Correspondingly, after receiving the inquiry question, the user answers it and inputs the answer. In some embodiments, after an answer is obtained, a new inquiry question can be generated by performing feature extraction on the answer and feeding it back into the prediction network model; repeating this completes a continuous line of questioning, so the inquiry questions related to a given disease are covered comprehensively and completely. In some other embodiments, several inquiry questions may correspond to one disease, and the user is asked in sequence according to the priority of the questions. Finally, all the answers input by the user are collected as the inquiry information of the current user.
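The multi-turn flow just described (predict a question from the latest input, present it, collect the answer, feed the answer back in) can be sketched as a simple loop. The model and the user interface are stubbed with a fixed script below; the script contents, function names and turn cap are all illustrative, not the patent's implementation.

```python
def run_inquiry(chief_complaint, predict_question, ask_user, max_turns=5):
    """Repeatedly predict the next inquiry question from the latest input
    and collect answers until the model has nothing left to ask."""
    collected = []
    text = chief_complaint
    for _ in range(max_turns):                 # cap turns defensively
        question = predict_question(text)      # feature-extract + model
        if question is None:                   # no further question: done
            break
        answer = ask_user(question)            # present question, get answer
        collected.append((question, answer))
        text = answer                          # next prediction uses answer
    return collected

# Stub "model": a fixed question script keyed on the latest input.
SCRIPT = {"picky eater": "How long has the child been a picky eater?",
          "about a month": "Is the appetite reduced for all foods?"}
transcript = run_inquiry(
    "picky eater",
    predict_question=lambda t: SCRIPT.get(t),
    ask_user=lambda q: {"How long has the child been a picky eater?":
                        "about a month"}.get(q, "no"),
)
```

The returned transcript, a list of (question, answer) pairs, corresponds to the collected inquiry information from which the user portrait is built.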
A user portrait is constructed by acquiring the user's inquiry information and processing it. Table 1 shows a user portrait constructed from the inquiry information in one embodiment:
table 1:
Label item | Tag value
---|---
Sex | Male
Age | 7 years
Height | Below average
Body weight | -
Symptoms and disease course | Picky eating (over one month), dry stool
Drug allergy | None
…… | -
With the above method for constructing a user portrait from an inquiry session, the inquiry questions related to the corresponding disease are selected by the prediction network model, and the user is questioned automatically.
Referring to fig. 2, a flowchart for acquiring the first feature vector matrix according to an embodiment of the present application includes:
s210, natural language preprocessing is carried out on the chief complaint information to obtain a plurality of first word segments;
s220, inputting the first participles into a word vector model trained in advance to obtain a plurality of first word vectors;
and S230, combining the plurality of first word vectors to obtain a first feature vector matrix.
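Steps S210 to S230 can be sketched as follows: each first participle is mapped to its word vector and the vectors are stacked row-wise into the first feature vector matrix. The four-dimensional toy embeddings are invented for the example; real word vectors come from the trained GloVe model described below.

```python
import numpy as np

# Toy word-vector table standing in for the trained word vector model.
word_vectors = {
    "I":         np.array([0.1, 0.0, 0.2, 0.1]),
    "a bit":     np.array([0.0, 0.3, 0.1, 0.0]),
    "bellyache": np.array([0.7, 0.2, 0.9, 0.4]),
}

def to_feature_matrix(participles):
    # One row per participle, in order; shape = (num_words, embedding_dim).
    return np.stack([word_vectors[w] for w in participles])

matrix = to_feature_matrix(["I", "a bit", "bellyache"])
print(matrix.shape)  # (3, 4)
```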
Fig. 3 is a schematic diagram of training the word vector model according to an embodiment of the present application. A large amount of medical inquiry sample data is used as training samples; it can be understood that all of the sample data has been word-segmented. Training with the GloVe algorithm then yields the trained word vector model. The word vector model converts words into vector representations, i.e. each word is represented by a low-dimensional, dense, real-valued word vector, so that word relatedness can be computed: if two words are semantically related or similar, the distance between their word vectors is small. In some other embodiments, word vectors can also be trained with the Word2Vec algorithm; compared with Word2Vec, the GloVe algorithm parallelizes more readily and trains faster.
When extracting features from the obtained chief complaint information, natural language preprocessing is first performed on it. The preprocessing may include: stop-word removal, traditional-to-simplified Chinese conversion, synonym normalization, word segmentation, and so on. A plurality of first participles is obtained through the preprocessing; the first participles are input into the word vector model to obtain a plurality of first word vectors, with the participles and word vectors in one-to-one correspondence, and finally the first word vectors are combined to obtain the first feature vector matrix.
In some embodiments, referring to fig. 4, a flow chart of natural language preprocessing performed for an embodiment of the present application includes:
s211, performing traditional-to-simplified conversion on the chief complaint information to obtain simplified information;
s212, performing word segmentation on the simplified information to obtain pre-word segmentation information;
s213, performing word-stop-removing processing on the pre-word-segmentation information to obtain word-segmentation information;
s214, carrying out synonym normalization on the participle information to obtain a plurality of first participles.
Natural language preprocessing comprises: traditional-to-simplified conversion, word segmentation, stop-word removal and synonym normalization. Specifically, suppose the input text information (in traditional Chinese characters) is: "Hello, doctor, I have a bit of a bellyache." After traditional-to-simplified conversion, the same sentence is obtained in simplified characters. After word segmentation: "[hello] [doctor] [I] [a bit] [bellyache]". After stop-word removal: "[I] [a bit] [bellyache]". By configuring a stop-word list file, the words in the list are removed from the segmentation result, reducing the data volume in the subsequent processing flow. Synonym normalization replaces words with the same meaning by a single core word, which likewise reduces the data volume of subsequent processing. For example, table 2 is a normalization mapping table for synonym conversion:
table 2:
core word | Words to be normalized |
Abdominal pain | Stomachache |
Abdominal pain | Abdominal pain |
Abdominal pain | Abdominal pain |
Abdominal pain | Abdominal pain |
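The stop-word removal and synonym-normalization steps can be sketched as follows. The stop-word list and synonym table are invented for the example; traditional-to-simplified conversion and word segmentation are assumed to have already been applied, since they are language-specific.

```python
# Hedged sketch of steps S213-S214 on an English stand-in for the
# Chinese example; STOP_WORDS and SYNONYMS are illustrative assumptions.
STOP_WORDS = {"hello", "doctor"}
SYNONYMS = {"bellyache": "abdominal pain", "stomachache": "abdominal pain"}

def preprocess(tokens):
    # S213: drop stop words; S214: map synonyms onto their core word.
    kept = [t for t in tokens if t not in STOP_WORDS]
    return [SYNONYMS.get(t, t) for t in kept]

print(preprocess(["hello", "doctor", "i", "a bit", "bellyache"]))
# ['i', 'a bit', 'abdominal pain']
```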
In some embodiments, the prediction network model comprises a bidirectional recurrent neural network, a highway network and a convolutional neural network. Referring to fig. 5, inputting the first feature vector matrix into the prediction network model to obtain an inquiry question matched with the chief complaint information includes:
s310, inputting the first feature vector matrix into a bidirectional recurrent neural network for feature fusion processing to obtain a first fusion feature vector matrix;
s320, inputting the first fusion feature vector matrix into a highway network for deep processing to obtain a first depth feature vector matrix;
s330, performing feature extraction on the first depth feature vector matrix through a convolutional neural network to obtain a first vector;
and S340, obtaining the inquiry questions matched with the chief complaint information according to the first vector.
In the prediction network model, the features in the input first feature vector matrix are fused by the bidirectional recurrent neural network: each current word vector is concatenated with its neighboring word vectors so that its semantic features are learned in context, yielding the first fusion feature vector matrix. The first fusion feature vector matrix is then input into the highway network, and the first depth feature vector matrix is obtained through its multiple layers. Finally, the convolutional neural network performs feature extraction on the first depth feature vector matrix to obtain a low-dimensional first vector, from which the inquiry question matched with the chief complaint information is obtained.
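A single highway-network layer (step S320) can be sketched in numpy as below: a transform gate `t` blends a nonlinear transform `H(x)` with the untouched input, `y = t * H(x) + (1 - t) * x`. The weights are random placeholders, not trained values; this is a minimal illustration of the layer type, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_h, b_h = rng.normal(size=(dim, dim)), np.zeros(dim)
W_t, b_t = rng.normal(size=(dim, dim)), np.full(dim, -1.0)  # bias gate toward carry

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x):
    h = np.tanh(x @ W_h + b_h)      # candidate transform H(x)
    t = sigmoid(x @ W_t + b_t)      # transform gate in (0, 1)
    return t * h + (1.0 - t) * x    # carry the rest of x through unchanged

x = rng.normal(size=dim)
y = highway_layer(x)
print(y.shape)  # (8,)
```

Stacking several such layers gives the "deep processing" of S320 while the carry path keeps gradients flowing through many layers.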
In some embodiments, referring to fig. 6, the user is presented with an inquiry question to obtain user-entered inquiry information, including:
s410, presenting the inquiry questions and the structured answer options to the user;
and S420, obtaining the inquiry information input by the user according to the answer options input by the user.
When the inquiry question is presented to the user, structured answer options are presented at the same time, so that the obtained inquiry information is structured user answer information. For example, when an inquiry APP presents the question "Picky eating means a clear preference for certain foods, with only some foods eaten, while anorexia means a dislike of all foods. Which case does the child belong to?", the user can only select "picky eating" or "anorexia" and is not allowed to enter free-form answer information. Because the user's responses are structured, the inquiry information needs no further processing: the structured answers can be used directly to construct the user portrait, which further improves the efficiency of portrait construction. In some other embodiments, free-form inquiry information input by the user may instead be processed by keyword matching, or by extracting keywords with a feature extraction network, to construct the user portrait.
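Steps S410 and S420 can be sketched as follows: the question ships with structured answer options, and only one of those options is accepted as the answer. The question and options are the illustrative ones from the example above.

```python
# Illustrative sketch of steps S410-S420: free-form text is rejected;
# the answer must be one of the structured options.
question = "Which case does the child belong to?"
options = ["picky eating", "anorexia"]

def accept_answer(user_input: str, options):
    if user_input not in options:
        raise ValueError("please choose one of the given options")
    return user_input

print(accept_answer("anorexia", options))  # anorexia
```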
Referring to fig. 7, an optional flow diagram of a method of constructing a user representation for an interrogation session in accordance with another embodiment of the present application, the method further comprising:
s600, acquiring session information when a user is manually asked;
s700, extracting the features of the session information to obtain a second feature vector matrix;
s800, inputting the second feature vector matrix into a label extraction network model to obtain a health label of the user;
and S900, updating the user portrait according to the health class label of the user.
After the automatic questioning ends, a preliminary user portrait is constructed from the inquiry information. To make the user portrait more complete, manual inquiry is then carried out to fill in the information missed during the automatic questioning. In some embodiments, a physician in the relevant department may be selected for the manual inquiry based on the user portrait constructed during the automatic questioning.
After the manual inquiry ends, the session information from the manual inquiry is acquired. The session information may be text acquired through the inquiry APP dialog box, or voice information from a voice inquiry; in the latter case, speech recognition is performed on the voice information before the subsequent processing continues.
After the session information is obtained, natural language preprocessing is performed on it (traditional-to-simplified conversion, synonym normalization, word segmentation and so on), followed by feature extraction to obtain a second feature vector matrix, which comprises the word vector of each participle in the session information. The feature vectors of the session information may be extracted with the same trained word vector model as in the above embodiments; the way the word vector model is trained has been described in detail above and is not repeated here.
The second feature vector matrix is then input into a label extraction network model to obtain the user's health class labels. The label extraction network model is trained on a second data set comprising a plurality of session information items and the health class labels corresponding to them. The label extraction network model tags the collected session information and extracts the corresponding health class labels from the resulting tagging information. For example, table 3 shows an embodiment of the tagged-corpus correspondence table:
table 3:
original text | Labeling |
I am | O-O |
Is provided with | O-O |
Dot | O-O |
Belly | S-spt |
Seed of Japanese apricot | M-spt |
Pain (due to cold or dampness) | E-spt |
The second data set comprises session information and the health class labels corresponding to it. The label extraction network model is trained with the prepared session information items and their corresponding health class labels; once trained, it can tag the currently input session information, and the health class labels are then extracted from the tags. For example, when the input session information is "I have a bit of a bellyache", processing by the label extraction network model yields the user health class label "bellyache", filtering the session information down to the key symptom-related information. It will be appreciated that health class labels may also cover the user's personal information, treatment of the condition and so on, and that the training samples in the second data set can be varied according to the specific needs of constructing the user portrait.
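Recovering a health class label from a tag sequence like the one in table 3 can be sketched as below. The tag scheme is inferred from the table (begin/middle/end/single marks plus `O` for outside), so treat it as an assumption; the English token fragments are stand-ins for the Chinese characters.

```python
# Hedged sketch: collect the tokens spanned by B-/M-/E- tags (plus S- for
# single-token spans) into health class labels; O tokens are skipped.
def extract_labels(chars, tags):
    labels, current = [], []
    for ch, tag in zip(chars, tags):
        if tag == "O":
            current = []
            continue
        if tag.startswith("S-"):          # single-token span
            labels.append(ch)
            continue
        current.append(ch)
        if tag.startswith("E-"):          # end of a multi-token span
            labels.append("".join(current))
            current = []
    return labels

chars = ["I", "have", "a-bit", "bel", "ly", "ache"]
tags  = ["O", "O", "O", "B-spt", "M-spt", "E-spt"]
print(extract_labels(chars, tags))  # ['bellyache']
```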
Finally, the user portrait is updated according to the obtained health class labels of the user, so that the finally generated user portrait is more complete. The update merges the user portrait obtained from the chief complaint information with the portrait information obtained from the session information. For example, table 4 shows a user portrait constructed from session information in one embodiment:
table 4:
This is merged with the user portrait constructed in table 1 to obtain the updated user portrait, as shown in table 5:
table 5:
The configured label extraction network model extracts the health class labels from the session information and updates the user portrait, making the finally obtained portrait more specific and complete. Constructing the user portrait in this way improves the subsequent inquiry experience of users and the accuracy with which medical recommendation items are delivered.
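The merge step can be sketched as a dictionary update, with the later (manual-inquiry) information filling or overwriting earlier fields. The field names and values are illustrative; the actual tables 4 and 5 are not reproduced here, and real merging rules may differ.

```python
# Sketch of the update step: merge the portrait built from the automatic
# inquiry (cf. table 1) with tags extracted from the manual-inquiry session.
portrait = {
    "sex": "male",
    "age": "7 years",
    "symptoms": "picky eating (over one month), dry stool",
}
manual_inquiry_tags = {
    "symptoms": "picky eating, dry stool, abdominal pain",
    "drug allergy": "none",
}

# Manual-inquiry information overwrites or fills the earlier fields.
updated_portrait = {**portrait, **manual_inquiry_tags}
print(updated_portrait["drug allergy"])  # none
```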
In some embodiments, referring to fig. 8, inputting the second feature vector matrix into the label extraction network model to obtain the health class label of the user includes:
s810, inputting the second feature vector matrix into a bidirectional long short-term memory network for sequence tagging to obtain tag score probabilities;
and S820, correcting the tag sequence of the tag score probabilities through a conditional random field to obtain the health class label of the user.
The label extraction network of the present application comprises a bidirectional long short-term memory network and a conditional random field. Specifically, the application uses BiLSTM-CRF to tag the input session information, where the BiLSTM consists of a forward LSTM and a backward LSTM. During training, the tagged session information is first mapped into word vectors by the word vector model; the word vectors are input into the BiLSTM layer, which learns context information and outputs, for each word, the score probability of each tag. Finally, the outputs of the BiLSTM layer are used as the input of the CRF layer, which learns the sequence dependencies between tags, i.e. the transition probabilities between tags in the second data set, and corrects the BiLSTM output to produce the final prediction. This ensures that the predicted tag sequence is plausible and improves the accuracy of the obtained health class labels.
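The CRF correction of step S820 amounts to Viterbi decoding: the BiLSTM's per-position label scores are combined with a learned transition matrix, and the highest-scoring label sequence is selected. The sketch below uses toy scores and a hand-set transition matrix in which a large penalty forbids, for example, an `E-spt` tag without a preceding `B-spt`; these numbers are invented for illustration.

```python
import numpy as np

labels = ["O", "B-spt", "E-spt"]
# emission[i, j]: BiLSTM score of label j at position i (toy numbers).
emission = np.array([[2.0, 0.1, 0.1],
                     [0.1, 1.5, 1.4],
                     [0.1, 1.4, 1.5]])
# transition[j, k]: score of moving from label j to label k; -9.0 entries
# penalize implausible transitions, which is how the CRF layer corrects
# implausible BiLSTM outputs.
transition = np.array([[0.0,  0.0, -9.0],
                       [0.0, -9.0,  2.0],
                       [0.0,  0.0, -9.0]])

def viterbi(emission, transition):
    n, k = emission.shape
    score = emission[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        cand = score[:, None] + transition + emission[i]  # (prev, cur)
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return [labels[j] for j in reversed(path)]

print(viterbi(emission, transition))  # ['O', 'B-spt', 'E-spt']
```

Without the transition term, position 2's slightly higher `E-spt` emission could follow an `O` tag; the transition matrix forces the plausible `B-spt` then `E-spt` sequence instead.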
The application also discloses a device based on user portrait is found to inquiry conversation, includes:
the information acquisition module is used for acquiring the chief complaint information input by the user, wherein the chief complaint information is the disease description information of the user;
the characteristic extraction module is used for extracting the characteristics of the chief complaint information to obtain a first characteristic vector matrix;
the prediction module is used for inputting the first characteristic vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, wherein the prediction network model is obtained by training according to a first data set, the first data set comprises a plurality of medical inquiry samples, and each medical inquiry sample comprises an inquiry question and a corresponding disease;
the inquiry module is used for presenting inquiry questions to the user so as to obtain inquiry information input by the user;
and the image construction module is used for constructing the user image according to the inquiry information.
The specific implementation steps of the apparatus for constructing a user portrait based on an inquiry session are the same as those of the method for constructing a user portrait based on an inquiry session in the above embodiments, and are not described herein again.
The application also discloses an electronic device, including: at least one memory, at least one processor, at least one program, the program stored in the memory, the processor executing the at least one program to implement: the method for constructing a user portrait based on an inquiry session disclosed in the above embodiments. The electronic device may be any intelligent terminal including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to an embodiment, where the electronic device includes:
a processor, which may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute a relevant program to implement the technical solution provided by the embodiments of the present disclosure;
the Memory may be implemented in the form of a ROM (Read Only Memory), a static memory device, a dynamic memory device, or a RAM (Random Access Memory). The memory can store an operating system and other application programs; when the technical scheme provided by the embodiments of the present specification is implemented through software or firmware, the related program codes are stored in the memory, and the processor calls them to execute the method for constructing a user portrait based on an inquiry session of the embodiments of the present disclosure;
the input/output interface is used for realizing information input and output;
the communication interface is used for realizing communication interaction between the equipment and other equipment, and can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like);
a bus that transfers information between various components of the device (e.g., processor, memory, input/output interfaces, and communication interfaces);
wherein the processor, the memory, the input/output interface and the communication interface are communicatively connected to each other within the device by a bus.
The present application also discloses a storage medium, which is a computer-readable storage medium storing computer-executable instructions for causing a computer to perform: the method for constructing a user portrait based on an inquiry session disclosed in the above embodiments.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present disclosure are for more clearly illustrating the technical solutions of the embodiments of the present disclosure, and do not constitute a limitation to the technical solutions provided in the embodiments of the present disclosure, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present disclosure are also applicable to similar technical problems with the evolution of technology and the emergence of new application scenarios.
Those skilled in the art will appreciate that the solutions shown in the figures are not intended to limit embodiments of the present disclosure, and may include more or less steps than those shown, or some of the steps may be combined, or different steps.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described in detail with reference to the drawings, but the present application is not limited to the embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present application. Furthermore, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
Claims (10)
1. A method for constructing a user portrait based on an inquiry session is characterized by comprising the following steps:
obtaining complaint information input by a user, wherein the complaint information is disease description information of the user;
extracting features of the chief complaint information to obtain a first feature vector matrix;
inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, wherein the prediction network model is obtained by training according to a first data set, the first data set comprises a plurality of medical inquiry samples, and each medical inquiry sample comprises an inquiry question and a corresponding disease;
presenting the inquiry questions to the user to obtain inquiry information input by the user;
and constructing a user portrait according to the inquiry information.
2. The method of claim 1, wherein the performing feature extraction on the chief complaint information to obtain a first feature vector matrix comprises:
natural language preprocessing is carried out on the chief complaint information to obtain a plurality of first word segments;
inputting the first participles into a pre-trained word vector model to obtain a plurality of first word vectors;
and combining the plurality of first word vectors to obtain the first feature vector matrix.
3. The method of claim 2, wherein the natural language preprocessing the complaint information to obtain a plurality of first tokens comprises:
performing traditional-to-simplified conversion on the chief complaint information to obtain simplified information;
performing word segmentation on the simplified information to obtain pre-word segmentation information;
performing word-stop-removal processing on the pre-word segmentation information to obtain word segmentation information;
and carrying out synonym normalization on the participle information to obtain the plurality of first participles.
4. The method of claim 1, wherein inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information comprises:
inputting the first feature vector matrix into a bidirectional recurrent neural network for feature fusion processing to obtain a first fusion feature vector matrix;
inputting the first fusion feature vector matrix into a highway network for deep processing to obtain a first depth feature vector matrix;
performing feature extraction on the first depth feature vector matrix through a convolutional neural network to obtain a first vector;
and obtaining the inquiry question matched with the main complaint information according to the first vector.
5. The method of claim 1, wherein presenting the inquiry questions to the user to obtain the inquiry information input by the user comprises:
presenting the inquiry questions and structured answer options to the user;
and obtaining the inquiry information input by the user according to the answer options input by the user.
6. The method of any of claims 1 to 5, further comprising:
acquiring session information when the user is manually asked;
extracting the features of the session information to obtain a second feature vector matrix;
inputting the second feature vector matrix into a label extraction network model to obtain a health label of the user, wherein the label extraction network model is obtained by training according to a second data set, and the second data set comprises a plurality of session information and health labels corresponding to the session information;
and updating the user portrait according to the health class label.
7. The method of claim 6, wherein inputting the second feature vector matrix into a tag extraction network model, and obtaining the health class tag of the user comprises:
inputting the second feature vector matrix into a bidirectional long short-term memory network for sequence tagging to obtain tag score probabilities;
and correcting the tag sequence of the tag score probabilities through a conditional random field to obtain the health class label of the user.
8. An apparatus for constructing a user portrait based on an inquiry session, comprising:
the information acquisition module is used for acquiring the chief complaint information input by a user, wherein the chief complaint information is the disease description information of the user;
the characteristic extraction module is used for extracting the characteristics of the chief complaint information to obtain a first characteristic vector matrix;
the prediction module is used for inputting the first feature vector matrix into a prediction network model to obtain an inquiry question matched with the chief complaint information, wherein the prediction network model is obtained by training according to a first data set, the first data set comprises a plurality of medical inquiry samples, and each medical inquiry sample comprises an inquiry question and a corresponding disease state;
the inquiry module is used for presenting the inquiry questions to the user so as to obtain inquiry information input by the user;
a representation construction module for constructing a user representation from the interrogation information.
9. An electronic device, comprising:
at least one memory;
at least one processor;
at least one program;
the programs are stored in the memory, and the processor executes the at least one program to implement:
the method of any one of claims 1 to 7.
10. A storage medium that is a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform:
the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111005960.3A CN113724882B (en) | 2021-08-30 | 2021-08-30 | Method, device, equipment and medium for constructing user portrait based on inquiry session |
PCT/CN2022/087528 WO2023029502A1 (en) | 2021-08-30 | 2022-04-19 | Method and apparatus for constructing user portrait on the basis of inquiry session, device, and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111005960.3A CN113724882B (en) | 2021-08-30 | 2021-08-30 | Method, device, equipment and medium for constructing user portrait based on inquiry session |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113724882A true CN113724882A (en) | 2021-11-30 |
CN113724882B CN113724882B (en) | 2024-07-12 |
Family
ID=78679296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111005960.3A Active CN113724882B (en) | 2021-08-30 | 2021-08-30 | Method, device, equipment and medium for constructing user portrait based on inquiry session |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113724882B (en) |
WO (1) | WO2023029502A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116521822B (en) * | 2023-03-15 | 2024-02-13 | 上海帜讯信息技术股份有限公司 | User intention recognition method and device based on 5G message multi-round session mechanism |
CN117854713B (en) * | 2024-03-06 | 2024-06-04 | 之江实验室 | Method for training traditional Chinese medicine syndrome waiting diagnosis model and method for recommending information |
CN117874633B (en) * | 2024-03-13 | 2024-05-28 | 金祺创(北京)技术有限公司 | Network data asset portrayal generation method and device based on deep learning algorithm |
CN118051879B (en) * | 2024-04-16 | 2024-06-11 | 杭州小策科技有限公司 | Crowd portrayal analysis method and system under massive data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109192300A (en) * | 2018-08-17 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Intelligent way of inquisition, system, computer equipment and storage medium |
CN109545394A (en) * | 2018-11-21 | 2019-03-29 | 上海依智医疗技术有限公司 | A kind of way of inquisition and device |
CN110019793A (en) * | 2017-10-27 | 2019-07-16 | 阿里巴巴集团控股有限公司 | A kind of text semantic coding method and device |
CN111274365A (en) * | 2020-02-25 | 2020-06-12 | 广州七乐康药业连锁有限公司 | Intelligent inquiry method and device based on semantic understanding, storage medium and server |
CN112084783A (en) * | 2020-09-24 | 2020-12-15 | 中国民航大学 | Entity identification method and system based on civil aviation non-civilized passengers |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10489701B2 (en) * | 2015-10-13 | 2019-11-26 | Facebook, Inc. | Generating responses using memory networks |
CN108922608A (en) * | 2018-06-13 | 2018-11-30 | 平安医疗科技有限公司 | Intelligent hospital guide's method, apparatus, computer equipment and storage medium |
CN111326251B (en) * | 2020-02-13 | 2023-08-29 | 北京百度网讯科技有限公司 | Question output method and device and electronic equipment |
CN113724882B (en) * | 2021-08-30 | 2024-07-12 | 康键信息技术(深圳)有限公司 | Method, device, equipment and medium for constructing user portrait based on inquiry session |
- 2021-08-30: CN application CN202111005960.3A granted as CN113724882B (status: active)
- 2022-04-19: PCT application PCT/CN2022/087528 published as WO2023029502A1 (status: unknown)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023029502A1 (en) * | 2021-08-30 | 2023-03-09 | 康键信息技术(深圳)有限公司 | Method and apparatus for constructing user portrait on the basis of inquiry session, device, and medium |
CN114048283A (en) * | 2022-01-11 | 2022-02-15 | 北京仁科互动网络技术有限公司 | User portrait generation method and device, electronic equipment and storage medium |
CN115631852A (en) * | 2022-11-02 | 2023-01-20 | 北京大学重庆大数据研究院 | Certificate type recommendation method and device, electronic equipment and nonvolatile storage medium |
CN115631852B (en) * | 2022-11-02 | 2024-04-09 | 北京大学重庆大数据研究院 | Certificate type recommendation method and device, electronic equipment and nonvolatile storage medium |
CN118132736A (en) * | 2024-05-08 | 2024-06-04 | 青岛国创智能家电研究院有限公司 | Training method, control device and storage medium for user portrait identification system |
Also Published As
Publication number | Publication date |
---|---|
CN113724882B (en) | 2024-07-12 |
WO2023029502A1 (en) | 2023-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113724882B (en) | Method, device, equipment and medium for constructing user portrait based on inquiry session | |
CN108829822B (en) | Media content recommendation method and device, storage medium and electronic device | |
CN106682397B (en) | Knowledge-based electronic medical record quality control method | |
CN110675944A (en) | Triage method and device, computer equipment and medium | |
CN110737763A (en) | Chinese intelligent question-answering system and method integrating knowledge map and deep learning | |
CN109871538A (en) | A kind of Chinese electronic health record name entity recognition method | |
CN112863630A (en) | Personalized accurate medical question-answering system based on data and knowledge | |
CN109508459B (en) | Method for extracting theme and key information from news | |
CN110442841A (en) | Identify method and device, the computer equipment, storage medium of resume | |
CN113704428B (en) | Intelligent inquiry method, intelligent inquiry device, electronic equipment and storage medium | |
CN115292457B (en) | Knowledge question answering method and device, computer readable medium and electronic equipment | |
CN112559684A (en) | Keyword extraction and information retrieval method | |
CN116561538A (en) | Question-answer scoring method, question-answer scoring device, electronic equipment and storage medium | |
CN113569023A (en) | Chinese medicine question-answering system and method based on knowledge graph | |
CN114595327A (en) | Data enhancement method and device, electronic equipment and storage medium | |
CN111581364B (en) | Chinese intelligent question-answer short text similarity calculation method oriented to medical field | |
CN113764112A (en) | Online medical question and answer method | |
CN115599899B (en) | Intelligent question-answering method, system, equipment and medium based on aircraft knowledge graph | |
CN110795544A (en) | Content search method, device, equipment and storage medium | |
CN114360715A (en) | Constitution identification method and device, electronic equipment and storage medium | |
CN114153994A (en) | Medical insurance information question-answering method and device | |
CN110969005B (en) | Method and device for determining similarity between entity corpora | |
CN114613462A (en) | Medical data processing method and device, electronic equipment and storage medium | |
CN114239599A (en) | Method, system, equipment and medium for realizing machine reading understanding | |
CN116522165B (en) | Public opinion text matching system and method based on twin structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||