CN108415897A - Artificial intelligence based category discrimination method, apparatus and storage medium - Google Patents

Artificial intelligence based category discrimination method, apparatus and storage medium

Info

Publication number
CN108415897A
CN108415897A
Authority
CN
China
Prior art keywords
word
text
training
model
segmentation result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810049997.8A
Other languages
Chinese (zh)
Inventor
汪琦
冯知凡
陆超
朱勇
李莹
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810049997.8A priority Critical patent/CN108415897A/en
Publication of CN108415897A publication Critical patent/CN108415897A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/355Class or cluster creation or modification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/216Parsing using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an artificial intelligence based category discrimination method, apparatus and storage medium, wherein the method includes: obtaining a text object to be processed; obtaining a word sequence formed from the word segmentation results of the text object; and inputting the word sequence into a pre-trained discrimination model to obtain the probabilities, output by the discrimination model, that the text object belongs to different preset categories. With the scheme of the present invention, the accuracy of classification results can be improved and the output content enriched, so as to meet the application requirements of different scenarios.

Description

Artificial intelligence based category discrimination method, apparatus and storage medium
[ technical field ]
The present invention relates to computer application technologies, and in particular, to a method and an apparatus for discriminating a category based on artificial intelligence, and a storage medium.
[ background of the invention ]
Artificial Intelligence (AI) is a new technical science that researches and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, expert systems and the like.
In the prior art, for a given text segment, category (type) classification can be performed in the following ways.
1) Statistics-based approach: extract keywords from the text segment, weight them statistically, and perform simple type classification by voting.
2) Named Entity Recognition (NER) based approach: identify entities with specific meanings in the text segment, mainly including person names, place names, organization names, proper nouns and the like, and classify the type according to the identified entities.
However, both of the above methods have certain problems in practical application. For example, type classification is performed only according to the extracted keywords or identified entities, without combining context information and the like, so the accuracy of the classification result is low; moreover, only a single type discrimination result can be provided, making it difficult to meet application requirements in complex scenarios.
[ summary of the invention ]
In view of the above, the present invention provides a method, an apparatus and a storage medium for category discrimination based on artificial intelligence.
The specific technical scheme is as follows:
a category discrimination method based on artificial intelligence comprises the following steps:
acquiring a text object to be processed;
acquiring a word sequence formed by word segmentation results of the text object;
and inputting the word sequence into a discrimination model obtained by pre-training to obtain the probability that the text objects output by the discrimination model respectively belong to different preset categories.
According to a preferred embodiment of the present invention, the discriminant model includes: a neural network model.
According to a preferred embodiment of the present invention, the discriminant model includes: an input layer, a hidden layer and an output layer;
the input layer respectively generates a feature vector corresponding to each word segmentation result;
mapping the feature vector to the hidden layer through linear transformation to obtain a hidden layer vector;
and the output layer generates an output result based on a Huffman tree according to the hidden layer vector.
According to a preferred embodiment of the present invention, the feature vector corresponding to each word segmentation result is respectively composed of the word segmentation result, the n-gram segmentation result of the word segmentation result, and the subword information of the word segmentation result.
According to a preferred embodiment of the present invention, the text object includes: text segments or signal words.
According to a preferred embodiment of the present invention, the training of the discriminant model includes:
extracting key text fragments and high-frequency signal words from a preset data source;
constructing training samples according to the extracted content, wherein each training sample comprises: a text segment or signal word, and a category to which it belongs;
and training by using the training sample to obtain the discrimination model.
According to a preferred embodiment of the present invention, the predetermined data source includes one or any combination of the following: knowledge base, web page base, query log querylog.
An artificial intelligence-based category discrimination apparatus comprising: the device comprises a text acquisition unit, a word sequence acquisition unit and a category judgment unit;
the text acquisition unit is used for acquiring a text object to be processed;
the word sequence acquiring unit is used for acquiring a word sequence formed by word segmentation results of the text object;
and the category distinguishing unit is used for inputting the word sequence into a distinguishing model obtained by pre-training to obtain the probability that the text objects output by the distinguishing model respectively belong to different preset categories.
According to a preferred embodiment of the present invention, the discriminant model includes: a neural network model.
According to a preferred embodiment of the present invention, the discriminant model includes: an input layer, a hidden layer and an output layer;
the input layer respectively generates a feature vector corresponding to each word segmentation result;
mapping the feature vector to the hidden layer through linear transformation to obtain a hidden layer vector;
and the output layer generates an output result based on a Huffman tree according to the hidden layer vector.
According to a preferred embodiment of the present invention, the feature vector corresponding to each word segmentation result is respectively composed of the word segmentation result, the n-gram segmentation result of the word segmentation result, and the subword information of the word segmentation result.
According to a preferred embodiment of the present invention, the text object includes: text segments or signal words.
According to a preferred embodiment of the present invention, the apparatus further comprises: a model training unit;
the model training unit is used for extracting key text segments and high-frequency signal words from a predetermined data source, and constructing training samples from the extracted content, wherein each training sample comprises a text segment or a signal word and the category to which it belongs; and for training with the training samples to obtain the discrimination model.
According to a preferred embodiment of the present invention, the predetermined data source includes one or any combination of the following: knowledge base, web page base, query log querylog.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method as set forth above.
As can be seen from the above description, with the scheme of the present invention, after a text object to be processed is obtained, a word sequence formed from the word segmentation results of the text object can be obtained, and the word sequence can then be input into the pre-trained discrimination model, so as to obtain the probabilities, output by the discrimination model, that the text object belongs to different preset categories.
[ description of the drawings ]
FIG. 1 is a flowchart of an embodiment of the artificial intelligence based category discrimination method according to the present invention.
FIG. 2 is a schematic structural diagram of the discriminant model according to the present invention.
Fig. 3 is a schematic diagram of the output result corresponding to "Li Na sings" according to the present invention.
Fig. 4 is a schematic diagram of the output result corresponding to "Li Na tennis" according to the present invention.
Fig. 5 is a schematic diagram of the output result corresponding to "Liu Dehua sings" according to the present invention.
FIG. 6 is a schematic diagram of the output result corresponding to "play" according to the present invention.
Fig. 7 is a schematic structural diagram of an embodiment of the artificial intelligence based category discrimination apparatus according to the present invention.
FIG. 8 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention.
[ detailed description ]
In view of the problems in the prior art, the present invention provides a category discrimination approach that combines concepts proven successful in natural language processing and machine learning, designs a model system, and can adaptively learn relationships among entities and the like; for example, type classification can be performed according to context collocation.
In order to make the technical solution of the present invention clearer and more obvious, the solution of the present invention is further described below by referring to the drawings and examples.
It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of an embodiment of the artificial intelligence based category discrimination method according to the present invention. As shown in fig. 1, it includes the following specific implementation.
In 101, a text object to be processed is obtained.
At 102, a word sequence comprised of word segmentation results for the text object is obtained.
In 103, the obtained word sequence is input into a pre-trained discrimination model, and probabilities that text objects output by the discrimination model respectively belong to different preset categories are obtained.
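The three steps above (101 to 103) can be sketched as follows. This is a hedged illustration only: `segment` and `DiscriminationModel` are placeholder names assumed for the example, not the patent's actual implementation, and the probability table is taken from the examples given later in the description.

```python
# Illustrative sketch of the inference flow: obtain text object (101),
# form its word sequence (102), feed it to the discrimination model (103).

def segment(text_object):
    """Toy word segmentation: whitespace split stands in for a real Chinese segmenter."""
    return text_object.split()

class DiscriminationModel:
    """Stand-in for the pre-trained discrimination model."""
    def __init__(self, category_probs):
        self._category_probs = category_probs  # fixed table, for illustration only

    def predict(self, word_sequence):
        # A real model would score the sequence; here we simply look it up.
        return self._category_probs[" ".join(word_sequence)]

model = DiscriminationModel({
    "Li Na sings": {"singer": 0.603, "musical composition": 0.301, "device": 0.096},
})

text_object = "Li Na sings"           # step 101: obtain the text object
word_sequence = segment(text_object)  # step 102: word sequence from segmentation
probs = model.predict(word_sequence)  # step 103: per-category probabilities
```

The output is a distribution over preset categories rather than a single label, which is what enriches the output content compared with the prior-art methods.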
The discrimination model may be a neural network model, such as a shallow network model. Compared with a deep learning network model, a good effect can be obtained without using a complex model; that is, the complexity of model training can be reduced while the effect is guaranteed, and training efficiency is improved.
The input of the discrimination model is a word sequence, and the output is the probability that the word sequence respectively belongs to different preset categories. Which categories are specifically included in the preset categories can be determined according to actual needs, and can be flexibly adjusted according to actual needs.
The discriminant model may include three layers, an input layer, a hidden layer (middle layer), and an output layer. As shown in fig. 2, fig. 2 is a schematic structural diagram of the discriminant model according to the present invention. The input layer can respectively generate a feature vector corresponding to each word segmentation result, the feature vectors can be mapped to the hidden layer through linear transformation to obtain hidden layer vectors, and the output layer can generate output results based on a Huffman tree according to the hidden layer vectors.
That is, feature vectors corresponding to the word segmentation results are generated in the input layer and mapped to the hidden layer through a linear transformation, and the middle layer is then mapped to the classification system (labels). The model uses a hierarchical classifier in which the different categories are organized into a tree structure; to improve running time, a hierarchical Softmax technique can be used. The hierarchical Softmax technique is based on Huffman coding, and encoding the labels can greatly reduce the number of targets the model must predict.
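As a small illustration of the Huffman coding behind hierarchical Softmax, the sketch below builds a Huffman code over made-up category frequencies: more frequent categories receive shorter codes, i.e. shorter root-to-leaf paths in the tree, which is what reduces the number of binary decisions per prediction. The frequencies and labels are assumptions for the example.

```python
# Build a Huffman code over category labels; frequent labels get short codes.
import heapq
import itertools

def huffman_codes(freqs):
    """Return a {label: bitstring} Huffman code from a {label: frequency} dict."""
    counter = itertools.count()  # unique tie-breaker so heapq never compares dicts
    heap = [(f, next(counter), {label: ""}) for label, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {k: "0" + v for k, v in left.items()}   # prepend branch bits
        merged.update({k: "1" + v for k, v in right.items()})
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]

codes = huffman_codes({"singer": 45, "song": 30, "device": 15, "place": 10})
```

With these frequencies the most frequent label ("singer") gets a one-bit code, while rare labels sit deeper in the tree.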
The optimization objective of the discrimination model may be:
minimize  -(1/N) Σ_{n=1}^{N} y_n log( f(B A x_n) )
wherein <x_n, y_n> represents a training sample, x_n being the input feature and y_n the training target; the matrix parameter A is a word-based look-up table, i.e. A holds the embedding vectors of the words; the mathematical meaning of the matrix operation A x_n is to sum or average the embedding vectors of the words to obtain the hidden vector; the matrix parameter B is the parameter of the function f; since this is a multi-class problem, f(B A x_n) is a multi-class linear function. The overall optimization objective is to make the likelihood of this multi-classification problem as large as possible.
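A numpy sketch of the objective described above, under the stated reading: A is the word-embedding look-up table, x_n a bag-of-words input, B the classifier parameters, and f a softmax. All sizes and values are illustrative assumptions, not the patent's actual configuration.

```python
# Compute the negative log-likelihood -(1/N) * sum_n y_n * log f(B A x_n)
# for a toy multi-class linear model over word-embedding sums.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, n_classes, N = 6, 4, 3, 5

A = rng.normal(size=(dim, vocab))                      # word embeddings (look-up table)
B = rng.normal(size=(n_classes, dim))                  # classifier parameters
X = rng.integers(0, 2, size=(N, vocab)).astype(float)  # bag-of-words inputs x_n
y = rng.integers(0, n_classes, size=N)                 # class targets y_n

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

hidden = X @ A.T               # A x_n: sum of the embedding vectors per sample
probs = softmax(hidden @ B.T)  # f(B A x_n): multi-class linear function
loss = -np.mean(np.log(probs[np.arange(N), y]))  # negative log-likelihood
```

Minimizing `loss` with respect to A and B is equivalent to maximizing the likelihood of the multi-classification problem, as stated above.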
The feature vector corresponding to each word segmentation result can be composed of the word segmentation result, the n-gram segmentation result of the word segmentation result and subword (subword) information of the word segmentation result respectively.
n-gram segmentation belongs to the prior art; long words can be split into several shorter words by n-grams. How to obtain the subword information is also prior art. By using n-gram segmentation, subword information and the like, information is shared between categories through the hidden representation, which makes the model sensitive to different semantic expressions.
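A minimal example of character n-gram extraction with boundary markers. The "<" and ">" markers are a common convention and are an assumption here, since the patent does not specify the exact segmentation scheme.

```python
# Extract all character n-grams of a word, with boundary markers added so
# that prefixes and suffixes are distinguishable from word-internal n-grams.
def char_ngrams(word, n):
    marked = "<" + word + ">"
    return [marked[i:i + n] for i in range(len(marked) - n + 1)]
```

For example, `char_ngrams("play", 3)` yields the four trigrams of `<play>`, and the resulting subword units can be shared across words with common prefixes or suffixes.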
At the leaf nodes of the Huffman tree are the embedding vectors of the class labels. During training, the feature vectors generated in the input layer are mapped to the hidden layer through linear transformation; the hidden layer constructs a Huffman tree by maximizing the likelihood function according to the weight of each category and the model parameters, and the Huffman tree serves as the output of model training.
To train the discrimination model, key text segments, high-frequency signal words and the like are first extracted from a predetermined data source, and training samples are then constructed from the extracted content, where each training sample may include a text segment or a signal word and the category to which it belongs; the constructed training samples are used to train the discrimination model.
The predetermined data source may comprise one or any combination of: knowledge base, web page base, query log (querylog), etc.
Preferably, some key text segments, such as important or representative sentences or paragraphs extracted from Baidu Encyclopedia abstracts, and some high-frequency signal words, such as frequently used predicates, can be extracted from the knowledge base, the web page base and the querylog.
Then, training samples may be constructed from the extracted text segments and signal words; for example, one training sample includes a text segment and the category to which it belongs, and another training sample includes a signal word and the category to which it belongs.
How to obtain the category of a text segment or signal word may be determined according to actual needs; for example, manually labeled categories may be used.
For a text segment that may belong to both category a and category b, two training samples may be constructed: one consisting of the text segment and category a, the other consisting of the text segment and category b. For example, the text segment "Li Na sings" may belong to the "singer" category or the "sports figure" category.
Similarly, for a signal word, which may belong to either category c or category d, two training samples may be constructed for the signal word, wherein one training sample consists of the signal word and category c, and the other training sample consists of the signal word and category d. For example, if the signal word is "play," possible combinations include "play basketball," "play game," etc., and accordingly, may belong to the "sporting goods" category, or may belong to the "game" category, etc.
Based on the constructed training samples, the discrimination model can be obtained through training. Specifically, the word segmentation results of the text segment or signal word in each training sample can be obtained, the word segmentation results are used to form a word sequence, the word sequence serves as the input of the discrimination model, and the discrimination model is trained in combination with the category information in each training sample. A signal word may contain only one word or several words; after word segmentation, the result may be equal to the signal word itself. Any existing word segmentation method can be adopted.
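The sample-construction rule above (one training sample per (text object, category) pair, with the text object segmented into a word sequence) can be sketched as follows; the whitespace `split` again stands in for a real Chinese word segmenter.

```python
# Build one (word_sequence, category) training sample per category the
# text object belongs to, as described for multi-category text objects.
def build_samples(text_object, categories):
    word_sequence = tuple(text_object.split())  # toy segmentation
    return [(word_sequence, category) for category in categories]

samples = build_samples("Li Na sings", ["singer", "sports figure"])
```

A text object with two possible categories thus contributes two samples, letting the model learn a probability for each.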
Based on the discrimination model obtained by training, the actual category discrimination can be carried out, namely, a text object to be processed is obtained, a word sequence formed by word segmentation results of the text object is obtained, and the obtained word sequence is input into the discrimination model, so that the probability that the text object output by the discrimination model belongs to different preset categories is obtained. The text object may be a text fragment or a signal word.
Several exemplary application scenarios of the solution of the invention can be seen as follows.
1) Giving a type distribution through semantic understanding
For example, "Li Na sings" is input to the discrimination model, and the output may be: "singer: 0.603, musical composition: 0.301, device: 0.096".
Fig. 3 is a schematic diagram of the output result corresponding to "Li Na sings" according to the present invention. As shown in fig. 3, the probability that "Li Na sings" belongs to the "singer" category is 0.603, the probability that it belongs to the "musical composition" category is 0.301, and the probability that it belongs to the "device" category is 0.096.
Assuming that the preset number of categories is 20, except for the above three categories, the probabilities of the other categories are all 0, and the sum of the probabilities of the various categories is 1, which will not be described again below.
For another example, "Li Na tennis" input to the discrimination model may output: "sports figure: 0.711, sports goods: 0.244, place: 0.045".
Fig. 4 is a schematic diagram of the output result corresponding to "Li Na tennis" according to the present invention. As shown in fig. 4, the probability that "Li Na tennis" belongs to the "sports figure" category is 0.711, the probability that it belongs to the "sports goods" category is 0.244, and the probability that it belongs to the "place" category is 0.045.
2) Predicting the category of the current entity through context
For example, if the complete text is "Liu Dehua sings Ice Rain", the current entity is "Ice Rain", and its category is to be predicted from the context, then "Liu Dehua sings" may be input to the discrimination model, and the output may be: "song: 0.891, singer: 0.109".
Fig. 5 is a schematic diagram of the output result corresponding to "Liu Dehua sings" according to the present invention. As shown in fig. 5, the probability that "Liu Dehua sings" belongs to the "song" category is 0.891, and the probability that it belongs to the "singer" category is 0.109.
Since the probability that "Liu Dehua sings" belongs to the "song" category (0.891) is significantly higher than the probability that it belongs to the "singer" category (0.109), the category of "Ice Rain" can be predicted to be "song".
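The context-based prediction step reduces to taking the category with the highest output probability; the values below are from the example above.

```python
# Pick the highest-probability category from the model's output distribution.
probs = {"song": 0.891, "singer": 0.109}
predicted = max(probs, key=probs.get)  # argmax over categories
```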
3) Giving a type distribution for a simple input signal word
For example, the signal word "play" is input to the discrimination model; possible combinations include "play basketball", "play game", etc., and the output may be: "sports goods: 0.472, game: 0.319, person: 0.209".
FIG. 6 is a schematic diagram of the output result corresponding to "play" according to the present invention. As shown in fig. 6, the probability that "play" belongs to the "sports goods" category is 0.472, the probability that it belongs to the "game" category is 0.319, and the probability that it belongs to the "person" category is 0.209.
In conclusion, the scheme of the present invention can dynamically characterize context, solve concept-level disambiguation, is sensitive to context, can fully capture semantic information, has strong adaptability and robustness, and improves the accuracy of classification results.
Moreover, the discrimination model in the scheme of the invention can output the probability that the text object belongs to different preset categories respectively, thereby enriching the output content, further meeting the application requirements of different scenes and having stronger expandability.
In addition, the scheme of the present invention is suitable for tasks with a large number of training samples that require high training speed: more than 1 billion words can be processed in 10 minutes on a standard multi-core Central Processing Unit (CPU). Compared with a deep learning network model, the training time of the discrimination model of the present invention can be shortened from several days to several minutes.
Furthermore, the scheme of the present invention supports multiple languages; by using language morphological structure it can be designed to support Chinese, English, Japanese, German, Spanish, French and so on, and it also incorporates subword information in a simple and efficient way, so that it works very well for morphologically rich languages such as Czech.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
Fig. 7 is a schematic structural diagram of an embodiment of the artificial intelligence based category discrimination apparatus according to the present invention. As shown in fig. 7, it includes: text acquisition unit 701, word sequence acquisition unit 702, and category discrimination unit 703.
A text acquiring unit 701, configured to acquire a text object to be processed.
A word sequence acquiring unit 702 is configured to acquire a word sequence formed by word segmentation results of the text object.
The category identifying unit 703 is configured to input the word sequence into an identifying model obtained through pre-training, and obtain probabilities that text objects output by the identifying model respectively belong to different preset categories.
The discriminant model may be a neural network model.
The input of the discrimination model is a word sequence, and the output is the probability that the word sequence respectively belongs to different preset categories. Which categories are specifically included in the preset categories can be determined according to actual needs, and can be flexibly adjusted according to actual needs.
The discriminant model may include three layers, an input layer, a hidden layer (middle layer), and an output layer. The input layer can respectively generate a feature vector corresponding to each word segmentation result, the feature vectors can be mapped to the hidden layer through linear transformation to obtain hidden layer vectors, and the output layer can generate output results based on a Huffman tree according to the hidden layer vectors.
The feature vector corresponding to each word segmentation result can be respectively composed of the word segmentation result, the n-gram segmentation result of the word segmentation result and the sub-character information of the word segmentation result.
As shown in fig. 7, the apparatus may further include: the model training unit 700.
The model training unit 700 may first extract key text segments, high-frequency signal words and the like from a predetermined data source, and then construct training samples from the extracted content, where each training sample may include a text segment or a signal word and the category to which it belongs; the constructed training samples are then used to train the discrimination model.
The predetermined data source may comprise one or any combination of: knowledge base, web page base, query log, etc.
Preferably, the model training unit 700 extracts some key text segments and high-frequency signal words from the knowledge base, the web page base and the querylog, and then constructs training samples from the extracted text segments and signal words; for example, one training sample includes a text segment and the category to which it belongs, and another training sample includes a signal word and the category to which it belongs. Based on the constructed training samples, the model training unit 700 may train the above discrimination model; for example, it may obtain the word segmentation results of the text segment or signal word in each training sample, form a word sequence from the word segmentation results, use the word sequence as the input of the discrimination model, and train the discrimination model in combination with the category information in each training sample.
Based on the discrimination model obtained by training, actual category discrimination can be performed, that is, the text acquisition unit 701 can acquire a text object to be processed and send the text object to the word sequence acquisition unit 702, the word sequence acquisition unit 702 can further acquire a word sequence formed by word segmentation results of the text object and send the word sequence to the category discrimination unit 703, and the category discrimination unit 703 can input the acquired word sequence into the discrimination model, so as to obtain probabilities that the text object output by the discrimination model respectively belongs to different preset categories.
The text object may be a text fragment or a signal word.
For a specific work flow of the apparatus embodiment shown in fig. 7, please refer to the corresponding description in the foregoing method embodiment, which is not repeated.
FIG. 8 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention. The computer system/server 12 shown in FIG. 8 is only one example and should not be taken to limit the scope of use or functionality of embodiments of the present invention.
As shown in FIG. 8, computer system/server 12 is in the form of a general purpose computing device. The components of computer system/server 12 may include, but are not limited to: one or more processors (processing units) 16, a memory 28, and a bus 18 that connects the various system components, including the memory 28 and the processors 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the computer system/server 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 8, the network adapter 20 communicates with the other modules of the computer system/server 12 via the bus 18. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 16 executes various functional applications and data processing, such as implementing the method in the embodiment shown in fig. 1, by executing programs stored in the memory 28.
The invention also discloses a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, carries out the method in the embodiment shown in fig. 1.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method, etc., can be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. A category discrimination method based on artificial intelligence is characterized by comprising the following steps:
acquiring a text object to be processed;
acquiring a word sequence formed by word segmentation results of the text object;
and inputting the word sequence into a discrimination model obtained by pre-training, so as to obtain probabilities, output by the discrimination model, that the text object respectively belongs to different preset categories.
2. The method of claim 1,
the discrimination model comprises: a neural network model.
3. The method of claim 1,
the discrimination model comprises: an input layer, a hidden layer and an output layer;
the input layer respectively generates a feature vector corresponding to each word segmentation result;
mapping the feature vector to the hidden layer through linear transformation to obtain a hidden layer vector;
and the output layer generates an output result based on a Huffman tree according to the hidden layer vector.
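Claim 3's output layer, which "generates an output result based on a Huffman tree", corresponds to hierarchical prediction over a Huffman coding of the output symbols: frequent symbols sit near the root and receive short codes, so prediction requires fewer binary decisions on average. A sketch of how such a tree's binary codes can be built from symbol frequencies (the category names and counts below are invented for illustration):

```python
import heapq

def huffman_codes(freqs):
    """Build the binary codes of a Huffman tree over output symbols.
    More frequent symbols end up with shorter codes."""
    # Each heap entry: [total_frequency, [symbol, code], [symbol, code], ...]
    heap = [[f, [sym, ""]] for sym, f in sorted(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least-frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]  # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]  # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Illustrative category frequencies, not taken from the patent.
codes = huffman_codes({"person": 50, "place": 30, "film": 15, "other": 5})
```

In a hierarchical-softmax output layer, each bit of a symbol's code corresponds to one binary decision made from the hidden layer vector along the path from the root to that symbol's leaf.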
4. The method of claim 3,
and the feature vector corresponding to each word segmentation result is respectively composed of the word segmentation result, the n-gram segmentation result of the word segmentation result and the sub-character information of the word segmentation result.
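The feature composition of claim 4 (the word segmentation result itself, its n-grams, and its sub-character information) resembles subword-enriched bag-of-n-gram features. Below is a sketch of how such features might be extracted; the boundary markers `<` and `>` and the n-gram ranges are assumptions borrowed from common subword models, not details specified by the claim.

```python
def char_ngrams(word, n_min=2, n_max=3):
    # Sub-character / character n-gram features for one word.
    # "<" and ">" mark word boundaries (an assumed convention).
    padded = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams

def word_ngrams(words, n=2):
    # Word-level n-grams over the segmented word sequence.
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def features(words):
    # Feature set: the words themselves, their word n-grams,
    # and each word's character n-grams.
    feats = list(words) + word_ngrams(words)
    for w in words:
        feats += char_ngrams(w)
    return feats
```

Each such feature would then index its own vector in the input layer, and the vectors would be combined into the feature vector of the word segmentation result.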
5. The method of claim 1,
the text object includes: text segments or signal words.
6. The method of claim 5,
training to obtain the discrimination model comprises the following steps:
extracting key text fragments and high-frequency signal words from a preset data source;
constructing training samples according to the extracted content, wherein each training sample comprises: a text segment or signal word, and a category to which it belongs;
and training by using the training sample to obtain the discrimination model.
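The sample-construction steps of claim 6 can be sketched as follows; the record layout of the data source and the frequency threshold defining "high-frequency" signal words are assumptions made purely for illustration:

```python
def build_training_samples(data_source):
    # Construct (text, category) pairs from key text fragments and
    # high-frequency signal words. Record layout and the threshold
    # below are illustrative assumptions, not from the patent.
    samples = []
    for record in data_source:
        for fragment in record.get("key_fragments", []):
            samples.append((fragment, record["category"]))
        for word, freq in record.get("signal_words", {}).items():
            if freq >= 5:  # assumed "high-frequency" cutoff
                samples.append((word, record["category"]))
    return samples

# A mock data source record with an invented category and contents.
source = [{"category": "film",
           "key_fragments": ["directed the feature"],
           "signal_words": {"premiere": 9, "scene": 2}}]
samples = build_training_samples(source)
```

The resulting labeled pairs would then be fed to the training procedure to obtain the discrimination model.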
7. The method of claim 6,
the predetermined data source comprises one or any combination of the following: a knowledge base, a web page base, and a query log (querylog).
8. A category discrimination apparatus based on artificial intelligence, characterized by comprising: a text acquisition unit, a word sequence acquisition unit and a category discrimination unit;
the text acquisition unit is used for acquiring a text object to be processed;
the word sequence acquisition unit is used for acquiring a word sequence formed by word segmentation results of the text object;
and the category discrimination unit is used for inputting the word sequence into a discrimination model obtained by pre-training, so as to obtain probabilities, output by the discrimination model, that the text object respectively belongs to different preset categories.
9. The apparatus of claim 8,
the discrimination model comprises: a neural network model.
10. The apparatus of claim 8,
the discrimination model comprises: an input layer, a hidden layer and an output layer;
the input layer respectively generates a feature vector corresponding to each word segmentation result;
mapping the feature vector to the hidden layer through linear transformation to obtain a hidden layer vector;
and the output layer generates an output result based on a Huffman tree according to the hidden layer vector.
11. The apparatus of claim 10,
and the feature vector corresponding to each word segmentation result is respectively composed of the word segmentation result, the n-gram segmentation result of the word segmentation result and the sub-character information of the word segmentation result.
12. The apparatus of claim 8,
the text object includes: text segments or signal words.
13. The apparatus of claim 12,
the device further comprises: a model training unit;
the model training unit is used for extracting key text fragments and high-frequency signal words from a preset data source, and constructing training samples according to the extracted contents, wherein each training sample comprises: and training a text segment or a signal word and the category to which the text segment or the signal word belongs by using the training sample to obtain the discriminant model.
14. The apparatus of claim 13,
the predetermined data source comprises one or any combination of the following: a knowledge base, a web page base, and a query log (querylog).
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any one of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201810049997.8A 2018-01-18 2018-01-18 Classification method of discrimination, device and storage medium based on artificial intelligence Pending CN108415897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810049997.8A CN108415897A (en) 2018-01-18 2018-01-18 Classification method of discrimination, device and storage medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810049997.8A CN108415897A (en) 2018-01-18 2018-01-18 Classification method of discrimination, device and storage medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN108415897A (en) 2018-08-17

Family

ID=63126047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810049997.8A Pending CN108415897A (en) 2018-01-18 2018-01-18 Classification method of discrimination, device and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN108415897A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740642A (en) * 2018-12-19 2019-05-10 北京邮电大学 Invoice category recognition methods, device, electronic equipment and readable storage medium storing program for executing
CN109753556A (en) * 2018-12-24 2019-05-14 出门问问信息科技有限公司 A kind of query categories estimation method, device, equipment and storage medium
CN109800438A (en) * 2019-02-01 2019-05-24 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109818954A (en) * 2019-01-22 2019-05-28 深信服科技股份有限公司 Web injection type attack detection method, device, electronic equipment and storage medium
CN110390107A (en) * 2019-07-26 2019-10-29 腾讯科技(深圳)有限公司 Hereafter relationship detection method, device and computer equipment based on artificial intelligence
CN110428891A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 A kind of processing method, device and the equipment of medical intention
CN110991164A (en) * 2018-09-28 2020-04-10 北京国双科技有限公司 Legal document processing method and device
CN111178531A (en) * 2018-11-09 2020-05-19 百度在线网络技术(北京)有限公司 Relational reasoning and relational reasoning model acquisition method, device and storage medium
JP2020091846A (en) * 2018-10-19 2020-06-11 タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited Systems and methods for conversation-based ticket logging
CN111274383A (en) * 2018-12-05 2020-06-12 北京京东尚科信息技术有限公司 Method and device for classifying objects applied to quotation
CN112395414A (en) * 2019-08-16 2021-02-23 北京地平线机器人技术研发有限公司 Text classification method and training method, device, medium and equipment of classification model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194013A (en) * 2011-06-23 2011-09-21 上海毕佳数据有限公司 Domain-knowledge-based short text classification method and text classification system
CN106095928A (en) * 2016-06-12 2016-11-09 国家计算机网络与信息安全管理中心 A kind of event type recognition methods and device
CN106326346A (en) * 2016-08-06 2017-01-11 上海高欣计算机系统有限公司 Text classification method and terminal device
WO2017090051A1 (en) * 2015-11-27 2017-06-01 Giridhari Devanathan A method for text classification and feature selection using class vectors and the system thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194013A (en) * 2011-06-23 2011-09-21 上海毕佳数据有限公司 Domain-knowledge-based short text classification method and text classification system
WO2017090051A1 (en) * 2015-11-27 2017-06-01 Giridhari Devanathan A method for text classification and feature selection using class vectors and the system thereof
CN106095928A (en) * 2016-06-12 2016-11-09 国家计算机网络与信息安全管理中心 A kind of event type recognition methods and device
CN106326346A (en) * 2016-08-06 2017-01-11 上海高欣计算机系统有限公司 Text classification method and terminal device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARMAND JOULIN et al.: "Bag of Tricks for Efficient Text Classification", https://arxiv.org/abs/1607.01759 *
BOJANOWSKI et al.: "Enriching Word Vectors with Subword Information", https://arxiv.org/abs/1607.04606 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991164B (en) * 2018-09-28 2023-04-07 北京国双科技有限公司 Legal document processing method and device
CN110991164A (en) * 2018-09-28 2020-04-10 北京国双科技有限公司 Legal document processing method and device
JP7372812B2 (en) 2018-10-19 2023-11-01 タタ コンサルタンシー サービシズ リミテッド System and method for conversation-based ticket logging
JP2020091846A (en) * 2018-10-19 2020-06-11 タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited Systems and methods for conversation-based ticket logging
CN111178531B (en) * 2018-11-09 2023-09-22 百度在线网络技术(北京)有限公司 Method, device and storage medium for acquiring relationship reasoning and relationship reasoning model
CN111178531A (en) * 2018-11-09 2020-05-19 百度在线网络技术(北京)有限公司 Relational reasoning and relational reasoning model acquisition method, device and storage medium
CN111274383A (en) * 2018-12-05 2020-06-12 北京京东尚科信息技术有限公司 Method and device for classifying objects applied to quotation
CN111274383B (en) * 2018-12-05 2023-11-07 北京京东振世信息技术有限公司 Object classifying method and device applied to quotation
CN109740642A (en) * 2018-12-19 2019-05-10 北京邮电大学 Invoice category recognition methods, device, electronic equipment and readable storage medium storing program for executing
CN109753556A (en) * 2018-12-24 2019-05-14 出门问问信息科技有限公司 A kind of query categories estimation method, device, equipment and storage medium
CN109818954B (en) * 2019-01-22 2021-08-13 深信服科技股份有限公司 Web injection type attack detection method and device, electronic equipment and storage medium
CN109818954A (en) * 2019-01-22 2019-05-28 深信服科技股份有限公司 Web injection type attack detection method, device, electronic equipment and storage medium
CN109800438B (en) * 2019-02-01 2020-03-31 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109800438A (en) * 2019-02-01 2019-05-24 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN110390107B (en) * 2019-07-26 2023-04-18 腾讯科技(深圳)有限公司 Context relation detection method and device based on artificial intelligence and computer equipment
CN110390107A (en) * 2019-07-26 2019-10-29 腾讯科技(深圳)有限公司 Hereafter relationship detection method, device and computer equipment based on artificial intelligence
CN110428891A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 A kind of processing method, device and the equipment of medical intention
CN112395414A (en) * 2019-08-16 2021-02-23 北京地平线机器人技术研发有限公司 Text classification method and training method, device, medium and equipment of classification model
CN112395414B (en) * 2019-08-16 2024-06-04 北京地平线机器人技术研发有限公司 Text classification method, training method of classification model, training device of classification model, medium and training equipment

Similar Documents

Publication Publication Date Title
CN108415897A (en) Classification method of discrimination, device and storage medium based on artificial intelligence
US11216504B2 (en) Document recommendation method and device based on semantic tag
CN109657054B (en) Abstract generation method, device, server and storage medium
CN110245259B (en) Video labeling method and device based on knowledge graph and computer readable medium
CN108460011B (en) Entity concept labeling method and system
KR101754473B1 (en) Method and system for automatically summarizing documents to images and providing the image-based contents
CN113051356B (en) Open relation extraction method and device, electronic equipment and storage medium
JP6838092B2 (en) Text paraphrase method, device, server, and storage medium
CN114861600B (en) NER-oriented Chinese clinical text data enhancement method and device
CN110377902B (en) Training method and device for descriptive text generation model
CN113434636B (en) Semantic-based approximate text searching method, semantic-based approximate text searching device, computer equipment and medium
CN112633017B (en) Translation model training method, translation processing method, translation model training device, translation processing equipment and storage medium
CN110597961A (en) Text category labeling method and device, electronic equipment and storage medium
CN110222328B (en) Method, device and equipment for labeling participles and parts of speech based on neural network and storage medium
CN110362823A (en) The training method and device of text generation model are described
CN112613293B (en) Digest generation method, digest generation device, electronic equipment and storage medium
CN107861948B (en) Label extraction method, device, equipment and medium
CN109271641A (en) A kind of Text similarity computing method, apparatus and electronic equipment
CN113590810B (en) Abstract generation model training method, abstract generation device and electronic equipment
CN110941958A (en) Text category labeling method and device, electronic equipment and storage medium
Bao et al. Contextualized rewriting for text summarization
Sun et al. Study on medical image report generation based on improved encoding-decoding method
CN112231468A (en) Information generation method and device, electronic equipment and storage medium
WO2021012958A1 (en) Original text screening method, apparatus, device and computer-readable storage medium
CN112949293B (en) Similar text generation method, similar text generation device and intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination