WO2018040343A1 - Method, apparatus and device for identifying text type (用于识别文本类型的方法、装置和设备) - Google Patents

Info

Publication number
WO2018040343A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
probability value
keyword
positive class
belongs
Application number
PCT/CN2016/108421
Other languages
English (en)
French (fr)
Inventor
岳爱珍
崔燕
赵辉
高显
王私江
谭静
Original Assignee
百度在线网络技术(北京)有限公司
Application filed by 百度在线网络技术(北京)有限公司
Priority to JP2018553944A (patent JP6661790B2)
Publication of WO2018040343A1
Priority to US16/160,950 (patent US11281860B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G06F40/279 Recognition of textual entities
    • G06F40/30 Semantic analysis

Definitions

  • The present application relates to the field of computer technologies, in particular to the field of Internet technologies, and more particularly to a method, apparatus, and device for identifying a text type.
  • Identifying a text type means assigning a document to a category according to a set of predefined types.
  • The field of application for recognizing text types is very broad; for example, it can be applied to web page classification, to search engines that need to recognize users' input text, and to the classification of users' original content.
  • the purpose of the present application is to propose an improved method and apparatus for identifying text types to solve the technical problems mentioned in the background section above.
  • The present application provides a method for identifying a text type, the method comprising: preprocessing a pre-acquired text to obtain a keyword set of the text; calculating an occurrence probability value, in the text, of each keyword in the keyword set; for each keyword in the keyword set, importing the keyword and the occurrence probability value corresponding to the keyword into a pre-established document theme generation model, and determining an occurrence probability value, in the text, of each topic preset in the document theme generation model, wherein the document theme generation model is used to represent the correspondence between the occurrence probability values of the words in the text, the occurrence probability values of the words in the respective topics, and the occurrence probability values of the respective topics in the text; and identifying the type to which the text belongs according to the occurrence probability values of the respective topics in the text.
  • The present application provides an apparatus for identifying a text type, the apparatus comprising: a preprocessing module configured to preprocess a pre-acquired text to obtain a keyword set of the text; a calculation module configured to calculate an occurrence probability value, in the text, of each keyword in the keyword set; a determining module configured to import, for each keyword in the keyword set, the keyword and the occurrence probability value corresponding to the keyword into a pre-established document theme generation model, and to determine an occurrence probability value, in the text, of each topic preset in the document theme generation model, wherein the document theme generation model is used to represent the correspondence between the occurrence probability values of the words in the text, the pre-determined occurrence probability values of the words in the respective topics, and the occurrence probability values of the respective topics in the text; and an identification module configured to identify the type to which the text belongs according to the occurrence probability values of the respective topics in the text.
  • The present application provides a device comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above method.
  • The present application provides a non-volatile computer storage medium storing computer readable instructions executable by a processor; when the computer readable instructions are executed by a processor, the processor performs the above method.
  • The method, apparatus, and device for identifying a text type provided by the present application first extract the keyword set of the text, then calculate the occurrence probability value of each keyword in the keyword set, then use a pre-established document theme generation model to derive the occurrence probability values of the respective topics in the text from the occurrence probability values of the words in the text and the pre-trained occurrence probability values of the words in the respective topics, and finally identify the type to which the text belongs according to the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the type of the text.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flow diagram of one embodiment of a method for identifying a text type in accordance with the present application;
  • FIG. 3 is a schematic diagram of an application scenario of a method for identifying a text type according to the present application;
  • FIG. 4 is a flow chart of still another embodiment of a method for identifying a text type in accordance with the present application;
  • FIG. 5 is a block diagram of an embodiment of an apparatus for identifying a text type according to the present application;
  • FIG. 6 is a block diagram of a computer system suitable for implementing the server of an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 of an embodiment of a method for identifying a text type or a device for identifying a text type to which the present application may be applied.
  • system architecture 100 can include terminal devices 101, 102, 103, network 104, and server 105.
  • the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105.
  • Network 104 may include various types of connections, such as wired, wireless communication links, fiber optic cables, and the like.
  • the user can interact with the server 105 over the network 104 using the terminal devices 101, 102, 103 to receive or transmit messages and the like.
  • Various communication client applications such as a review application, a web browser application, a shopping application, a search application, an instant communication tool, a mailbox client, a social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
  • The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for comment pages displayed on the terminal devices 101, 102, 103.
  • the background server may analyze and process data such as received text, and feed back the processing result (for example, the classification to which the text belongs) to the terminal device.
  • the method for identifying the text type provided by the embodiment of the present application is generally performed by the server 105. Accordingly, the device for identifying the text type is generally disposed in the server 105.
  • The numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on the implementation needs, there can be any number of terminal devices, networks, and servers.
  • a flow 200 of one embodiment of a method for identifying text types in accordance with the present application is illustrated.
  • the above method for identifying a text type includes the following steps:
  • Step 201 Pre-process the pre-acquired text to obtain a keyword set of the text.
  • An electronic device (for example, the server shown in FIG. 1) on which the method for identifying a text type runs may first obtain the text from the terminal device, then preprocess the text, and finally obtain the keyword set of the text.
  • the above electronic device can also acquire text from a storage device in which text is stored in advance.
  • The text may be search text entered by a user in the search box of a search application, comment text published by a user on a news webpage in a web browsing application, evaluation text published by a user in a shopping application, or comment text posted by a user about a merchant, website, service, or the like in a review application.
  • In some implementations, preprocessing the text may include the following steps: removing the special symbols in the text; segmenting the text into words after the special symbols are removed, to obtain a word set; and removing the stop words in the word set to obtain the keyword set.
  • the special symbols in the text can be punctuation, URL links, numbers, and the like.
  • The granularity of the word segmentation can be chosen as the basic granularity; how to segment the text is well known to those skilled in the art, and details are not described herein.
  • the stop words may be manually defined and stored in advance in the set of stop words. For example, modal particles, conjunctions, and the like may be defined as stop words.
  • In some implementations, preprocessing the text may include: removing the special symbols in the text; segmenting the text into words after the special symbols are removed, to obtain a word set; removing the stop words in the word set to obtain an initial keyword set; calculating the term frequency-inverse document frequency (TF-IDF) of each initial keyword in the initial keyword set; and taking each initial keyword whose TF-IDF is greater than a predetermined threshold as a keyword of the text, to generate the keyword set.
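The TF-IDF preprocessing variant above can be sketched as follows. This is a minimal sketch, not the patented implementation: the special-symbol pattern, the stop-word list, the background corpus, and the whitespace split (standing in for a real word segmenter) are all illustrative assumptions.

```python
import math
import re

STOP_WORDS = {"the", "a", "is", "to", "and"}  # illustrative stop-word list

def keyword_set(text, corpus, tfidf_threshold=0.0):
    """Preprocess text into a keyword set, following the TF-IDF variant."""
    # 1. Remove special symbols (URL links, punctuation, digits).
    text = re.sub(r"https?://\S+|[^\w\s]|\d", " ", text)
    # 2. Segment the text into words (whitespace split stands in for a segmenter).
    words = text.lower().split()
    # 3. Remove stop words to get the initial keyword set.
    initial = [w for w in words if w not in STOP_WORDS]
    # 4. Keep the initial keywords whose TF-IDF exceeds the threshold.
    n_docs = len(corpus)
    result = set()
    for w in set(initial):
        tf = initial.count(w) / len(initial)
        df = sum(1 for doc in corpus if w in doc)  # document frequency
        idf = math.log(n_docs / (1 + df))
        if tf * idf > tfidf_threshold:
            result.add(w)
    return result
```

Raising `tfidf_threshold` above zero would additionally discard keywords that are common across the background corpus.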
  • Step 202 Calculate an occurrence probability value of each keyword in the keyword set in the text.
  • the electronic device may calculate an occurrence probability value of each keyword in the keyword set in the text.
  • the occurrence probability value of the keyword may be a ratio of the number of occurrences of the keyword in the text to the total number of words of the text.
  • the occurrence probability value of the keyword may be a ratio of the number of occurrences of the keyword in the text to the number of keywords in the keyword set of the text.
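Either ratio in step 202 is a one-line computation; the sketch below (the function name is hypothetical) computes the first variant, the number of occurrences of the keyword over the total number of words of the text.

```python
def occurrence_probabilities(words, keywords):
    """Map each keyword to its occurrence probability value in the text:
    occurrences of the keyword / total number of words of the text."""
    total = len(words)
    return {kw: words.count(kw) / total for kw in keywords}
```

For the second variant, the denominator would instead be the number of keywords in the keyword set of the text.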
  • Step 203 Import, for each keyword in the keyword set, the keyword and the appearance probability value corresponding to the keyword into a pre-established document theme generation model, and determine that each theme preset in the document theme generation model is in the text. The probability value of occurrence in .
  • In this embodiment, for each keyword in the keyword set, the electronic device may import the keyword and the occurrence probability value corresponding to the keyword into the pre-established document theme generation model, and determine the occurrence probability value, in the text, of each topic preset in the document theme generation model.
  • The document theme generation model is used to represent the correspondence between the occurrence probability values of the words in the text, the occurrence probability values of the words in the respective topics, and the occurrence probability values of the respective topics in the text, namely:

  p(x | z) = p(x | y_1) * p(y_1 | z) + p(x | y_2) * p(y_2 | z) + ... + p(x | y_n) * p(y_n | z)

  where x represents a word, y_i represents the i-th topic, z represents a document, p(x | z) represents the occurrence probability value of the word in the document, p(x | y_i) represents the occurrence probability value of the word in the i-th topic, p(y_i | z) represents the occurrence probability value of the i-th topic in the document, * represents multiplication, and n represents the number of topics involved in the document.
  • For example, the occurrence probability value of word A in a document to be generated can be obtained by the following process: the document to be generated may involve three topics, namely topic A, topic B, and topic C. To generate this document, the probability values of selecting these three topics are P(topic A | document) = 50%, P(topic B | document) = 30%, and P(topic C | document) = 20%. The occurrence probability values of word A in each topic are P(word A | topic A) = 20%, P(word A | topic B) = 10%, and P(word A | topic C) = 0%. It can be derived that the occurrence probability value of word A in this to-be-generated document is P(word A | document) = P(topic A | document) * P(word A | topic A) + P(topic B | document) * P(word A | topic B) + P(topic C | document) * P(word A | topic C) = 50% * 20% + 30% * 10% + 20% * 0% = 13%.
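The arithmetic of the example above can be checked directly; the topic mixture and per-topic word probabilities below are the figures given in the text.

```python
# P(topic | document) and P(word A | topic) from the worked example.
p_topic_given_doc = {"A": 0.50, "B": 0.30, "C": 0.20}
p_word_given_topic = {"A": 0.20, "B": 0.10, "C": 0.00}

# p(x | z) = sum over i of p(x | y_i) * p(y_i | z)
p_word_given_doc = sum(
    p_word_given_topic[t] * p_topic_given_doc[t] for t in p_topic_given_doc
)
# 0.50 * 0.20 + 0.30 * 0.10 + 0.20 * 0.00 = 0.13
```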
  • The document theme generation model is established from a large number of documents; through training, the occurrence probability value p(x | y_i) of each word in each topic is inferred.
  • In some implementations, the document theme generation model may be established as follows: first, a large number of documents collected in advance are used as a training set; then, the number of topics is determined using the perplexity; finally, the documents in the training set are used to train and generate the document theme generation model.
  • As an example, the documents in the training set can be used to train and establish the document theme generation model by Probabilistic Latent Semantic Analysis (PLSA) or Latent Dirichlet Allocation (LDA).
  • The occurrence probability value p(y_i | z) of each topic in the text is obtained in step 203 from the occurrence probability values p(x | z) of the keywords in the text and the pre-trained occurrence probability values p(x | y_i) of the words in the respective topics.
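Step 203 can be read as "folding in" a new document: with the topic-word probabilities p(x | y_i) fixed from training, the topic mixture p(y_i | z) is estimated from the keyword occurrence probabilities. The sketch below uses a few PLSA-style EM iterations for this, which is one common choice rather than necessarily the one intended by the application; the function name and the toy topic-word table in the usage note are illustrative.

```python
def infer_topic_mixture(word_probs, topic_word, n_iters=50):
    """Estimate p(topic | doc) given p(word | doc) and fixed p(word | topic),
    using EM iterations (PLSA-style fold-in)."""
    topics = list(topic_word)
    # Start from a uniform topic mixture.
    p_topic = {t: 1.0 / len(topics) for t in topics}
    for _ in range(n_iters):
        new = {t: 0.0 for t in topics}
        for word, p_w in word_probs.items():
            # E-step: responsibility q(topic | word) proportional to
            # p(word | topic) * p(topic | doc).
            denom = sum(topic_word[t].get(word, 0.0) * p_topic[t] for t in topics)
            if denom == 0.0:
                continue
            for t in topics:
                q = topic_word[t].get(word, 0.0) * p_topic[t] / denom
                # M-step accumulation, weighted by the word's probability in the doc.
                new[t] += p_w * q
        total = sum(new.values())
        p_topic = {t: v / total for t, v in new.items()}
    return p_topic
```

For instance, with `topic_word = {"sports": {"game": 0.5, "ball": 0.5}, "finance": {"stock": 0.6, "bank": 0.4}}` and keyword probabilities concentrated on "game" and "ball", the estimated mixture concentrates on the sports topic.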
  • Step 204 Identify the type to which the text belongs according to the appearance probability value of each topic in the text.
  • In this embodiment, the electronic device may identify the type to which the text belongs according to the occurrence probability values of the respective topics in the text.
  • The type of the text may be one of multiple types, for example, a first class, a second class, and a third class.
  • The type of the text may also be one of two types, such as a positive class and a negative class.
  • In some implementations, the topics may be divided into types in advance, wherein the types of the topics are consistent with the types of the text. First, the occurrence probability values of the respective topics in the text are sorted from large to small; then, the type of the topic with the largest occurrence probability value is determined as the type to which the text belongs.
  • As an example, the type of the text is one of a plurality of types, such as a literature class, a sports class, and a finance class.
  • The ten preset topics may be divided in advance into the literature, sports, and finance classes; after step 203, the occurrence probability values of the ten topics in the text are obtained, and the ten occurrence probability values are sorted from large to small.
  • The type of the topic with the largest occurrence probability value is determined as the type to which the text belongs. For example, if the type of the topic with the largest occurrence probability value is the sports class, the sports class is determined as the type of the text; that is, the text is of the sports class.
  • As another example, the type of the text is one of two types, such as a positive class and a negative class.
  • The ten preset topics may be classified in advance into the positive class or the negative class.
  • After step 203, the occurrence probability values of the ten topics in the text are obtained, and the ten occurrence probability values are sorted from large to small.
  • The type of the topic with the largest occurrence probability value is determined as the type to which the text belongs. For example, if the type of the topic with the largest occurrence probability value is the positive class, the positive class is determined as the type of the text; that is, the text is of the positive class.
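The selection rule used in both examples, taking the type of the topic with the largest occurrence probability value, can be sketched as follows; the topic names and the topic-to-type mapping are illustrative assumptions.

```python
def identify_type(topic_probs, topic_types):
    """Return the type of the topic with the largest occurrence probability value."""
    best_topic = max(topic_probs, key=topic_probs.get)
    return topic_types[best_topic]
```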
  • FIG. 3 is a schematic diagram of an application scenario of a method for identifying a text type according to the present embodiment.
  • In the application scenario of FIG. 3, the user first enters the text "mobile phone price reduction, speed to purchase, website xx". The background server can then obtain the text, preprocess it, and obtain the keyword set "mobile phone", "price reduction", "purchase", "website". Next, the background server calculates the occurrence probability value of each keyword in the text. The background server can then import each keyword and the occurrence probability value corresponding to the keyword into the pre-established document theme generation model to determine the occurrence probability value of each topic in the text.
  • For example, the occurrence probability value of the "mobile phone" topic in the text is 20%, and the occurrence probability value of the "advertisement" topic is 50%; finally, the type to which the text belongs is identified according to the occurrence probability values of the respective topics in the text.
  • Specifically, the topic with the largest occurrence probability value may be selected and its type taken as the type of the text; since the topic with the largest occurrence probability value for "mobile phone price reduction, speed to purchase, website xx" is the advertisement topic, "advertisement" can be used as the type to which the text belongs.
  • The method provided by the above embodiment of the present application first extracts the keyword set of the text, then calculates the occurrence probability value of each keyword in the keyword set, then uses the pre-established document theme generation model to derive the occurrence probability values of the respective topics in the text from the occurrence probability values of the words in the text and the pre-trained occurrence probability values of the words in the respective topics, and finally identifies the type to which the text belongs according to the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the type of the text.
  • As shown in FIG. 4, the flow 400 of the method for identifying a text type includes the following steps:
  • Step 401 Preprocess the pre-acquired text to obtain a keyword set of the text.
  • Step 402 Calculate an occurrence probability value of each keyword in the keyword set in the text.
  • Step 403 Import, for each keyword in the keyword set, the keyword and the appearance probability value corresponding to the keyword into a pre-established document theme generation model, and determine that each theme preset in the document theme generation model is in the text. The probability value of occurrence in .
  • Step 404 Import the appearance probability value of each topic in the text into the pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class.
  • the electronic device may import the appearance probability value of each topic in the text into the pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class.
  • the type of text can include positive and negative classes.
  • the first logistic regression model is used to represent a correspondence between an appearance probability value of each of the above topics in the text and a first probability value that the text belongs to a positive class.
  • the logistic regression algorithm on which the first logistic regression model is based is a classification algorithm.
  • the first logistic regression model can also be replaced with a model based on other classification algorithms.
  • Here, a logistic regression model is selected as the classifier. The logistic regression algorithm can analyze the occurrence probability values of the respective topics in the text, and the calculation is simple and fast. Combining the occurrence probability values of the respective topics in the text, obtained by the document theme generation model in step 403, with the first logistic regression model to identify the classification to which the text belongs can therefore improve both the accuracy and the efficiency of classification.
  • In some implementations, it is determined that the text belongs to the positive class in response to the first probability value being greater than a preset first threshold.
  • In some implementations, the first logistic regression model presets a corresponding first regression parameter value for each topic, wherein each first regression parameter value is used to represent the probability that the corresponding topic belongs to the positive class. First, the product of the occurrence probability value of each topic in the text and the first regression parameter value corresponding to that topic is calculated; then, the sum of these products is used as the independent variable of the logistic function, and the dependent variable of the logistic function is obtained as the first probability value that the text belongs to the positive class.
  • The logistic function itself is well known to those skilled in the art and will not be repeated here.
  • whether the text belongs to the positive class may be identified according to the first probability value.
  • In some implementations, in response to the first probability value being greater than a preset threshold, it is determined that the text belongs to the positive class; in response to the first probability value being less than the preset threshold, it is determined that the text belongs to the negative class.
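A minimal sketch of the computation described for step 404: a weighted sum of the topic occurrence probability values is passed through the logistic function to give the probability that the text belongs to the positive class. The function name is hypothetical, and any regression parameter values would in practice come from training, not be hand-chosen.

```python
import math

def positive_class_probability(topic_probs, regression_params):
    """Logistic regression over topic probabilities: the sum of
    p(topic | text) * parameter over all topics, passed through the
    logistic (sigmoid) function."""
    z = sum(topic_probs[t] * regression_params[t] for t in topic_probs)
    return 1.0 / (1.0 + math.exp(-z))
```

The result is then compared against the preset first threshold to decide whether the text belongs to the positive class.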
  • Step 405 In response to the first probability value being less than the preset first threshold, importing the appearance probability value of each topic in the text into the pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class.
  • In this embodiment, in response to the first probability value being less than the preset first threshold, the electronic device may import the occurrence probability values of the respective topics in the text into the pre-established second logistic regression model to obtain the second probability value that the text belongs to the positive class.
  • The second logistic regression model is used to represent the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class. The regression parameters of the second logistic regression model are different from those of the first logistic regression model, wherein the regression parameters are used to characterize the probability that each topic belongs to the positive class.
  • the logistic regression algorithm on which the second logistic regression model is based is a classification algorithm.
  • the second logistic regression model can also be replaced with a model based on other classification algorithms.
  • In some implementations, the second logistic regression model presets a corresponding second regression parameter value for each topic, wherein each second regression parameter value is used to represent the probability that the corresponding topic belongs to the positive class.
  • As in step 404, the sum of the products of the occurrence probability values of the topics in the text and the corresponding second regression parameter values is used as the independent variable of the logistic function, and the dependent variable of the logistic function is obtained as the second probability value that the text belongs to the positive class.
  • Step 406 Determine that the text belongs to the positive class in response to the second probability value being greater than a preset second threshold.
  • the electronic device may determine that the text belongs to the above-mentioned positive class in response to the second probability value determined in step 405 being greater than a preset second threshold.
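Steps 404 to 406 compose the two models into a cascade: the second logistic regression model is consulted only when the first probability value does not exceed the first threshold. A minimal sketch under the description above; the function names, parameter values, and thresholds are illustrative assumptions.

```python
import math

def sigmoid_score(topic_probs, params):
    """Weighted sum of topic probabilities passed through the logistic function."""
    z = sum(topic_probs[t] * params[t] for t in topic_probs)
    return 1.0 / (1.0 + math.exp(-z))

def classify_text(topic_probs, params1, params2, threshold1, threshold2):
    """Two-layer logistic regression cascade of steps 404-406."""
    if sigmoid_score(topic_probs, params1) > threshold1:
        return "positive"   # step 404: first model decides positive
    if sigmoid_score(topic_probs, params2) > threshold2:
        return "positive"   # steps 405-406: second model decides positive
    return "negative"
```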
  • Compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for identifying a text type in this embodiment highlights the step of operating on the occurrence probability values of the respective topics in the text using a two-layer logistic regression model, thereby improving the accuracy and efficiency of text type recognition.
  • As an implementation of the above method, the present application provides an embodiment of an apparatus for identifying a text type, and the apparatus embodiment corresponds to the method embodiment shown in FIG. 2.
  • the device can be specifically applied to various electronic devices.
  • the apparatus 500 for identifying a text type in the foregoing embodiment includes: a pre-processing module 501, a calculation module 502, a determination module 503, and an identification module 504.
  • The pre-processing module 501 is configured to preprocess the pre-acquired text to obtain the keyword set of the text; the calculation module 502 is configured to calculate the occurrence probability value, in the text, of each keyword in the keyword set; the determining module 503 is configured to import, for each keyword in the keyword set, the keyword and the occurrence probability value corresponding to the keyword into the pre-established document theme generation model, and to determine the occurrence probability value, in the text, of each topic preset in the document theme generation model; and the identification module 504 is configured to identify the type to which the text belongs according to the occurrence probability values of the respective topics in the text.
  • the pre-processing module 501 of the apparatus 500 for identifying a text type may first acquire text from the terminal device, then pre-process the text, and finally obtain a keyword set of the text.
  • the above electronic device can also acquire text from a storage device in which text is stored in advance.
  • The calculation module 502 calculates the occurrence probability value, in the text, of each keyword in the keyword set.
  • the determining module 503 may import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model for each keyword in the keyword set, and determine the document topic generation model. The probability value of occurrence of each topic preset in the above text.
  • the identification module 504 identifies the type to which the text belongs according to the appearance probability values in the above texts of the respective topics.
  • the foregoing type includes a positive class and a negative class
  • In some implementations, the identifying module 504 includes: a determining unit 5041 configured to import the occurrence probability values of the respective topics in the text into the pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class, wherein the first logistic regression model is used to represent the correspondence between the occurrence probability values of the respective topics in the text and the first probability value that the text belongs to the positive class; and an identifying unit 5042 configured to identify, according to the first probability value, whether the text belongs to the positive class.
  • In some implementations, the identifying unit is further configured to: in response to the first probability value being less than a preset first threshold, import the occurrence probability values of the respective topics in the text into the pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class, wherein the second logistic regression model is used to represent the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class, and the regression parameters of the second logistic regression model are different from those of the first logistic regression model; and determine, in response to the second probability value being greater than a preset second threshold, that the text belongs to the positive class.
  • the foregoing identifying module is further configured to: determine that the text belongs to the positive class in response to the first probability value being greater than a preset first threshold.
  • the identifying unit is further configured to: determine that the text belongs to the negative class in response to the second probability value being less than a preset second threshold.
  • In some implementations, the pre-processing module is further configured to: remove the special symbols in the text; segment the text into words after the special symbols are removed, to obtain a word set; and remove the stop words in the word set to obtain the keyword set.
  • The apparatus provided by the above embodiment of the present application first extracts the keyword set of the text through the pre-processing module 501; the calculation module 502 then calculates the occurrence probability value of each keyword in the keyword set; the determining module 503 then uses the pre-established document theme generation model to derive the occurrence probability values of the respective topics in the text from the occurrence probability values of the words in the text and the pre-trained occurrence probability values of the words in the respective topics; and finally the identification module 504 identifies the type to which the text belongs according to the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the type of the text.
  • Referring to FIG. 6, a block diagram of a computer system 600 suitable for implementing the server of an embodiment of the present application is shown.
  • The computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603.
  • In the RAM 603, various programs and data required for the operation of the system 600 are also stored.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also coupled to bus 604.
  • the following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, and a storage portion 608 including a hard disk or the like. And a communication portion 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet.
  • a drive 610 is also connected to the I/O interface 605 as needed.
  • a removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
  • embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are performed.
  • each block of the flowcharts or block diagrams can represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the blocks may also occur in an order different from that illustrated in the drawings. For example, two blocks represented in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented by software or by hardware.
  • the described units may also be provided in a processor; for example, a processor may be described as including a pre-processing module, a calculation module, a determining module, and an identification module.
  • the names of these units do not, in some cases, constitute a limitation on the units themselves.
  • for example, the pre-processing module may also be described as "a module that pre-processes pre-acquired text to obtain a keyword set of the text".
  • the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the foregoing embodiments, or may be a non-volatile computer storage medium that exists separately and is not assembled into a terminal.
  • the non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: pre-process pre-acquired text to obtain a keyword set of the text; calculate an occurrence probability value in the text of each keyword in the keyword set; for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model; and identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.

Abstract

Disclosed are a method, apparatus, and device for identifying a text type. A specific embodiment of the method comprises: pre-processing pre-acquired text to obtain a keyword set of the text (201); calculating an occurrence probability value in the text of each keyword in the keyword set (202); for each keyword in the keyword set, importing the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determining occurrence probability values in the text of respective topics preset in the document topic generation model (203); and identifying the type to which the text belongs based on the occurrence probability values of the respective topics in the text (204). The method improves the accuracy of identifying the text type.

Description

Method, Apparatus and Device for Identifying Text Type
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201610798213.2, filed on August 31, 2016 and entitled "Method, Apparatus and Device for Identifying Text Type", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technologies, specifically to the field of Internet technologies, and in particular to a method, apparatus, and device for identifying a text type.
Background
Identifying a text type, which may also be called text classification, is determining a category for a document according to pre-defined types. Text type identification has a wide range of applications; for example, it may be applied to classifying web pages, to search engines that need to recognize a user's input text, and to classifying users' original content.
However, existing approaches to identifying text types, such as the naive Bayes method and the support vector machine method, infer the type of a text only from the meanings of the words in the text. When words in the text are polysemous, or different words are synonymous, the accuracy of identifying the text type drops; these approaches therefore suffer from low accuracy in identifying text types.
Summary
An object of the present application is to propose an improved method and apparatus for identifying a text type, to solve the technical problems mentioned in the Background section above.
In a first aspect, the present application provides a method for identifying a text type, the method comprising: pre-processing pre-acquired text to obtain a keyword set of the text; calculating an occurrence probability value in the text of each keyword in the keyword set; for each keyword in the keyword set, importing the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determining occurrence probability values in the text of respective topics preset in the document topic generation model, wherein the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other; and identifying the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In a second aspect, the present application provides an apparatus for identifying a text type, the apparatus comprising: a pre-processing module configured to pre-process pre-acquired text to obtain a keyword set of the text; a calculation module configured to calculate an occurrence probability value in the text of each keyword in the keyword set; a determining module configured to, for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model, wherein the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other; and an identification module configured to identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In a third aspect, the present application provides a device, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described above.
In a fourth aspect, the present application provides a non-volatile computer storage medium storing computer-readable instructions executable by a processor; when the computer-readable instructions are executed by a processor, the processor performs the method described above.
The method, apparatus, and device for identifying a text type provided by the present application first extract the keyword set of a text, then calculate the occurrence probability value of each keyword in the keyword set, then use a pre-established document topic generation model to derive the occurrence probability values of the respective topics in the text from both the occurrence probability values of words in the text and the pre-derived occurrence probability values of those words in the respective topics, and finally identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the text type.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
FIG. 2 is a flowchart of an embodiment of a method for identifying a text type according to the present application;
FIG. 3 is a schematic diagram of an application scenario of the method for identifying a text type according to the present application;
FIG. 4 is a flowchart of another embodiment of the method for identifying a text type according to the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for identifying a text type according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a server according to embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other as long as there is no conflict. The present application will be described in detail below with reference to the drawings and in combination with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method or apparatus for identifying a text type of the present application can be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, etc. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as review applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on.
The server 105 may be a server that provides various services, for example a back-end server that supports the comment pages displayed on the terminal devices 101, 102, 103. The back-end server may analyze and otherwise process received data such as texts, and feed the processing result (for example, the category to which a text belongs) back to the terminal devices.
It should be noted that the method for identifying a text type provided by the embodiments of the present application is generally performed by the server 105; accordingly, the apparatus for identifying a text type is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to FIG. 2, a flow 200 of an embodiment of a method for identifying a text type according to the present application is shown. The method for identifying a text type includes the following steps:
Step 201: pre-process pre-acquired text to obtain a keyword set of the text.
In this embodiment, the electronic device on which the method for identifying a text type runs (for example, the server shown in FIG. 1) may first acquire a text from a terminal device, then pre-process the text, and finally obtain the keyword set of the text. Here, the electronic device may also acquire the text from a storage device in which texts are stored in advance.
In some optional implementations of this embodiment, the text may be a search text entered by a user in the search box of a search application, a comment text published by a user on a news page of a web browser application, an evaluation text published by a user on a product in a shopping application, or a review text published by a user on a merchant, website, service, or the like in a review application.
In some optional implementations of this embodiment, pre-processing the text may include the following steps: removing special symbols from the text; segmenting the text with the special symbols removed to obtain a set of words; and removing stop words from the set of words to obtain the keyword set. Here, the special symbols in the text may be punctuation marks, URL links, numbers, and the like. Here, the segmentation granularity may be the basic granularity; how to segment a text into words is well known to those skilled in the art and is not described again here. Here, stop words may be defined manually and stored in a stop word set in advance; for example, modal particles and conjunctions may be defined as stop words.
In some optional implementations of this embodiment, pre-processing the text may include the following steps: removing special symbols from the text; segmenting the text with the special symbols removed to obtain a set of words; removing stop words from the set of words to obtain an initial keyword set; calculating the term frequency-inverse document frequency (TF-IDF) of each initial keyword in the initial keyword set; and selecting the initial keywords whose TF-IDF is greater than a predetermined threshold as the keywords of the text, and generating the keyword set.
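The optional TF-IDF preprocessing path described above can be illustrated with a minimal sketch. This is not part of the disclosed method itself: the whitespace split stands in for a real word segmenter, and the smoothed IDF formula and threshold value are assumptions of this sketch only.

```python
import math
import re

def extract_keywords(text, stopwords, corpus, threshold=0.4):
    """Sketch: strip special symbols, segment into words, drop stop words,
    keep words whose TF-IDF exceeds a threshold. `corpus` is a background
    collection of documents (as sets of words) used only for the IDF term."""
    # Special symbols here: URL links, digits, punctuation.
    cleaned = re.sub(r"https?://\S+|[\d\W_]+", " ", text)
    words = [w for w in cleaned.lower().split() if w not in stopwords]
    keywords = set()
    n_docs = len(corpus)
    for w in set(words):
        tf = words.count(w) / len(words)
        # Smoothed IDF variant (an assumption; the text does not fix a formula).
        idf = math.log((1 + n_docs) / (1 + sum(1 for d in corpus if w in d))) + 1
        if tf * idf > threshold:
            keywords.add(w)
    return keywords
```

A segmenter with basic granularity (e.g. for Chinese text) would replace the whitespace split in practice.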
Step 202: calculate an occurrence probability value in the text of each keyword in the keyword set.
In this embodiment, the electronic device may calculate the occurrence probability value in the text of each keyword in the keyword set.
In some optional implementations of this embodiment, the occurrence probability value of a keyword may be the ratio of the number of occurrences of the keyword in the text to the total number of words in the text.
In some optional implementations of this embodiment, the occurrence probability value of a keyword may be the ratio of the number of occurrences of the keyword in the text to the number of keywords in the keyword set of the text.
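The first count-ratio variant above can be sketched as follows (the function name and inputs are hypothetical, for illustration only):

```python
def keyword_occurrence_probs(keywords, words):
    """Occurrence probability value of each keyword: its count in the
    text divided by the total number of words in the text."""
    return {k: words.count(k) / len(words) for k in set(keywords)}
```

The second variant would simply replace the denominator `len(words)` with the number of keywords in the keyword set.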
Step 203: for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model.
In this embodiment, for each keyword in the keyword set, the electronic device may import the keyword and the occurrence probability value corresponding to the keyword into the pre-established document topic generation model, and determine the occurrence probability values in the text of the respective topics preset in the document topic generation model.
In this embodiment, the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other.
Those skilled in the art will understand that the principle of the document topic generation model can be expressed by the following formula:
p(x|z) = Σ_{i=1}^{n} p(x|y_i) * p(y_i|z)
where x denotes a word, y_i denotes the i-th topic, z denotes a document, p(x|z) denotes the occurrence probability value of the word in the document, p(x|y_i) denotes the occurrence probability value of the word in the i-th topic, p(y_i|z) denotes the occurrence probability value of the i-th topic in the document, * denotes multiplication, and n denotes the number of topics included in the document.
As an example, when generating a document, the probability of choosing word A can be derived as follows: the document to be generated may involve three topics, topic I, topic II, and topic III; the probability values of selecting these three topics for the article are P(topic I|document) = 50%, P(topic II|document) = 30%, and P(topic III|document) = 20%; the occurrence probability values of word A in the respective topics are P(word A|topic I) = 20%, P(word A|topic II) = 10%, and P(word A|topic III) = 0%. The occurrence probability value of word A in the document to be generated is then P(word A|document) = P(topic I|document)*P(word A|topic I) + P(topic II|document)*P(word A|topic II) + P(topic III|document)*P(word A|topic III) = 50%*20% + 30%*10% + 20%*0% = 0.13.
For any given document, p(x|z) is known; the process of establishing the document topic generation model is to infer p(x|y_i) and p(y_i|z) through training on a large number of documents. As an example, the document topic generation model may be established as follows: first, a large number of pre-collected documents are used as a training set; then, the number of topics is determined using perplexity; and the documents in the training set are trained on to generate the document topic generation model. Those skilled in the art will understand that probabilistic latent semantic analysis (pLSA) or latent Dirichlet allocation (LDA) may be used to train on the documents in the training set to establish the document topic generation model.
In this embodiment, p(x|z) of the text is obtained through step 202, and the occurrence probability values p(x|y_i) of words in the respective topics are derived through pre-training; from p(x|z) and p(x|y_i), the occurrence probability values p(y_i|z) of the respective topics in the text can be determined.
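The forward relation of the document topic generation model, and the word-A example above, can be checked with a minimal sketch (names are illustrative; recovering p(y_i|z) in practice requires the training and inference step described above, which is not shown here):

```python
def word_prob_in_doc(topic_doc_probs, word_topic_probs):
    """Forward relation of the model:
    p(x|z) = sum over i of p(x|y_i) * p(y_i|z)."""
    return sum(word_topic_probs[t] * p for t, p in topic_doc_probs.items())
```

Plugging in the word-A example (topics I, II, III with document probabilities 0.5, 0.3, 0.2 and word probabilities 0.2, 0.1, 0.0) reproduces the value 0.13.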
Step 204: identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In this embodiment, the electronic device may identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In some optional implementations of this embodiment, there may be multiple text types, for example a first category, a second category, and a third category.
In some optional implementations of this embodiment, there may be two text types, for example a positive class and a negative class.
In some optional implementations of this embodiment, types may be assigned to the respective topics in advance, where the topic types correspond to the text types; the occurrence probability values of the respective topics in the text are first sorted in descending order, and the type of the topic with the largest occurrence probability value is then determined as the type to which the text belongs.
As an example, if there are multiple text types, such as arts, sports, and finance, the ten preset topics may be assigned in advance to the arts, sports, and finance types. After the occurrence probability values of the ten topics in the text are obtained in step 203, the ten occurrence probability values are sorted in descending order, and the type of the topic with the largest occurrence probability value is determined as the type to which the text belongs. For example, if the type of the topic with the largest occurrence probability value is sports, sports is determined as the type of the text, i.e., the text belongs to the sports type.
As an example, if there are two text types, such as the positive class and the negative class, the ten preset topics may be assigned in advance to the positive class or the negative class. After the occurrence probability values of the ten topics in the text are obtained in step 203, the ten occurrence probability values are sorted in descending order, and the type of the topic with the largest occurrence probability value is determined as the type to which the text belongs. For example, if the type of the topic with the largest occurrence probability value is the positive class, the positive class is determined as the type of the text, i.e., the text belongs to the positive class.
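The selection described in these examples reduces to an argmax over the topic occurrence probability values, which can be sketched as follows (topic names and type labels are hypothetical):

```python
def identify_type(topic_doc_probs, topic_types):
    """Determine the text type as the type assigned to the topic with the
    largest occurrence probability value (the descending sort in the text
    reduces to a single argmax)."""
    best_topic = max(topic_doc_probs, key=topic_doc_probs.get)
    return topic_types[best_topic]
```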
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for identifying a text type according to this embodiment. In the application scenario of FIG. 3, a user first enters the text "Phones on sale, buy now, URL xx". The back-end server may then acquire the text and pre-process it to obtain the keyword set "phone, sale, buy, URL". The back-end server then calculates the occurrence probability value of each keyword in the text, and may then import each keyword and its corresponding occurrence probability value into the pre-established document topic generation model to determine the occurrence probability values of the respective topics in the text; as an example, the occurrence probability value of the phone topic in the text is 20% and that of the advertisement topic is 50%. Finally, the type to which the text belongs is identified based on the occurrence probability values of the respective topics in the text; as an example, the topic with the highest occurrence probability value may be selected as the type of the text. For "Phones on sale, buy now, URL xx", the topic with the highest occurrence probability value is the advertisement topic, so "advertisement" may be taken as the type to which the text belongs.
The method provided by the above embodiment of the present application first extracts the keyword set of a text, then calculates the occurrence probability value of each keyword in the keyword set, then uses the pre-established document topic generation model to derive the occurrence probability values of the respective topics in the text from both the occurrence probability values of words in the text and the pre-derived occurrence probability values of those words in the respective topics, and finally identifies the type to which the text belongs based on the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the text type.
With further reference to FIG. 4, a flow 400 of another embodiment of the method for identifying a text type is shown. The flow 400 of the method for identifying a text type includes the following steps:
Step 401: pre-process pre-acquired text to obtain a keyword set of the text.
Step 402: calculate an occurrence probability value in the text of each keyword in the keyword set.
Step 403: for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model.
The operations of steps 401 to 403 are substantially the same as those of steps 201 to 203, respectively, and are not described again here.
Step 404: import the occurrence probability values of the respective topics in the text into a pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class.
In this embodiment, the electronic device may import the occurrence probability values of the respective topics in the text into the pre-established first logistic regression model to obtain the first probability value that the text belongs to the positive class. Here, the text types may include a positive class and a negative class.
In this embodiment, the first logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the first probability value that the text belongs to the positive class.
Those skilled in the art will understand that the logistic regression algorithm on which the first logistic regression model is based is a classification algorithm; in the present application, the first logistic regression model may also be replaced by a model based on another classification algorithm.
The logistic regression model is chosen as the classification algorithm in the present application because it can analyze the occurrence probability values of the respective topics in the text as a whole, and its computation is simple and fast; it can be combined with the occurrence probability values of the respective topics in the text obtained through the document topic generation model in step 403 to identify the category to which the text belongs. The combination of the document topic generation model and the first logistic regression model can improve both the accuracy and the efficiency of classification when performing binary classification of texts.
In some optional implementations of this embodiment, in response to the first probability value being greater than a preset first threshold, the text is determined to belong to the positive class.
In some optional implementations of this embodiment, a corresponding first regression parameter value is preset for each topic in the first logistic regression model, where each first regression parameter value is used to characterize the probability that the corresponding topic belongs to the positive class. First, the product of the occurrence probability value of each topic in the text and the regression parameter value corresponding to that topic is calculated; then the sum of the products is used as the independent variable of the logistic function, and the dependent variable of the logistic function is obtained as the first probability value that the text belongs to the positive class. The logistic function itself is well known to those skilled in the art and is not described again here.
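The weighted-sum-plus-logistic-function computation described above can be sketched as follows (the regression parameter values, bias, and inputs are illustrative; in practice the parameters are learned in advance):

```python
import math

def positive_class_prob(topic_probs, regression_params, bias=0.0):
    """The sum of each topic's occurrence probability value times its
    regression parameter value is the independent variable of the logistic
    function; the dependent variable is the probability of the positive class."""
    z = bias + sum(w * p for w, p in zip(regression_params, topic_probs))
    return 1.0 / (1.0 + math.exp(-z))
```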
In some optional implementations of this embodiment, whether the text belongs to the positive class may be identified based on the first probability value. As an example, in response to the first probability value being greater than a preset threshold, the text is determined to belong to the positive class; in response to the first probability value being less than the preset threshold, the text is determined to belong to the negative class.
Step 405: in response to the first probability value being less than the preset first threshold, import the occurrence probability values of the respective topics in the text into a pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class.
In this embodiment, in response to the first probability value being less than the preset first threshold, the electronic device may import the occurrence probability values of the respective topics in the text into the pre-established second logistic regression model to obtain the second probability value that the text belongs to the positive class.
In this embodiment, the second logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class, and the regression parameters of the second logistic regression model are different from those of the first logistic regression model, where the regression parameters are used to characterize the probability that each topic belongs to the positive class.
In this embodiment, using two different logistic regression models to set up a two-layer judgment mechanism can improve the accuracy of text category identification.
Those skilled in the art will understand that the logistic regression algorithm on which the second logistic regression model is based is a classification algorithm; in the present application, the second logistic regression model may also be replaced by a model based on another classification algorithm.
In some optional implementations of this embodiment, a corresponding second regression parameter value is preset for each topic in the second logistic regression model, where each second regression parameter value is used to characterize the probability that the corresponding topic belongs to the positive class.
In some optional implementations of this embodiment, the product of the occurrence probability value of each topic in the text and the regression parameter value corresponding to that topic is first calculated; then the sum of the products is used as the independent variable of the logistic function, and the dependent variable of the logistic function is obtained as the second probability value that the text belongs to the positive class. The logistic function itself is well known to those skilled in the art and is not described again here.
Step 406: in response to the second probability value being greater than a preset second threshold, determine that the text belongs to the positive class.
In this embodiment, in response to the second probability value determined in step 405 being greater than the preset second threshold, the electronic device may determine that the text belongs to the positive class.
In some optional implementations of this embodiment, in response to the second probability value being less than the preset second threshold, the text is determined to belong to the negative class.
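The two-layer judgment mechanism of steps 404 to 406 can be sketched as follows (the thresholds and the stand-in model callables are illustrative assumptions, not values fixed by the disclosure):

```python
def classify_two_stage(topic_probs, first_model, second_model,
                       first_threshold=0.9, second_threshold=0.5):
    """Two-layer judgment: accept the positive class if the first logistic
    regression model is confident enough; otherwise consult a second model
    with different regression parameters."""
    if first_model(topic_probs) > first_threshold:
        return "positive"
    second_prob = second_model(topic_probs)
    return "positive" if second_prob > second_threshold else "negative"
```

Here `first_model` and `second_model` would each be a trained logistic regression model mapping topic occurrence probability values to a positive-class probability.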
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for identifying a text type in this embodiment highlights the step of operating on the occurrence probability values of the respective topics in the text using a two-layer logistic regression model, thereby comprehensively improving the accuracy and efficiency of text type identification.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for identifying a text type. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for identifying a text type of this embodiment includes: a pre-processing module 501, a calculation module 502, a determining module 503, and an identification module 504. The pre-processing module 501 is configured to pre-process pre-acquired text to obtain a keyword set of the text; the calculation module 502 is configured to calculate an occurrence probability value in the text of each keyword in the keyword set; the determining module 503 is configured to, for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model, wherein the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other; and the identification module 504 is configured to identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In this embodiment, the pre-processing module 501 of the apparatus 500 for identifying a text type may first acquire a text from a terminal device, then pre-process the text, and finally obtain the keyword set of the text. Here, the text may also be acquired from a storage device in which texts are stored in advance.
In this embodiment, the calculation module 502 calculates the occurrence probability value in the text of each keyword in the keyword set.
In this embodiment, the determining module 503 may, for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into the pre-established document topic generation model, and determine the occurrence probability values in the text of the respective topics preset in the document topic generation model.
In this embodiment, the identification module 504 identifies the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
In some optional implementations of this embodiment, the types include a positive class and a negative class; and the identification module 504 includes: a determining unit 5041 configured to import the occurrence probability values of the respective topics in the text into a pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class, wherein the first logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the first probability value that the text belongs to the positive class; and an identification unit 5042 configured to identify, based on the first probability value, whether the text belongs to the positive class.
In some optional implementations of this embodiment, the identification unit is further configured to: in response to the first probability value being less than a preset first threshold, import the occurrence probability values of the respective topics in the text into a pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class, wherein the second logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class, and the regression parameters of the second logistic regression model are different from those of the first logistic regression model; and in response to the second probability value being greater than a preset second threshold, determine that the text belongs to the positive class.
In some optional implementations of this embodiment, the identification module is further configured to: in response to the first probability value being greater than the preset first threshold, determine that the text belongs to the positive class.
In some optional implementations of this embodiment, the identification unit is further configured to: in response to the second probability value being less than the preset second threshold, determine that the text belongs to the negative class.
In some optional implementations of this embodiment, the pre-processing module is further configured to: remove special symbols from the text; segment the text with the special symbols removed to obtain a set of words; and remove stop words from the set of words to obtain the keyword set.
The apparatus provided by the above embodiment of the present application first extracts the keyword set of a text through the pre-processing module 501; the calculation module 502 then calculates the occurrence probability value of each keyword in the keyword set; the determining module 503 then uses the pre-established document topic generation model to derive the occurrence probability values of the respective topics in the text from both the occurrence probability values of words in the text and the pre-derived occurrence probability values of those words in the respective topics; finally, the identification module 504 identifies the type to which the text belongs based on the occurrence probability values of the respective topics in the text, thereby improving the accuracy of identifying the text type.
Referring now to FIG. 6, a schematic structural diagram of a computer system 600 suitable for implementing a server according to embodiments of the present application is shown.
As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are performed.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks represented in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a pre-processing module, a calculation module, a determining module, and an identification module. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the pre-processing module may also be described as "a module that pre-processes pre-acquired text to obtain a keyword set of the text".
As another aspect, the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above embodiments, or may be a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: pre-process pre-acquired text to obtain a keyword set of the text; calculate an occurrence probability value in the text of each keyword in the keyword set; for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model; and identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example technical solutions formed by replacing the above features with the technical features having similar functions disclosed in (but not limited to) the present application.

Claims (14)

  1. A method for identifying a text type, characterized in that the method comprises:
    pre-processing pre-acquired text to obtain a keyword set of the text;
    calculating an occurrence probability value in the text of each keyword in the keyword set;
    for each keyword in the keyword set, importing the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determining occurrence probability values in the text of respective topics preset in the document topic generation model, wherein the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other; and
    identifying the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
  2. The method according to claim 1, characterized in that the types comprise a positive class and a negative class; and
    the identifying the type to which the text belongs based on the occurrence probability value in the text of each of the respective topics comprises:
    importing the occurrence probability values of the respective topics in the text into a pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class, wherein the first logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the first probability value that the text belongs to the positive class; and
    identifying, based on the first probability value, whether the text belongs to the positive class.
  3. The method according to claim 2, characterized in that the identifying, based on the first probability value, whether the text belongs to the positive class comprises:
    in response to the first probability value being less than a preset first threshold, importing the occurrence probability values of the respective topics in the text into a pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class, wherein the second logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class, and the regression parameters of the second logistic regression model are different from the regression parameters of the first logistic regression model, the regression parameters being used to characterize the probability that each topic belongs to the positive class; and
    in response to the second probability value being greater than a preset second threshold, determining that the text belongs to the positive class.
  4. The method according to claim 2, characterized in that the identifying, based on the first probability value, whether the text belongs to the positive class further comprises:
    in response to the first probability value being greater than the preset first threshold, determining that the text belongs to the positive class.
  5. The method according to claim 3, characterized in that the identifying, based on the first probability value, whether the text belongs to the positive class further comprises:
    in response to the second probability value being less than the preset second threshold, determining that the text belongs to the negative class.
  6. The method according to any one of claims 1-5, characterized in that the pre-processing pre-acquired text to obtain a keyword set of the text comprises:
    removing special symbols from the text;
    segmenting the text with the special symbols removed to obtain a set of words; and
    removing stop words from the set of words to obtain the keyword set.
  7. An apparatus for identifying a text type, characterized in that the apparatus comprises:
    a pre-processing module configured to pre-process pre-acquired text to obtain a keyword set of the text;
    a calculation module configured to calculate an occurrence probability value in the text of each keyword in the keyword set;
    a determining module configured to, for each keyword in the keyword set, import the keyword and the occurrence probability value corresponding to the keyword into a pre-established document topic generation model, and determine occurrence probability values in the text of respective topics preset in the document topic generation model, wherein the document topic generation model is used to characterize the correspondence between both the occurrence probability values of words in a text and the pre-derived occurrence probability values of the words in the respective topics, on the one hand, and the occurrence probability values of the respective topics in the text, on the other; and
    an identification module configured to identify the type to which the text belongs based on the occurrence probability values of the respective topics in the text.
  8. The apparatus according to claim 7, characterized in that the types comprise a positive class and a negative class; and
    the identification module comprises:
    a determining unit configured to import the occurrence probability values of the respective topics in the text into a pre-established first logistic regression model to obtain a first probability value that the text belongs to the positive class, wherein the first logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the first probability value that the text belongs to the positive class; and
    an identification unit configured to identify, based on the first probability value, whether the text belongs to the positive class.
  9. The apparatus according to claim 8, characterized in that the identification unit is further configured to:
    in response to the first probability value being less than a preset first threshold, import the occurrence probability values of the respective topics in the text into a pre-established second logistic regression model to obtain a second probability value that the text belongs to the positive class, wherein the second logistic regression model is used to characterize the correspondence between the occurrence probability values of the respective topics in the text and the second probability value that the text belongs to the positive class, and the regression parameters of the second logistic regression model are different from the regression parameters of the first logistic regression model, the regression parameters being used to characterize the probability that each topic belongs to the positive class; and
    in response to the second probability value being greater than a preset second threshold, determine that the text belongs to the positive class.
  10. The apparatus according to claim 8, characterized in that the identification module is further configured to:
    in response to the first probability value being greater than the preset first threshold, determine that the text belongs to the positive class.
  11. The apparatus according to claim 9, characterized in that the identification unit is further configured to:
    in response to the second probability value being less than the preset second threshold, determine that the text belongs to the negative class.
  12. The apparatus according to any one of claims 7-11, characterized in that the pre-processing module is further configured to:
    remove special symbols from the text;
    segment the text with the special symbols removed to obtain a set of words; and
    remove stop words from the set of words to obtain the keyword set.
  13. A device, characterized in that the device comprises:
    one or more processors; and
    a storage device for storing one or more programs,
    which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 6.
  14. A non-volatile computer storage medium, the computer storage medium storing computer-readable instructions executable by a processor, wherein when the computer-readable instructions are executed by a processor, the processor performs the method according to any one of claims 1 to 6.
PCT/CN2016/108421 2016-08-31 2016-12-02 Method, apparatus and device for identifying text type WO2018040343A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018553944A JP6661790B2 (ja) 2016-08-31 2016-12-02 Method, apparatus and device for identifying a text type
US16/160,950 US11281860B2 (en) 2016-08-31 2018-10-15 Method, apparatus and device for recognizing text type

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610798213.2A CN107797982B (zh) 2016-08-31 2016-08-31 Method, apparatus and device for identifying text type
CN201610798213.2 2016-08-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/160,950 Continuation US11281860B2 (en) 2016-08-31 2018-10-15 Method, apparatus and device for recognizing text type

Publications (1)

Publication Number Publication Date
WO2018040343A1 true WO2018040343A1 (zh) 2018-03-08

Family

ID=61299880

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108421 WO2018040343A1 (zh) 2016-08-31 2016-12-02 Method, apparatus and device for identifying text type

Country Status (4)

Country Link
US (1) US11281860B2 (zh)
JP (1) JP6661790B2 (zh)
CN (1) CN107797982B (zh)
WO (1) WO2018040343A1 (zh)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717519B (zh) * 2018-04-03 2021-02-19 北京捷通华声科技股份有限公司 一种文本分类方法及装置
US20210232870A1 (en) * 2018-04-27 2021-07-29 Aipore Inc. PU Classification Device, PU Classification Method, and Recording Medium
US11113466B1 (en) * 2019-02-28 2021-09-07 Intuit, Inc. Generating sentiment analysis of content
CN110728138A * 2019-09-25 2020-01-24 杜泽壮 Method, apparatus and storage medium for news text recognition
CN110717327B * 2019-09-29 2023-12-29 北京百度网讯科技有限公司 Title generation method and apparatus, electronic device, and storage medium
CN111414735B * 2020-03-11 2024-03-22 北京明略软件系统有限公司 Method and apparatus for generating text data
CN113449511B * 2020-03-24 2023-06-09 百度在线网络技术(北京)有限公司 Text processing method, apparatus, device, and storage medium
WO2022130597A1 * 2020-12-18 2022-06-23 国立大学法人東北大学 Estimation device, estimation method, estimation program, generation device, and estimation system
CN113191147A * 2021-05-27 2021-07-30 中国人民解放军军事科学院评估论证研究中心 Unsupervised automatic term extraction method, apparatus, device, and medium
CN113836261A * 2021-08-27 2021-12-24 哈尔滨工业大学 Method and apparatus for predicting the novelty/inventiveness of patent texts
US20230134796A1 (en) * 2021-10-29 2023-05-04 Glipped, Inc. Named entity recognition system for sentiment labeling

Citations (4)

Publication number Priority date Publication date Assignee Title
US20070203885A1 (en) * 2006-02-28 2007-08-30 Korea Advanced Institute Of Science & Technology Document Classification Method, and Computer Readable Record Medium Having Program for Executing Document Classification Method By Computer
CN104915356A * 2014-03-13 2015-09-16 中国移动通信集团上海有限公司 Text classification correction method and apparatus
CN105354184A * 2015-10-28 2016-02-24 甘肃智呈网络科技有限公司 Method for automatic document classification using an optimized vector space model
CN105893606A * 2016-04-25 2016-08-24 深圳市永兴元科技有限公司 Text classification method and apparatus

Family Cites Families (40)

Publication number Priority date Publication date Assignee Title
US7203909B1 (en) * 2002-04-04 2007-04-10 Microsoft Corporation System and methods for constructing personalized context-sensitive portal pages or views by analyzing patterns of users' information access activities
US7739286B2 (en) * 2005-03-17 2010-06-15 University Of Southern California Topic specific language models built from large numbers of documents
CN100533441C * 2006-04-19 2009-08-26 中国科学院自动化研究所 Two-level combined text classification method based on probabilistic topic words
JP5343861B2 * 2007-12-27 2013-11-13 日本電気株式会社 Text segmentation device, text segmentation method, and program
US20140108376A1 (en) * 2008-11-26 2014-04-17 Google Inc. Enhanced detection of like resources
WO2010150464A1 * 2009-06-26 2010-12-29 日本電気株式会社 Information analysis device, information analysis method, and computer-readable recording medium
JP2011175362A * 2010-02-23 2011-09-08 Sony Corp Information processing device, importance calculation method, and program
JP2012038239A * 2010-08-11 2012-02-23 Sony Corp Information processing device, information processing method, and program
JP5691289B2 * 2010-08-11 2015-04-01 ソニー株式会社 Information processing device, information processing method, and program
US8892550B2 (en) * 2010-09-24 2014-11-18 International Business Machines Corporation Source expansion for information retrieval and information extraction
US20130273976A1 (en) * 2010-10-27 2013-10-17 Nokia Corporation Method and Apparatus for Identifying a Conversation in Multiple Strings
US8484228B2 (en) * 2011-03-17 2013-07-09 Indian Institute Of Science Extraction and grouping of feature words
US8892555B2 (en) * 2011-03-31 2014-11-18 Samsung Electronics Co., Ltd. Apparatus and method for generating story according to user information
CA2779034C (en) * 2011-06-08 2022-03-01 Accenture Global Services Limited High-risk procurement analytics and scoring system
US20130159254A1 (en) * 2011-12-14 2013-06-20 Yahoo! Inc. System and methods for providing content via the internet
US9355170B2 (en) * 2012-11-27 2016-05-31 Hewlett Packard Enterprise Development Lp Causal topic miner
US9378295B1 (en) * 2012-12-26 2016-06-28 Google Inc. Clustering content based on anticipated content trend topics
US10685181B2 (en) * 2013-03-06 2020-06-16 Northwestern University Linguistic expression of preferences in social media for prediction and recommendation
US10204026B2 (en) * 2013-03-15 2019-02-12 Uda, Llc Realtime data stream cluster summarization and labeling system
US10599697B2 (en) * 2013-03-15 2020-03-24 Uda, Llc Automatic topic discovery in streams of unstructured data
US20190129941A2 (en) * 2013-05-21 2019-05-02 Happify, Inc. Systems and methods for dynamic user interaction for improving happiness
CN103473309B * 2013-09-10 2017-01-25 浙江大学 Text classification method based on probabilistic word selection and a supervised topic model
US9928526B2 (en) * 2013-12-26 2018-03-27 Oracle America, Inc. Methods and systems that predict future actions from instrumentation-generated events
CN104834640A * 2014-02-10 2015-08-12 腾讯科技(深圳)有限公司 Web page recognition method and apparatus
US20150286710A1 (en) * 2014-04-03 2015-10-08 Adobe Systems Incorporated Contextualized sentiment text analysis vocabulary generation
US20150317303A1 (en) * 2014-04-30 2015-11-05 Linkedin Corporation Topic mining using natural language processing techniques
US10373067B1 (en) * 2014-08-13 2019-08-06 Intuit, Inc. Domain-specific sentiment keyword extraction with weighted labels
US9690772B2 (en) * 2014-12-15 2017-06-27 Xerox Corporation Category and term polarity mutual annotation for aspect-based sentiment analysis
US9881255B1 (en) * 2014-12-17 2018-01-30 Amazon Technologies, Inc. Model based selection of network resources for which to accelerate delivery
US9817904B2 (en) * 2014-12-19 2017-11-14 TCL Research America Inc. Method and system for generating augmented product specifications
JP2016126575A * 2015-01-05 2016-07-11 富士通株式会社 Data relevance calculation program, device, and method
WO2016179755A1 (en) * 2015-05-08 2016-11-17 Microsoft Technology Licensing, Llc. Mixed proposal based model training system
US10025773B2 (en) * 2015-07-24 2018-07-17 International Business Machines Corporation System and method for natural language processing using synthetic text
CN105187408A * 2015-08-17 2015-12-23 北京神州绿盟信息安全科技股份有限公司 Network attack detection method and device
US10482119B2 (en) * 2015-09-14 2019-11-19 Conduent Business Services, Llc System and method for classification of microblog posts based on identification of topics
US20170075978A1 (en) * 2015-09-16 2017-03-16 Linkedin Corporation Model-based identification of relevant content
US10606705B1 (en) * 2015-11-30 2020-03-31 Veritas Technologies Llc Prioritizing backup operations using heuristic techniques
US10289624B2 (en) * 2016-03-09 2019-05-14 Adobe Inc. Topic and term search analytics
US10275444B2 (en) * 2016-07-15 2019-04-30 At&T Intellectual Property I, L.P. Data analytics system and methods for text data
US11416680B2 (en) * 2016-08-18 2022-08-16 Sap Se Classifying social media inputs via parts-of-speech filtering


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111274798A * 2020-01-06 2020-06-12 北京大米科技有限公司 Method, apparatus, storage medium, and terminal for determining text topic words
CN111274798B * 2020-01-06 2023-08-18 北京大米科技有限公司 Method, apparatus, storage medium, and terminal for determining text topic words

Also Published As

Publication number Publication date
JP2019519019A (ja) 2019-07-04
US20190050396A1 (en) 2019-02-14
JP6661790B2 (ja) 2020-03-11
US11281860B2 (en) 2022-03-22
CN107797982B (zh) 2021-05-07
CN107797982A (zh) 2018-03-13

Similar Documents

Publication Publication Date Title
WO2018040343A1 (zh) Method, apparatus and device for identifying text type
Kumar et al. Sentiment analysis of multimodal twitter data
JP6511487B2 (ja) Method and apparatus for information pushing
CN108153901B (zh) Knowledge-graph-based information pushing method and apparatus
JP6161679B2 (ja) Search engine and method for implementing it
CN104573054B (zh) Information pushing method and device
WO2017020451A1 (zh) Information pushing method and apparatus
CN107784092A (zh) Method, server, and computer-readable medium for recommending hot words
JP2019519019A5 (zh)
Cataldi et al. Good location, terrible food: detecting feature sentiment in user-generated reviews
JP2018518788A (ja) Web page training method and device, and search intention identification method and device
WO2017000402A1 (zh) Web page generation method and apparatus
US11361030B2 (en) Positive/negative facet identification in similar documents to search context
WO2017013667A1 (en) Method for product search using the user-weighted, attribute-based, sort-ordering and system thereof
CN106126605B (zh) Short text classification method based on user portraits
CN108241741A (zh) Text classification method, server, and computer-readable storage medium
WO2023029356A1 (zh) Sentence vector generation method and apparatus based on a sentence vector model, and computer device
CN111401974A (zh) Information sending method and apparatus, electronic device, and computer-readable medium
CN112668320A (zh) Word-embedding-based model training method and apparatus, electronic device, and storage medium
CN112231569A (zh) News recommendation method and apparatus, computer device, and storage medium
US8600985B2 (en) Classifying documents according to readership
CN107885717B (zh) Keyword extraction method and apparatus
CN105760523A (zh) Information pushing method and apparatus
CN113688310A (zh) Content recommendation method, apparatus, device, and storage medium
CN106663123B (zh) Comment-centric news reader

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018553944

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16914905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16914905

Country of ref document: EP

Kind code of ref document: A1