WO2012039778A1 - Recognition of target words using designated characteristic values - Google Patents
- Publication number
- WO2012039778A1 (PCT application PCT/US2011/001648)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text data
- sample
- word
- words
- segments
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
- G06F40/242—Dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
Definitions
- the present disclosure relates to the field of computers; in particular, it relates to target word recognition techniques.
- Unlisted words refer to words yet to be recorded in the word segmentation dictionary. Unlisted words can be divided into two types. One type is words that cannot be listed in dictionaries in their entirety, but for which it is possible to summarize patterns (such as personal names, institutional names, etc.); the other type is new words that should be listed in the dictionary, but have yet to be listed. Among these new words, some are target words that should be listed in the segmentation dictionary, while others are not words, that is to say, they are non-target words that should not be listed in the dictionary.
- FIG. 1 is a functional diagram illustrating a programmed computer system for target word recognition in accordance with some embodiments.
- FIG. 2A is a system diagram illustrating an embodiment of a target word recognition system.
- FIG. 2B is a system diagram illustrating an embodiment of a target word recognition module.
- FIG. 3 is a flowchart illustrating an embodiment of a process of obtaining the screening criteria.
- FIG. 4 is a flowchart illustrating another embodiment of a process of target word recognition.
- the invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.
- these implementations, or any other form that the invention may take, may be referred to as techniques.
- a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.
- the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
- a target word recognition technique is disclosed.
- the technique overcomes the limitations of existing statistics-based methods of text data recognition, namely, that they typically are only able to use a relatively small number of characteristic values and that they typically require the statistical results to demonstrate linear distribution, as well as errors and instability resulting from manually adjusted characteristic value weightings and manually set threshold values.
- Embodiments of target word recognition techniques are disclosed that are able to use characteristic values of any dimensions and, when the characteristic value distribution trend is non-linear (such as non-linear distribution over time), are still able to accurately determine target words, without requiring human intervention, thus increasing the accuracy and recall rate of target word recognition.
- FIG. 1 is a functional diagram illustrating a programmed computer system for target word recognition in accordance with some embodiments. As shown, FIG. 1 provides a functional diagram of a general purpose computer system programmed to perform target word recognition in accordance with some embodiments. As will be apparent, other computer system architectures and configurations can be used to perform target word recognition.
- Computer system 100, which includes various subsystems as described below, includes at least one microprocessor subsystem (also referred to as a processor or a central processing unit (CPU)) 102.
- processor 102 can be implemented by a single-chip processor or by multiple processors.
- processor 102 is a general purpose digital processor that controls the operation of the computer system 100. Using instructions retrieved from memory 110, the processor 102 controls the reception and manipulation of input data, and the output and display of data on output devices (e.g., display 118).
- processor 102 includes and/or is used to provide the training data providing module, target word recognition module, and/or target word listing module described below with respect to FIG. 2A, and/or executes/performs the processes described below with respect to FIGS. 3-4.
- Processor 102 is coupled bi-directionally with memory 110, which can include a first primary storage area, typically a random access memory (RAM), and a second primary storage area, typically a read-only memory (ROM).
- primary storage can be used as a general storage area and as scratch-pad memory, and can also be used to store input data and processed data.
- Primary storage can also store programming instructions and data, in the form of data objects and text objects, in addition to other data and instructions for processes operating on processor 102.
- primary storage typically includes basic operating instructions, program code, data, and objects used by the processor 102 to perform its functions (e.g., programmed instructions).
- primary storage devices 110 can include any suitable computer readable storage media, described below, depending on whether, for example, data access needs to be bi-directional or uni-directional.
- processor 102 can also directly and very rapidly retrieve and store frequently needed data in a cache memory (not shown).
- a removable mass storage device 112 provides additional data storage capacity for the computer system 100, and is coupled either bi-directionally (read/write) or uni-directionally (read only) to processor 102.
- storage 112 can also include computer readable media such as magnetic tape, flash memory, PC-CARDS, portable mass storage devices, holographic storage devices, and other storage devices.
- a fixed mass storage 120 can also, for example, provide additional data storage capacity. The most common example of mass storage 120 is a hard disk drive. Mass storages 112 and 120 generally store additional programming instructions, data, and the like that typically are not in active use by the processor 102.
- bus 114 can be used to provide access to other subsystems and devices as well. As shown, these can include a display monitor 118, a network interface 116, a keyboard 104, and a pointing device 106, as well as an auxiliary input/output device interface, a sound card, speakers, and other subsystems as needed.
- the pointing device 106 can be a mouse, stylus, track ball, or tablet, and is useful for interacting with a graphical user interface.
- the network interface 116 allows processor 102 to be coupled to another computer, computer network, or telecommunications network using a network connection as shown.
- the processor 102 can receive information (e.g., data objects or program instructions) from another network, or output information to another network in the course of performing method/process steps.
- Information, often represented as a sequence of instructions to be executed on a processor, can be received from and outputted to another network.
- An interface card or similar device and appropriate software implemented by (e.g., executed by) processor 102 can be used to connect the computer system 100 to an external network and transfer data according to standard protocols. For example, various process embodiments disclosed herein can be executed on processor 102, or can be performed across a network such as the Internet, intranet networks, or local area networks, in conjunction with a remote processor that shares a portion of the processing. Additional mass storage devices (not shown) can also be connected to processor 102 through network interface 116.
- auxiliary I/O device interface (not shown) can be used in conjunction with computer system 100.
- the auxiliary I/O device interface can include general and customized interfaces that allow the processor 102 to send and, more typically, receive data from other devices such as microphones, touch-sensitive displays, transducer card readers, tape readers, voice or handwriting recognizers, biometrics readers, cameras, portable mass storage devices, and other computers.
- various embodiments disclosed herein further relate to computer storage products with a computer readable medium that includes program code for performing various computer-implemented operations.
- the computer readable medium is any data storage device that can store data which can thereafter be read by a computer system.
- Examples of computer readable media include, but are not limited to, all the media mentioned above: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks; and specially configured hardware devices such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and ROM and RAM devices.
- Examples of program code include both machine code, as produced, for example, by a compiler, or files containing higher level code (e.g., script) that can be executed using an interpreter.
- FIG. 1 is but an example of a computer system suitable for use with the various embodiments disclosed herein.
- Other computer systems suitable for such use can include additional or fewer subsystems.
- bus 114 is illustrative of any interconnection scheme serving to link the subsystems.
- Other computer architectures having different configurations of subsystems can also be utilized.
- target word recognition includes obtaining a candidate word set comprising text data, and corresponding characteristic computation data associated with the candidate word set. It further includes segmentation of the characteristic computation data to generate a plurality of text segments; combining the plurality of text segments to form a text data combination set; and determining an intersection of the candidate word set and the text data combination set. A plurality of designated characteristic values for the plurality of text data combinations is determined. Based at least in part on the plurality of designated characteristic values and according to at least a criterion, target words whose characteristic values fulfill the criterion are recognized among the plurality of text data combinations.
- a candidate word is a common term that fulfills the requirements of being collected into a dictionary (also referred to as a term in the usual sense)
- the candidate word is a target word; otherwise, when a candidate word is not a term in the usual sense, the candidate word is not a target word. Details of how to determine whether a candidate word is a target word are described more fully below.
- a candidate word set includes "batwing sleeves” (a type of loose-fitting sleeves often seen in women's clothing) and “sleeves women's apparel.”
- batwing sleeves is a term in the usual sense that fulfills the requirements of being collected into a dictionary since it has a specific meaning that is commonly accepted, while “sleeves women's apparel” does not have a commonly accepted meaning, therefore it is not a term in the usual sense and is not included in a dictionary.
- the candidate word set may be any text data, and its corresponding characteristic computation data may also be any text data.
- user- inputted query keywords are used, and a candidate word set is based on user-inputted query keywords, and characteristic computation data is extracted from the description of search results in response to user-inputted query keywords.
- user-inputted keywords are used in queries about products.
- a candidate word set is extracted from the query keywords, and characteristic computation data is extracted from such descriptive information as product headers and product information on e-commerce websites.
- query keywords that are inputted by users in queries about news are collected and stored, and a candidate word set is extracted from the query keywords while characteristic computation data is extracted from such descriptive information as news headlines and news content on the news website.
- the extraction of the described candidate word set and characteristic computation information can be carried out using periodic or quantity-based methods. For example, candidate word sets are periodically extracted from user-inputted keywords, and the characteristic computation data are accordingly extracted periodically. Another possibility is that once the user-inputted query keywords used for candidate set extraction reach a certain number, the corresponding characteristic computation information is extracted, and target word recognition according to embodiments of the present disclosure is executed.
- accuracy refers to the ratio of the number of correctly recognized word segments among those recognized as target words to the number of words identified as target words.
- recall rate refers to the ratio of the number of correctly recognized target words among the candidate words to the number of word segments that are actually target words among the candidate words.
- FIG. 2A is a system diagram illustrating an embodiment of a target word recognition system.
- system 200 includes a target word recognition module 210, a training data providing module 211, and a target word listing module 212.
- Each module or all the modules may be implemented using one or more computer systems such as 100.
- Target word recognition module 210 is used to: obtain a candidate word set and characteristic computation data; carry out word segmentation of the characteristic computation data based on the text data of minimum granularity; carry out word segment combination processing on the word segments obtained by word segmentation; obtain a text data combination set to be processed; determine the intersection of the candidate word set and the text data combination set; compute the designated characteristic values of each text data combination contained in the intersection; screen the text data combinations contained in the intersection according to screening criteria that are predetermined based on multiple designated characteristic values; and determine the candidate words corresponding to the text data combinations whose characteristic values fulfill the screening criteria to be the target words.
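The final two steps the module performs (intersecting the candidate set with the combination set, then screening on characteristic values) can be sketched as follows; the function names and the toy score values are illustrative, not from the patent:

```python
def recognize_targets(candidate_words, text_data_combinations, feature_of, criterion):
    """Intersect the candidate word set with the text data combination set,
    then keep the candidates whose designated characteristic value
    fulfills the screening criterion."""
    intersection = set(candidate_words) & set(text_data_combinations)
    return sorted(w for w in intersection if criterion(feature_of(w)))

# Toy run with a single characteristic value (a mutual-information-like score).
candidates = {"batwing sleeve", "sleeve women's apparel"}
combinations = ["round collar women's apparel", "women's apparel batwing",
                "batwing sleeve", "sleeve women's apparel"]
scores = {"batwing sleeve": 1.61, "sleeve women's apparel": 0.40}
print(recognize_targets(candidates, combinations, scores.get, lambda v: v > 1.0))
# → ['batwing sleeve']
```

In practice the criterion is the trained screening rule over multiple characteristic values rather than a single threshold.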
- Training data providing module 211 is used to provide to the target word recognition module a training sample word set and the corresponding sample characteristic computation data that are needed to obtain the screening criteria.
- Target word listing module 212 is used to receive the target words recognized by the target word recognition module 210, and enter the previously unlisted target words in the word segment dictionary.
- FIG. 2B is a system diagram illustrating an embodiment of a target word recognition module.
- the target word recognition module includes: a receiving module 2101, used to receive a candidate word set and characteristic computation data; a segmentation module 2102, used to carry out, based on the text data of minimum granularity, word segment separation of the computation data; a word segment combination module 2103, used to carry out word segment combination processing with regard to word segments obtained by segmentation and obtain a text data combination set which is the object of processing; an intersection determination module 2104, used to determine the intersection of the described candidate word set with the described text data combination set; an assigned characteristic value computation module 2105, used to compute the multiple designated characteristic values of each text data combination contained in the intersection; a screening module 2106, used to carry out, based on multiple designated characteristic values of each text data combination contained in the described intersection, the screening of the text data combination contained in the described intersection according to screening criteria that are preset based on multiple designated characteristic values.
- the screening criteria based on multiple designated characteristic values are obtained by training a sorting technique for a training sample word set.
- a sorting technique such as the gradient boosting and decision tree (GBDT) sorting technique to obtain the screening criteria based on multiple designated characteristic values is described more fully below.
- the receiving module 2101 is further used to receive the training sample word set and the corresponding sample characteristic computation data.
- the segmentation module 2102 is used to carry out, based on the text data of minimum granularity, word segmentation for the described sample characteristic computation data.
- the word segment combination module 2103 is used to carry out word segment combination processing on the word segments obtained by segmentation of the sample characteristic computation data and obtain a sample text data combination set which is the object of processing.
- the intersection determination module 2104 is used to determine the intersection of the described sample text data combination set with the described training sample word set.
- the assigned characteristic value computation module 2105 is used to compute the multiple designated characteristic values of each sample text data combination contained in the described intersection.
- the screening module 2106 is used to set, based on multiple designated characteristic values of each sample text data combination contained in the described intersection and on the known sorted results, threshold values of the described multiple designated characteristic values and obtain corresponding screening criteria based on the threshold values.
- the modules described above can be implemented as software components executing on one or more general purpose processors, as hardware such as programmable logic devices and/or Application Specific Integrated Circuits designed to perform certain functions or a combination thereof.
- the modules can be embodied by a form of software products which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, etc.), including a number of instructions for making a computer device (such as personal computers, servers, network equipment, etc.) implement the methods described in the embodiments of the present invention.
- the modules may be implemented on a single device or distributed across multiple devices. The functions of the modules may be merged into one another or further split into multiple sub-modules.
- designated characteristic values can include mutual information, logarithmic likelihood ratio, context entropy (left entropy, right entropy), and position-based in-word probabilities of a character, and may further include Dice coefficient values, chi-square values, etc.
- the required designated characteristic values can be combinations of any two or more types among the above-mentioned designated characteristic values.
- the formula to compute mutual information (MI) is: MI(ab) = log(p_ab / (p_a · p_b))
- the characteristic computation data is "round collar women's apparel, batwing sleeve women's apparel”
- the word segment “a” is “batwing”
- the word segment “b” is “sleeve”
- the word segment “ab” is “batwing sleeve.”
- p_a stands for the probability of the word segment "batwing" appearing in the characteristic computation data
- p_b stands for the probability of the word segment "sleeve" appearing in the characteristic computation data
- p_ab stands for the probability of "batwing sleeve" appearing in the characteristic computation data.
- c_a stands for the number of times "batwing" appears in the characteristic computation data
- c_b stands for the number of times "sleeve" appears in the characteristic computation data
- n stands for the number of word segments resulting from performing, based on the text data of minimal granularity, separation into word segments of the characteristic computation data.
- the characteristic computation data can be separated into 5 word segments "round collar,” “women's apparel,” “batwing,” “sleeve,” and “women's apparel.”
- c_ab is 1
- c_b is 1
- n is 5.
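With these counts, the mutual information of the running example can be computed. The patent's formula image is not reproduced in this text, so the sketch below assumes the standard pointwise-mutual-information form (natural logarithm; the patent's log base is not specified):

```python
import math

def mutual_information(c_ab, c_a, c_b, n):
    """MI of the combination "ab": log of the ratio between the observed
    probability of "ab" and the product of the individual segment probabilities."""
    p_a, p_b, p_ab = c_a / n, c_b / n, c_ab / n
    return math.log(p_ab / (p_a * p_b))

# 5 segments; "batwing", "sleeve", and "batwing sleeve" each appear once.
mi = mutual_information(c_ab=1, c_a=1, c_b=1, n=5)
print(round(mi, 2))  # → 1.61, i.e. log(0.2 / (0.2 * 0.2)) = log 5
```

A larger MI indicates the two segments co-occur more often than chance, suggesting a genuine term.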
- logarithmic likelihoods are used to determine the tightness of the connection between individual word segments.
- the formula for computing logarithmic likelihoods is the following:
- the characteristic computation data is separated into 5 word segments “round collar,” “women's apparel,” “batwing,” “sleeve,” and “women's apparel.”
- Two-element combining is carried out with regard to the above- mentioned word segments, 4 combinations of text data are obtained: “round collar women's apparel,” “women's apparel batwing,” “batwing sleeve,” and “sleeve women's apparel.”
- k_1 is the number of times "batwing sleeve" appears in the characteristic computation data
- n_1 is the number of combination(s) of text data out of the above-mentioned 4 combinations of text data where "batwing" appears on the left
- k_2 is the number of combination(s) of text data out of the above-mentioned 4 combinations of text data where "sleeve" appears on the right and the left side is not "batwing"
- n_2 is the number of combination(s) of text data out of the above-mentioned 4 combinations of text data where the left side is not "batwing"
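The patent's log-likelihood formula image is not reproduced in this text; the sketch below assumes the standard Dunning log-likelihood-ratio form over the counts k1, n1, k2, n2 described above (the n2 used here, the number of combinations whose left side is not "batwing", is an assumption):

```python
import math

def _log_l(k, n, p):
    # binomial log-likelihood log(p^k * (1-p)^(n-k)), treating 0*log(0) as 0
    safe = lambda x: math.log(x) if x > 0 else 0.0
    return k * safe(p) + (n - k) * safe(1 - p)

def log_likelihood_ratio(k1, n1, k2, n2):
    """Dunning-style log-likelihood ratio measuring how much more likely the
    two segments are to co-occur than independence would predict."""
    p1 = k1 / n1
    p2 = k2 / n2 if n2 else 0.0
    p = (k1 + k2) / (n1 + n2)
    return 2 * (_log_l(k1, n1, p1) + _log_l(k2, n2, p2)
                - _log_l(k1, n1, p) - _log_l(k2, n2, p))

# "batwing sleeve": k1=1 occurrence, n1=1 combination with "batwing" on the left,
# k2=0 combinations with "sleeve" on the right but a different left segment,
# n2=3 combinations without "batwing" on the left.
score = log_likelihood_ratio(1, 1, 0, 3)  # ≈ 4.50: a tight connection
```

Higher scores indicate a tighter connection between the two word segments.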
- Context entropy is used to express the degree of freedom of the use of multiple word segment expressions. Entropy is the expression of the uncertainty factor; the larger the entropy, the more uncertain random events are. A character string that can only be used in a fixed context has a small context entropy value, while a character string that can be used in many different contexts has a high context entropy value. Context entropy includes left entropy and right entropy. In some embodiments, the formula to compute left entropy (LE) is: LE(w) = −Σ_a p(a|w) log p(a|w), where a ranges over the word segments appearing immediately to the left of w.
- the formula to compute right entropy (RE) is the symmetric counterpart: RE(w) = −Σ_b p(b|w) log p(b|w), where b ranges over the word segments appearing immediately to the right of w.
- the characteristic computation data is "round collar women's apparel, batwing sleeve T-shirt, batwing sleeve one-piece dress"
- left entropy is computed for the two-segment combination "batwing sleeve,” "a" is
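Reduced to code, left and right entropy are the entropy of the neighboring-segment distribution. A minimal sketch (base-2 logarithm assumed), reusing the example above in which "batwing sleeve" is followed by both "T-shirt" and "one-piece dress":

```python
import math
from collections import Counter

def context_entropy(neighbor_segments):
    """Entropy of the distribution of segments adjacent to a candidate word;
    apply it to left neighbors for left entropy, right neighbors for right entropy."""
    counts = Counter(neighbor_segments)
    total = sum(counts.values())
    # 0.0 - sum(...) keeps the fixed-context result at 0.0 rather than -0.0
    return 0.0 - sum((c / total) * math.log2(c / total) for c in counts.values())

# "batwing sleeve" is followed once by "T-shirt" and once by "one-piece dress":
print(context_entropy(["T-shirt", "one-piece dress"]))  # → 1.0 (varied contexts)
print(context_entropy(["T-shirt", "T-shirt"]))          # → 0.0 (fixed context)
```

As the document states, a string usable in many contexts (high entropy) is more likely to be a free-standing term than one locked to a single context.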
- the formula to compute the position-based in-word probability of a character is:
- IP(s) = IP(c, 1) × IPmin(c, 2) × IP(c, 0) [5]
- "s" stands for the word segment to be computed
- IP(c, 1) stands for the probability, statistically computed based on the word segment dictionary, of the initial character in "s" appearing at the beginning of a word segment in the word segment dictionary if the word segment dictionary is in a character-based language such as Chinese, or of the initial word in "s" appearing at the beginning of a word segment in the word segment dictionary if the word segment dictionary is based on a word-based language such as English.
- word segment dictionary based on a character-based language such as Chinese is described.
- IP(c, 2) stands for the probability, statistically computed based on the word segment dictionary, of the character/word in the middle position of "s" appearing in the middle position of word segments in the word segment dictionary. In the event that there are several characters in the middle position of "s", the probability is computed of each of the characters appearing in the middle position of word segments in the word segment dictionary, and the smallest of these is taken as IPmin(c, 2) to compute the position-based in-word probability of a character.
- IP(c, 0) stands for the probability, statistically computed based on the word segment dictionary, of the final character of "s" appearing at the end of word segments in the word segment dictionary.
- IP(c, 1), IPmin(c, 2), and IP(c, 0) are each positively correlated with IP(s).
- the word segment to be computed in the process of obtaining the screening criteria, refers to a sample word, and in the process of target word recognition, the word segment to be computed refers to a candidate word.
- IP(c, 1) stands for the probability of appearance of all the word segments whose initial character is "阿" ("a") based on the word segment dictionary statistics.
- IP(c, 0) stands for the probability of appearance of all the word segments whose final character is "宝" ("bao") based on the word segment dictionary statistics.
- for IP(c, 2) there are two values: one is the probability of appearance of all the word segments whose middle character is "里" ("li") based on the word segment dictionary statistics, and the other is the probability of appearance of all the word segments whose middle character is "淘" ("tao") based on the word segment dictionary statistics.
- the smaller of the two IP(c, 2) values is selected as IPmin(c, 2).
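A minimal sketch of the position-based in-word probability, assuming a hypothetical three-entry dictionary in which letters stand in for characters; the patent computes these statistics over its actual word segment dictionary:

```python
def in_word_probability(s, dictionary):
    """IP(s) = IP(c,1) · IPmin(c,2) · IP(c,0): probabilities that the first
    character starts, the rarest middle character sits inside, and the last
    character ends entries of the word segment dictionary."""
    total = len(dictionary)
    starts = sum(1 for w in dictionary if w[0] == s[0]) / total
    ends = sum(1 for w in dictionary if w[-1] == s[-1]) / total
    middle_probs = [sum(1 for w in dictionary if c in w[1:-1]) / total
                    for c in s[1:-1]] or [1.0]  # no middle character: factor 1
    return starts * min(middle_probs) * ends

# Hypothetical dictionary: 'a' starts 2/3 entries, 'b' is a middle character
# in 3/3 entries, 'd' ends 1/3 entries.
ip = in_word_probability("abd", ["abc", "abd", "xbz"])
print(round(ip, 4))  # → 0.2222, i.e. (2/3) * (3/3) * (1/3)
```

The per-entry frequency counting here is a simplification; the patent only requires that the three positional probabilities come from word segment dictionary statistics.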
- screening criteria are obtained prior to performing target word recognition.
- the process of establishing screening criteria is a process of machine learning.
- FIG. 3 is a flowchart illustrating an embodiment of a process of obtaining the screening criteria. Process 300 may be performed on a system such as 200.
- the training sample word set includes a set of known, sorted results. Within the training sample word set, it is already known whether or not any word is a target word in a dictionary. Words recognized as target words are sorted as one type, and words not recognized as target words are sorted as another type. Markers, flags, or other similar data structures may be included in the training sample word set to indicate whether the corresponding words are target words.
- the training sample word set includes a positive example word set and a negative example word set.
- a positive example word means that the word is a target word, while a negative example word means that the word is not a target word (also referred to as a noise word.)
- a positive example word set can be retrieved directly from the word segment dictionary, while a negative example word set is based on noise words obtained by manual examination and verification during the process of building the word segment dictionary.
- the sample characteristic computation data includes the training sample words in the training sample word set, and the designated characteristic values of words in the training sample word set.
- the sample characteristic computation data is segmented to obtain a plurality of sample segments of minimum granularity.
- the text data of minimum granularity is a single character.
- Segmentation of the sample characteristic computation data can be carried out using a single character as the unit, segmenting the sample characteristic computation data into multiple characters. It is preferable, however, to use the most concise term capable of expressing linguistic meaning as the minimum granularity, separating the sample characteristic computation data into multiple word segments. Using such term-level minimum granularity reduces computation time and increases efficiency compared to the method in which a single character serves as the text data of minimum granularity.
- sample segments are combined to obtain a sample text data combination set.
- a language model is used to combine the sample segments.
- an n-gram language model (also referred to as an "n-step Markov chain") is used to combine the sample segments and determine a sample text data combination set for further processing. Specifically, n-gram windows that are based on an n-gram model are used with the sample segments as the basic units, the n-gram windows are shifted according to an established sequence, and combining processing is carried out on the word segments contained within the windows to obtain multiple sample text data combinations.
- the value of n in the n-gram model is 2 or 3.
- n of 2 indicates that a two-gram window was used for two-element combination, that is, as the window shifts, the sample segments are respectively combined into doublets with the adjoining word components
- n of 3 indicates that a three-gram window was used for three-element combination, that is, as the window shifts, the sample segments are respectively combined into triplets with the adjoining word components.
- when an n-gram model is used to combine the sample segments from the above-mentioned example: when n is 2, the following text data combination set can be obtained: "round collar women's apparel," "women's apparel batwing," "batwing sleeve," "sleeve women's apparel"; when n is 3, the following text data combination set can be obtained: "round collar women's apparel batwing," "women's apparel batwing sleeve," "batwing sleeve women's apparel."
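The window-shifting combination step described above can be sketched directly, reproducing the example:

```python
def ngram_combinations(segments, n):
    """Shift an n-gram window across the segment sequence and join the
    segments inside each window into one text data combination."""
    return [" ".join(segments[i:i + n]) for i in range(len(segments) - n + 1)]

segments = ["round collar", "women's apparel", "batwing", "sleeve", "women's apparel"]
print(ngram_combinations(segments, 2))
# → ["round collar women's apparel", "women's apparel batwing",
#    "batwing sleeve", "sleeve women's apparel"]
print(len(ngram_combinations(segments, 3)))  # → 3 three-element combinations
```

Joining with a space is an assumption for readability; for a character-based language the segments would simply be concatenated.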
- the designated characteristic values of the sample text data combinations included in the intersection are computed.
- multiple designated characteristic values are computed for the sample text data combinations included in the above-mentioned intersection, and these characteristic values can include the mutual information value, logarithmic likelihood ratio value, context entropy (left entropy, right entropy) value, and the value of the position-based in-word probability of a character as described above, as well as the Dice coefficient value, chi-square value, etc.
- When a sorting technique is used to sort the training sample words, it carries out the sorting based on the training sample words and the corresponding characteristic values, where the characteristic values are combinations of any two or more of the designated characteristic values; the obtained sorting results are then compared to the known sorted results of the training sample words. If the comparison reveals that the two results do not match, the threshold values that the sorting technique sets for individual designated characteristic values are adjusted, and the sorting of the training sample words is carried out once again; the above process is repeated until the sorting technique is able to accurately sort the training sample data.
- the process above is a machine learning and training process; by using large amounts of training sample data and repeating the above-mentioned training process, the resulting threshold values established for the individual characteristic values form the corresponding screening criteria.
- the resulting screening criteria are expressions based on specific knowledge.
- these expressions may be discrete structures such as trees, diagrams, networks, rules, mathematical formulas, or other appropriate data structures.
- the training of the sorting technique is carried out using the training sample word set, and the resulting screening criteria are a sorting rule with a tree structure.
- the GBDT sorting technique employs a certain number of decision trees.
- a decision tree can be expressed as
- the GBDT sorting technique can be expressed as:
- F_m(x) is a function that can be estimated using the least squares method and maximum entropy.
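A self-contained sketch of the additive GBDT form may help here: each boosting round adds a two-leaf decision "stump" fitted to the current residuals by least squares. This illustrates only the structure of gradient-boosted decision trees; it is not the patent's actual estimation procedure, and the training pairs below are hypothetical.

```python
# Gradient boosting with decision stumps: F_m(x) = F_{m-1}(x) + h_m(x),
# where each h_m is a one-split tree fitted to the residuals by least squares.

def fit_stump(xs, residuals):
    """Find the split point minimizing the squared error of a two-leaf tree."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def fit_gbdt(xs, ys, m=10):
    """Boost m stumps: each round fits the residuals left by F_{m-1}."""
    trees = []
    predict = lambda x: sum(tree(x) for tree in trees)
    for _ in range(m):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        trees.append(fit_stump(xs, residuals))
    return predict

# Hypothetical (characteristic value, target-word score) training pairs.
xs = [0.1, 0.5, 1.2, 2.5, 3.9]
ys = [0.0, 0.0, 1.0, 1.0, 1.0]
f = fit_gbdt(xs, ys)
```

In practice a library implementation (e.g., scikit-learn's gradient boosting estimators) would be used rather than this sketch.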
- the training of the GBDT sorting technique is carried out using a training sample word set; for example, the positive example words in the training sample word set include
- the computed mutual information of "soothing herbal tea" is 3.03 and its left entropy is 2.52; the mutual information of "tagging gun" is 3.93 and its left entropy is 0; the mutual information of "apple cider vinegar" is 1.39 and its left entropy is 3.88.
- the mutual information of "upright edition" is 0.66 and its left entropy is 1.88; the mutual information of "class class train" is 13.68 and its left entropy is 2.88.
- the resulting screening criterion is as follows: [0074] Determine the interval to which the mutual information value from the designated characteristic values belongs: if the mutual information value is greater than 1.0 but smaller than
- the screening criteria are obtained based only on a small number of training sample words and a small quantity of characteristic values of each training sample word.
- large numbers of training sample words can be used to train the sorting technique and obtain screening criteria that accurately recognize target words.
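A tree-structured screening criterion of the kind described above (interval tests over designated characteristic values) can be sketched as follows. The threshold values 1.0 and 2.0 are hypothetical stand-ins, since the full trained criterion is not reproduced in this text; only the behavior of the rule itself is illustrated.

```python
# Hedged sketch of an interval-based screening rule over two designated
# characteristic values. The thresholds are hypothetical, not the values
# of the actual trained criterion.

def is_target_word(mutual_info, left_entropy):
    """Apply a tree-structured screening rule to two characteristic values."""
    if mutual_info > 1.0:          # interval test on the mutual information value
        return left_entropy > 2.0  # hypothetical left-entropy threshold
    return False

# "soothing herbal tea" has mutual information 3.03 and left entropy 2.52.
accepted = is_target_word(3.03, 2.52)
# "upright edition" has mutual information 0.66 and left entropy 1.88.
rejected = is_target_word(0.66, 1.88)
```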
- FIG. 4 is a flowchart illustrating another embodiment of a process of target word recognition.
- Process 400 may be performed on a system such as 200.
- the candidate word set can be retrieved from a C2C website query log that stores query keywords inputted by users.
- the queries may be filtered for noise and be deduplicated (or processed in other appropriate ways) to retrieve candidate data.
- Product headers most recently filled in by the C2C website vendors will serve as the characteristic computation data.
- segmentation is performed on the characteristic computation data to obtain a plurality of text segments, where the segmentation is based on text data of minimum granularity.
- the text data of minimum granularity is a single character.
- Segmentation of the characteristic computation data can be carried out using a single character as a unit, in which case the characteristic computation data is segmented into multiple individual characters. It is preferable, however, to use the most concise term capable of expressing linguistic meaning as the minimum granularity for separating the characteristic computation data into multiple segments. Using the most concise term capable of expressing linguistic meaning as the minimum granularity reduces the computation time and increases efficiency compared to the method in which a single character serves as the text data of minimum granularity.
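With a single character as the minimum granularity, segmentation reduces to splitting the text into its characters. A minimal sketch (the function name is illustrative, not from the patent):

```python
# Segmentation at the minimum granularity of a single character: each
# non-whitespace character of the characteristic computation data becomes
# one text segment.

def segment_by_character(text):
    """Split characteristic computation data into single-character segments."""
    return [ch for ch in text if not ch.isspace()]

segments = segment_by_character("abc de")
```

A coarser minimum granularity (the most concise meaningful term) would produce fewer, multi-character segments, which is why it reduces the number of combinations computed later.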
- a language model is used to carry out word segment combining processing of the text segments.
- an n-gram model is used to carry out the combining and determine a text data combination set that serves as the object of processing; specifically, an n-gram model based on n-gram windows takes the segments obtained as basic units, the n-gram windows are shifted according to an established sequence, and combining processing is carried out with respect to the word segments contained within the windows.
- the value of n in the n-gram model is 2 or 3; in the example below, n = 3.
- a three-gram window is used for three-element combination; that is, as the window shifts, the word segments resulting from separation are respectively combined into triplets with the adjoining word segments.
- the characteristic computation data is "Adidas brand sneakers free shipping." Using the most concise term capable of expressing linguistic meaning as the minimum granularity and segmenting the characteristic computation data into multiple text segments, the following text segments can be obtained:
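The segment list following the colon does not survive in this extraction. Assuming the natural segmentation ["Adidas", "brand", "sneakers", "free", "shipping"], the shifting n-gram window combining described above can be sketched as follows (in Python; the joining convention for a Chinese-language system would concatenate without spaces):

```python
# Shift an n-gram window over the text segments and combine the word
# segments inside each window into one candidate text data combination.

def ngram_combinations(segments, n):
    """Return the text data combinations produced by an n-gram window."""
    return [" ".join(segments[i:i + n]) for i in range(len(segments) - n + 1)]

segments = ["Adidas", "brand", "sneakers", "free", "shipping"]
bigrams = ngram_combinations(segments, 2)    # n = 2: adjoining pairs
trigrams = ngram_combinations(segments, 3)   # n = 3: triplets, as in the example
```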
- the designated characteristic values can include combinations of any two or more kinds of the following values: mutual information value, logarithmic likelihood ratio value, context entropy (left entropy, right entropy) value, the value of the position-based in-word probability of a character, as well as dice matrix value, Chi value, etc.
- the "a" word and "b" word in the formula can be considered text data resulting from the combining of multiple text segments. Individual characteristic values are computed in accordance with the formulas described above.
- when computing mutual information for the text data "abc," the text data can be split into "ab" and "c," or "a" and "bc."
- Mutual information is computed separately with regard to the above-mentioned resulting two groups of text data and the greater of the two computation results is considered to be the mutual information of the text data "abc.”
- when computing the logarithmic likelihood ratio, "abc" can also be split into either "ab" and "c," or "a" and "bc."
- the logarithmic likelihood ratio is computed separately with regard to the above-mentioned resulting two groups of text data, and the greater of the two computation results is considered to be the logarithmic likelihood ratio of the text data "abc."
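The split-and-take-maximum computation for a three-segment combination can be sketched as follows, using pointwise mutual information. The probability arguments are stand-ins; a real system would estimate them from frequencies in the characteristic computation data.

```python
import math

def mutual_information(p_pair, p_left, p_right):
    """Pointwise mutual information of a (left, right) split."""
    return math.log(p_pair / (p_left * p_right))

def mutual_information_abc(p_abc, p_ab, p_bc, p_a, p_c):
    """MI of "abc": the greater of MI over the ("ab","c") and ("a","bc") splits."""
    return max(mutual_information(p_abc, p_ab, p_c),
               mutual_information(p_abc, p_a, p_bc))

# Hypothetical probabilities estimated from the characteristic computation data:
mi = mutual_information_abc(p_abc=0.001, p_ab=0.002, p_bc=0.005, p_a=0.01, p_c=0.02)
```

The logarithmic likelihood ratio would be handled the same way: compute it for both splits and keep the greater value.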
- the screening of the text data combinations is carried out according to predetermined screening criteria.
- the candidate words corresponding to the text data combination whose characteristic values fulfill the screening criteria to be the target words are recognized.
- the screening criteria are predetermined based on multiple designated characteristic values.
- the text data combinations in the intersection also serve as the candidate words. When designated characteristic values of the text data combinations in the intersection are needed, it is possible to obtain them from the designated characteristic values already computed for the text data combinations in the text data combination set; it is also possible to directly compute multiple designated characteristic values for each text data combination contained in the above-mentioned intersection.
- the individual characteristic values of the text data combinations contained in the above-mentioned intersection, obtained through computation, serve simultaneously as the individual characteristic values corresponding to the candidate words.
- the screening criteria based on multiple designated characteristic values are obtained via the process of establishing screening criteria (that is, the training process).
- the forms of expression of these predetermined screening criteria also vary: they can be such discrete structures as trees, diagrams, networks, or rules; they can also be mathematical formulas.
- the predetermined screening criteria can be expressed using a mathematical formula as:
- the designated characteristic values of the text data combination contained in the above-mentioned intersection are compared against threshold values that are determined based on the predetermined screening criteria and correspond to the above-mentioned designated characteristic values, and the candidate words corresponding to the text data combination whose characteristic values fulfill the screening criteria are determined to be the target words.
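A minimal sketch of this threshold comparison follows (the patent specifies no implementation language). The candidate words, characteristic values, and threshold values are hypothetical; a real criterion would be a trained tree of such tests rather than a flat conjunction.

```python
# Screen candidates: keep those whose every designated characteristic value
# exceeds the corresponding threshold derived from the screening criteria.

def screen(candidates, thresholds):
    """Return the candidate words whose characteristic values fulfill
    all per-characteristic thresholds."""
    return [word for word, values in candidates.items()
            if all(values[name] > limit for name, limit in thresholds.items())]

candidates = {
    "batwing sleeves":         {"mutual_info": 3.1, "left_entropy": 2.4},
    "sleeves women's apparel": {"mutual_info": 0.4, "left_entropy": 0.9},
}
thresholds = {"mutual_info": 1.0, "left_entropy": 2.0}
targets = screen(candidates, thresholds)
```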
- the target words are looked up in the dictionary of known word segments; when the target words are not contained in the dictionary of known word segments, the target words are determined to be previously unlisted words, and the target words are added into the word segment dictionary.
- [0097] It is possible, prior to carrying out recognition with regard to candidate words, to compare the candidate words against the dictionary of known word segments; when the candidate words are not contained in the dictionary of known word segments, recognition is carried out with regard to the candidate words; once the candidate words are determined to be target words, they are added into the known word segment dictionary.
- when the candidate words are found to already exist in the word segment dictionary, this means that the candidate words are listed words; that is, the candidate words are target words that are already listed in the word segment dictionary, and there is no need to carry out the recognition process.
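The dictionary-check flow described above can be sketched as follows. The function names are illustrative, and `recognize` stands in for the full screening process.

```python
# Candidates already listed in the word segment dictionary skip recognition;
# recognized target words that were unlisted are added to the dictionary.

def process_candidate(word, dictionary, recognize):
    """Return True if the word is (or becomes) listed in the dictionary."""
    if word in dictionary:
        return True                # already listed: no recognition needed
    if recognize(word):            # unlisted: run target word recognition
        dictionary.add(word)       # add the previously unlisted target word
        return True
    return False

dictionary = {"sneakers"}
listed = process_candidate("batwing sleeves", dictionary,
                           recognize=lambda w: w == "batwing sleeves")
```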
- the characteristic computation data is separated into segments of minimum granularity; then, a language model is used to carry out segment combining; based on combined text data, designated characteristic values of candidate words are computed; recognition of the candidate words is carried out according to predetermined screening criteria, thus utilizing the multiple designated characteristic values to perform recognition of the candidate words.
- the screening criteria are obtained through the carrying out of sorting technique training using training data, thus avoiding errors resulting from manual setting and increasing accuracy and stability. Furthermore, when sorting technique training is used to establish screening criteria to carry out recognition of candidate words, the individual designated characteristic values of candidate words are not required to display linear distribution, and when individual designated characteristic values display non-linear distribution, it is still possible to accurately recognize candidate words, and increase recognition accuracy and recall rate.
Abstract
Target word recognition includes: obtaining a candidate word set and corresponding characteristic computation data, the candidate word set comprising text data, and characteristic computation data being associated with the candidate word set; performing segmentation of the characteristic computation data to generate a plurality of text segments; combining the plurality of text segments to form a text data combination set; determining an intersection of the candidate word set and the text data combination set, the intersection comprising a plurality of text data combinations; determining a plurality of designated characteristic values for the plurality of text data combinations; based at least in part on the plurality of designated characteristic values and according to at least a criterion, recognizing among the plurality of text data combinations target words whose characteristic values fulfill the criterion.
Description
RECOGNITION OF TARGET WORDS USING DESIGNATED
CHARACTERISTIC VALUES
CROSS REFERENCE TO OTHER APPLICATIONS
[0001] This application claims priority to People's Republic of China Patent Application
No. 201010295054.7 entitled A METHOD, DEVICE AND SYSTEM FOR THE RECOGNITION OF TARGET WORDS filed September 26, 2010, which is incorporated herein by reference for all purposes.
FIELD OF TECHNOLOGY
[0002] The present disclosure relates to the field of computers; in particular, it relates to target word recognition techniques.
BACKGROUND OF THE INVENTION
[0003] Since the advent of Internet technology, information growth has been explosive.
Information retrieval, information analysis, and machine translation are important technologies for making effective use of the information. Automated Chinese language word segmentation is a fundamental technique for the processing of Chinese language information. One difficulty that influences the effectiveness of automated word segmentation is the recognition of previously unlisted words. Unlisted words refer to words yet to be recorded in the word segmentation dictionary. Unlisted words can be divided into two types. One type is words that cannot be listed in dictionaries in their entirety, but for which it is possible to summarize patterns (such as personal names, institutional names, etc.); the other type is new words that should be listed in the dictionary, but have yet to be listed. Among these new words, some are target words that should be listed in the segmentation dictionary, while others are not words, that is to say, they are non-target words that should not be listed in the dictionary.
[0004] When recognizing newly appeared words, first a determination must be made as to whether these newly appeared words are words or not, specifically, whether the newly appeared words are target words or not. Currently there are three typical approaches for making the determination: a rule-based method, a statistics-based method, and a method that combines rules and statistics. The most popular statistics-based method is generally to collect statistics with regard to one or several characteristic values of words to be recognized based on large-scale text data, and, based on the statistical results, manually set a threshold value. When a word to be recognized exceeds the established threshold value, the word is determined to be a target word.
[0005] However, with the widespread use of the Internet, the volume of text data that appears on the Internet is very large, and much of it lacks complete semantic sentence patterns, consisting instead of accumulations of keywords. For example, on e-commerce websites, and particularly on consumer-to-consumer or customer-to-customer (C2C) e-commerce websites, there can be a massive number of product headers. A large number of these keywords are newly appeared words; however, the statistical distributions for these newly appeared words tend to be non-linear. When recognition is performed, the results obtained by setting a single threshold value with regard to characteristic values and then determining whether or not the newly appeared words are target words according to that single threshold value are often inaccurate. Thus, the conventional statistics-based method of deciding whether or not words to be recognized are target words is often not well suited for target word recognition in current network applications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
[0007] FIG. 1 is a functional diagram illustrating a programmed computer system for target word recognition in accordance with some embodiments.
[0008] FIG. 2A is a system diagram illustrating an embodiment of a target word recognition system.
[0009] FIG. 2B is a system diagram illustrating an embodiment of a target word recognition module.
[0010] FIG. 3 is a flowchart illustrating an embodiment of a process of obtaining the screening criteria.
[0011] FIG. 4 is a flowchart illustrating another embodiment of a process of target word recognition.
DETAILED DESCRIPTION
[0012] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions
stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques.
In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
[0013] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
[0014] A target word recognition technique is disclosed. The technique overcomes the limitations of existing statistics-based methods of text data recognition, namely, that they typically are only able to use a relatively small number of characteristic values and that they typically require the statistical results to demonstrate linear distribution, as well as errors and instability resulting from manually adjusted characteristic value weightings and manually set threshold values.
Embodiments of target word recognition techniques are disclosed that are able to use characteristic values of any dimensions and, when the characteristic value distribution trend is non-linear (such as non-linear distribution over time), are still able to accurately determine target words, without requiring human intervention, thus increasing the accuracy and recall rate of target word recognition.
[0015] FIG. 1 is a functional diagram illustrating a programmed computer system for target word recognition in accordance with some embodiments. As shown, FIG. 1 provides a functional diagram of a general purpose computer system programmed to perform target word recognition in accordance with some embodiments. As will be apparent, other computer system architectures and configurations can be used to perform target word recognition. Computer
system 100, which includes various subsystems as described below, includes at least one microprocessor subsystem (also referred to as a processor or a central processing unit (CPU)) 102.
For example, processor 102 can be implemented by a single-chip processor or by multiple processors. In some embodiments, processor 102 is a general purpose digital processor that controls the operation of the computer system 100. Using instructions retrieved from memory 110, the processor 102 controls the reception and manipulation of input data, and the output and display of data on output devices (e.g., display 118). In some embodiments, processor 102 includes and/or is used to provide the training data providing module, target word recognition module, and/or target word listing module described below with respect to FIG. 2A and/or executes/performs the processes described below with respect to FIGS. 3-4.
[0016] Processor 102 is coupled bi-directionally with memory 110, which can include a first primary storage area, typically a random access memory (RAM), and a second primary storage area, typically a read-only memory (ROM). As is well known in the art, primary storage can be used as a general storage area and as scratch-pad memory, and can also be used to store input data and processed data. Primary storage can also store programming instructions and data, in the form of data objects and text objects, in addition to other data and instructions for processes operating on processor 102. Also as well known in the art, primary storage typically includes basic operating instructions, program code, data, and objects used by the processor 102 to perform its functions (e.g., programmed instructions). For example, primary storage devices 110 can include any suitable computer readable storage media, described below, depending on whether, for example, data access needs to be bi-directional or uni-directional. For example, processor 102 can also directly and very rapidly retrieve and store frequently needed data in a cache memory (not shown).
[0017] A removable mass storage device 112 provides additional data storage capacity for the computer system 100, and is coupled either bi-directionally (read/write) or uni-directionally (read only) to processor 102. For example, storage 112 can also include computer readable media such as magnetic tape, flash memory, PC-CARDS, portable mass storage devices, holographic storage devices, and other storage devices. A fixed mass storage 120 can also, for example, provide additional data storage capacity. The most common example of mass storage 120 is a hard disk drive. Mass storage 112 and 120 generally store additional programming instructions, data, and the like that typically are not in active use by the processor 102. It will be appreciated that the information retained within mass storage 112 and 120 can be incorporated, if needed, in standard fashion as part of primary storage 110 (e.g., RAM) as virtual memory.
[0018] In addition to providing processor 102 access to storage subsystems, bus 114 can be used to provide access to other subsystems and devices as well. As shown, these can include a display monitor 118, a network interface 116, a keyboard 104, and a pointing device 106, as well as an auxiliary input/output device interface, a sound card, speakers, and other subsystems as needed.
For example, the pointing device 106 can be a mouse, stylus, track ball, or tablet, and is useful for interacting with a graphical user interface.
[0019] The network interface 116 allows processor 102 to be coupled to another computer, computer network, or telecommunications network using a network connection as shown. For example, through the network interface 116, the processor 102 can receive information (e.g., data objects or program instructions) from another network, or output information to another network in the course of performing method/process steps. Information, often represented as a sequence of instructions to be executed on a processor, can be received from and outputted to another network. An interface card or similar device and appropriate software implemented by (e.g.,
executed/performed on) processor 102 can be used to connect the computer system 100 to an external network and transfer data according to standard protocols. For example, various process embodiments disclosed herein can be executed on processor 102, or can be performed across a network such as the Internet, intranet networks, or local area networks, in conjunction with a remote processor that shares a portion of the processing. Additional mass storage devices (not shown) can also be connected to processor 102 through network interface 116.
[0020] An auxiliary I/O device interface (not shown) can be used in conjunction with computer system 100. The auxiliary I/O device interface can include general and customized interfaces that allow the processor 102 to send and, more typically, receive data from other devices such as microphones, touch-sensitive displays, transducer card readers, tape readers, voice or handwriting recognizers, biometrics readers, cameras, portable mass storage devices, and other computers.
[0021] In addition, various embodiments disclosed herein further relate to computer storage products with a computer readable medium that includes program code for performing various computer-implemented operations. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of computer readable media include, but are not limited to, all the media mentioned above: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks; and specially configured hardware devices such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and ROM and RAM devices.
Examples of program code include both machine code, as produced, for example, by a compiler, or files containing higher level code (e.g., script) that can be executed using an interpreter.
[0022] The computer system shown in FIG. 1 is but an example of a computer system suitable for use with the various embodiments disclosed herein. Other computer systems suitable for such use can include additional or fewer subsystems. In addition, bus 114 is illustrative of any interconnection scheme serving to link the subsystems. Other computer architectures having different configurations of subsystems can also be utilized.
[0023] As used herein, words refer to words as well as phrases. In some embodiments, target word recognition includes obtaining a candidate word set comprising text data, and corresponding characteristic computation data associated with the candidate word set. It further includes segmentation of the characteristic computation data to generate a plurality of text segments; combining the plurality of text segments to form a text data combination set; and determining an intersection of the candidate word set and the text data combination set. A plurality of designated characteristic values for the plurality of text data combinations is determined. Based at least in part on the plurality of designated characteristic values and according to at least a criterion, target words whose characteristic values fulfill the criterion are recognized among the plurality of text data combinations.
[0024] As used herein, when a candidate word is a common term that fulfills the requirements of being collected into a dictionary (also referred to as a term in the usual sense), the candidate word is a target word; otherwise, when a candidate word is not a term in the usual sense, the candidate word is not a target word. Details of how to determine whether a candidate word is a target word are described more fully below.
[0025] For example, assume a candidate word set includes "batwing sleeves" (a type of loose-fitting sleeves often seen in women's clothing) and "sleeves women's apparel." Of these terms "batwing sleeves" is a term in the usual sense that fulfills the requirements of being collected into a dictionary since it has a specific meaning that is commonly accepted, while "sleeves women's apparel" does not have a commonly accepted meaning, therefore it is not a term in the usual sense and is not included in a dictionary.
[0026] The candidate word set may be any text data, and its corresponding characteristic computation data may also be any text data. In some embodiments of the present disclosure, user-inputted query keywords are used, and a candidate word set is based on user-inputted query
keywords, and characteristic computation data is extracted from the description of search results in response to user-inputted query keywords. For example, on an e-commerce website, user-inputted keywords are used in queries about products. A candidate word set is extracted from the query keywords, and characteristic computation data is extracted from such descriptive information as product headers and product information on e-commerce websites. As another example, on news websites, query keywords that are inputted by users in queries about news are collected and stored, and a candidate word set is extracted from the query keywords while characteristic computation data is extracted from such descriptive information as news headlines and news content on the news website.
[0027] The extraction of the described candidate word set and characteristic computation information can be carried out using periodic or quantitative methods. For example, candidate word sets are periodically extracted from user-inputted keywords, and the characteristic computation data are accordingly extracted periodically as well. Another possibility is that once the user-inputted query keywords used for candidate set extraction reach a certain number, the corresponding characteristic computation information is extracted, and target word recognition according to embodiments of the present disclosure is executed.
[0028] As used herein, accuracy refers to the ratio of the number of correctly recognized word segments among those recognized as target words to the number of words identified as target words. As used herein, recall rate refers to the ratio of the number of correctly recognized target words among the candidate words to the number of word segments that are actually target words among the candidate words.
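The two evaluation ratios defined above can be written directly as code; a brief sketch follows, with hypothetical word sets.

```python
# Accuracy: correctly recognized target words / all words recognized as targets.
# Recall: correctly recognized target words / all actual target words among
# the candidates.

def accuracy(recognized, actual_targets):
    """Fraction of recognized words that really are target words."""
    return len(recognized & actual_targets) / len(recognized)

def recall(recognized, actual_targets):
    """Fraction of actual target words that were recognized."""
    return len(recognized & actual_targets) / len(actual_targets)

recognized = {"batwing sleeves", "sleeves women's apparel"}
actual = {"batwing sleeves", "soothing herbal tea"}
```

Here one of two recognized words is correct and one of two actual targets is found, so both ratios are 0.5.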
[0029] FIG. 2A is a system diagram illustrating an embodiment of a target word recognition system. In this example, system 200 includes a target word recognition module 210, a training data providing module 211, and a target word listing module 212. Each module or all the modules may be implemented using one or more computer systems such as 100.
[0030] Target word recognition module 210 is used to: obtain a candidate word set and characteristic computation data; based on the text data of minimum granularity, carry out word segmentation of the characteristic computation data; carry out word segment combination processing on word segments obtained by word segmentation; obtain a text data combination set to be processed; determine the intersection of the candidate word set and the text data combination set; compute the designated characteristic values of each text data combination contained in the intersection; based on multiple designated characteristic values of each text data combination
contained in the described intersection, carry out the screening of the text data combination contained in the described intersection according to screening criteria that are predetermined based on multiple designated characteristic values, and determine the candidate words corresponding to the text data combination whose characteristic values fulfill the screening criteria to be the target words.
[0031] Training data providing module 211 is used to provide to the target word recognition module a training sample word set and the corresponding sample characteristic computation data that are needed to obtain the screening criteria.
[0032] Target word listing module 212 is used to receive the target words recognized by the target word recognition module 210, and enter the previously unlisted target words in the word segment dictionary.
[0033] FIG. 2B is a system diagram illustrating an embodiment of a target word recognition module. In the embodiment shown, the target word recognition module includes: a receiving module 2101, used to receive a candidate word set and characteristic computation data; a segmentation module 2102, used to carry out, based on the text data of minimum granularity, word segment separation of the computation data; a word segment combination module 2103, used to carry out word segment combination processing with regard to word segments obtained by segmentation and obtain a text data combination set which is the object of processing; an intersection determination module 2104, used to determine the intersection of the described candidate word set with the described text data combination set; an assigned characteristic value computation module 2105, used to compute the multiple designated characteristic values of each text data combination contained in the intersection; and a screening module 2106, used to carry out, based on multiple designated characteristic values of each text data combination contained in the described intersection, the screening of the text data combination contained in the described intersection according to screening criteria that are preset based on multiple designated
characteristic values, and determine the candidate words corresponding to the text data combination whose characteristic values fulfill the screening criteria to be the target words.
[0034] In some embodiments, the screening criteria based on multiple designated characteristic values are obtained by training a sorting technique for a training sample word set. An example of using a sorting technique such as the gradient boosting decision tree (GBDT) sorting technique to obtain the screening criteria based on multiple designated characteristic values is described more fully below. When the screening criteria are obtained, the receiving module 2101 is used to receive the training sample word set and the corresponding sample characteristic computation data. The segmentation module 2102 is used to carry out, based on the text data of minimum granularity, word segmentation for the described sample characteristic computation data. The word segment combination module 2103 is used to carry out word segment combination processing on the sample characteristic computation data and obtain a sample text data combination set which is the object of processing. The intersection determination module 2104 is used to determine the intersection of the described sample text data combination set with the described training sample word set. The assigned characteristic value computation module 2105 is used to compute the multiple designated characteristic values of each sample text data combination contained in the described intersection. The screening module 2106 is used to set, based on multiple designated characteristic values of each sample text data combination contained in the described intersection and on the known sorted results, threshold values of the described multiple designated characteristic values and obtain corresponding screening criteria based on the threshold values.
[0035] The modules described above can be implemented as software components executing on one or more general purpose processors, as hardware such as programmable logic devices and/or Application Specific Integrated Circuits designed to perform certain functions or a combination thereof. In some embodiments, the modules can be embodied by a form of software products which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, etc.), including a number of instructions for making a computer device (such as personal computers, servers, network equipment, etc.) implement the methods described in the embodiments of the present invention. The modules may be implemented on a single device or distributed across multiple devices. The functions of the modules may be merged into one another or further split into multiple sub-modules.
[0036] In the embodiments of the present disclosure, designated characteristic values can include mutual information, logarithmic likelihood ratio, context entropy (left entropy, right entropy), position-based in-word probabilities of a character, and may further include dice matrices, Chi, etc. To obtain the screening criteria and recognize target words, the required designated characteristic values can be combinations of any two or more types among the above-mentioned designated characteristic values.
[0037] In some embodiments, mutual information is used to measure the tightness of the connection between word segments. The formula to compute mutual information is:

MI(ab) = log(pab / (pa × pb)), where pab = cab / n, pa = ca / n, pb = cb / n [1]
[0038] In formula [1], "a" and "b", respectively, stand for single word segments of minimum granularity, while "ab" stands for the text data combination resulting from the combination of two word segments; pa and pb stand for the respective probabilities of word segment "a" and word segment "b" appearing in the characteristic computation data; pab stands for the probability of "ab" appearing in the characteristic computation data; cab stands for the number of times "ab" appears together in the characteristic computation data; ca stands for the number of times "a" appears in the characteristic computation data; cb stands for the number of times "b" appears in the characteristic computation data; n stands for the number of word segments resulting from performing, based on the text data of minimum granularity, word segmentation of the characteristic computation data. In the computation of the mutual information of the word segment "a" and the word segment "b", the mutual information is positively correlated with pab and inversely correlated with pa and pb.
[0039] For example, assuming that the characteristic computation data is "round collar women's apparel, batwing sleeve women's apparel," when computing the mutual information of the word segment "batwing sleeve," the word segment "a" is "batwing," the word segment "b" is "sleeve," and the word segment "ab" is "batwing sleeve." pa stands for the probability of the word segment "batwing" appearing in the characteristic computation data and pb stands for the probability of the word segment "sleeve" appearing in the characteristic computation data. pab stands for the probability of "batwing sleeve" appearing in the characteristic computation data. ca stands for the number of times "batwing" appears in the characteristic computation data; cb stands for the number of times "sleeve" appears in the characteristic computation data; n stands for the number of word segments resulting from performing, based on the text data of minimum granularity, separation into word segments of the characteristic computation data. Here, the characteristic computation data can be separated into 5 word segments "round collar," "women's apparel," "batwing," "sleeve," and "women's apparel." Hence, for a of "batwing" and b of "sleeve," cab is 1, ca is 1, cb is 1, and n is 5.
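The mutual information computation above can be sketched as follows (a minimal Python illustration; the counts are taken from the "batwing sleeve" example, and natural logarithms are assumed):

```python
import math

def mutual_information(cab, ca, cb, n):
    """Compute MI(ab) = log(pab / (pa * pb)) from raw counts,
    where pab = cab/n, pa = ca/n, pb = cb/n."""
    pab, pa, pb = cab / n, ca / n, cb / n
    return math.log(pab / (pa * pb))

# Counts from the "batwing sleeve" example: cab=1, ca=1, cb=1, n=5.
mi = mutual_information(cab=1, ca=1, cb=1, n=5)
print(round(mi, 3))  # log(5) ~ 1.609
```

As expected from the formula, MI = log((1/5) / ((1/5) × (1/5))) = log(5).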
[0040] In some embodiments, logarithmic likelihoods are used to determine the tightness of the connection between individual word segments. The formula for computing logarithmic likelihoods is the following:
LogL(a, b) = ll(k1/n1, k1, n1) + ll(k2/n2, k2, n2) − ll((k1 + k2)/(n1 + n2), k1, n1) − ll((k1 + k2)/(n1 + n2), k2, n2),

where ll(p, k, n) = k log(p) + (n − k) log(1 − p) [2]
[0041] In formula [2], "a" and "b," respectively, represent single word segments of minimum granularity; k1 stands for the number of times "ab" appears in the characteristic computation data; n1 stands for the number of text data combinations in which "a" appears on the left among the multiple text data combinations resulting from sequentially combining, using the above-mentioned language model, the word segments obtained by segmenting the characteristic computation data based on the text data of minimum granularity; k2 stands for the number of text data combinations in which "b" appears on the right and the left side is not "a"; n2 stands for the number of text data combinations in which the left side is not "a".
[0042] Based on the characteristic computation data in the example above, the characteristic computation data is separated into 5 word segments "round collar," "women's apparel," "batwing," "sleeve," and "women's apparel." Two-element combining is carried out with regard to the above-mentioned word segments, and 4 combinations of text data are obtained: "round collar women's apparel," "women's apparel batwing," "batwing sleeve," and "sleeve women's apparel." When the logarithmic likelihood of "batwing sleeve" is calculated, k1 is the number of times "batwing sleeve" appears in the characteristic computation data; n1 is the number of combination(s) of text data out of the above-mentioned 4 combinations of text data where "batwing" appears on the left, while k2 is the number of combination(s) of text data out of the above-mentioned 4 combinations of text data where "sleeve" appears on the right and the left side is not "batwing"; n2 is the number of combination(s) of text data out of the above-mentioned 4 text data combinations where the left side is not "batwing."
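A minimal sketch of the log-likelihood ratio computation in Python, assuming the standard ll(p, k, n) = k log(p) + (n − k) log(1 − p) form of formula [2], with the 0 · log(0) terms taken as 0 (an assumption needed to handle boundary counts such as p = 0 or p = 1):

```python
import math

def ll(p, k, n):
    """ll(p, k, n) = k*log(p) + (n-k)*log(1-p), with 0*log(0) taken as 0."""
    term1 = k * math.log(p) if k else 0.0
    term2 = (n - k) * math.log(1 - p) if n - k else 0.0
    return term1 + term2

def log_likelihood_ratio(k1, n1, k2, n2):
    p1, p2, p = k1 / n1, k2 / n2, (k1 + k2) / (n1 + n2)
    return ll(p1, k1, n1) + ll(p2, k2, n2) - ll(p, k1, n1) - ll(p, k2, n2)

# "batwing sleeve" in the 4 two-element combinations of paragraph [0042]:
# k1=1 ("batwing sleeve" occurrences), n1=1 (combos with "batwing" on the left),
# k2=0 ("sleeve" on the right, left not "batwing"), n2=3 (left not "batwing").
score = log_likelihood_ratio(k1=1, n1=1, k2=0, n2=3)
print(round(score, 3))
```

A larger score indicates a tighter connection between the two word segments.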
[0043] Context entropy is used to express the degree of freedom of the use of multiple word segment expressions. Entropy is the expression of the uncertainty factor; the larger the entropy, the more uncertain random events are. A character string that can only be used in a fixed context has a small context entropy value, while a character string that can be used in many different contexts has a high context entropy value. Context entropy includes left entropy and right entropy. In some embodiments, the formula to compute left entropy (LE) is:
LE(ab) = Σx∈left −p(x|ab) log2 p(x|ab), where p(x|ab) = cxab / cab [3]
[0044] In formula [3], "a" and "b," respectively, stand for single word segments of minimum granularity; "ab" stands for text data resulting from the combination of two word segments; p(x|ab) stands for the probability of the word segment "x" appearing on the left assuming that "ab" appears in the characteristic computation data. "Left" refers to the word segment set appearing on the left of "ab." cxab stands for the number of times the word segment "x" appears on the left of "ab." cab stands for the number of times "ab" appears. In the computation of left entropy, p(x|ab) and log2 p(x|ab) are positively correlated.
[0045] In some embodiments, the formula to compute right entropy (RE) is:

RE(ab) = Σy∈right −p(y|ab) log2 p(y|ab), where p(y|ab) = caby / cab [4]
[0046] In formula [4], "a" and "b," respectively, stand for single word segments of minimum granularity; "ab" stands for text data resulting from the combination of two word segments. p(y|ab) stands for the probability of the word segment "y" appearing on the right assuming that "ab" appears in the characteristic computation data. "Right" refers to the word segment set appearing on the right of "ab." caby stands for the number of times a word segment "y" appears on the right of "ab"; cab stands for the number of times "ab" appears, and in the computation of right entropy, p(y|ab) and log2 p(y|ab) are positively correlated.
[0047] For example, if the characteristic computation data is "round collar women's apparel, batwing sleeve T-shirt, batwing sleeve one-piece dress," after carrying out segmentation of the characteristic computation data using text data of minimum granularity, we obtain "round collar," "women's apparel," "batwing," "sleeve," "T-shirt," "batwing," "sleeve," "one-piece dress." When left entropy is computed for the two-segment combination "batwing sleeve," "a" is
"batwing," "b" is "sleeve." The word segments that appear on the left of "batwing sleeve" in the characteristic computation data are "women's apparel" and "T-shirt"; hence the number of "x" is 2, "women's apparel" and "T-shirt," respectively; the number of times that "batwing sleeve" appears (cab) is 2. When right entropy is computed for "batwing sleeve," "a" is "batwing," "b" is "sleeve"; the word segments "y" appearing on the right of "batwing sleeve" are "T-shirt" and "one-piece dress."
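The left and right entropy computations for this example can be sketched as follows (a simplified Python illustration; the context counts are those of the "batwing sleeve" example):

```python
import math
from collections import Counter

def context_entropy(context_counts, cab):
    """Sum of -p * log2(p) over context segments, with p = count / cab,
    per formulas [3] and [4]."""
    return sum(-(c / cab) * math.log2(c / cab) for c in context_counts.values())

# Contexts of "batwing sleeve" in the data of paragraph [0047]:
left = Counter({"women's apparel": 1, "T-shirt": 1})    # segments left of "ab"
right = Counter({"T-shirt": 1, "one-piece dress": 1})   # segments right of "ab"
cab = 2                                                 # "batwing sleeve" count

le = context_entropy(left, cab)
re = context_entropy(right, cab)
print(le, re)  # both 1.0: two equally likely contexts on each side
```

Two equally likely contexts on each side yield an entropy of exactly 1 bit, reflecting moderate freedom of use.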
[0048] In some embodiments, the formula to compute the position-based in-word probability of a character is:
IP(s) = IP(c, 1) × IPmin(c, 2) × IP(c, 0) [5]
[0049] In formula [5], "s" stands for the word segment to be computed. IP(c, 1) stands for the probability, statistically computed based on the word segment dictionary, of the initial character in "s" appearing at the beginning of a word segment in the word segment dictionary if the word segment dictionary is in a character-based language such as Chinese, or of the initial word in "s" appearing at the beginning of a word segment in the word segment dictionary if the word segment dictionary is based on a word-based language such as English. In the discussion below, a word segment dictionary based on a character-based language such as Chinese is described. IP(c, 2) stands for the probability, statistically computed based on the word segment dictionary, of the character in the middle position of "s" appearing in the middle position of word segments in the word segment dictionary. In the event that there are several characters in the middle position of "s", the probability of each of these characters appearing in the middle position of word segments in the word segment dictionary is computed, and the smallest of these, IPmin(c, 2), is used to compute the position-based in-word probability. IP(c, 0) stands for the probability, statistically computed based on the word segment dictionary, of the final character of "s" appearing at the end of word segments in the word segment dictionary. When computing the position-based in-word probability of a character, IP(c, 1), IPmin(c, 2), and IP(c, 0) are positively correlated. In some embodiments, in the process of obtaining the screening criteria, the word segment to be computed refers to a sample word, and in the process of target word recognition, the word segment to be computed refers to a candidate word.
[0050] For example, assume the word segment to be computed is "阿里淘宝" ("a li tao bao" - a hypothetical brand name). IP(c, 1) stands for the probability of appearance of all the word segments whose initial character is "阿" ("a") based on the word segment dictionary statistics. IP(c, 0) stands for the probability of appearance of all the word segments whose final character is "宝" ("bao") based on the word segment dictionary statistics. For IP(c, 2) there are two values: one is the probability of appearance of all the word segments whose middle character is "里" ("li") based on the word segment dictionary statistics, and the other is the probability of appearance of all the word segments whose middle character is "淘" ("tao") based on the word segment dictionary statistics. When computing the position-based in-word probability of a character, the smaller of the two IP(c, 2) values is selected as IPmin(c, 2).
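Formula [5] can be sketched as follows (a simplified Python illustration that operates on single characters; the three-entry dictionary and the word "abc" are purely hypothetical, and the sketch assumes "s" has at least three characters):

```python
def position_prob(dictionary, ch, pos):
    """Probability of character ch appearing at a given position
    (1 = first, 2 = middle, 0 = last) across the word segment dictionary."""
    def occupies(word):
        if pos == 1:
            return word[0] == ch
        if pos == 0:
            return word[-1] == ch
        return ch in word[1:-1]
    return sum(occupies(w) for w in dictionary) / len(dictionary)

def in_word_probability(dictionary, s):
    """IP(s) = IP(c,1) * IPmin(c,2) * IP(c,0), per formula [5]."""
    ip1 = position_prob(dictionary, s[0], 1)
    ip0 = position_prob(dictionary, s[-1], 0)
    # Smallest middle-position probability over all middle characters of s.
    ip2 = min(position_prob(dictionary, c, 2) for c in s[1:-1])
    return ip1 * ip2 * ip0

# Hypothetical three-word dictionary and candidate word:
ip = in_word_probability(["abc", "abd", "xbc"], "abc")
print(round(ip, 4))  # (2/3) * 1.0 * (2/3) = 4/9
```

Here 2 of 3 dictionary words start with "a", all 3 have "b" in the middle, and 2 of 3 end in "c", giving IP = 4/9.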
[0051] In some embodiments, screening criteria are obtained prior to performing target word recognition. In some implementations, the process of establishing screening criteria is a
process of machine learning. FIG. 3 is a flowchart illustrating an embodiment of a process of obtaining the screening criteria. Process 300 may be performed on a system such as 200.
[0052] At 301: obtain a training sample word set and sample characteristic computation data. In this example, the training sample word set includes a set of known, sorted results. Within the training sample word set, it is already known whether or not each word is a target word in a dictionary. Words recognized as target words are sorted as one type, and words not recognized as target words are sorted as another type. Markers, flags, or other similar data structures may be included in the training sample word set to indicate whether the corresponding words are target words.
[0053] The training sample word set includes a positive example word set and a negative example word set. A positive example word means that the word is a target word, while a negative example word means that the word is not a target word (also referred to as a noise word.) In this case, a positive example word set can be retrieved directly from the word segment dictionary, while a negative example word set is based on noise words obtained by manual examination and verification during the process of building the word segment dictionary.
[0054] For example, if "batwing sleeve" is in the word segment dictionary, then it is a known positive example as well as a target word in the training sample word set. If "sleeve T-shirt" is not in the word segment dictionary, then it is a known negative example/noise word in the training sample word set.
[0055] Here, the sample characteristic computation data includes the training sample words in the training sample word set, and the designated characteristic values of words in the training sample word set.
[0056] At 302, the sample characteristic computation data is segmented to obtain a plurality of sample segments of minimum granularity.
[0057] In some embodiments, the text data of minimum granularity is a single character.
Segmentation of the sample characteristic computation data is carried out using a character as a unit and the sample characteristic computation data is segmented into multiple characters. It is preferable, however, to use the most concise term capable of expressing linguistic meaning as the text data of minimum granularity when separating the sample characteristic computation data into multiple word segments. Doing so can reduce the computation time and increase efficiency compared to the method in which a single character serves as the text data of minimum granularity.
[0058] For example, segmentation of sample characteristic data of "round collar women's apparel, batwing sleeve women's apparel" using the most concise term that is capable of expressing linguistic meaning as the text data of the minimum granularity results in the following five segments: "round collar," "women's apparel," "batwing," "sleeve," "women's apparel."
[0059] At 303, the sample segments are combined to obtain a sample text data combination set.
[0060] In some embodiments, a language model is used to combine the sample segments. In some embodiments, an n-gram language model (also referred to as an "n-step Markov chain") is used to combine the sample segments and determine a sample text data combination set for further processing. Specifically, n-gram windows based on the n-gram model are used with the sample segments as the basic units; the windows are shifted according to an established sequence, and the word segments contained within each window are combined to obtain multiple sample text data combinations.
[0061] In some embodiments, the value of n in the n-gram model is 2 or 3. An n of 2 indicates that a two-gram window is used for two-element combination; that is, as the window shifts, the sample segments are respectively combined into doublets with the adjoining word segments. An n of 3 indicates that a three-gram window is used for three-element combination; that is, as the window shifts, the sample segments are respectively combined into triplets with the adjoining word segments.
[0062] If an n-gram model is used to combine the sample segments from the above-mentioned example, when n is 2, the following text data combination set can be obtained: "round collar women's apparel," "women's apparel batwing," "batwing sleeve," "sleeve women's apparel"; when n is 3, the following text data combination set can be obtained: "round collar women's apparel batwing," "women's apparel batwing sleeve," "batwing sleeve women's apparel."
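The n-gram window combination above can be sketched as follows (a minimal Python illustration; joining the window contents with spaces is an assumption made for readability):

```python
def ngram_combinations(segments, n):
    """Shift an n-gram window over the segments and join the segments
    inside each window into one text data combination."""
    return [" ".join(segments[i:i + n]) for i in range(len(segments) - n + 1)]

segments = ["round collar", "women's apparel", "batwing", "sleeve", "women's apparel"]
bigrams = ngram_combinations(segments, 2)   # 4 two-element combinations
trigrams = ngram_combinations(segments, 3)  # 3 three-element combinations
print(bigrams)
print(trigrams)
```

For five segments, a two-gram window yields four combinations and a three-gram window yields three, matching the example.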
[0063] At 304, the intersection of the sample text data combination set and the training sample word set is determined.
[0064] At 305, the designated characteristic values of the sample text data combinations included in the intersection are computed.
[0065] Based on the word segment set obtained after the separation of sample characteristic computation data based on the above-mentioned text data of minimum granularity and the sample text data combination set that serves as the above-mentioned object of processing, multiple designated characteristic values are computed for the sample text data combinations included in the above-mentioned intersection, and these characteristic values can include mutual information value, logarithmic likelihood ratio value, context entropy (left entropy, right entropy) value, the value of the position-based in-word probability of a character as described above, as well as dice matrix value, Chi value, etc.
[0066] Here, when designated characteristic values are computed for the sample text data combinations included in the intersection, it is possible, through the computation of the designated characteristic values of each sample text data combination in a sample text data combination set, to further get the designated characteristic values of each sample text data combination in the above- mentioned intersection. It is also possible to directly compute the designated characteristic values of each sample text data combination included in the above-mentioned intersection.
[0067] At 306, based on the designated characteristic values of each sample text data combination in the intersection and the known sorted results in the sample word set, set threshold values of the designated characteristic values to obtain corresponding screening criteria based on the threshold values.
[0068] By determining the intersection of the sample text data combination set and the training sample word set, designated characteristic values are obtained that correspond to each word in the training sample word set. The words contained in the above-mentioned intersection are both the sample text data combinations and the training sample words. The sorting results for the training sample words are known; stated another way, it is known whether or not the training sample words are target words. When a sorting technique is used to carry out the sorting of the training sample words in the intersection, words that belong to the target words are sorted as one type, and words that do not belong to target words are sorted as another type.
[0069] When a sorting technique is used to sort the training sample words, the sorting technique carries out the sorting of the training sample words based on the training sample words and the corresponding characteristic values; the obtained sorting results are compared to the known sorted results of the training sample words, and the characteristic values are combinations of any two or more of the designated characteristic values. If the comparison reveals that the two results do not match, the sorting technique is adjusted with regard to the threshold values set for individual
designated characteristic values, and the sorting of the training sample words is carried out once again; the above process is repeated until the sorting technique is able to accurately sort the training sample data. The process above is a machine learning process and a training process; after using large amounts of training sample data and repeating the above-mentioned training process, the threshold values that are set for the individual characteristic values form the corresponding screening criteria.
[0070] Here, the resulting screening criteria are expressions based on specific knowledge.
In various embodiments, these expressions may be discrete structures such as trees, diagrams, networks, rules, mathematical formulas, or other appropriate data structures.
[0071] For example, when the gradient boosting decision tree (GBDT) sorting technique is used, the training of the sorting technique is carried out using the training sample word set, and the resulting screening criteria are a sorting rule with a tree structure. The GBDT sorting technique employs a certain number of decision trees. A decision tree can be expressed as
h(x) = Σi fi · I(x ∈ Ri),

where I(·) is the indicator function, Ri ∩ Rj = ∅ for all i ≠ j, and ∪i Ri = D; each Ri stands for one interval (such as Ri = {x | x1 < 0.2, 0.3 < x2 < 0.7}).
[0072] Based on the decision tree, the GBDT sorting technique can be expressed as:
F(x) = F0 + Σm Fm(x)
Where Fm(x) is a function that can be estimated using the least squares method and maximum entropy.
[0073] The training of the GBDT sorting technique is carried out using a training sample word set; for example, the positive example words in the training sample word set include
"soothing herbal tea," "tagging gun," and "apple cider vinegar," while the negative example words include "upright edition" and "class class train." Assume that the characteristic values of each training sample word are computed separately based on the sample characteristic computation data. The computed mutual information of "soothing herbal tea" is 3.03, and its left entropy is 2.52; the mutual information of "tagging gun" is 3.93 and its left entropy is 0; the mutual information of "apple cider vinegar" is 1.39, and its left entropy is 3.88. The mutual information of "upright edition" is 0.66, and its left entropy is 1.88; the mutual information of "class class train" is 13.68, and its left entropy is 2.88. Therefore, based on the training sample word set and the characteristic values of each sample, the resulting screening criterion is as follows:
[0074] Determine the interval to which the mutual information value from the designated characteristic values belongs: if the mutual information value is greater than 1.0 but smaller than 8.0, then return 1; otherwise, determine the interval to which the value of the left entropy from the designated characteristic values belongs. If the value of the left entropy is less than 0.9, or greater than 2.2 but less than 2.65, or greater than 3.3, then return 1; otherwise return 0.
[0075] Here, when 1 is returned, this means that the input is a positive example word/target word; when 0 is returned, this means that the return is a negative example word/non-target word.
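The tree-structured screening rule of paragraphs [0074]-[0075] can be sketched as follows (a minimal Python illustration, checked against the sample (MI, left entropy) values from the example above):

```python
def screen(mi, le):
    """Tree-structured screening rule from paragraph [0074]:
    returns 1 for a target word, 0 for a non-target word."""
    if 1.0 < mi < 8.0:
        return 1
    if le < 0.9 or 2.2 < le < 2.65 or le > 3.3:
        return 1
    return 0

# Training sample words and their (MI, left entropy) values from paragraph [0073]:
samples = {
    "soothing herbal tea": (3.03, 2.52),   # positive example
    "tagging gun": (3.93, 0.0),            # positive example
    "apple cider vinegar": (1.39, 3.88),   # positive example
    "upright edition": (0.66, 1.88),       # negative example
    "class class train": (13.68, 2.88),    # negative example
}
for word, (mi, le) in samples.items():
    print(word, screen(mi, le))
```

The rule returns 1 for all three positive example words and 0 for both negative example words, consistent with the known sorted results.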
[0076] In the example, the screening criteria are obtained based only on a small number of training sample words and a small quantity of characteristic values of each training sample word. In practice, large numbers of training sample words can be used to train the sorting technique and obtain screening criteria that accurately recognize target words.
[0077] FIG. 4 is a flowchart illustrating another embodiment of a process of target word recognition. Process 400 may be performed on a system such as 200.
[0078] At 401, a candidate word set and characteristic computation data are retrieved.
[0079] For example, the candidate word set can be retrieved from a C2C website query log that stores query keywords inputted by users. The queries may be filtered for noise and be deduplicated (or processed in other appropriate ways) to retrieve candidate data. Product headers most recently filled in by the C2C website vendors will serve as the characteristic computation data.
[0080] At 402, segmentation is performed on the characteristic computation data to obtain a plurality of text segments, where the segmentation is based on text data of minimum granularity.
[0081] In some embodiments, the text data of minimum granularity is a single character.
Segmentation of the characteristic computation data is carried out using a character as a unit and the characteristic computation data is segmented into multiple characters. It is preferable, however, to use the most concise term capable of expressing linguistic meaning as the text data of minimum granularity when separating the characteristic computation data into multiple segments. Doing so can reduce the computation time and increase efficiency compared to the method in which a single character serves as the text data of minimum granularity.
[0082] For example, by separating the characteristic computation data "round collar women's apparel, batwing sleeve women's apparel" into word segments using the most concise term that is capable of expressing linguistic meaning as the text data of the minimum granularity, five segments can be obtained:
"round collar," "women's apparel," "batwing," "sleeve," "women's apparel."
[0083] At 403, combine the text segments to obtain a text data combination set.
[0084] A language model is used to carry out word segment combining processing of the text segments. In some embodiments, an n-gram model is used to carry out the combining and determine a text data combination set that serves as the object of processing; specifically, n-gram windows based on the n-gram model are used with the text segments as the basic units, the n-gram windows are shifted according to an established sequence, and combining processing is carried out with respect to the word segments contained within the windows. In some
embodiments, the value of n in the n-gram model is 2 or 3. When n is 2, this indicates that a two- gram window was used for two-element combination, that is, as the window shifts, the word segments resulting from separation are respectively combined into doublets with the adjoining word components. Likewise, when n is 3, this indicates that a three-gram window was used for three-element combination, that is, as the window shifts, the word segments resulting from separation are respectively combined into triplets with the adjoining word components.
[0085] For example, the characteristic computation data is "Adidas brand sneakers free shipping." Taking text data in which the most concise term that is capable of expressing linguistic meaning serves as the minimum granularity and carrying out segmentation of the characteristic computation data into multiple text segments, the following text segments can be obtained:
"Adidas," "brand," "sneakers," "free shipping." Using the n-gram model and carrying out two- element combinations (that is, n=2), text data combinations of "Adidas brand," "brand sneakers," and "sneakers free shipping" are obtained. For the same text segments of "Adidas," "brand," "sneakers," and "free shipping," if an n-gram model is used and three-element combinations are carried out (that is, n=3), text data combinations of "Adidas brand sneakers" and "brand sneakers free shipping" are obtained.
[0086] At 404, determine the intersection of the candidate word set and the text data combination set.
[0087] At 405, compute the designated characteristic values for the text data combinations included in the above-mentioned intersection.
[0088] The designated characteristic values can include combinations of any two or more kinds of the following values: mutual information value, logarithmic likelihood ratio value, context entropy (left entropy, right entropy) value, the value of the position-based in-word probability of a character, as well as dice matrix value, Chi value, etc.
[0089] In some embodiments, when computing designated characteristic values, the "a" word and "b" word in the formula can be considered text data resulting from the combining of multiple text segments. Individual characteristic values are computed in accordance with the formulas described above.
[0090] For example, when computing mutual information for the text data "abc," the text data can be split into "ab" and "c," or "a" and "bc." Mutual information is computed separately with regard to the above-mentioned resulting two groups of text data and the greater of the two computation results is considered to be the mutual information of the text data "abc." Likewise, when computing the logarithmic likelihood ratio, "abc" can also be split into either "ab" and "c," or "a" and "bc." The logarithmic likelihood ratio is computed separately with regard to the above-mentioned resulting two groups of text data, and the greater of the two computation results is considered to be the logarithmic likelihood ratio of the text data "abc."
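The splitting strategy above can be sketched as follows (a minimal Python illustration for the mutual information case; all counts are hypothetical illustrative values, and natural logarithms are assumed):

```python
import math

def mi_from_counts(c_xy, c_x, c_y, n):
    """MI of one pairing, from raw counts: log(p_xy / (p_x * p_y))."""
    return math.log((c_xy / n) / ((c_x / n) * (c_y / n)))

def mi_three_segments(c_abc, c_ab, c_c, c_a, c_bc, n):
    """MI("abc") = max(MI("ab","c"), MI("a","bc")), per paragraph [0090].
    All counts passed in here are hypothetical."""
    return max(mi_from_counts(c_abc, c_ab, c_c, n),
               mi_from_counts(c_abc, c_a, c_bc, n))

# Hypothetical counts for "abc", its two splits, and n total segments:
score = mi_three_segments(c_abc=2, c_ab=2, c_c=4, c_a=3, c_bc=2, n=20)
print(round(score, 3))  # the greater of the two split scores
```

The same max-over-splits pattern applies to the logarithmic likelihood ratio.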
[0091] At 406, based on the designated characteristic values of text data combinations included in the above-mentioned intersection, the screening of the text data combinations is carried out according to predetermined screening criteria. The candidate words corresponding to the text data combination whose characteristic values fulfill the screening criteria to be the target words are recognized. The screening criteria are predetermined based on multiple designated characteristic values.
[0092] Here, the text data combinations in the intersection also serve as the candidate words. When the designated characteristic values of the text data combinations in the intersection are computed, it is possible to compute the designated characteristic values of all text data combinations in the text data combination set and obtain from them the values for the combinations in the above-mentioned intersection; it is also possible to directly compute the multiple designated characteristic values of each text data combination contained in the above-mentioned intersection. The individual characteristic values of the text data combinations contained in the above-mentioned intersection obtained through these computations serve simultaneously as the individual characteristic values corresponding to the candidate words.
[0093] The screening criteria based on multiple designated characteristic values are obtained via the process of establishing screening criteria (that is, the training process). Depending on different sorting techniques used, the forms of expression of these predetermined screening criteria also vary: they can be such discrete structures as trees, diagrams, networks, or rules; they can also be mathematical formulas. For example, the predetermined screening criteria can be expressed using a mathematical formula as:
pi = exp(Li - c), i ∈ {1, 2}
L1 = -0.0728575 × MI + 0.17012 × LE
L2 = 0.0728575 × MI - 0.17012 × LE
c = Max(L1, L2)
[0094] In the formula above, when p1, obtained through the computation of the designated characteristic values, is greater than p2, the candidate words are determined to be target words; otherwise, the candidate words are determined not to be target words.
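The formula of paragraph [0093] and the p1-versus-p2 decision rule of paragraph [0094] can be transcribed as a small sketch. The coefficients are copied from the formula itself; MI and LE are taken to name two of the designated characteristic values (the formula does not expand the abbreviations, so this naming is an assumption).

```python
import math

# Coefficients copied from the screening formula in paragraph [0093].
W = [(-0.0728575, 0.17012),    # L1: the "target word" branch
     (0.0728575, -0.17012)]    # L2: the "non-target word" branch

def is_target(mi, le):
    """Evaluate pi = exp(Li - c) with c = Max(L1, L2); the candidate
    is a target word when p1 > p2 (equivalently, when L1 > L2)."""
    L = [w_mi * mi + w_le * le for w_mi, w_le in W]
    c = max(L)
    p = [math.exp(l - c) for l in L]
    return p[0] > p[1]
```

Because exp is monotonic and c is shared, the p1 > p2 test reduces to comparing L1 against L2; the exponential form merely normalizes the two scores.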
[0095] When candidate words are screened according to the predetermined screening criteria, the designated characteristic values of the text data combinations contained in the above-mentioned intersection are compared against threshold values that are determined based on the predetermined screening criteria and that correspond to those designated characteristic values; the candidate words corresponding to the text data combinations whose characteristic values fulfill the screening criteria are determined to be the target words. This comparison can be carried out in either of two ways: the designated characteristic values can be compared directly against the corresponding threshold values, or the designated characteristic values can be entered into the formula that defines the screening criteria, and the computed values can subsequently be compared against the threshold values determined by the screening criteria.
[0096] In some embodiments, once recognition of the candidate words has been carried out and the candidate words are determined to be target words, the target words are looked up in the dictionary of known word segments; when the target words are not contained in the dictionary of known word segments, the target words are determined to be previously unlisted words, and the target words are added into the word segment dictionary.
[0097] Preferably, it is possible, prior to carrying out recognition of the candidate words, to compare the candidate words against the dictionary of known word segments; when the candidate words are not contained in the dictionary of known word segments, recognition is carried out, and candidate words determined to be target words are added into the known word segment dictionary. If, upon comparison with the known word segment dictionary, a candidate word is found to already exist in the word segment dictionary, the candidate word is a listed word, that is, a target word that is already listed in the word segment dictionary, and there is no need to carry out the recognition process.
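The look-up-first flow of paragraph [0097] amounts to a short routine. In this sketch, `recognize` is a hypothetical callback standing in for the screening step of paragraph [0095], and the dictionary is modeled as a simple set of known words.

```python
def process_candidate(word, dictionary, recognize):
    """Sketch of paragraph [0097]: skip candidates already listed,
    otherwise screen them and add newly recognized target words."""
    if word in dictionary:
        return "listed"        # already a listed target word; no recognition needed
    if recognize(word):
        dictionary.add(word)   # previously unlisted target word
        return "added"
    return "rejected"          # non-target word; not added
```

Checking the dictionary first avoids running the (comparatively expensive) characteristic-value screening on words that are already known.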
[0098] Based on the above-mentioned embodiment, the characteristic computation data is separated, through segmentation, into segments of minimum granularity; a language model is then used to combine the segments; designated characteristic values of the candidate words are computed based on the combined text data; and the candidate words are recognized according to predetermined screening criteria, thus utilizing multiple designated characteristic values to perform recognition of the candidate words. Moreover, instead of being manually set threshold values, the screening criteria are obtained by training a sorting technique on training data, thus avoiding errors resulting from manual setting and increasing accuracy and stability. Furthermore, when screening criteria established through sorting technique training are used to recognize candidate words, the individual designated characteristic values of the candidate words are not required to display a linear distribution; even when the individual designated characteristic values display a non-linear distribution, it is still possible to recognize the candidate words accurately, increasing recognition accuracy and recall rate.
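The segment-combining step summarized above, and spelled out in the n-gram window claims, can be sketched as follows. The maximum window width and the simple concatenation rule are illustrative assumptions; the patented method may shift and combine windows differently.

```python
def combine_segments(segments, n_max=3):
    """Slide an n-gram window of width 2..n_max over the minimum-granularity
    segments, shifting it one position at a time, and join the segments
    inside each window into one post-combining text data combination."""
    combos = set()
    for n in range(2, n_max + 1):
        for i in range(len(segments) - n + 1):
            combos.add("".join(segments[i:i + n]))
    return combos
```

For example, the minimum-granularity segments ["a", "b", "c"] yield the combinations "ab", "bc", and "abc", which are then intersected with the candidate word set before characteristic values are computed.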
[0099] Obviously, a person skilled in the art can modify and vary the present application without departing from the spirit and scope of the present invention. Thus, if these modifications to and variations of the present application lie within the scope of its claims and equivalent technologies, then the present application intends to cover these modifications and variations as well.
[00100] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are
many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
[00101] WHAT IS CLAIMED IS:
Claims
1. A method of target word recognition, comprising:
obtaining a candidate word set and corresponding characteristic computation data, the candidate word set comprising text data, and characteristic computation data being associated with the candidate word set;
performing segmentation of the characteristic computation data to generate a plurality of text segments;
combining the plurality of text segments to form a text data combination set;
determining an intersection of the candidate word set and the text data combination set, the intersection comprising a plurality of text data combinations;
determining a plurality of designated characteristic values for the plurality of text data combinations; and
based at least in part on the plurality of designated characteristic values and according to at least a criterion, recognizing among the plurality of text data combinations target words whose characteristic values fulfill the criterion.
2. The method of Claim 1, wherein the candidate word set is based on user-inputted query keywords to a website.
3. The method of Claim 1, wherein the characteristic computation data is obtained from descriptions of search results in response to user-inputted query keywords to a website.
4. The method of Claim 1, wherein at least some of the target words are unlisted words in a dictionary, and the method further comprises adding the unlisted words to the dictionary.
5. The method of Claim 1, wherein the designated characteristic values include mutual information.
6. The method of Claim 1, wherein the designated characteristic values include logarithmic likelihood.
7. The method of Claim 1, wherein the designated characteristic values include context entropy.
8. The method of Claim 1, wherein the designated characteristic values include position-based in-word probability.
9. The method of Claim 1, further comprising determining the criterion, including: obtaining a training sample word set and sample characteristic computation data, the training sample word set comprising a plurality of sample words and sorting results indicating whether each of the plurality of sample words is a target word, and the sample characteristic computation data comprising the plurality of sample words and designated characteristic values of the plurality of sample words;
segmenting the plurality of sample words to obtain a plurality of sample segments of minimum granularity;
combining the plurality of sample segments to obtain a sample text data combination set; determining an intersection of the sample text data combination set and the training sample word set;
determining a plurality of designated characteristic values of sample text data combinations in the intersection; and
setting a threshold value of a designated characteristic value of a sample text data combination in the intersection as a part of the criterion.
10. The method of Claim 9, wherein combining the plurality of sample segments includes applying an n-gram language model to the plurality of sample segments.
11. The method of Claim 10, wherein setting the threshold value includes:
sorting a training sample word in the intersection using the threshold value to reach a determination of whether the training sample word in the intersection is a target word;
comparing the determination with a known result; and
adjusting the threshold value if the determination does not match the known result.
12. The method of Claim 1, wherein combining the plurality of text segments to form a text data combination set includes:
adopting an n-gram model based on n-gram windows, shifting the n-gram windows according to a predetermined sequence, and performing word segment combination of the word segments contained within the windows to obtain a post-combining text data combination.
13. A target word recognition system, comprising:
one or more processors configured to:
obtain a candidate word set and corresponding characteristic computation data, the candidate word set comprising text data, and characteristic computation data being associated with the candidate word set;
perform segmentation of the characteristic computation data to generate a plurality of text segments; combine the plurality of text segments to form a text data combination set;
determine an intersection of the candidate word set and the text data combination set, the intersection comprising a plurality of text data combinations;
determine a plurality of designated characteristic values for the plurality of text data combinations; and
based at least in part on the plurality of designated characteristic values and according to at least a criterion, recognize among the plurality of text data combinations target words whose characteristic values fulfill the criterion; and
one or more memories coupled to the one or more processors, configured to provide the one or more processors with instructions.
14. The system of Claim 13, wherein the candidate word set is based on user-inputted query keywords to a website.
15. The system of Claim 13, wherein the characteristic computation data is obtained from descriptions of search results in response to user-inputted query keywords to a website.
16. The system of Claim 13, wherein the one or more processors are further configured to determine the criterion, including:
obtaining a training sample word set and sample characteristic computation data, the training sample word set comprising a plurality of sample words and sorting results indicating whether each of the plurality of sample words is a target word, and the sample characteristic computation data comprising the plurality of sample words and designated characteristic values of the plurality of sample words;
segmenting the plurality of sample words to obtain a plurality of sample segments of minimum granularity;
combining the plurality of sample segments to obtain a sample text data combination set; determining an intersection of the sample text data combination set and the training sample word set;
determining a plurality of designated characteristic values of sample text data combinations in the intersection; and
setting a threshold value of a designated characteristic value of a sample text data combination in the intersection as a part of the criterion.
17. The system of Claim 16, wherein combining the plurality of sample segments includes applying an n-gram language model to the plurality of sample segments.
18. The system of Claim 16, wherein setting the threshold value includes: sorting a training sample word in the intersection using the threshold value to reach a determination of whether the training sample word in the intersection is a target word;
comparing the determination with a known result; and
adjusting the threshold value if the determination does not match the known result.
19. The system of Claim 13, wherein combining the plurality of text segments to form a text data combination set includes:
adopting an n-gram model based on n-gram windows, shifting the n-gram windows according to a predetermined sequence, and performing word segment combination of the word segments contained within the windows to obtain a post-combining text data combination.
20. A computer program product for target word recognition, the computer program product being embodied in a tangible computer readable storage medium and comprising computer instructions for:
obtaining a candidate word set and corresponding characteristic computation data, the candidate word set comprising text data, and characteristic computation data being associated with the candidate word set;
performing segmentation of the characteristic computation data to generate a plurality of text segments;
combining the plurality of text segments to form a text data combination set;
determining an intersection of the candidate word set and the text data combination set, the intersection comprising a plurality of text data combinations;
determining a plurality of designated characteristic values for the plurality of text data combinations; and
based at least in part on the plurality of designated characteristic values and according to at least a criterion, recognizing among the plurality of text data combinations target words whose characteristic values fulfill the criterion.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013530136A JP5608817B2 (en) | 2010-09-26 | 2011-09-23 | Target word recognition using specified characteristic values |
EP11827103.0A EP2619651A4 (en) | 2010-09-26 | 2011-09-23 | Recognition of target words using designated characteristic values |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010295054.7 | 2010-09-26 | ||
CN201010295054.7A CN102411563B (en) | 2010-09-26 | 2010-09-26 | Method, device and system for identifying target words |
US13/240,034 | 2011-09-22 | ||
US13/240,034 US8744839B2 (en) | 2010-09-26 | 2011-09-22 | Recognition of target words using designated characteristic values |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012039778A1 true WO2012039778A1 (en) | 2012-03-29 |
Family
ID=45871528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2011/001648 WO2012039778A1 (en) | 2010-09-26 | 2011-09-23 | Recognition of target words using designated characteristic values |
Country Status (7)
Country | Link |
---|---|
US (1) | US8744839B2 (en) |
EP (1) | EP2619651A4 (en) |
JP (1) | JP5608817B2 (en) |
CN (1) | CN102411563B (en) |
HK (1) | HK1166397A1 (en) |
TW (1) | TWI518528B (en) |
WO (1) | WO2012039778A1 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5799733B2 (en) * | 2011-10-12 | 2015-10-28 | 富士通株式会社 | Recognition device, recognition program, and recognition method |
KR101359718B1 (en) * | 2012-05-17 | 2014-02-13 | 포항공과대학교 산학협력단 | Conversation Managemnt System and Method Thereof |
CN104111933B (en) * | 2013-04-17 | 2017-08-04 | 阿里巴巴集团控股有限公司 | Obtain business object label, set up the method and device of training pattern |
US12099936B2 (en) * | 2014-03-26 | 2024-09-24 | Unanimous A. I., Inc. | Systems and methods for curating an optimized population of networked forecasting participants from a baseline population |
US10592841B2 (en) * | 2014-10-10 | 2020-03-17 | Salesforce.Com, Inc. | Automatic clustering by topic and prioritizing online feed items |
TW201619885A (en) * | 2014-11-17 | 2016-06-01 | 財團法人資訊工業策進會 | E-commerce reputation analysis system, method and computer readable storage medium thereof |
CN105528403B (en) * | 2015-12-02 | 2020-01-03 | 小米科技有限责任公司 | Target data identification method and device |
CN106933797B (en) * | 2015-12-29 | 2021-01-26 | 北京趣拿信息技术有限公司 | Target information generation method and device |
CN105653701B (en) * | 2015-12-31 | 2019-01-15 | 百度在线网络技术(北京)有限公司 | Model generating method and device, word assign power method and device |
CN105893351B (en) * | 2016-03-31 | 2019-08-20 | 海信集团有限公司 | Audio recognition method and device |
CN108073568B (en) * | 2016-11-10 | 2020-09-11 | 腾讯科技(深圳)有限公司 | Keyword extraction method and device |
JP6618884B2 (en) * | 2016-11-17 | 2019-12-11 | 株式会社東芝 | Recognition device, recognition method and program |
CN108228556A (en) * | 2016-12-14 | 2018-06-29 | 北京国双科技有限公司 | Key phrase extracting method and device |
CN108960952A (en) * | 2017-05-24 | 2018-12-07 | 阿里巴巴集团控股有限公司 | A kind of detection method and device of violated information |
CN109241392A (en) * | 2017-07-04 | 2019-01-18 | 北京搜狗科技发展有限公司 | Recognition methods, device, system and the storage medium of target word |
WO2019023911A1 (en) * | 2017-07-31 | 2019-02-07 | Beijing Didi Infinity Technology And Development Co., Ltd. | System and method for segmenting text |
CN108304377B (en) * | 2017-12-28 | 2021-08-06 | 东软集团股份有限公司 | Extraction method of long-tail words and related device |
CN108733645A (en) * | 2018-04-11 | 2018-11-02 | 广州视源电子科技股份有限公司 | Candidate word evaluation method and device, computer equipment and storage medium |
CN108681534A (en) * | 2018-04-11 | 2018-10-19 | 广州视源电子科技股份有限公司 | Candidate word evaluation method and device, computer equipment and storage medium |
CN108595433A (en) * | 2018-05-02 | 2018-09-28 | 北京中电普华信息技术有限公司 | A kind of new word discovery method and device |
CN108874921A (en) * | 2018-05-30 | 2018-11-23 | 广州杰赛科技股份有限公司 | Extract method, apparatus, terminal device and the storage medium of text feature word |
CN109241525B (en) * | 2018-08-20 | 2022-05-06 | 深圳追一科技有限公司 | Keyword extraction method, device and system |
CN109271624B (en) * | 2018-08-23 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Target word determination method, device and storage medium |
CN109460450B (en) * | 2018-09-27 | 2021-07-09 | 清华大学 | Dialog state tracking method and device, computer equipment and storage medium |
CN109670170B (en) * | 2018-11-21 | 2023-04-07 | 东软集团股份有限公司 | Professional vocabulary mining method and device, readable storage medium and electronic equipment |
CN111222328B (en) * | 2018-11-26 | 2023-06-16 | 百度在线网络技术(北京)有限公司 | Label extraction method and device and electronic equipment |
CN109800435B (en) * | 2019-01-29 | 2023-06-20 | 北京金山数字娱乐科技有限公司 | Training method and device for language model |
CN110275938B (en) * | 2019-05-29 | 2021-09-17 | 广州伟宏智能科技有限公司 | Knowledge extraction method and system based on unstructured document |
CN110532551A (en) * | 2019-08-15 | 2019-12-03 | 苏州朗动网络科技有限公司 | Method, equipment and the storage medium that text key word automatically extracts |
CN111079421B (en) * | 2019-11-25 | 2023-09-26 | 北京小米智能科技有限公司 | Text information word segmentation processing method, device, terminal and storage medium |
CN111191446B (en) * | 2019-12-10 | 2022-11-25 | 平安医疗健康管理股份有限公司 | Interactive information processing method and device, computer equipment and storage medium |
CN111274353B (en) | 2020-01-14 | 2023-08-01 | 百度在线网络技术(北京)有限公司 | Text word segmentation method, device, equipment and medium |
CN111402894B (en) * | 2020-03-25 | 2023-06-06 | 北京声智科技有限公司 | Speech recognition method and electronic equipment |
CN111159417A (en) * | 2020-04-07 | 2020-05-15 | 北京泰迪熊移动科技有限公司 | Method, device and equipment for extracting key information of text content and storage medium |
CN111477219A (en) * | 2020-05-08 | 2020-07-31 | 合肥讯飞数码科技有限公司 | Keyword distinguishing method and device, electronic equipment and readable storage medium |
CN112101030B (en) * | 2020-08-24 | 2024-01-26 | 沈阳东软智能医疗科技研究院有限公司 | Method, device and equipment for establishing term mapping model and realizing standard word mapping |
CN112257416A (en) * | 2020-10-28 | 2021-01-22 | 国家电网有限公司客户服务中心 | Inspection new word discovery method and system |
CN112559865B (en) * | 2020-12-15 | 2023-12-08 | 泰康保险集团股份有限公司 | Information processing system, computer-readable storage medium, and electronic device |
CN113609296B (en) * | 2021-08-23 | 2022-09-06 | 南京擎盾信息科技有限公司 | Data processing method and device for public opinion data identification |
CN113836303A (en) * | 2021-09-26 | 2021-12-24 | 平安科技(深圳)有限公司 | Text type identification method and device, computer equipment and medium |
CN115879459A (en) * | 2022-06-23 | 2023-03-31 | 北京中关村科金技术有限公司 | Word determination method and device, electronic equipment and computer-readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040199375A1 (en) * | 1999-05-28 | 2004-10-07 | Farzad Ehsani | Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface |
US20070150275A1 (en) * | 1999-10-28 | 2007-06-28 | Canon Kabushiki Kaisha | Pattern matching method and apparatus |
US20100138411A1 (en) * | 2008-11-30 | 2010-06-03 | Nexidia Inc. | Segmented Query Word Spotting |
US20100211567A1 (en) * | 2001-03-16 | 2010-08-19 | Meaningful Machines, L.L.C. | Word Association Method and Apparatus |
US20100235341A1 (en) * | 1999-11-12 | 2010-09-16 | Phoenix Solutions, Inc. | Methods and Systems for Searching Using Spoken Input and User Context Information |
Family Cites Families (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2836159B2 (en) | 1990-01-30 | 1998-12-14 | 株式会社日立製作所 | Speech recognition system for simultaneous interpretation and its speech recognition method |
US7225182B2 (en) * | 1999-05-28 | 2007-05-29 | Overture Services, Inc. | Recommending search terms using collaborative filtering and web spidering |
US6711561B1 (en) * | 2000-05-02 | 2004-03-23 | Iphrase.Com, Inc. | Prose feedback in information access system |
KR100426382B1 (en) * | 2000-08-23 | 2004-04-08 | 학교법인 김포대학 | Method for re-adjusting ranking document based cluster depending on entropy information and Bayesian SOM(Self Organizing feature Map) |
CN1226717C (en) * | 2000-08-30 | 2005-11-09 | 国际商业机器公司 | Automatic new term fetch method and system |
US7475006B2 (en) * | 2001-07-11 | 2009-01-06 | Microsoft Corporation, Inc. | Method and apparatus for parsing text using mutual information |
WO2003027894A1 (en) * | 2001-09-26 | 2003-04-03 | The Trustees Of Columbia University In The City Of New York | System and method of generating dictionary entries |
US6889191B2 (en) * | 2001-12-03 | 2005-05-03 | Scientific-Atlanta, Inc. | Systems and methods for TV navigation with compressed voice-activated commands |
US20060004732A1 (en) * | 2002-02-26 | 2006-01-05 | Odom Paul S | Search engine methods and systems for generating relevant search results and advertisements |
CA2374298A1 (en) * | 2002-03-01 | 2003-09-01 | Ibm Canada Limited-Ibm Canada Limitee | Computation of frequent data values |
US7580831B2 (en) * | 2002-03-05 | 2009-08-25 | Siemens Medical Solutions Health Services Corporation | Dynamic dictionary and term repository system |
WO2004001623A2 (en) * | 2002-03-26 | 2003-12-31 | University Of Southern California | Constructing a translation lexicon from comparable, non-parallel corpora |
WO2004044887A1 (en) * | 2002-11-11 | 2004-05-27 | Matsushita Electric Industrial Co., Ltd. | Speech recognition dictionary creation device and speech recognition device |
US20040098380A1 (en) * | 2002-11-19 | 2004-05-20 | Dentel Stephen D. | Method, system and apparatus for providing a search system |
JP2004318480A (en) * | 2003-04-16 | 2004-11-11 | Sony Corp | Electronic device, method for extracting new word, and program |
US7555428B1 (en) * | 2003-08-21 | 2009-06-30 | Google Inc. | System and method for identifying compounds through iterative analysis |
US7424421B2 (en) * | 2004-03-03 | 2008-09-09 | Microsoft Corporation | Word collection method and system for use in word-breaking |
US7478033B2 (en) * | 2004-03-16 | 2009-01-13 | Google Inc. | Systems and methods for translating Chinese pinyin to Chinese characters |
US20080077570A1 (en) * | 2004-10-25 | 2008-03-27 | Infovell, Inc. | Full Text Query and Search Systems and Method of Use |
KR100682897B1 (en) * | 2004-11-09 | 2007-02-15 | 삼성전자주식회사 | Method and apparatus for updating dictionary |
JP3917648B2 (en) * | 2005-01-07 | 2007-05-23 | 松下電器産業株式会社 | Associative dictionary creation device |
CN100530171C (en) * | 2005-01-31 | 2009-08-19 | 日电(中国)有限公司 | Dictionary learning method and devcie |
US20070112839A1 (en) * | 2005-06-07 | 2007-05-17 | Anna Bjarnestam | Method and system for expansion of structured keyword vocabulary |
JP4816409B2 (en) * | 2006-01-10 | 2011-11-16 | 日産自動車株式会社 | Recognition dictionary system and updating method thereof |
JP3983265B1 (en) * | 2006-09-27 | 2007-09-26 | 沖電気工業株式会社 | Dictionary creation support system, method and program |
US8539349B1 (en) * | 2006-10-31 | 2013-09-17 | Hewlett-Packard Development Company, L.P. | Methods and systems for splitting a chinese character sequence into word segments |
WO2008066166A1 (en) | 2006-11-30 | 2008-06-05 | National Institute Of Advanced Industrial Science And Technology | Web site system for voice data search |
JP2008140117A (en) * | 2006-12-01 | 2008-06-19 | National Institute Of Information & Communication Technology | Apparatus for segmenting chinese character sequence to chinese word sequence |
JP5239161B2 (en) * | 2007-01-04 | 2013-07-17 | 富士ゼロックス株式会社 | Language analysis system, language analysis method, and computer program |
CN101261623A (en) * | 2007-03-07 | 2008-09-10 | 国际商业机器公司 | Word splitting method and device for word border-free mark language based on search |
WO2008144964A1 (en) * | 2007-06-01 | 2008-12-04 | Google Inc. | Detecting name entities and new words |
WO2008151465A1 (en) * | 2007-06-14 | 2008-12-18 | Google Inc. | Dictionary word and phrase determination |
WO2008151466A1 (en) * | 2007-06-14 | 2008-12-18 | Google Inc. | Dictionary word and phrase determination |
JP2010531492A (en) * | 2007-06-25 | 2010-09-24 | グーグル・インコーポレーテッド | Word probability determination |
US8005643B2 (en) * | 2007-06-26 | 2011-08-23 | Endeca Technologies, Inc. | System and method for measuring the quality of document sets |
US7917355B2 (en) * | 2007-08-23 | 2011-03-29 | Google Inc. | Word detection |
JP5379138B2 (en) * | 2007-08-23 | 2013-12-25 | グーグル・インコーポレーテッド | Creating an area dictionary |
CN101149739A (en) * | 2007-08-24 | 2008-03-26 | 中国科学院计算技术研究所 | Internet faced sensing string digging method and system |
CN101458681A (en) | 2007-12-10 | 2009-06-17 | 株式会社东芝 | Voice translation method and voice translation apparatus |
JP2009176148A (en) * | 2008-01-25 | 2009-08-06 | Nec Corp | Unknown word determining system, method and program |
US20090299998A1 (en) * | 2008-02-15 | 2009-12-03 | Wordstream, Inc. | Keyword discovery tools for populating a private keyword database |
US20100114878A1 (en) * | 2008-10-22 | 2010-05-06 | Yumao Lu | Selective term weighting for web search based on automatic semantic parsing |
US8346534B2 (en) * | 2008-11-06 | 2013-01-01 | University of North Texas System | Method, system and apparatus for automatic keyword extraction |
US7996369B2 (en) * | 2008-11-14 | 2011-08-09 | The Regents Of The University Of California | Method and apparatus for improving performance of approximate string queries using variable length high-quality grams |
US20100145677A1 (en) * | 2008-12-04 | 2010-06-10 | Adacel Systems, Inc. | System and Method for Making a User Dependent Language Model |
US8032537B2 (en) * | 2008-12-10 | 2011-10-04 | Microsoft Corporation | Using message sampling to determine the most frequent words in a user mailbox |
KR101255557B1 (en) * | 2008-12-22 | 2013-04-17 | 한국전자통신연구원 | System for string matching based on tokenization and method thereof |
US8145662B2 (en) * | 2008-12-31 | 2012-03-27 | Ebay Inc. | Methods and apparatus for generating a data dictionary |
JP4701292B2 (en) * | 2009-01-05 | 2011-06-15 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Computer system, method and computer program for creating term dictionary from specific expressions or technical terms contained in text data |
JP2010176285A (en) * | 2009-01-28 | 2010-08-12 | Nippon Telegr & Teleph Corp <Ntt> | Unknown word registration method, device and program, and computer readable recording medium |
US20100205198A1 (en) * | 2009-02-06 | 2010-08-12 | Gilad Mishne | Search query disambiguation |
US20100287177A1 (en) * | 2009-05-06 | 2010-11-11 | Foundationip, Llc | Method, System, and Apparatus for Searching an Electronic Document Collection |
US8392441B1 (en) * | 2009-08-15 | 2013-03-05 | Google Inc. | Synonym generation using online decompounding and transitivity |
CN101996631B (en) | 2009-08-28 | 2014-12-03 | 国际商业机器公司 | Method and device for aligning texts |
US20110082848A1 (en) * | 2009-10-05 | 2011-04-07 | Lev Goldentouch | Systems, methods and computer program products for search results management |
- 2010-09-26 CN CN201010295054.7A patent/CN102411563B/en active Active
- 2010-11-22 TW TW099140212A patent/TWI518528B/en not_active IP Right Cessation
- 2011-09-22 US US13/240,034 patent/US8744839B2/en active Active
- 2011-09-23 WO PCT/US2011/001648 patent/WO2012039778A1/en active Application Filing
- 2011-09-23 EP EP11827103.0A patent/EP2619651A4/en not_active Withdrawn
- 2011-09-23 JP JP2013530136A patent/JP5608817B2/en not_active Expired - Fee Related
- 2012-07-18 HK HK12107009.0A patent/HK1166397A1/en unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP2619651A4 * |
Also Published As
Publication number | Publication date |
---|---|
US8744839B2 (en) | 2014-06-03 |
JP2013545160A (en) | 2013-12-19 |
TWI518528B (en) | 2016-01-21 |
JP5608817B2 (en) | 2014-10-15 |
CN102411563B (en) | 2015-06-17 |
CN102411563A (en) | 2012-04-11 |
US20120078631A1 (en) | 2012-03-29 |
EP2619651A1 (en) | 2013-07-31 |
HK1166397A1 (en) | 2012-10-26 |
EP2619651A4 (en) | 2017-12-27 |
TW201214169A (en) | 2012-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8744839B2 (en) | | Recognition of target words using designated characteristic values |
CN109189942B (en) | | Construction method and device of patent data knowledge graph |
CN106649818B (en) | | Application search intention identification method and device, application search method and server |
CN109670163B (en) | | Information identification method, information recommendation method, template construction method and computing device |
CN111274365B (en) | | Intelligent inquiry method and device based on semantic understanding, storage medium and server |
JP3041268B2 (en) | | Chinese Error Checking (CEC) System |
US7461056B2 (en) | | Text mining apparatus and associated methods |
WO2015149533A1 (en) | | Method and device for word segmentation processing on basis of webpage content classification |
CN111324771B (en) | | Video tag determination method and device, electronic equipment and storage medium |
CN107943792B (en) | | Statement analysis method and device, terminal device and storage medium |
CN109388743B (en) | | Language model determining method and device |
WO2016095645A1 (en) | | Stroke input method, device and system |
CN109299233A (en) | | Text data processing method, device, computer equipment and storage medium |
CN110705285B (en) | | Government affair text subject word library construction method, device, server and readable storage medium |
CN114547257B (en) | | Class matching method and device, computer equipment and storage medium |
CN110874408B (en) | | Model training method, text recognition device and computing equipment |
US20220129634A1 (en) | | Method and apparatus for constructing event library, electronic device and computer readable medium |
CN115329083A (en) | | Document classification method and device, computer equipment and storage medium |
CN113220824B (en) | | Data retrieval method, device, equipment and storage medium |
CN115048523A (en) | | Text classification method, device, equipment and storage medium |
CN114117007A (en) | | Method, device, equipment and storage medium for searching entity |
CN110276001B (en) | | Checking page identification method and device, computing equipment and medium |
CN116126893B (en) | | Data association retrieval method and device and related equipment |
JP2013084216A (en) | | Fixed phrase discrimination device and fixed phrase discrimination method |
CN113377921B (en) | | Method, device, electronic equipment and medium for matching information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11827103; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2013530136; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2011827103; Country of ref document: EP |