WO2014092361A1 - Evaluation engine of patent evaluation system - Google Patents

Evaluation engine of patent evaluation system

Info

Publication number
WO2014092361A1
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation
engine
computer
result
patents
Application number
PCT/KR2013/010951
Other languages
French (fr)
Inventor
Jung Ae Kwak
Kyeong Seon CHO
In Jae Park
Seung Taek Oh
Un Young Cho
Original Assignee
Kipa.
Priority claimed from KR1020120144327A external-priority patent/KR101456189B1/en
Priority claimed from KR1020120144316A external-priority patent/KR101658890B1/en
Priority claimed from KR1020120144328A external-priority patent/KR101456190B1/en
Application filed by Kipa.
Publication of WO2014092361A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/18 - Legal services
    • G06Q 50/184 - Intellectual property management

Definitions

  • the present invention relates to an evaluation engine or an artificially intelligent evaluation-bot (or artificially intelligent evaluation agent) for a patent evaluation system.
  • IP owners evaluate their IP rights on their own or entrust profit/non-profit organizations to conduct IP evaluation.
  • results of evaluation of patents may be utilized for various purposes, such as maintenance of patents, offering of strategies for utilizing patents, support for research planning, assessment of patents in legal, economic, and environmental respects, invention evaluation, identification of a critical invention and its priority, association with business strategies (strategic alliance), allocation of R&D planning resources, technology evaluation for a loan from a financial organization, evaluation for choosing a provider (subject) of a government direct/indirect technical development support business, evaluation of intangible assets, conversion of customers’ intangible assets into current values based on clear and objective materials in consideration of technical, economic, and social aspects, compensation for inventors, asset evaluation (for depreciation), evaluation of IPs for the purpose of technology trade (technology transfer, M&A, etc.), evaluation of IP rights for a technology-backed loan, or attraction of investment.
  • Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system.
  • an object of an embodiment disclosed in this specification is to provide a system for patent evaluation. Further, an object of an embodiment disclosed in this specification is to verify an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) in a system for patent evaluation.
  • an aspect of an embodiment of the present invention provides a method of evaluating a patent using an evaluation engine.
  • the method may comprise: receiving information on a specific patent from a user device; receiving an evaluation request for the specific patent from the user device; and providing an evaluation result, which is yielded for the specific patent using the evaluation engine, to the user device.
  • the evaluation engine may be generated by performing machine learning on an expert’s (or patent technician’s) evaluation results on sample patents.
  • The evaluation engine may be generated through at least one of: calculating a correlation of evaluation factors with one or more pre-defined evaluation items, based on the expert’s evaluation result on the sample patents; mapping respective evaluation items and evaluation factors based on the calculated correlation; and performing machine learning on the expert’s evaluation result by using the evaluation factors mapped to the evaluation item.
  • the evaluation factor may be based on information extracted from one or more of bibliographic information, prosecution history information, a specification, and claims of an issued patent. Also, the evaluation factor may be based on information extracted by performing natural language processing on the specification and the claims of the issued patent.
  • the evaluation item may include at least one of strength of patent right, quality of technology, and usability.
  • in the outputting of the evaluation result, when the evaluation result for the specific patent identified by using the information has been yielded in advance, the pre-yielded evaluation result is output; alternatively, the evaluation result for the specific patent may be yielded immediately and output in response to the user's request.
  • the expert’s evaluation may be performed for each technical field.
  • the evaluation engine may be generated for each technical field. Accordingly, the outputting of the evaluation result may use an evaluation engine of a technical field corresponding to a technical field of the specific patent.
  • a correlation between experts may be calculated based on results evaluated by a plurality of experts for each technical field.
  • the evaluation engine may be established based on the experts’ evaluation results having a high correlation, according to the calculated correlation.
  • another aspect of an embodiment of the present invention provides a method of verifying an evaluation engine for a patent evaluation system.
  • the method may comprise: dividing a plurality of experts’ evaluation results on sample patents for each evaluation item into several groups; reserving an evaluation result of at least one group among the several groups for verification; generating an evaluation engine by performing machine learning on the evaluation results of the remaining groups; and primarily verifying the evaluation engine by using the reserved evaluation result of the at least one group.
  • the method may further comprise: yielding an evaluation result on a patent, in which a value of a specific evaluation factor is equal to or greater than a predetermined value, by using the evaluation engine; and secondarily verifying the evaluation engine by using a grade distribution generated based on the yielded evaluation result.
  • the secondarily verifying may include verifying whether the patent in which the value of the specific evaluation factor is equal to or greater than the predetermined value has a higher grade than a general patent.
  • the method may comprise: primarily mapping one or more predefined patent evaluation items and candidates of associated evaluation factors to provide the mapping result to an expert’s computer; receiving the evaluation results on the sample patents for each evaluation item from the expert’s computer; calculating a correlation between each evaluation item and each evaluation factor, based on the expert’s evaluation result; remapping each evaluation item and each evaluation factor based on the calculated correlation; and generating an evaluation engine by performing machine learning on the expert’s evaluation result, by using the evaluation factors mapped to the evaluation items.
  • a patent may be automatically evaluated by a system, and a result of the evaluation may be suggested.
  • an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of this disclosure performs a more quantitative and objective automatic evaluation of a patent.
  • Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system;
  • Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention;
  • Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2;
  • Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3;
  • Fig. 5 is a flowchart illustrating a method of building an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing machine learning on an expert’s evaluation result according to an embodiment of the present invention;
  • Fig. 6 is a flowchart and a table illustrating an aspect of verifying the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) built by performing machine learning on the expert’s evaluation result;
  • Fig. 7 is a flowchart illustrating another aspect of verifying the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) built by performing machine learning on the expert’s evaluation result;
  • Figs. 8a, 8b and 9 are distribution diagrams according to the other aspect illustrated in Fig. 7;
  • Fig. 10 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine according to an embodiment of the present invention;
  • Fig. 11 illustrates the physical configuration of evaluation servers and service servers according to an embodiment of the present invention.
  • the technical terms used herein are used merely to describe predetermined embodiments and should not be construed as limiting. Further, the technical terms used herein, unless defined otherwise, should be interpreted as generally understood by those of ordinary skill in the art and should not be construed to be unduly broad or narrow. Further, when a technical term used herein does not correctly express the spirit of the present invention, it should be replaced with a technical term that those of ordinary skill in the art would correctly understand. Further, the general terms used herein should be interpreted as defined in the dictionary or according to the context and should not be interpreted as unduly narrow.
  • the terms “first” and “second” may be used to describe various components, but these components are not limited thereto. The terms are used only for distinguishing one component from another.
  • a first component may also be referred to as a second component, and the second component may likewise be referred to as the first component.
  • Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention.
  • a patent evaluation system includes one or more servers 100 and one or more databases (hereinafter, simply referred to as “DB”) 190.
  • the one or more servers 100 may be remotely managed by a managing device 500.
  • the one or more servers 100 are connected to a wired/wireless network and may provide a user device 600 with an evaluation result service and other various services. Specifically, when receiving a request for an evaluation service for a specific patent case from the user device, the one or more servers 100 may provide a result from evaluating the specific patent case.
  • Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2.
  • one or more servers 100 may include an evaluation server 110 for domestic patents (e.g., Korean patents), a service server 120 for domestic patents (e.g., Korean patents), an evaluation server 130 for foreign patents (e.g., U.S. patents), and a service server 140 for foreign patents (e.g., U.S. patents).
  • the domestic patent service server 120 and the foreign (e.g., U.S.) patent service server 140 are shown to be physically separated from each other, but these servers may be integrated into a single physical server. Further, the servers 110, 120, 130, and 140 as illustrated may be integrated into a single physical server.
  • the above-described one or more databases 190 may include patent information DBs 191 and 192, evaluation factor (or evaluation index) DBs 193 and 194, similar patent DBs 195 and 196, and evaluation result DBs 197 and 198.
  • Each DB is illustrated to be provided separately from each other for the purpose of each of evaluation of domestic patents and evaluation of foreign (e.g., U.S.) patents, and the DBs may be integrated into one.
  • the domestic (e.g., Korean) patent information DB 191 and the foreign (e.g., U.S.) patent information DB 192 may be integrated into one, and the domestic (e.g., Korean) evaluation factor (or evaluation index) DB 193 and the foreign (e.g., U.S.) evaluation factor (or evaluation index) DB 194 may be integrated into one.
  • alternatively, all the DBs may be integrated into a single DB that is divided into fields.
  • Such DBs may be generated based on what is received from an external DB provider.
  • the server 100 may include a data collecting unit 150 that receives a domestic (e.g., Korean) or foreign (e.g., U.S.) raw DB from the external DB provider.
  • the data collecting unit 150 physically includes a network interface card (NIC).
  • the data collecting unit 150 logically may be a program composed of an API (Application Programming Interface).
  • the data collecting unit 150 processes a raw DB received from the external DB provider and may store the received raw DB in one or more DBs 190, for example, patent information DBs 191 and 192 which are connected to the server 100.
  • the domestic/foreign patent evaluation servers 110 and 130 may include one or more of specification processing units 111 and 131, natural language processing units 112 and 132, keyword processing units 113 and 133, similar patent processing units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136.
  • the specification processing units 111 and 131 extract information from the one or more DBs 190, for example, the patent information DBs 191 and 192, and parse (or transform) the information.
  • the specification processing units 111 and 131 may extract one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and may store the extracted information in each field of the evaluation factor (or evaluation index) DB.
  • the natural language processing units 112 and 132 perform a natural language process on text included in the extracted patent specification and the claims.
  • the “natural language process” refers to a computer analyzing a natural language used for, e.g., general conversation, rather than a special programming language for computers.
  • the natural language processing units 112 and 132 may conduct sentence analysis, syntax analysis, and processing of mixed-language text. Further, the natural language processing units 112 and 132 may carry out a semantic process.
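As a rough, illustrative sketch of this kind of preprocessing (not the system's actual implementation), the following Python snippet performs naive sentence segmentation and tokenization with the standard library only; real Korean-language processing would additionally require morpheme analysis, discussed further below.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence segmentation on terminal punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence: str) -> list[str]:
    # Lowercase word tokens; punctuation and reference numerals are dropped.
    return re.findall(r"[a-z]+", sentence.lower())

claim_text = ("An apparatus comprising a processor. The processor evaluates "
              "a patent based on evaluation factors.")
for sentence in split_sentences(claim_text):
    print(tokenize(sentence))
```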
  • the keyword processing units 113 and 133 extract keywords from each patent based on a result of the natural language process.
  • a scheme such as a VSM (Vector Space Model) or LSA (Latent Semantic Analysis) may be used.
  • the “keyword” of a patent specification refers to word(s) that represent the subject of the patent specification; for example, in the instant specification, “patent evaluation” may be a keyword.
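By way of a minimal VSM sketch (all documents and parameters here are invented for illustration), keywords can be ranked by TF-IDF weight; an LSA variant would additionally reduce the term space with a truncated SVD:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "evaluation engine evaluates patent claims using evaluation factors",
    "battery cell electrode comprises lithium and a separator film",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)      # documents-by-terms weight matrix
terms = vectorizer.get_feature_names_out()

k = 3                                       # keywords to keep per patent
for row in tfidf.toarray():
    top = row.argsort()[::-1][:k]           # indices of the k largest weights
    print([terms[i] for i in top if row[i] > 0])
```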
  • the similar patent processing units 114 and 134 may search for the patents closest to each patent based on the extracted keywords and may store the results of the search in the similar patent DBs 195 and 196.
  • similar patent groups are known to belong to the same sub class in the IPC (International Patent Classification), but according to an embodiment of this disclosure, similar patents may be searched from other sub classes as well as the same sub class.
  • a mere increase in the number of keywords may lead to extraction of inaccurate keywords, thus resulting in completely different patents being searched from other sub classes as similar patents.
  • accordingly, a proper number of keywords is extracted based on the results of a simulation over different numbers of keywords, and similar patents are searched for with the extracted keywords.
  • the evaluation factor (or evaluation index) processing units 115 and 135 extract values of evaluation factors (or evaluation indexes) from one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and store the extracted values in the evaluation factor DBs 193 and 194.
  • evaluation factors for evaluating Korean patents may differ from those for evaluating foreign patents.
  • evaluation factors for evaluating Korean patents are listed in Table 1 below:
  • evaluation factors for evaluating a foreign patent, e.g., a U.S. patent, may be listed in Table 2 below:
  • these evaluation factors are merely examples, and any information that may be directly derived from patent information may be utilized as an evaluation factor (evaluation index). Further, any information that may be indirectly obtained or derived by processing patent information may also be used as an evaluation factor (evaluation index).
  • the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 evaluate each patent for each of predetermined evaluation items based on the evaluation factors and evaluation mechanism stored in the evaluation factor DBs 193 and 194 and produce results of the evaluation. Further, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 may store the produced evaluation results in the evaluation result DBs 197 and 198.
  • the evaluation items may be defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may be defined as strength of patent right and marketability (or commercial potential). Such definitions may be changed depending on the main object of the patent evaluation. Accordingly, the scope of the present invention is not limited to those listed above and may be expanded to anything to which the scope of the present invention may apply.
  • the evaluation mechanism may include a weight and a machine learning model.
  • the weight may be a value obtained from expert’s (or patent technician’s) evaluation results with respect to several sample patents.
  • the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 will be described below in detail with reference to Fig. 4.
  • the domestic/foreign patent service servers 120 and 140 may include one or more of evaluation report generating units 121 and 141 and portfolio analyzing units 122 and 142.
  • the evaluation report generating units 121 and 141 generate evaluation reports based on evaluation results stored in the evaluation result DBs 197 and 198.
  • the portfolio analyzing units 122 and 142 may analyze portfolios of patents owned by a patent owner based on the information stored in the similar patent DBs 195 and 196. Further, the portfolio analyzing units 122 and 142 may analyze patent statuses for each right owner by technology (or technical field) or by IPC classification.
  • the portfolio analyzing units 122 and 142 may perform various types of analysis based on the similar patent DBs 195 and 196 and the evaluation result DBs 197 and 198.
  • the portfolio analyzing units 122 and 142 may perform various types of analysis such as patent trend analysis, or per-patentee total annual fees analysis.
  • the domestic/foreign patent service servers 120 and 140, upon receiving a request for an evaluation service for a predetermined patent from a user device, may provide results of evaluation of the specific patent case. Further, in response to a user’s request, the evaluation reports may be provided in the form of a webpage, an MS Excel file, an MS Word file, or a PDF file, or the results of analysis may be offered. To provide such a service, a user authentication/authority managing unit 160 may be needed.
  • Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3.
  • the specification processing units 111 and 131 receive patent specifications from the patent information DBs 191 and 192 and parse the patent specifications.
  • the patent specification may be written in, e.g., XML, and the specification processing units 111 and 131 may include XML tag processing units to parse the XML.
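A minimal parsing sketch is shown below. The tag names (invention-title, claim-text) are hypothetical; actual KIPO or USPTO XML schemas use their own element names and would be mapped accordingly.

```python
import xml.etree.ElementTree as ET

xml_doc = """<patent-document>
  <bibliographic-data>
    <invention-title>Evaluation engine of patent evaluation system</invention-title>
  </bibliographic-data>
  <claims>
    <claim num="1"><claim-text>A method of evaluating a patent...</claim-text></claim>
  </claims>
</patent-document>"""

root = ET.fromstring(xml_doc)
title = root.findtext(".//invention-title")           # bibliographic field
claim_texts = [c.text for c in root.iter("claim-text")]
print(title, "| claims parsed:", len(claim_texts))
```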
  • the evaluation factor processing units 115 and 135 may include a first evaluation factor processing unit and a second evaluation factor processing unit for processing evaluation factors based on the parsed patent specification.
  • the first evaluation factor processing unit extracts values of evaluation factors that do not require the result of natural language processing, based on the parsed patent specification. For example, the first evaluation factor processing unit calculates the values of evaluation factors that do not require natural language processing, such as the length of each independent claim, the number of claims, the number of claim categories, the number of independent claims, the number of domestic family patents, and the number of foreign family patents as shown in Table 1, and stores the values in the evaluation factor DBs 193 and 194.
  • the natural language processing units 112 and 132 perform natural language processing based on the parsed patent specification.
  • the natural language processing units 112 and 132 include a morpheme analyzing unit and a TM analyzing unit that work based on a dictionary DB.
  • the “morpheme” refers to the smallest meaningful unit that cannot be analyzed any further, and the “morpheme analysis” refers to the first step of analysis of natural language, which changes an input string of letters into a string of morphemes.
  • the second evaluation factor processing unit of the evaluation factor processing units 115 and 135 calculates values of the remaining evaluation factors based on the result of the natural language processing. For example, the value of an evaluation factor such as “keyword consistency with similar foreign patent group” summarized in Table 1 above is calculated and stored in the evaluation factor DBs 193 and 194.
  • the keyword extracting units 113 and 133 that extract keywords based on the result of the natural language processing may include a keyword candidate selecting unit, a useless word removing unit, and a keyword selecting unit.
  • the keyword candidate selecting unit selects keyword candidates that may represent the subject of each patent.
  • the useless word removing unit removes useless words that have low importance from among the extracted keyword candidates.
  • the keyword selecting unit finally selects a proper number of keywords from among the remaining keyword candidates after the useless words have been removed and stores the selected keywords in the evaluation factor DBs 193 and 194.
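The three stages just described might be compressed into a sketch like the following, where the useless-word list and the keyword count are illustrative assumptions:

```python
from collections import Counter

USELESS_WORDS = {"method", "apparatus", "system", "device",
                 "comprising", "wherein", "said"}       # illustrative list

def select_keywords(tokens: list[str], n_keywords: int = 10) -> list[str]:
    candidates = Counter(tokens)           # stage 1: keyword candidates
    for word in USELESS_WORDS:             # stage 2: remove useless words
        candidates.pop(word, None)
    return [w for w, _ in candidates.most_common(n_keywords)]  # stage 3

tokens = ("evaluation engine evaluates patent claims wherein evaluation "
          "factors map to evaluation items").lower().split()
print(select_keywords(tokens, n_keywords=3))
```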
  • the chance of being recalled (that is, the chance of a keyword being reused) is 22.7% for 10 keywords and goes up to 54.1% for 50 keywords, while the accuracy moves in the opposite direction, from 20.6% down to 10.9%.
  • that is, a mere increase in the number of keywords, although it raises the recall rate, may lower the accuracy; accordingly, an optimum number of keywords may be yielded based on the obtained accuracy.
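One way to make this trade-off concrete, as a sketch under the assumption that a hand-labelled reference keyword set is available, is to score each candidate keyword count by its precision/recall balance (F1 here is an assumed criterion, not one stated in the source):

```python
def precision_recall(extracted: list[str], gold: set[str]) -> tuple[float, float]:
    hits = len(set(extracted) & gold)
    return hits / len(extracted), hits / len(gold)

ranked = ["evaluation", "engine", "patent", "claim", "factor", "server",
          "keyword", "grade", "expert", "learning"]   # ranked candidates
gold = {"evaluation", "engine", "patent", "machine", "learning"}

best_k, best_f1 = 0, 0.0
for k in range(1, len(ranked) + 1):
    p, r = precision_recall(ranked[:k], gold)
    f1 = 2 * p * r / (p + r) if p + r else 0.0        # harmonic mean
    if f1 > best_f1:
        best_k, best_f1 = k, f1
print(best_k, round(best_f1, 3))
```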
  • the similar patent extracting units 114 and 134 search for similar patents based on the keywords and may include a document clustering unit, a document similarity calculating unit, and a similar patent generating unit.
  • the document clustering unit primarily clusters similar patents based on the keywords.
  • the document similarity calculating unit calculates similarity between patent documents among the clustered patents.
  • the similar patent generating unit selects, as a result, the actually closest patents from among the primarily clustered patent documents and stores the result in the similar patent DBs 195 and 196.
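A minimal sketch of this similarity ranking follows, using cosine similarity over TF-IDF vectors on three invented documents; the clustering stage is omitted for brevity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "P1": "evaluation engine patent claims machine learning",
    "P2": "patent evaluation server machine learning expert",
    "P3": "lithium battery electrode separator cell",
}
ids = list(docs)
tfidf = TfidfVectorizer().fit_transform(docs.values())
sims = cosine_similarity(tfidf)             # pairwise document similarity

query = 0                                   # rank neighbours of P1
ranked = sorted(range(len(ids)), key=lambda j: sims[query, j], reverse=True)
print([(ids[j], round(float(sims[query, j]), 2)) for j in ranked if j != query])
```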
  • the patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 include a machine learning model unit and a patent evaluation unit.
  • the machine learning model unit performs machine learning based on the expert (or patent technician) evaluation result DB. For this purpose, an evaluation result for sample patents may be received from each expert per technology (i.e., technical field).
  • the sample patents are a set for machine learning, and a few hundred to a few thousand patents may be extracted from the patent information DBs 191 and 192 to select the sample patents.
  • the sample patents may be selected to evenly cover the evaluation factors shown in Tables 1 and 2. For example, only very few of all the issued patents (numbering from a few tens to a few million patents) have a non-zero value for some evaluation factors, such as the number of invalidation trials, the number of trials to confirm the scope of a patent, the number of defensive confirmation trials for the scope of a right, or the number of requests for an accelerated appeal.
  • the sample patents may be divided into a plurality of sets (for example, 10 sets). Among the plurality of sets, some may be used for machine learning, and the remainder may be used to verify the result of the machine learning.
  • the service servers 120 and 140 may provide a webpage screen into which the expert logs in.
  • a list of patents to be evaluated by the expert may be provided.
  • the service servers 120 and 140 provide a webpage in which the above-described evaluation items (for example, strength of patent right, quality of technology, and usability) are listed, to the expert’s computer.
  • in doing so, the service servers 120 and 140 may map and display candidates of the evaluation factors associated with each evaluation item, as in the following Table 4.
  • the expert enters a point (or score) for each evaluation item in the webpage while viewing the associated evaluation factor candidates for each evaluation item, and the service servers 120 and 140 may receive the point (or score) and store it in the expert evaluation DB.
  • the machine learning model unit extracts the evaluation factors actually associated with each evaluation item based on the expert’s evaluation results stored in the expert evaluation DB. Specifically, the machine learning model unit analyzes the correlation between each evaluation item and each evaluation factor based on the expert’s evaluation results stored in the expert evaluation DB. For example, it is analyzed whether the point (or score) the expert gives an evaluation item increases as the value of an evaluation factor increases, or increases as the value of the evaluation factor decreases.
  • a result of analyzing the correlation of the evaluation factors for the evaluation item “strength of patent right” is represented in the following Table 5.
  • a negative correlation value represents that the value of the evaluation item increases when the value of the evaluation factor decreases, whereas a positive correlation value represents that the value of the evaluation item increases when the value of the evaluation factor increases.
  • the machine learning model unit extracts evaluation factors having a high correlation with a corresponding evaluation item, such as “strength of patent right”, among the evaluation items.
  • the correlation has a value between -1 and +1. Generally, a value in the range of 0.2 to 0.6 is considered high. Accordingly, the machine learning model unit may select “the number of claims”, “the number of independent claims”, and “the number of claim categories” as evaluation factors for the evaluation item “strength of patent right”.
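A minimal sketch of this selection rule follows: compute the Pearson correlation between each factor's values and the expert scores over the sample patents, and keep factors whose absolute correlation reaches the band described above. All numbers are invented, and statistics.correlation requires Python 3.10+.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

expert_scores = [3, 5, 4, 2, 5, 1, 4, 3]   # scores for "strength of patent right"
factors = {
    "number_of_claims":             [10, 20, 15, 8, 22, 5, 14, 11],
    "number_of_independent_claims": [1, 3, 2, 1, 3, 1, 2, 2],
    "length_of_independent_claim":  [300, 120, 200, 350, 100, 400, 180, 260],
}

# Keep a factor when |r| reaches the band considered high in the text.
selected = {name: round(correlation(values, expert_scores), 2)
            for name, values in factors.items()
            if abs(correlation(values, expert_scores)) >= 0.2}
print(selected)
```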
  • each expert in a technology may, upon evaluating patents, produce different evaluation results; to address this issue, a correlation between the experts may additionally be calculated according to an embodiment of the present invention.
  • technical fields may be categorized, e.g., into electronics, mechanics, chemistry, physics, and biology.
  • the experts per field may be grouped in pairs.
  • the correlation between experts A and B calculated for the electronics field is, as shown in Table 5, 0.64 for the Strength of Patent Right evaluation item, 0.39 for the Quality of Technology evaluation item, and 0.89 for the Usability evaluation item.
  • when the correlation between paired experts is low, the result of the evaluation performed by a pair of experts having a higher correlation may be used; alternatively, a higher weight may be assigned to one of the paired experts.
  • after defining the evaluation factors for each evaluation item in this way, the machine learning model unit performs machine learning based on the expert’s (or patent technician’s) evaluation results (e.g., the evaluation points or scores) stored in the expert evaluation DB.
  • the machine learning serves to objectify the expert’s (or the patent technician’s) subjective evaluation results.
  • the machine learning model unit calculates a weight based on the expert’s evaluation results, e.g., the points (or scores) stored in the expert evaluation DB, and performs machine learning using the calculated weight.
  • the machine learning may be done per technology (or technical field).
  • the evaluation of sample patents performed by an expert is done for each technology, and the machine learning is also conducted for each technology.
  • in a certain technical field, as the length of an independent claim increases, the scope of the claim decreases.
  • in another technical field, however, the length of an independent claim may have nothing to do with the breadth or narrowness of the claim scope. Accordingly, the machine learning is performed for each technology.
  • the above-described weight may also be produced separately for each technical field.
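A minimal per-field sketch follows: one model per technical field maps factor vectors (here: number of claims, number of independent claims, independent-claim length) to expert scores, so learned weights can differ between fields. Linear regression is an assumed model family; the source does not disclose the actual learner.

```python
from sklearn.linear_model import LinearRegression

# One (factor matrix, expert-score vector) pair per technical field;
# all numbers are invented for illustration.
training = {
    "electronics": ([[10, 1, 300], [20, 3, 120], [15, 2, 200], [8, 1, 350]],
                    [3, 5, 4, 2]),
    "chemistry":   ([[12, 2, 500], [18, 2, 480], [9, 1, 520], [25, 4, 495]],
                    [3, 4, 2, 5]),
}
engines = {field: LinearRegression().fit(X, y)
           for field, (X, y) in training.items()}

new_patent = [[14, 2, 250]]                 # factor vector of a patent to grade
print(round(float(engines["electronics"].predict(new_patent)[0]), 2))
```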
  • the patent evaluation unit evaluates patent cases according to the result of the machine learning and stores the evaluated result in the evaluation result DBs 197 and 198.
  • a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) is described.
  • Fig. 5 is a flowchart illustrating a method of building an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing machine learning on an expert’s evaluation result according to an embodiment of the present invention.
  • evaluation items may be previously defined (S110).
  • the evaluation items may be, as described earlier, defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may also be defined as strength of patent right and marketability (or, commercial potential). Such definitions may be changed depending on what goals are to be achieved by evaluating patents.
  • the service servers 120 and 140 primarily map evaluation items with evaluation factors for sample patents and provide the result of the mapping to an expert’s computer (S120).
  • the primary mapping may be to map the candidates of evaluation factors inferred to be associated with each evaluation item.
  • the result of evaluating the sample patents may be received from the expert’s computer (S130).
  • the evaluation result may be points given by the expert to the evaluation items.
  • the service servers 120 and 140 may prepare a webpage to provide information to the expert’s computer and to receive a result of the evaluation.
  • the correlations between the evaluation factors and the one or more prepared evaluation items may be calculated based on the expert’s evaluation result for the sample patents (S140).
  • the correlation may have a value from -1 to +1 as described above.
  • remapping may be done between each evaluation item and evaluation factors based on the calculated correlation (S150). Some of the evaluation factors primarily mapped to each evaluation item by such remapping may be excluded from mapping, and other evaluation factors may be mapped to arbitrary evaluation items as well.
  • the evaluation factors mapped to the evaluation items may be used to perform machine learning on the expert’s evaluation result, thereby building or establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S160).
  • Fig. 6 is a flowchart and a table illustrating an aspect of verifying the evaluation engine built by performing machine learning on the expert’s evaluation result.
  • the building or establishing of the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) may include dividing the expert’s evaluation results of the sample patents into a plurality of groups (S161), reserving one group among the plurality of groups for verification, and performing machine learning on the expert’s evaluation results of the remaining groups, thereby building or establishing the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) (S162).
  • the evaluation servers 110 and 130 verify the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by using the expert’s evaluation result of the reserved group.
  • for example, the evaluation servers 110 and 130 may establish the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) by performing machine learning on the expert’s evaluation results of the second to tenth groups, and verify it by using the first group. Likewise, the evaluation servers 110 and 130 may establish the evaluation engine by performing machine learning on the expert’s evaluation results of the first and third to tenth groups, and verify it by using the second group. As such, the evaluation servers 110 and 130 may also verify the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) only once, with one group. In this case, the evaluation servers 110 and 130 may correct the calculated weight according to the single verification result.
  • the evaluation servers 110 and 130 may also repeat the verification.
  • when the verification is repeated, the group reserved for verification may be rotated from the first group through the tenth group, with machine learning performed on the remaining groups each time. That is, the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) may first be verified by the first group, and next be verified by the second group.
  • the evaluation servers 110 and 130 may correct the calculated weight by using an average result of the respective verifications.
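The reserve-and-verify procedure described above is essentially k-fold cross-validation. The sketch below runs it with 10 groups on synthetic data and averages the held-out error, which could then drive the weight correction; everything here is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((200, 5))                    # 200 sample patents, 5 factors
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.1, 200)

errors = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    # Train on nine groups, verify on the reserved tenth group.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append(float(np.mean((pred - y[test_idx]) ** 2)))
print(round(float(np.mean(errors)), 4))     # average verification error
```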
  • alternatively, another verifying method 170b may be performed, which will be described below in detail.
  • the evaluation servers 110 and 130 extract patents in which the values of specific evaluation factors are equal to or greater than predetermined values (S171b). For example, the evaluation servers 110 and 130 extract patents having three or more independent claims for the corresponding evaluation factor. In addition, the evaluation servers 110 and 130 yield evaluation results for the extracted patents (S172b). The evaluation result may relate to all the evaluation items, or may relate to only some evaluation items, for example, the evaluation item associated with the number of independent claims.
  • the evaluation servers 110 and 130 grade the evaluation result (S173b).
  • the grading may follow a grade model based on a normal distribution.
  • the grades may follow a nine-grade scheme.
  • the nine-grade scheme may be a system, such as AAA, AA, A, BBB, BB, B, CCC, CC, and C, or a system, such as A+, A, A-, B+, B, B-, C+, C, and C-.
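As a sketch of grading on a normal-distribution model, raw engine scores can be standardized and bucketed into nine grades at fixed z-score cut points; the boundaries below are illustrative assumptions, not calibrated values.

```python
import numpy as np

GRADES = ["C", "CC", "CCC", "B", "BB", "BBB", "A", "AA", "AAA"]
BOUNDS = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]  # 8 cuts -> 9 grades

def to_grade(score: float, mean: float, std: float) -> str:
    z = (score - mean) / std                # standardize the raw score
    return GRADES[int(np.searchsorted(BOUNDS, z))]

scores = np.random.default_rng(1).normal(70, 10, 1000)  # synthetic scores
grades = [to_grade(s, scores.mean(), scores.std()) for s in scores]
print({g: grades.count(g) for g in GRADES})
```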
  • the evaluation servers 110 and 130 compare, based on the grade distribution, the grades of the patents in which the value of the specific evaluation factor is equal to or greater than the predetermined value with the grades of the patents in which the value is less than the predetermined value, and verify whether the distribution is normal (S174b).
  • in Fig. 8a, distribution charts comparing patents in which the number of independent claims is 10 or more with general patents are illustrated, for a general patent evaluation system in the related art and for the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) proposed in the present patent, respectively.
  • Fig. 9 shows a graph of excellent patents in which values of several evaluation factors are greater than the threshold, and a graph of general patents.
  • the grades of the excellent patent group shift upward as compared with the general patent group.
  • the evaluation servers 110 and 130 may compare the distribution of the patents in which the value of the specific evaluation factor is greater than the predetermined value with the distribution of the general patents in which the value of the specific evaluation factor is less than the predetermined value and verify whether the distribution is normal, thereby verifying whether the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) operates correctly.
  • Fig. 10 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention.
  • the service servers 120 and 140 may receive information on a specific patent from a user device (S210) and may receive a request for evaluating the specific patent from the user device (S220). For this purpose, the service servers 120 and 140 may provide a webpage to the user’s computer.
  • the service servers 120 and 140 may provide a result of the evaluation that has been yielded on a specific patent identified using the information, using a previously established evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), so that the result may be output through the user’s computer (S230).
  • the service servers 120 and 140 may provide only the result of the evaluation. However, the service servers 120 and 140 may also generate an evaluation report and provide the generated evaluation report to the user’s computer.
  • the evaluation report may include the yielded evaluation result and additional description on the evaluation result. Such evaluation report may be made in the PDF format or may be based on a webpage.
  • according to a hardware implementation, the embodiments described herein may be implemented by at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, or microprocessors.
  • the software codes may be stored in a memory unit and may be driven by a processor.
  • the memory units may be positioned inside or outside the processor and may send and receive data to/from the processor via various known means.
  • Fig. 11 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
  • the evaluation servers 110 and 130 may include transmitting/receiving units 110a and 130a, controllers 110b and 130b, and storage units 110c and 130c, and the service servers 120 and 140 may include transmitting/receiving units 120a and 140a, controllers 120b and 140b, and storage units 120c and 140c.
  • the storage units store the methods illustrated in Figs. 4 to 11 and what has been described above.
  • the storage units 110c and 130c of the evaluation servers 110 and 130 may store a program in which the above-described specification processing units 111 and 131, natural language processing units 112 and 132, keyword extracting units 113 and 133, similar patent extracting units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and patent evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 are implemented.
  • the storage units 120c and 140c of the service servers 120 and 140 may store one or more of the evaluation report generating units 121 and 141 and portfolio analysis units 122 and 142.
  • the controllers control the transmitting/receiving units and the storage units. Specifically, the controllers execute the programs or the methods stored in the storage units. The controllers transmit and receive signals through the transmitting/receiving units.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Technology Law (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

There is provided a method of evaluating a patent using an evaluation engine. The method may comprise: receiving information on a specific patent from a user device; receiving an evaluation request for the specific patent from the user device; and providing an evaluation result, which is yielded for the specific patent using the evaluation engine, to the user device. The evaluation engine may be generated by performing machine learning on an expert's evaluation results on sample patents.

Description

EVALUATION ENGINE OF PATENT EVALUATION SYSTEM
The present invention relates to an evaluation engine or an artificially intelligent evaluation-bot (or artificially intelligent evaluation agent) for a patent evaluation system.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority of Korean Patent Applications No. 10-2012-0144316 filed on December 12, 2012, No. 10-2012-0144327 filed on December 12, 2012, and No. 10-2012-0144328 filed on December 12, 2012, all of which are incorporated by reference in their entirety herein.
BACKGROUND ART
Recently, intellectual property (IP) strategies of some companies for protecting their technologies are achieving as much as those in other developed countries. Meanwhile, IP owners possessing a large number of IPs bear the costs and effort of maintaining their registered IPs.
Further, it is not easy to distinguish, among registered IP rights, those unnecessary to retain from those in which investment should be concentrated.
Accordingly, IP owners evaluate their IP rights on their own or entrust profit/non-profit organizations to conduct IP evaluation.
Meanwhile, results of evaluation of patents may be utilized for various purposes, such as maintenance of patents, offering of strategies for utilizing patents, support for research planning, assessment of patents in legal, economic, and environmental respects, invention evaluation, identification of a critical invention and its priority, association with business strategies (strategic alliance), allocation of R&D planning resources, technology evaluation for a loan from a financial organization, evaluation for choosing a provider (subject) of a government direct/indirect technical development support business, evaluation of intangible assets, conversion of customers’ intangible assets into current values based on clear and objective materials in consideration of technical, economic, and social aspects, compensation for inventors, asset evaluation (for depreciation), evaluation of IPs for the purpose of technology trade (technology transfer, M&A, etc.), evaluation of IP rights for a technology-backed loan, or attraction of investment.
However, the data analysis that is part of the process of reporting an evaluated patent to an IP owner is highly time- and cost-consuming and requires many skilled workers. Further, a majority of the work is done manually, thus leading to the need for a more objective technology evaluating system and method.
Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system.
As can be seen from Fig. 1(a), a great number of patents, e.g., a few hundred or a few thousand patents, when left to a specialist for evaluation, require significant time and cost.
However, as shown in Fig. 1(b), in case the patents first go through filtering, a small number of patents only (e.g., a few tens of patents only) may be requested to be evaluated by a specialist, and this may save a great deal of time and expense.
Meanwhile, some standards used when patents are automatically evaluated are unclear. For example, in case the number of independent claims is the same as the number of dependent claims, it is unclear whether such claims are good or bad.
Therefore, an object of an embodiment disclosed in this specification is to provide a system for patent evaluation. Further, an object of an embodiment disclosed in this specification is to verify an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) in a system for patent evaluation.
In order to solve the above-mentioned problems, an aspect of an embodiment of the present invention provides a method of evaluating a patent using an evaluation engine. The method may comprise: receiving information on a specific patent from a user device; receiving an evaluation request for the specific patent from the user device; and providing an evaluation result, which is yielded for the specific patent using the evaluation engine, to the user device. The evaluation engine may be generated by performing machine learning on an expert’s (or patent technician’s) evaluation results on sample patents.
The evaluation engine may be generated through at least one of: calculating a correlation of evaluation factors with one or more pre-defined evaluation items, based on the expert’s evaluation result on the sample patents; mapping respective evaluation items and evaluation factors based on the calculated correlation; and performing machine learning on the expert’s evaluation result by using the evaluation factors mapped to the evaluation item.
The evaluation factor may be based on information extracted from one or more of bibliographic information, prosecution history information, a specification, and claims of an issued patent. Also, the evaluation factor may be based on information extracted by performing natural language processing on the specification and the claims of the issued patent.
The evaluation item may include at least one of strength of patent right, quality of technology, and usability.
In the outputting of the evaluation result, when the evaluation result for the specific patent identified by using the information is pre-yielded, the pre-yielded evaluation result is output. Alternatively, in the outputting of the evaluation result, the evaluation result for the specific patent identified by using the information is immediately yielded and output in response to the user's request.
The expert’s evaluation may be performed for each technical field. Here, the evaluation engine may be generated for each technical field. Accordingly, the outputting of the evaluation result may use an evaluation engine of a technical field corresponding to the technical field of the specific patent.
A correlation between experts may be calculated based on the results evaluated by a plurality of experts for each technical field. Here, the evaluation engine may be established based on the experts’ evaluation results having a high correlation, according to the calculated correlation.
In order to solve the above-mentioned problems, another aspect of an embodiment of the present invention provides a method of verifying an evaluation engine for a patent evaluation system. The method may comprise: dividing a plurality of experts’ evaluation results on sample patents for each evaluation item into several groups; reserving an evaluation result of at least one group among the several groups for verification; generating an evaluation engine by performing machine learning on the evaluation results of the remaining groups; and primarily verifying the evaluation engine by using the reserved evaluation result of the at least one group.
The method may further comprise: yielding an evaluation result on a patent, in which a value of a specific evaluation factor is equal to or greater than a predetermined value, by using the evaluation engine; and secondarily verifying the evaluation engine by using a grade distribution generated based on the yielded evaluation result.
The secondarily verifying may include verifying whether the patent in which the value of the specific evaluation factor is equal to or greater than the predetermined value has a higher grade than a general patent.
In order to solve the above-mentioned problems, another aspect of an embodiment of the present invention provides a method, performed by a server, of receiving evaluation results on sample patents from experts and thereby generating a patent evaluation engine. The method may comprise: primarily mapping one or more predefined patent evaluation items and candidates of associated evaluation factors to provide the mapping result to an expert’s computer; receiving the evaluation results on the sample patents for each evaluation item from the expert’s computer; calculating a correlation between each evaluation item and each evaluation factor, based on the expert’s evaluation result; remapping each evaluation item and each evaluation factor based on the calculated correlation; and generating an evaluation engine by performing machine learning on the expert’s evaluation result, by using the evaluation factors mapped to the evaluation items.
According to an embodiment of this disclosure, a patent may be automatically evaluated by a system, and a result of the evaluation may be suggested. Further, an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of this disclosure performs a more quantitative and objective automatic evaluation of a patent.
Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system;
Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention;
Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2;
Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3;
Fig. 5 is a flowchart illustrating a method of building an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing machine learning on an expert’s evaluation result according to an embodiment of the present invention;
Fig. 6 is a flowchart and a table illustrating an aspect of verifying the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) built by performing machine learning on the expert’s evaluation result;
Fig. 7 is a flowchart illustrating another aspect of verifying the evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) built by performing machine learning on the expert’s evaluation result;
Figs. 8a, 8b and 9 are distribution diagrams according to the other aspect illustrated in Fig. 7;
Fig. 10 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine according to an embodiment of the present invention; and
Fig. 11 illustrates the physical configuration of evaluation servers and service servers according to an embodiment of the present invention.
As used herein, the technical terms are used merely to describe predetermined embodiments and should not be construed as limiting. Further, the technical terms used herein, unless defined otherwise, should be interpreted as generally understood by those of ordinary skill in the art and should not be construed to be unduly broad or narrow. Further, when a technical term used herein does not correctly express the spirit of the present invention, it should be replaced with a technical term that those of ordinary skill in the art would correctly understand. Further, the general terms used herein should be interpreted as defined in the dictionary or according to the context and should not be interpreted as unduly narrow.
As used herein, the singular form, unless stated otherwise, also includes the plural form. As used herein, the terms “including” or “comprising” should not be interpreted as necessarily including all of the several components or steps as set forth herein and should rather be interpreted as being able to further include additional components or steps.
Further, as used herein, the terms “first” and “second” may be used to describe various components, but these components are not limited thereto. The terms are used only for distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may also be referred to as a second component, and the second component may likewise be referred to as the first component.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The same reference numerals may refer to the same or similar elements throughout the specification and the drawings.
When a detailed description of a known function or configuration would make the gist of the present invention unclear, that description is omitted. Further, the accompanying drawings are provided merely to give a better understanding of the spirit of the present invention, and the present invention should not be limited thereto.
Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention.
As can be seen from Fig. 2, a patent evaluation system according to an embodiment of the present invention includes one or more servers 100 and one or more databases (hereinafter, simply referred to as “DB”) 190. The one or more servers 100 may be remotely managed by a managing device 500.
The one or more servers 100 are connected to a wired/wireless network and may provide a user device 600 with an evaluation result service and other various services. Specifically, when receiving a request for an evaluation service for a specific patent case from the user device, the one or more servers 100 may provide a result from evaluating the specific patent case.
Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2.
As shown in Fig. 3, one or more servers 100 may include an evaluation server 110 for domestic patents (e.g., Korean patents), a service server 120 for domestic patents (e.g., Korean patents), an evaluation server 130 for foreign patents (e.g., U.S. patents), and a service server 140 for foreign patents (e.g., U.S. patents). Although in Fig. 3 the domestic patent evaluation server 110 and the foreign (e.g., U.S.) patent evaluation server 130 are, by way of example, physically separated from each other, these servers may be integrated into a single physical server. Further, the domestic patent service server 120 and the foreign (e.g., U.S.) patent service server 140 are shown to be physically separated from each other, but these servers may be integrated into a single physical server. Further, the servers 110, 120, 130, and 140 as illustrated may be integrated into a single physical server.
Further, as shown in Fig. 3, the above-described one or more databases 190 may include patent information DBs 191 and 192, evaluation factor (or evaluation index) DBs 193 and 194, similar patent DBs 195 and 196, and evaluation result DBs 197 and 198. Each DB is illustrated as being provided separately for the evaluation of domestic patents and the evaluation of foreign (e.g., U.S.) patents, respectively, but the DBs may be integrated. For example, the domestic (e.g., Korean) patent information DB 191 and the foreign (e.g., U.S.) patent information DB 192 may be integrated into one, and the domestic (e.g., Korean) evaluation factor (or evaluation index) DB 193 and the foreign (e.g., U.S.) evaluation factor (or evaluation index) DB 194 may be integrated into one. Alternatively, all the DBs may be integrated into one that may be divided into fields.
Such DBs may be generated based on what is received from an external DB provider. For such reception, the server 100 may include a data collecting unit 150 that receives a domestic (e.g., Korean) or foreign (e.g., U.S.) raw DB from the external DB provider. The data collecting unit 150 physically includes a network interface card (NIC). Logically, the data collecting unit 150 may be a program built on an API (Application Programming Interface). The data collecting unit 150 processes a raw DB received from the external DB provider and may store the processed data in one or more DBs 190, for example, the patent information DBs 191 and 192 connected to the server 100.
Meanwhile, the domestic/foreign patent evaluation servers 110 and 130 may include one or more of specification processing units 111 and 131, natural language processing units 112 and 132, keyword processing units 113 and 133, similar patent processing units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136.
The specification processing units 111 and 131 extract information from the one or more DBs 190, for example, the patent information DBs 191 and 192, and parse (or transform) the information. For example, the specification processing units 111 and 131 may extract one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and may store the extracted information in each field of the evaluation factor (or evaluation index) DB.
The natural language processing units 112 and 132 perform a natural language process on text included in the extracted patent specification and the claims. As used herein, the “natural language process” refers to a computer analyzing a natural language used for, e.g., general conversation, rather than a special programming language for computers. For example, the natural language processing units 112 and 132 may conduct sentence analysis, syntax analysis, and a process of a mixed language. Further, the natural language processing units 112 and 132 may carry out a semantic process.
The keyword processing units 113 and 133 extract keywords from each patent based on a result of the natural language process. In order to extract keywords, a scheme such as a VSM (Vector Space Model) or LSA (Latent Semantic Analysis) may be used. As used herein, the “keyword” of a patent specification refers to word(s) that represent the subject of the patent specification; for example, in the instant specification, “patent evaluation” may be a keyword. As such, it may be advantageous to extract as many keywords representing the subject of each patent specification as possible, but merely increasing the number of keywords extracted may rather lead to inaccuracy. Accordingly, selecting a proper number of keywords is important.
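By way of a non-limiting illustration, VSM-style keyword extraction may be sketched as follows, assuming Python with the scikit-learn library; the function name extract_keywords and the parameter n_keywords are illustrative assumptions, not the actual implementation of the keyword processing units 113 and 133.

```python
# A minimal sketch of VSM-style keyword extraction with TF-IDF.
# Assumes scikit-learn; extract_keywords and n_keywords are
# illustrative names, not the actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_keywords(specifications, n_keywords=10):
    """Return the top-n TF-IDF terms for each patent specification."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(specifications)  # docs x terms
    terms = vectorizer.get_feature_names_out()
    keywords = []
    for row in tfidf:
        weights = row.toarray().ravel()
        top = weights.argsort()[::-1][:n_keywords]
        keywords.append([terms[i] for i in top if weights[i] > 0])
    return keywords
```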
The similar patent processing units 114 and 134 may search for the patents closest to each patent based on the extracted keywords and may store the search results in the similar patent DBs 195 and 196. In general, similar patent groups are known to belong to the same sub class in the IPC (International Patent Classification), but according to an embodiment of this disclosure, similar patents may be searched from other sub classes as well as the same sub class. In order to increase accuracy when searching similar patents from other sub classes, it is most critical to precisely extract keywords. In particular, as described above, a mere increase in the number of keywords may lead to extraction of inaccurate keywords, thus resulting in completely different patents from other sub classes being returned as similar patents. Accordingly, according to an embodiment of this disclosure, a proper number of keywords are extracted based on a result of simulation depending on the number of keywords, and similar patents are searched with the extracted keywords.
Meanwhile, the evaluation factor (or evaluation index) processing units 115 and 135 extract values of evaluation factors (or evaluation indexes) from one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and store the extracted values in the evaluation factor DBs 193 and 194.
Among the evaluation factors, those for evaluating Korean patents may differ from those for evaluating foreign patents. For example, evaluation factors for evaluating Korean patents are listed below:
Table 1
Evaluation Factors Description
length of each independent claim Number of words in an independent claim
Number of claims Number of claims
Number of claim categories Number of categories of independent claims (product or method)
Number of independent claims Number of independent claims
Number of domestic family patents Number of domestic family patents (divisional applications, family of patent application claiming the same priority)
Number of foreign family patents Family patents of foreign countries
Number of annual fees Number of years after the issue
Whether there exists a request for accelerating examination Whether a request for accelerated examination has been made
Elapsed days before request for examination Days from the filing date to the date of filing the request for examination
Number of responses filed to Office Action(s) Number of times in which responses have been filed
Number of appeals filed to Final Office Action Number of times in which appeals to final office actions have been filed
Number of backward citations Total number of times in which backward citation has been done
Number of joint applicants Number of joint applicants
Number of licensees Number of licensees
Number of trials to confirm the scope of a patent Number of times in which a trial has been filed
Number of requests for accelerating appeal Number of times in which an appeal has been filed
Whether the patent is published early by a request Whether the patent is published early by a request
Number of provisions of information by third-party Number of times in which provision of information has been made by third-party
Number of oppositions Number of times in which opposition has been filed
Whether the request for examination is filed by third party Which one of applicant or third party has filed a request for examination
Number of invalidation trials Number of times in which the trial has been filed
Number of defensive confirmation trials for the scope of a right Number of times in which the trial has been filed
Number of embodiments Number of embodiments
Number of drawings Number of drawings
Number of words in detailed description Number of words included in the detailed description section of a specification
Number of IPCs Number of IPC classification codes
Number of ownership changes Number of times in which ownership has been changed
Lawsuit information Number of times in which lawsuit, if any, has been filed
Prior technical document count Number of prior technical documents cited during the examination
On the other hand, evaluation factors for evaluating a foreign patent, e.g., a U.S. patent, may be listed in the following table:
Table 2
Evaluation Factors Description
Length of independent claim Number of words in an independent claim
Number of claim categories Number of categories (product or method) of independent claims
Number of independent claims Number of independent claims
Number of words in detailed description Number of words in the detailed description of a specification
Total number of claims Number of claims
Number of in-U.S. family patents Number of family patents in U.S.
Number of Reexaminations Number of times in which reexamination has been filed
Number of interferences Number of times in which interference has been filed
Number of Reissues Number of times in which reissue has been filed
Number of Backward citations Total number of times in which backward citation has been done
Number of IPCs Number of IPC classification codes
Number of foreign family patents Number of family patents in foreign countries
Number of annual fees Number of times in which annual fees have been paid
Whether there exists a request for accelerating examination Whether request for accelerated examination has been made
Number of Certification of Corrections Number of times in which Certification of Correction has been filed
Number of Ownership changes number of times in which ownership has been changed
Lawsuit information Number of times in which lawsuit, if any, has been filed
Prior technical document count Number of prior technical documents cited during the examination
Meanwhile, the above-suggested evaluation factors (evaluation indexes) are merely examples, and any information that may be directly derived from patent information may be utilized as an evaluation factor (evaluation index). Further, any information that may be indirectly obtained or derived by processing patent information may be used as an evaluation factor (evaluation index).
Meanwhile, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 evaluate each patent for each of predetermined evaluation items based on the evaluation factors and evaluation mechanism stored in the evaluation factor DBs 193 and 194 and produce results of the evaluation. Further, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 may store the produced evaluation results in the evaluation result DBs 197 and 198.
The evaluation items may be defined as strength of patent right, quality of technology, and usability. Alternatively, the evaluation items may be defined as strength of patent right and marketability (or commercial potential). Such definitions may be changed depending on the main object of the patent evaluation. Accordingly, the scope of the present invention is not limited to the items listed above and may be expanded accordingly.
The evaluation mechanism may include a weight and a machine learning model. The weight may be a value obtained from expert’s (or patent technician’s) evaluation results with respect to several sample patents. The evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 will be described below in detail with reference to Fig. 4.
The domestic/foreign patent service servers 120 and 140 may include one or more of evaluation report generating units 121 and 141 and portfolio analyzing units 122 and 142. The evaluation report generating units 121 and 141 generate evaluation reports based on evaluation results stored in the evaluation result DBs 197 and 198. The portfolio analyzing units 122 and 142 may analyze portfolios of patents owned by a patent owner based on the information stored in the similar patent DBs 195 and 196. Further, the portfolio analyzing units 122 and 142 may analyze patent statuses for each right owner by technology (or technical field) or by IPC classification. Besides, the portfolio analyzing units 122 and 142 may perform various types of analysis based on the similar patent DBs 195 and 196 and the evaluation result DBs 197 and 198. For example, the portfolio analyzing units 122 and 142 may perform various types of analysis such as patent trend analysis, or per-patentee total annual fees analysis.
As such, the domestic/foreign patent service servers 120 and 140, upon receiving a request for an evaluation service for a predetermined patent from a user device, may provide results of evaluation of the specific patent case. Further, in response to a user’s request, the evaluation reports may be provided in the form of a webpage, an MS Excel file, a PDF file, or an MS Word file, or the results of analysis may be offered. To provide such service, a user authentication/authority managing unit 160 may be needed.
Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3.
As can be seen from Fig. 4 and what has been described above, the specification processing units 111 and 131 receive patent specifications from the patent information DBs 191 and 192 and parse the patent specifications. The patent specifications may be written in, e.g., XML, and the specification processing units 111 and 131 may include XML tag processing units to parse the XML.
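As a non-limiting illustration, the parsing of an XML patent specification may be sketched as follows using Python’s standard library; the tag names “claim” and “description” are assumptions, since the actual DTD of the raw DB is not specified here.

```python
# A minimal sketch of parsing an XML patent specification with the
# Python standard library; the tag names "claim" and "description"
# are assumptions, as the actual DTD of the raw DB is not given.
import xml.etree.ElementTree as ET

def parse_specification(xml_text):
    root = ET.fromstring(xml_text)
    claims = [(c.text or "").strip() for c in root.iter("claim")]
    description = " ".join(
        (d.text or "").strip() for d in root.iter("description"))
    return {"claims": claims, "description": description}
```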
The evaluation factor processing units 115 and 135 may include a first evaluation factor processing unit and a second evaluation factor processing unit for processing evaluation factors based on the parsed patent specification. The first evaluation factor processing unit extracts values of evaluation factors that do not require the result of natural language processing based on the parsed patent specification. For example, the first evaluation factor processing unit calculates the values of evaluation factors that do not require natural language processing, such as a length of each independent claim, the number of claims, the number of claim categories, the number of independent claims, the number of domestic family patents, the number of foreign family patents as shown in Table 1 and stores the values in the evaluation factor DBs 193 and 194.
The natural language processing units 112 and 132 perform natural language processing based on the parsed patent specification. The natural language processing units 112 and 132 include a morpheme analyzing unit and a TM analyzing unit that work based on a dictionary DB. The “morpheme” refers to the smallest meaningful unit that cannot be analyzed any further, and the “morpheme analysis” refers to the first step of analysis of natural language, which changes an input string of letters into a string of morphemes. The TM analysis is a two-level analysis task and is represented as Tm = (R, F, D), where R is a set of rules, F is a finite-state transducer, and D is a trie dictionary.
If the natural language processing is done, the second evaluation factor processing unit of the evaluation factor processing units 115 and 135 calculates values of the remaining evaluation factors based on the result of the natural language processing. For example, the value of an evaluation factor that depends on the natural language processing result, such as “keyword consistency with a similar foreign patent group”, is calculated and stored in the evaluation factor DBs 193 and 194.
Meanwhile, the keyword extracting units 113 and 133 that extract keywords based on the result of the natural language processing may include a keyword candidate selecting unit, a useless word removing unit, and a keyword selecting unit. The keyword candidate selecting unit selects keyword candidates that may represent the subject of each patent. The useless word removing unit removes useless words of low importance from among the extracted keyword candidates. The keyword selecting unit finally selects a proper number of keywords from among the keyword candidates remaining after the useless words have been removed and stores the selected keywords in the evaluation factor DBs 193 and 194.
The following table shows the accuracy per number of keywords.
Table 3
Keyword count 10 50
Rate of Recall 22.7% 54.1%
Accuracy 20.6% 10.9%
Referring to Table 3, the rate of recall (that is, the chance that a relevant keyword is retrieved) is 22.7% for 10 keywords and goes up to 54.1% for 50 keywords. However, when the number of keywords is 50, the accuracy is 10.9%, whereas when the number of keywords is 10, the accuracy is 20.6%. As set forth above, a mere increase in the number of keywords, although it raises the rate of recall, may lower the accuracy; accordingly, an optimum number of keywords may be determined based on the obtained accuracy.
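As a non-limiting illustration, the recall/precision trade-off of Table 3 could be measured as follows against a gold-standard keyword set; the function and variable names are hypothetical.

```python
# A sketch of measuring the recall/precision trade-off of Table 3
# against a gold-standard keyword set; purely illustrative.
def recall_and_precision(extracted, gold):
    hits = len(set(extracted) & set(gold))
    recall = hits / len(gold) if gold else 0.0
    precision = hits / len(extracted) if extracted else 0.0
    return recall, precision

# Sweeping n_keywords (e.g., 10 vs. 50) and averaging over the sample
# patents reproduces the shape of Table 3: recall rises with more
# keywords while precision (accuracy) falls.
```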
The similar patent extracting units 114 and 134 search for similar patents based on the keywords and may include a document clustering unit, a document similarity calculating unit, and a similar patent generating unit. The document clustering unit primarily clusters similar patents based on the keywords. The document similarity calculating unit calculates the similarity between patent documents among the clustered patents. The similar patent generating unit generates, as a result, the actually closest patents among the primarily clustered patent documents and stores the result in the similar patent DBs 195 and 196.
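As a non-limiting illustration, the two-stage similar patent search may be sketched as follows, assuming Python with scikit-learn: primary clustering of TF-IDF vectors, then cosine similarity within the query patent’s cluster. The function name and parameters are illustrative assumptions.

```python
# A minimal sketch of the two-stage similar-patent search: primary
# clustering on TF-IDF vectors, then cosine similarity within the
# query patent's cluster. Assumes scikit-learn; names illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def similar_patents(tfidf, query_idx, n_clusters=50, top_k=20):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(tfidf)
    members = np.where(labels == labels[query_idx])[0]
    sims = cosine_similarity(tfidf[query_idx], tfidf[members]).ravel()
    ranked = members[sims.argsort()[::-1]]
    return [i for i in ranked if i != query_idx][:top_k]
```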
The patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 include a machine learning model unit and a patent evaluation unit. The machine learning model unit performs a machine learning based on the expert (or patent technician) evaluation result DB. For this, an evaluation result for sample patents may be received from each expert per technology (i.e., technical field).
The sample patents are a set for the machine learning, and a few hundred to a few thousand patents may be extracted from the patent information DBs 191 and 192 to select the sample patents. The sample patents may be selected to evenly cover the evaluation factors shown in Tables 1 and 2. For example, only very few of all the issued patents (about a few tens to a few millions of patents) have a non-zero value for some evaluation factors, such as the number of invalidation trials, the number of trials to confirm the scope of a patent, the number of defensive confirmation trials for the scope of a right, or the number of requests for accelerating appeal. Accordingly, it is preferable to select the sample patents so that patents having a non-zero value for each evaluation factor are distributed at a predetermined ratio. Further, when selecting the sample patents, the patents may be divided into a plurality of sets (for example, 10 sets). Among the plurality of sets, some may be used for machine learning, and the remainder may be used to verify the result of the machine learning.
In order to receive the expert’s evaluation result, the service servers 120 and 140 may provide a webpage screen into which the expert logs in. When the expert logs in by inputting an account and a password, a list of patents to be evaluated by the expert may be provided. When the expert clicks any patent on the list, the values of the evaluation factors for that sample patent and the above-described evaluation items are provided to the expert. For example, the service servers 120 and 140 provide a webpage in which the above-described evaluation items (for example, strength of patent right, quality of technology, and usability) are listed, to the expert’s computer. In this case, the service servers 120 and 140 may map and display candidates of the evaluation factors associated with each evaluation item, for example as in the following Table 4.
Table 4
Evaluation item Lower evaluation item Evaluation factor
Strength of patent right Broadness/narrowness of patent scope Length of each independent claim, number of claims, and number of claim categories
Strength of patent right Variety/diversity of claims (or well-supported right) Number of independent claims, number of claim categories, number of claims, and number of dependent claims
Quality of technology Technology leadership Application date
Quality of technology Life cycle of technology Citing/cited relationship
Usability Commercialization opportunities Accelerating examination, trials to confirm the scope of a patent
Usability Enforcement opportunities Trials to confirm the scope of a patent, lawsuit information
Table 4 above illustrates, for example, that the evaluation item “strength of patent right” has two lower evaluation items, “broadness/narrowness of patent scope” and “variety/diversity of claims (or well-supported right)”; the evaluation item “quality of technology” has two lower evaluation items, “technology leadership” and “life cycle of technology”; and the evaluation item “usability” has two lower evaluation items, “commercialization opportunities” and “enforcement opportunities”. These, however, are merely examples and may be variously modified.
Then, the expert puts a point (or score) for each evaluation item in the webpage while viewing the associated evaluation factor candidates for each evaluation item, and the service servers 120 and 140 may receive the point (or score) and store it in the expert evaluation DB.
Then, the machine learning model unit extracts evaluation factors actually associated with each evaluation item based on the expert’s evaluation results stored in the expert evaluation DB. Specifically, the machine learning model unit analyzes the correlation between each evaluation item and each evaluation factor based on the expert’s evaluation results stored in the expert evaluation DB. For example, it is analyzed, based on the stored results, whether the point (or score) that the expert inputs for an evaluation item increases as the value of an evaluation factor increases, or instead increases as the value of the evaluation factor decreases.
For example, a result of analyzing a correlation of the evaluation factors for the evaluation item “strength of patent right” is represented as the following Table 5.
Table 5
Evaluation factor Correlation
Number of claims 0.442313457
Number of independent claims 0.43624915
Number of claim categories 0.369720889
length of each independent claim -0.331545077
whether to be supported by description -0.21671485
Number of licensees -0.149643054
Number of words in detailed description 0.148732909
Whether there exists a request for accelerating examination -0.145481466
Number of ownership changes -0.114275841
Whether the request for examination is filed by third party 0.114180544
Whether the patent is published early by a request -0.093078214
Number of invalidation trials 0.092559136
Number of responses filed to Office Action(s) 0.090248983
Lawsuit information 0.083141351
Number of appeals filed to Final Office Action 0.061068447
Number of IPCs 0.05544511
Number of annual fees 0.053290108
Number of joint applicants 0.049645717
Number of domestic family patents -0.046240513
Number of backward citations 0.045334312
Number of requests for accelerating appeal 0.038343259
Number of defensive confirmation trials for the scope of a right 0.02519305
Number of provisions of information by third-party 0.015832255
Number of trials to confirm the scope of a patent 0.014261697
Number of foreign family patents -0.009559996
Referring to Table 5, a negative correlation value represents that a value of the evaluation item increases when a value of the evaluation factor decreases, and a positive correlation value represents that a value of the evaluation item increases when a value of the evaluation factor increases.
For example, a result of analyzing a correlation of the evaluation factors for the evaluation item “quality of technology” is represented as the following Table 6.
Table 6
Evaluation factor Correlation
Number of backward citations 0.371199951
Keyword matching with patent with high leadership 0.313839149
Average depth of dependent claims 0.228073655
Number of independent claims 0.214881486
Number of claims 0.198064827
Length of each independent claim -0.155430986
Number of owner’s organizations 0.145374949
Number of IPCs 0.125438684
Number of claim categories 0.121824863
Number of ownership changes -0.119199481
Whether the patent is published early by a request -0.117456589
Number of annual fees -0.107145378
Number of invalidation trials 0.092747643
Lawsuit information 0.087185784
Number of joint applicants 0.085308543
Number of provisions of information by third-party -0.069417918
Whether there exists a request for accelerating examination -0.06285232
Number of responses filed to Office Action(s) 0.054409496
Number of requests for accelerating trial 0.048091418
Number of defensive confirmation trials for the scope of a right 0.046281442
Number of appeals filed to Final Office Action -0.044629864
Number of licensees -0.033172901
Number of domestic family patents -0.032396341
Number of words in detailed description 0.023315445
Number of trials to confirm the scope of a patent -0.021026967
Number of foreign family patents -0.01527722
Whether the request for examination is filed by third party 0.005756606
For example, a result of analyzing a correlation of the evaluation factors for the evaluation item “usability” is represented as the following Table 7.
Table 7
Evaluation factor Correlation
Number of invalidation trials 0.451065838
Number of requests for accelerating trial 0.386848273
Number of defensive confirmation trials for the scope of a right 0.358071363
Lawsuit information 0.313920588
Number of annual fees 0.307371492
Number of trials to confirm the scope of a patent 0.3035969
Number of backward citations 0.285179065
Difference from application filing date to a date of backward citations 0.274485287
Length of each independent claim -0.196663177
Number of ownership changes 0.156737637
Whether to be supported by description -0.141657162
Average depth of dependent claims 0.139758293
Number of claims 0.135037121
Whether the patent is published early by a request 0.132799209
Number of independent claims 0.120920898
Number of IPCs -0.105520543
Number of licensees 0.102959263
Keyword coincidence with patent with high leadership 0.081065843
Whether there exists a request for accelerating examination 0.053285091
Number of claim categories 0.05240555
Number of appeals filed to Final Office Action 0.050829491
Whether the request for examination is filed by third party 0.049764894
Number of joint applicants -0.044103946
Number of provisions of information by third-party 0.040184183
Number of domestic family patents -0.025321814
Holding rate of similar patents by the owner -0.018127852
Number of responses filed to Office Action(s) 0.007443184
Number of words in detailed description -0.006710761
Number of foreign family patents 0.003790473
The machine learning model unit extracts evaluation factors having a high correlation with a corresponding evaluation item, such as “strength of patent right”, among the evaluation items. The correlation has a value between -1 and +1; generally, a value in the range from 0.2 to 0.6 is considered high. Accordingly, the machine learning model unit may select “the number of claims”, “the number of independent claims”, and “the number of claim categories” as evaluation factors for the evaluation item “strength of patent right”.
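As a non-limiting illustration, this correlation screening may be sketched as follows, assuming Python with pandas; the 0.2 threshold follows the range mentioned above, and the function name is hypothetical.

```python
# A minimal sketch of the correlation screening: Pearson correlation
# of each evaluation factor with the expert's scores, keeping factors
# whose absolute correlation reaches the 0.2 threshold noted above.
import pandas as pd

def select_factors(factors: pd.DataFrame, scores: pd.Series,
                   threshold: float = 0.2) -> pd.Series:
    corr = factors.corrwith(scores)  # Pearson by default
    selected = corr[corr.abs() >= threshold]
    return selected.sort_values(key=lambda s: s.abs(), ascending=False)
```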
Meanwhile, experts in the same technology (or technical field) may, upon evaluation of the same patents, produce different evaluation results, and to address such an issue, a correlation between the experts may be additionally calculated according to an embodiment of the present invention.
Table 8
Field Experts Strength of Patent Right Quality of Technology Usability
Electronics A–B 0.64 0.39 0.83
Electronics C–D 0.29 0.32 0.48
Mechanics E–F 0.60 0.23 0.55
Mechanics G–H 0.59 0.23 0.63
Chemistry I–J 0.66 0.71 0.60
Chemistry K–L 0.59 0.34 0.50
Physics M–N 0.50 0.35 0.48
Physics O–P 0.81 0.15 0.80
Biology Q–R 0.64 0.66 0.19
Biology S–T 0.51 0.45 0.38
As summarized in Table 8 above, technical fields may be categorized, e.g., into electronics, mechanics, chemistry, physics, and biology. After a plurality of experts are assigned to each field, the experts in each field may be grouped in pairs. For example, experts A, B, C, and D are assigned to the electronics field; experts A and B as a pair evaluate the same issued patents, and, in the same way, experts C and D as a pair evaluate the same issued patents. In such case, the correlation between experts A and B calculated for the electronics field is, as shown in Table 8, 0.64 for the Strength of Patent Right evaluation item, 0.39 for the Quality of Technology evaluation item, and 0.83 for the Usability evaluation item. In case the correlation between paired experts is low, the result of the evaluation performed by a pair of experts having a higher correlation may be used; alternatively, a higher weight may be assigned to one of the paired experts.
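As a non-limiting illustration, the inter-expert correlation of Table 8 may be computed as follows, assuming Python with NumPy; the function name is hypothetical.

```python
# A sketch of the inter-expert agreement check of Table 8: Pearson
# correlation between two experts' scores on the same sample patents,
# computed separately for each evaluation item.
import numpy as np

def expert_agreement(scores_a, scores_b):
    """scores_a, scores_b: points given by paired experts to the
    same patents for one evaluation item."""
    return float(np.corrcoef(scores_a, scores_b)[0, 1])
```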
After defining the evaluation factors for each evaluation item in such a way, the machine learning model unit performs a machine learning based on the expert’s (or patent technician’s) evaluation results (e.g., the evaluation points or scores) stored in the expert evaluation DB. Here, the machine learning serves to objectify the expert’s (or the patent technician’s) subjective evaluation results. Specifically, the machine learning model unit calculates a weight based on the expert’s evaluation results, e.g., the points (or scores) stored in the expert evaluation DB, and performs a machine learning using the calculated weight.
At this time, the machine learning may be done per technology (or technical field). As set forth earlier, the evaluation of sample patents performed by an expert is done for each technology, and the machine learning is also conducted for each technology. By way of example, in the case of mechanical or electronic field, as the length of an independent claim increases, the scope of the claim decreases. However, in the case of chemical field, the length of an independent claim may have nothing to do with the broadness or narrowness of the claim scope. Accordingly, the machine learning is performed for each technology. Thus, the above-described weight may also be produced separately for each technical field.
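As a non-limiting illustration, the per-field machine learning may be sketched as follows, assuming a linear model (here scikit-learn’s Ridge regression) whose coefficients play the role of the weights described above; the actual learning model is not limited to this choice.

```python
# A minimal sketch of per-field machine learning: one linear model
# per technical field, whose coefficients play the role of the
# weights described above. Ridge regression is an assumed choice.
from sklearn.linear_model import Ridge

def train_per_field(data_by_field):
    """data_by_field: {field: (factor_matrix, expert_scores)}"""
    engines = {}
    for field, (X, y) in data_by_field.items():
        engines[field] = Ridge(alpha=1.0).fit(X, y)
    return engines
```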
Among the patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136, the patent evaluation unit evaluates patent cases according to the result of the machine learning and stores the evaluated result in the evaluation result DBs 197 and 198. Hereinafter, a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) is described.
Fig. 5 is a flowchart illustrating a method of building an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about an expert’s evaluation result according to an embodiment of the present invention.
As can be seen from Fig. 5 and what has been described above, evaluation items may be previously defined (S110). The evaluation items may be, as described earlier, defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may also be defined as strength of patent right and marketability (or, commercial potential). Such definitions may be changed depending on what goals are to be achieved by evaluating patents.
Subsequently, the service servers 120 and 140 primarily map evaluation items with evaluation factors for sample patents and provide the result of the mapping to an expert’s computer (S120). The primary mapping may be to map the candidates of evaluation factors inferred to be associated with each evaluation item.
Next, the result of evaluating the sample patents may be received from the expert’s computer (S130). The evaluation result may be points given by the expert to the evaluation items. As such, the service servers 120 and 140 may prepare for a webpage to provide information to the expert’s computer and to receive a result of evaluation.
Subsequently, the correlations between the evaluation factors and the one or more prepared evaluation items may be calculated based on the expert’s evaluation result for the sample patents (S140). The correlation may have a value from -1 to +1 as described above.
Next, remapping may be done between each evaluation item and the evaluation factors based on the calculated correlation (S150). By such remapping, some of the evaluation factors primarily mapped to each evaluation item may be excluded from the mapping, and other evaluation factors may be newly mapped to a given evaluation item as well.
As such, if mapping is done, the evaluation factors mapped to the evaluation items may be used to perform a machine learning about the expert’s evaluation result, thereby building or establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S160).
Fig. 6 is a flowchart and a table illustrating an aspect of verifying the evaluation engine built by performing a machine learning about the expert’s evaluation result.
First, as can be seen from FIG. 6A, the building or establishing of the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S160) of FIG. 5 is described in more detail, and the verifying (S170a) is described.
The building or establishing of the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S160) may include dividing the expert’s evaluation results of the sample patents into a plurality of groups (S161), reserving one group among the plurality of groups for verification, and performing a machine learning about the expert’s evaluation results of the remaining groups, thereby building or establishing the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S162). As such, when the establishment of the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) through the machine learning is completed, the evaluation servers 110 and 130 verify the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by using the expert’s evaluation result of the reserved group.
As can be seen from FIG. 6B, for example, when the expert’s evaluation result is divided into ten groups, the evaluation servers 110 and 130 may establish the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about the expert’s evaluation result of second to tenth groups, and verify the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by using the first group. Further, the evaluation servers 110 and 130 may establish the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about the expert’s evaluation result of the first, and third to tenth groups, and verify the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by using the second group. As such, the evaluation servers 110 and 130 may also verify the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) only once through one group. In this case, the evaluation servers 110 and 130 may correct the calculated weight according to the one verification result.
Alternatively, the evaluation servers 110 and 130 may repeat the verification. In the repeated verification, each of the first to tenth groups is reserved in turn for verification while a machine learning is performed about the remaining groups. That is, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) is first verified by the first group, next verified by the second group, and so on. When the verification is repeated in this way, the evaluation servers 110 and 130 may correct the calculated weight by using an average result of the respective verifications.
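As a non-limiting illustration, the repeated group-wise verification may be sketched as follows, assuming Python with scikit-learn’s KFold; the use of Ridge regression and the R^2 score is an illustrative assumption.

```python
# A sketch of the repeated group-wise verification: each of the ten
# groups is reserved once for verification while the engine learns
# from the other nine, and the scores are averaged.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def cross_verify(X, y, n_groups=10):
    """X: factor matrix, y: expert scores (both NumPy arrays)."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_groups).split(X):
        engine = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
        scores.append(engine.score(X[test_idx], y[test_idx]))  # R^2
    return float(np.mean(scores))
```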
As can be seen from FIG. 7, when the establishing of the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S160) of FIG. 5 is completed, another verifying method (S170b) may be performed. This verifying method (S170b) will be described below in detail.
First, the evaluation servers 110 and 130 extract patents in which the values of specific evaluation factors are equal to or greater than predetermined values (S171b). For example, the evaluation servers 110 and 130 extract patents having three or more independent claims, the number of independent claims being the evaluation factor. In addition, the evaluation servers 110 and 130 yield evaluation results of the extracted patents (S172b). The evaluation result may relate to all the evaluation items, or only to some evaluation items, for example, the evaluation item associated with the number of independent claims.
When the evaluation result is yielded, the evaluation servers 110 and 130 grade the evaluation result (S173b). The grading may follow a grade model having a normal distribution. For example, a nine-grade scheme may be used, such as AAA, AA, A, BBB, BB, B, CCC, CC, and C, or A+, A, A-, B+, B, B-, C+, C, and C-.
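As a non-limiting illustration, nine-grade scoring on a normal-curve basis may be sketched as follows, assuming Python with NumPy; the z-score boundaries are illustrative assumptions chosen so that the grades follow a roughly normal shape.

```python
# A sketch of nine-grade scoring on a normal-curve basis: evaluation
# scores are standardized and bucketed at symmetric z-score cut-offs,
# so the grades follow a roughly normal distribution. The boundary
# values are illustrative assumptions.
import numpy as np

GRADES = ["C", "CC", "CCC", "B", "BB", "BBB", "A", "AA", "AAA"]
BOUNDS = [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]

def assign_grades(scores):
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return [GRADES[int(np.searchsorted(BOUNDS, v))] for v in z]
```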
When the grading is completed, the evaluation servers 110 and 130 compare the grade distribution of the patents in which the value of the specific evaluation factor is equal to or greater than the predetermined value with the grade distribution of the patents in which the value is less than the predetermined value, and verify whether the latter distribution is normal and whether the former is shifted toward higher grades (S174b).
For example, FIG. 8A illustrates distribution charts comparing patents in which the number of independent claims is 10 or more with general patents, under a general patent evaluation system in the related art and under the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) proposed in the present patent, respectively.
That is, referring to the left side of FIG. 8A, in the general patent evaluation system in the related art, about 40% of the patents in which the number of independent claims is 10 or more fall at the BB grade, which is the largest share. On the other hand, when the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) proposed in the present patent is used, as many as 60% of the patents in which the number of independent claims is 10 or more fall at the AAA grade, while the general patents in which the number of independent claims is less than 10 follow a normal distribution.
Further, referring to the left side of FIG. 8B, in the general patent evaluation system in the related art, about 30% of the patents against which an invalidation trial was filed fall at the BBB grade and about 30% at the BB grade. On the other hand, when the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) proposed in the present patent is used, the grades of the patents against which an invalidation trial was filed are improved as compared with the left distribution chart.
Meanwhile, Fig. 9 shows a graph of excellent patents in which the values of several evaluation factors are greater than the thresholds, together with a graph of general patents. As can be seen, when the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) proposed in the present patent is used, the grade of the excellent patent group shifts upward as compared with the general patent group.
As described above, the evaluation servers 110 and 130 may compare the distribution of the patents in which the value of the specific evaluation factor is equal to or greater than the predetermined value with the distribution of the general patents in which the value of the specific evaluation factor is less than the predetermined value and verify whether the latter distribution is normal, thereby verifying whether the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) operates correctly.
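As a non-limiting illustration, such a distribution check may be sketched as follows, assuming Python with NumPy and the grading function sketched above; the function name and the choice of top grades are illustrative.

```python
# A sketch of the distribution check: the share of top grades among
# patents whose factor value meets the threshold should exceed the
# share among the remaining (general) patents.
import numpy as np

def verify_engine(grades, factor_values, threshold,
                  top=("AAA", "AA", "A")):
    grades = np.asarray(grades)
    mask = np.asarray(factor_values) >= threshold
    share = lambda g: float(np.isin(g, top).mean())
    return share(grades[mask]) > share(grades[~mask])
```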
Fig. 10 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention.
As can be seen from Fig. 10, the service servers 120 and 140 may receive information on a specific patent from a user device (S210) and may receive a request for evaluating the specific patent from the user device (S220). For this purpose, the service servers 120 and 140 may provide a webpage to the user’s computer.
Then, the service servers 120 and 140 may provide a result of the evaluation that has been yielded on a specific patent identified using the information, using a previously established evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), so that the result may be output through the user’s computer (S230).
At this time, the service servers 120 and 140 may simply provide the result of evaluation only. However, the service servers 120 and 140 may also generate an evaluation report and may provide the generated evaluation report to the user’s computer. The evaluation report may include the yielded evaluation result and additional description on the evaluation result. Such evaluation report may be made in the PDF format or may be based on a webpage.
The embodiments disclosed herein have been described with reference to the accompanying drawings. Here, the above-described methods may be implemented by various means. For example, the embodiments of the present invention may be embodied in hardware, firmware, or software, or a combination thereof.
When implemented in hardware, methods according to embodiments of the present invention may be realized in one or more ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, microcontrollers, or microprocessors.
When implemented in firmware or software, methods according to embodiments of the present invention may be realized in modules, procedures, or functions that perform the above-described functions or operations. The software codes may be stored in a memory unit and may be driven by a processor. The memory units may be positioned inside or outside the processor and may send and receive data to/from the processor via various known means.
Fig. 11 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
As shown in Fig. 11, the evaluation servers 110 and 130 may include transmitting/receiving units 110a and 130a, controllers 110b and 130b, and storage units 110c and 130c, and the service servers 120 and 140 may include transmitting/receiving units 120a and 140a, controllers 120b and 140b, and storage units 120c and 140c.
The storage units store the methods illustrated in Figs. 4 to 11 and what has been described above. For example, the storage units 110c and 130c of the evaluation servers 110 and 130 may store a program in which the above-described specification processing units 111 and 131, natural language processing units 112 and 132, keyword extracting units 113 and 133, similar patent extracting units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and patent evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 are implemented. The storage units 120c and 140c of the service servers 120 and 140 may store one or more of the evaluation report generating units 121 and 141 and the portfolio analysis units 122 and 142.
The controllers control the transmitting/receiving units and the storage units. Specifically, the controllers execute the programs or the methods stored in the storage units. The controllers transmit and receive signals through the transmitting/receiving units.
The embodiments disclosed herein have been described thus far with reference to the accompanying drawings. Here, the terms or words used in the specification and claims should not be construed as limited to the meanings commonly used or included in the dictionary, but should be rather interpreted to have the meanings and concept that fit for the technical spirit disclosed herein.
Accordingly, the embodiments disclosed herein are merely an example of the present invention and do not represent all the technical spirit as disclosed herein, and accordingly, it should be understood that various equivalents and changes may be made thereto, which may replace the embodiments of the present invention.

Claims (13)

  1. A method of evaluating a patent using an evaluation engine, the method performed by a computer and comprising:
    receiving, by the computer, information on a specific patent from a user device;
    receiving, by the computer, an evaluation request for the specific patent from the user device; and
    providing, by the computer, an evaluation result, which is yielded for the specific patent using the evaluation engine, to the user device,
    wherein the evaluation engine is generated by performing a machine learning about a patent technician’s evaluation results on sample patents.
  2. The method of claim 1, wherein the evaluation engine is generated through at least one of:
    calculating a correlation of evaluation factors with one or more pre-defined evaluation items, based on the patent technician’s evaluation result on the sample patents;
    mapping respective evaluation items and evaluation factors based on the calculated correlation; and
    performing the machine learning about the patent technician’s evaluation result by using the evaluation factors mapped to the evaluation item.
  3. The method of claim 2, wherein the evaluation factor is information extracted from one or more of bibliographic information, prosecution history information, a specification, and claims of an issued patent.
  4. The method of claim 2, wherein the evaluation factor includes information extracted by performing a natural language processing on the specification and the claims of the issued patent.
  5. The method of claim 1, wherein the evaluation item includes at least one of
    strength of patent right, quality of technology, and usability.
  6. The method of claim 1, wherein in the outputting of the evaluation result,
    when the evaluation result for the specific patent identified by using the information is pre-yielded, the pre-yielded evaluation result is output.
  7. The method of claim 1, wherein in the outputting of the evaluation result,
    the evaluation result for the specific patent identified by using the information is immediately yielded and output in response to the user's request.
  8. The method of claim 1, wherein the patent technician’s evaluation is performed for each technical field,
    the evaluation engine is generated for each technical field, and
    the outputting of the evaluation result uses an evaluation engine of a technical field corresponding to a technical field of the specific patent.
  9. The method of claim 1, wherein a correlation between patent technicians is calculated based on results evaluated by a plurality of patent technicians for each technical field, and
    wherein the evaluation engine is established based on the patent technicians’ evaluation result having an excellent correlation based on the calculated correlation.
  10. A method of verifying a patent evaluation engine, the method performed by a computer and comprising:
    dividing, by the computer, a plurality of patent technicians’ evaluation results on sample patents for each evaluation item into several groups;
    reserving, by the computer, an evaluation result of at least one group among the several groups for verification;
    generating, by the computer, an evaluation engine by performing a machine learning about the evaluation results of the remaining groups; and
    primarily verifying, by the computer, the evaluation engine by using the reserved evaluation result of at least one group.
  11. The method of claim 10, further comprising:
    yielding an evaluation result on a patent, in which a value of a specific evaluation factor is equal to or greater than a predetermined value, by using the evaluation engine; and
    secondarily verifying the evaluation engine by using grade distribution generated based on the yielded evaluation result.
  12. The method of claim 11, wherein the secondarily verifying includes
    verifying whether the patent in which the value of the specific evaluation factor is equal to or greater than the predetermined value has a higher grade than a general patent.
  13. A method of receiving evaluation results on sample patents from patent technicians and thereby generating a patent evaluation engine, the method performed by a computer and comprising:
    primarily mapping, by the computer, one or more predefined patent evaluation items and candidates of associated evaluation factors to provide the mapping result to a patent technician’s computer;
    receiving, by the computer, the evaluation results on the sample patents for each evaluation item from the patent technician’s computer;
    calculating, by the computer, a correlation between each evaluation item and each evaluation factor, based on the patent technician’s evaluation result;
    remapping, by the computer, each evaluation item and each evaluation factor based on the calculated correlation; and
    generating, by the computer, an evaluation engine by performing a machine learning about the patent technician’s evaluation result, by using the evaluation factor mapped in the evaluation item.