CN100440224C - Automated processing method for search engine performance evaluation - Google Patents

Automated processing method for search engine performance evaluation

Info

Publication number
CN100440224C
CN100440224C CNB200610144289XA CN200610144289A
Authority
CN
China
Prior art keywords
inquiry
user
search engine
query
search
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB200610144289XA
Other languages
Chinese (zh)
Other versions
CN1963816A (en)
Inventor
刘奕群
张敏
金奕江
马少平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Beijing Sogou Technology Development Co Ltd
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CNB200610144289XA: CN100440224C (en)
Publication of CN1963816A (en)
Application granted
Publication of CN100440224C (en)
Legal status: Active
Anticipated expiration

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This invention belongs to the field of Internet information processing. The method first extracts user demand information from a search engine's user access log and classifies queries on that basis; it then automatically analyzes the click information in the access log for each type of user demand to obtain the corresponding click focus; finally, it crawls the result pages of the search engines under evaluation and scores those results against the users' click focus, producing performance data for each search engine.

Description

An automated processing method for search engine performance evaluation
Technical field
The invention belongs to the field of Internet information processing, and in particular relates to an automated method for evaluating search engine performance based on the analysis and mining of user behavior.
Background art
1. Definition and structure of a search engine
A search engine is a computer system that provides network information services; it comprises three parts: a computer network, a computer hardware system, and the software programs running on that hardware. Its main role is to help users quickly and efficiently obtain, from the Internet information environment, high-quality information that meets their needs.
At present, most search engines provide service through keyword queries: the user accesses the search engine website with a web browser and submits keywords (usually a few words or phrases) reflecting the query demand; the search engine then returns a list of results on the Internet relevant to the user's query. The result list is normally a series of web pages, or files obtainable over the computer network, ordered by the relevance the engine computes between each result and the query, with the most relevant pages (or files) placed first.
The process of querying a search engine can thus be viewed as: the user inputs a query demand expressed in keywords, and the system outputs a list of web pages (or files) relevant to that demand. The search engine collects Internet pages and files and builds an index by means of a page-harvesting program known as a Web spider; a query device then processes user queries and returns the result list, thereby satisfying the user's query demand.
2. Search engine performance evaluation techniques
The performance evaluation of search engines has always drawn wide attention from industry and researchers. For search engine service providers, performance evaluation is a necessary aid to further improving retrieval quality; for ordinary enterprises, it bears on the effectiveness of advertising placed on the engines; for ordinary users, evaluation results give clear guidance on which engine to use, and thereby influence how each engine's user base changes. In short, a fair, accurate, comprehensive, and objective evaluation of search engine performance attracts broad social attention and has a strong guiding effect.
Because search engine systems largely fall within the category of networked information retrieval systems, mainstream researchers evaluate search engine performance with traditional information retrieval evaluation methods. In such methods, two elements are indispensable: a query set and the set of standard answers corresponding to those queries. In traditional evaluation, determining both requires a great deal of manual work. Comparatively, since a query set generally contains only hundreds to thousands of queries, it can be assembled from search engine logs or user surveys at relatively low cost, although how to define an adequately representative query set still needs further study.
What really puts search engine evaluation in a predicament is determining the standard answers for the queries: since the retrieval target of a search engine can be taken to be the entire Internet, and given the vast number of Internet pages, the standard answer sets corresponding to these query sets cannot possibly be determined by purely manual means.
The main current approach to this problem comes from the Text REtrieval Conference (TREC) organized by the US National Institute of Standards and Technology (NIST). Since its founding in 1992, TREC has taken the promotion of large-scale document retrieval research as its primary purpose; by organizing various retrieval evaluations every year, it has accumulated rich experience in evaluating large-scale text retrieval systems. Its core technique is known as pooling.
The pooling procedure runs as follows:
1. According to the scale of the data, choose a suitable result-set size N.
2. For a given query topic, retrieve over the large document collection with n retrieval systems (T_1, T_2, ..., T_n), obtaining the respective result sets (RC_1, RC_2, ..., RC_n), where |RC_i| = N for i = 1, 2, ..., n.
3. Construct the result pool: pool = RC_1 ∪ RC_2 ∪ ... ∪ RC_n.
4. Filter the pool: manually judge whether each document in it is relevant to the query topic.
The documents that survive this filtering can be regarded as the relevant-document set for the topic. TREC has accumulated considerable experience in constructing relevant-document sets with pooling: the retrieval targets have ranged from plain-text collections to real web data, and the data scale has grown from small (1-2 GB) to large (20 GB) to very large (500 GB). Whatever the document collection or query task, the relevant-document sets constructed by pooling have gained wide acceptance, and TREC's evaluation results have consistently enjoyed high credibility. In China, evaluations of search engine systems have gradually been carried out since 2003, and the methods adopted for constructing standard answer sets have essentially also followed the pooling approach.
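The pooling procedure above can be sketched as follows; the retrieval systems are represented simply as precomputed ranked lists, and all names and document IDs are illustrative, not taken from any actual TREC collection.

```python
# Minimal sketch of TREC-style pooling: union of the top-n results
# from each retrieval system; assessors then judge the pooled documents.

def build_pool(ranked_lists, n):
    """Take the top-n results from each system and return their union."""
    pool = set()
    for results in ranked_lists:
        pool.update(results[:n])
    return pool

# Three hypothetical systems' ranked document IDs for one query topic
rc1 = ["d1", "d2", "d3", "d4"]
rc2 = ["d2", "d5", "d1", "d6"]
rc3 = ["d7", "d2", "d8", "d1"]

pool = build_pool([rc1, rc2, rc3], n=3)
# The pool is then filtered by manual relevance judgment (step 4).
```

Only the pooled documents are judged by hand, which is what reduces the annotation workload relative to judging every document in the collection.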
Although pooling is widely used in existing search engine evaluations, its drawbacks are also obvious: even though the manual annotation workload is greatly reduced, large-scale evaluation remains hard to operate, and the subjective influence of the human assessors is still hard to avoid. Although TREC's evaluation corpus contains about ten million documents while its query sets generally contain only about 200 queries, NIST still needs several months and dozens of assessors to annotate the standard answers. For evaluating web search engines at full scale (corpora exceeding one billion documents) and analyzing performance trends over time (with feedback weekly or every few days), this is far from sufficient.
Apart from pooling, for some particular types of user demand, existing network information resources can be used to locate relevant-document sets automatically. Chowdhury of America Online has studied the possibility of using the Open Directory Project (ODP, a project in which volunteers annotate Internet resources) to locate the target pages of navigational queries automatically. Because a navigational query's target page is unique, this attempt succeeded; but for query demands with larger target-page sets, such automatic location cannot be widely applied, for lack of corresponding network resources.
Researchers at the IBM Haifa Research Lab have proposed an evaluation method based on term relevance sets (Term Relevance Set, TRELS) that can, to some extent, alleviate the problems brought by the above two classes of methods. The method selects a portion of representative user queries and organizes assessors to choose the terms commonly used on the web to describe each query, forming a relevant-term set. During evaluation, a web page containing enough relevant terms is treated as a standard answer for the query. The evaluation algorithm thereby avoids the laborious manual relevance annotation at every evaluation round, so evaluation results can be fed back in time. However, the method itself still cannot overcome the subjectivity of manually selecting relevant terms and the inconsistency among assessors, and the reliability of the underlying hypothesis, that relevant-term sets can estimate document relevance, is also debatable.
3. Search engine query taxonomy
Classifying queries (Query) is not a new idea that appeared only in recent years, but a truly workable classification scheme emerged only after long and thorough discussion. The now generally accepted classification divides search engine queries into navigational search and informational & transactional search. The criterion separating the two classes is whether the user has a definite target page in mind.
In a navigational query, the user has a definite target to find; knowledge of the target may come from earlier browsing experience or from hearsay. In short, the user wants to visit that page but has forgotten or never knew its address (URL), and therefore needs the search engine's help. Typical examples (selected from Baidu's real-time query rankings) are: "Shanghai Jiangkou District government", "China visa net", "State Environmental Protection Administration", "German embassy business visa form", and so on.
In an informational or transactional query, the user has no definite target to find; the purpose of the search is to obtain information about some topic, or to enjoy some type of service (such as software download or purchasing goods). Typical examples (likewise from Baidu's real-time query rankings): "FIFA2004 game download", "forms of the modern enterprise system", "situation of rural party member ranks", etc.
When implementing search engine evaluation techniques, these two classes of query topics must be handled differently, because the search purposes, search methods, and evaluation metrics corresponding to the two classes all differ considerably; evaluating them separately helps assess the retrieval performance of a search engine in its different aspects.
Summary of the invention
The object of the invention is to address the deficiencies of existing methods by proposing a search engine evaluation method based on user behavior analysis. The method uses macro-level analysis of search engine users' query and click behavior to automatically select a query set suitable for search engine evaluation, and further to automatically locate the standard answers for those queries. Because both the query-set selection and the standard-answer annotation are completed automatically by computer, the method can reflect the actual performance of a search engine in a timely, accurate, and objective manner. The content of the method is as follows:
1. From the viewpoint of evaluation algorithm design, the user queries of a search engine can be divided into two classes according to whether the target page is unique: navigational queries (unique target) and informational queries (non-unique target);
2. Using factors such as users' click frequency and the concentration of clicks on different ranked results, a machine learning algorithm automatically classifies user queries into navigational and informational queries;
3. Using factors such as query frequency and result clicks, queries suitable for the evaluation query set are screened automatically; these queries should be representative of user demand and have relatively definite answers;
4. For the navigational and informational queries in the query set, their standard answers are annotated automatically using factors such as click concentration;
5. With the automatically screened query set and the annotated answer set, the results of different search engines (not limited to the engine that supplied the user log) are evaluated; the evaluation metrics can adopt conventional retrieval measures such as average precision and precision of the top n results.
The invention is characterized in that it is carried out on a computer and comprises, in order, the following steps:
Step 1. Screening and classification of the evaluation query set
Step 1.1 Data preprocessing
The query set used for search engine evaluation comes from a search engine's user log. For a given engine, the log must contain at least the following content to be usable for extracting the evaluation query set:
Table 1. Content of the search engine user log used for evaluation
Field   Recorded content                                                      Bits
Query   The query submitted by the user                                       256
URL     The result address clicked by the user for this query                 256
Rank    The rank of the clicked URL in the result list, reflecting the
        relevance the search engine computed for the query                    4
Id      A user identification number assigned automatically by the system;
        a user is assigned a distinct number for each use of the engine       32
A typical search engine service provider can easily obtain the above information from its web servers, which guarantees the feasibility of the method. Preprocessing the user log comprises the following steps:
Step 1.1.1 Convert the log's character encoding: the encoding recorded by the server (generally the URI percent-encoded form) is converted to the GBK national-standard Chinese character encoding.
Step 1.1.2 Reorganize the log around the content items listed in Table 1: remove all information outside those items and arrange the log as strings of the items above.
Step 1.1.3 Filter noise from the user queries by string matching, including forbidden query words and query words used for online product promotion, keeping only content that directly reflects the query demands and behavior of ordinary search engine users.
Through this preprocessing, the above content can be extracted from the raw search engine user log and applied in the following steps of the method.
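Steps 1.1.1-1.1.3 can be sketched as follows, under assumed formats the patent does not specify: each raw log line holds the four Table 1 fields separated by tabs, the query is URI percent-encoded, and noise queries are matched against a hypothetical blocklist. (The patent converts to GBK; the sketch simply decodes to Unicode text.)

```python
# Sketch of log preprocessing (steps 1.1.1-1.1.3); the line format and
# blocklist contents are assumptions for illustration.
from urllib.parse import unquote

BLOCKLIST = {"spam-product"}  # hypothetical forbidden/promotional query words

def preprocess_line(line):
    """Decode one raw log line and keep only the four fields of Table 1,
    returning None for noise queries that should be filtered out."""
    query, url, rank, uid = line.rstrip("\n").split("\t")[:4]
    query = unquote(query)  # server logs record the URI-encoded form
    if any(bad in query for bad in BLOCKLIST):
        return None         # step 1.1.3: drop noise queries
    return (query, url, int(rank), uid)
```

Applying `preprocess_line` over the whole log yields the (Query, URL, Rank, Id) tuples used by the statistics in the following steps.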
Step 1.2 Extract the "top-N result satisfaction rate"
From the query and click information of Table 1, the "top-N result satisfaction rate" of a query can be computed: the proportion of users whose information demand was satisfied by clicking only results the engine ranked in the top N. For a query Q, the formula is:

    top-N result satisfaction rate(Q) = (number of users who clicked only top-N results when querying Q) / (total number of users who queried Q)

Here, "total number of users who queried Q" is counted from the distinct Ids that issued Q; which ranked results each user of Q clicked is obtained from the Rank values recorded for each distinct Id of Q; from these, "the number of users who clicked only top-N results when querying Q" can also be counted.
By definition, since "users who clicked only top-N results for Q" must be a subset of "users of Q", the value of the "top-N result satisfaction rate" necessarily lies between 0 and 1.
Step 1.3 Extract the "top-N click satisfaction rate"
Similarly to step 1.2, from the query and click information of Table 1, the "top-N click satisfaction rate" of a query can be computed: the proportion of users whose information demand was satisfied with at most N clicks on the engine's results. For a query Q, the formula is:

    top-N click satisfaction rate(Q) = (number of users who made at most N clicks when querying Q) / (total number of users who queried Q)

Here, "total number of users who queried Q" is counted from the distinct Ids that issued Q; each user's click count is obtained from the records of the corresponding Id; from these, "the number of users who made at most N clicks when querying Q" can also be counted.
By definition, since "users with at most N clicks for Q" must be a subset of "users of Q", the value of the "top-N click satisfaction rate" necessarily lies between 0 and 1.
Step 1.4 Extract the "user click concentration"
Similarly to steps 1.2 and 1.3, from the query and click information of Table 1, the "user click concentration" of a query can be computed: the degree to which the clicks of users issuing the query concentrate on particular returned results. For a query Q, first define the "most concentrated result" of Q as the result clicked the greatest number of times by the different users of Q.
The "user click concentration" of Q is then computed as:

    user click concentration(Q) = (number of clicks on the most concentrated result of Q) / (total number of clicks by users of Q)

Here, "total number of clicks by users of Q" is obtained by counting all click records for Q, and "number of clicks on the most concentrated result" by counting the clicks on that result; the "user click concentration" then follows.
By definition, since the clicks on the most concentrated result cannot exceed the total clicks of Q's users, the value of the "user click concentration" necessarily lies between 0 and 1.
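The three statistics of steps 1.2-1.4 can be sketched together as one function over the click records of a single query Q. The record shape (user_id, url, rank) mirrors the Id/URL/Rank fields of Table 1; the function and variable names are assumptions.

```python
# Sketch of steps 1.2-1.4: per-query statistics from click records.
from collections import Counter, defaultdict

def query_statistics(records, n):
    """records: (user_id, url, rank) click tuples for one query Q."""
    by_user = defaultdict(list)
    for uid, url, rank in records:
        by_user[uid].append(rank)
    total_users = len(by_user)
    total_clicks = len(records)

    # Step 1.2: share of users who clicked only results ranked in the top n
    topn_result_rate = sum(
        1 for ranks in by_user.values() if max(ranks) <= n) / total_users
    # Step 1.3: share of users who made at most n clicks
    topn_click_rate = sum(
        1 for ranks in by_user.values() if len(ranks) <= n) / total_users
    # Step 1.4: share of all clicks going to the most-clicked result
    most_clicked = Counter(url for _, url, _ in records).most_common(1)[0][1]
    concentration = most_clicked / total_clicks
    return topn_result_rate, topn_click_rate, concentration
```

All three values are ratios of a subset count to a total, so each lies between 0 and 1, matching the ranges stated above.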
Step 1.5 Classify the queries to be evaluated
Using the "top-N click satisfaction rate", "top-N result satisfaction rate", and "user click concentration" computed in steps 1.2-1.4, each user query Q is judged to be a "navigational query" or an "informational query" according to the following statistical rules (as shown in Figure 2).
If the "top-5 result satisfaction rate" of Q lies between 0.6 and 1.0, Q is preliminarily judged a "navigational query".
If the "top-5 result satisfaction rate" of Q lies between 0 and 0.6 but its "top-2 click satisfaction rate" lies between 0.9 and 1, Q is preliminarily judged a "navigational query".
Otherwise, Q is preliminarily judged an "informational query".
To obtain a more accurate classification, the preliminary judgment is then revised:
If the "user click concentration" of Q lies between 0.5 and 1.0, Q is judged a "navigational query";
If the "user click concentration" of Q lies between 0 and 0.2, Q is judged an "informational query";
Otherwise, the preliminary judgment of Q stands.
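The decision rules of step 1.5 can be sketched as a small function; the threshold values are those given in the text, while the treatment of exact boundary values (inclusive or exclusive) is our assumption.

```python
# Sketch of the step 1.5 classification rules; boundary handling assumed.

def classify_query(top5_result_rate, top2_click_rate, concentration):
    """Return "navigational" or "informational" for one query."""
    # Preliminary judgment from the satisfaction rates
    if top5_result_rate >= 0.6:
        label = "navigational"
    elif top2_click_rate >= 0.9:
        label = "navigational"
    else:
        label = "informational"
    # Revision using the user click concentration
    if concentration >= 0.5:
        label = "navigational"
    elif concentration <= 0.2:
        label = "informational"
    return label
```

Note that the revision step can override the preliminary judgment in either direction; only queries with intermediate concentration (between 0.2 and 0.5) keep their preliminary label.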
Step 1.6 Determine the evaluation query set
The query set S used for evaluation is selected by the following rules:
If a query Q was issued by distinct users fewer than 50 times in the search engine log, it is excluded from S.
If a query Q is an informational query and the sum of the "user click concentration" values of its five most concentrated results is less than 0.8, it is excluded from S.
From the queries excluded by neither restriction, about 300-500 queries are selected into S according to the available computing capacity. Judging from existing large-scale information retrieval evaluations, a query set of this scale is reasonably representative and can support a fairly reliable evaluation.
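The two exclusion rules of step 1.6 can be sketched as a predicate over candidate queries; the dictionary field names are assumptions, with `result_shares` holding each result's share of the query's clicks (the per-result click concentration).

```python
# Sketch of the step 1.6 filtering rules; field names are assumptions.

def eligible(query):
    """True if the candidate query may enter the evaluation set S."""
    if query["user_count"] < 50:          # queried by fewer than 50 users
        return False
    if query["type"] == "informational":
        # concentration sum over the five most-clicked results must reach 0.8
        top5 = sorted(query["result_shares"], reverse=True)[:5]
        if sum(top5) < 0.8:
            return False
    return True
```

The surviving candidates are then sampled down to the 300-500 queries that form S.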
Step 2. Automatic annotation of navigational/informational query answers
Step 2.1 Automatic annotation of navigational query answers
A navigational query selected in step 1 is denoted Q(NAV). Its standard answer is annotated by the following rule:
For Q(NAV), the web page with the largest "user click concentration" is its standard answer.
By the selection rule of step 1.5, the "user click concentration" of every Q(NAV) exceeds 0.5, which means the page with the largest concentration exists and is unique; this guarantees the uniqueness of the navigational query's answer.
Step 2.2 Automatic annotation of informational query answers
An informational query selected in step 1 is denoted Q(INF). Its standard answers are annotated by the following rule:
For Q(INF), the first M pages in descending order of "user click concentration" are its standard answers, where M satisfies: starting from the page with the largest concentration, the concentration sum of the first M pages exceeds 0.8, while the sum of the first M-1 pages is less than 0.8.
By the requirement of step 1.6, M is at most 5, which keeps the number of informational query answers within a reasonable range.
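The two annotation rules of step 2 can be sketched as follows; `shares` maps each result URL to its share of the query's clicks (its per-result "user click concentration"), and the function names are assumptions.

```python
# Sketch of steps 2.1-2.2: automatic standard-answer annotation.

def annotate_navigational(shares):
    """Step 2.1: the single most-clicked page is the standard answer."""
    return max(shares, key=shares.get)

def annotate_informational(shares, threshold=0.8, cap=5):
    """Step 2.2: the smallest prefix of pages, in descending share order,
    whose concentration sum exceeds the threshold; step 1.6 guarantees
    this needs at most `cap` pages."""
    answers, total = [], 0.0
    for url, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        answers.append(url)
        total += share
        if total > threshold or len(answers) == cap:
            break
    return answers
```

For a navigational query the answer set is a single page; for an informational query it is the shortest high-concentration prefix, never more than five pages.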
Step 3. Crawling and filtering of search engine results
Step 3.1 Crawl the result page for each query word
For each query word Q in the evaluation query set S selected in step 1, the result page of each search engine must be crawled so that the engine's result entries for Q can then be obtained.
The result pages are crawled as follows:
First, select a web-page crawling program, such as the open-source tool wget on the Linux platform or the freeware FlashGet on the Windows platform, and use it to fetch the page at a given URL. All such programs share the characteristic that the user supplies a page's URL and the program downloads and saves the corresponding page.
Second, generate the result-page URL corresponding to each Q by pattern substitution. Different engines record Q in their result-page URLs differently, but all must record Q in the URL so that the information of Q is transmitted to the server. For example, the Baidu search engine's result-page URL for Q is http://www.baidu.com/baidu?wd=Q, the Google search engine's is http://www.google.cn/search?q=Q, and the Sogou search engine's is http://www.sogou.com/web?query=Q. Because the number of engines to be evaluated is small, a few sample queries can be run against each engine in a browser; from the correspondence between sample queries and result-page URLs, the rule by which each engine generates its result-page URL is obtained.
Finally, using the computer network and software running on the computer, invoke the crawling program to automatically fetch and save the result page corresponding to every query word Q in S.
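The pattern substitution of step 3.1 can be sketched with the three result-page URL templates quoted in the text; the query is percent-encoded before substitution (an assumption about the transport encoding), and the actual download would be delegated to a crawler such as wget.

```python
# Sketch of step 3.1's URL generation; templates are those quoted above.
from urllib.parse import quote

TEMPLATES = {
    "baidu":  "http://www.baidu.com/baidu?wd={q}",
    "google": "http://www.google.cn/search?q={q}",
    "sogou":  "http://www.sogou.com/web?query={q}",
}

def result_page_url(engine, query):
    """Fill the engine's result-page URL template with the encoded query."""
    return TEMPLATES[engine].format(q=quote(query))
```

The generated URL could then be passed to the crawler, e.g. `subprocess.run(["wget", url])`, to fetch and save each result page.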
Step 3.2 Extraction of result entries from the result pages
Step 3.1 yields, for each engine under evaluation, the result page of every query word Q in the evaluation set S. From these result pages, the result entries can be obtained by pattern matching.
Because a search engine's result pages are generated automatically by scripts, the organizational rule of the result entries can be found from the HTML text, and extraction can then be implemented with that rule and pattern matching.
For example, the Baidu search engine records each result entry in the following form:
<td class=f><a href="Query Result URL" target="_blank"><font size="3">
The Google search engine records each result entry as:
<p class=g><a class=1 href="Query Result URL" target=_blank>
And the Sogou search engine records each result entry as:
<a class="ff" href="Query Result URL" onclick="itmclk
Because the number of engines to be evaluated is small, a few sample queries can be run against each engine in a browser; from the correspondence between the sample result entries and the HTML content of the result pages, the rule by which each engine's result page is generated is obtained and recorded in the form of a computer program.
Running the program that records each engine's result-page organizational rule then yields, for every query word Q in S, each engine's result entries for Q.
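The pattern matching of step 3.2 can be sketched with a regular expression rendering of the Baidu entry format quoted above; the regex is an illustrative translation of that pattern, not an exact rule for any current page layout.

```python
# Sketch of step 3.2: extracting result URLs by pattern matching,
# using the Baidu entry format quoted in the text.
import re

BAIDU_ENTRY = re.compile(r'<td class=f><a href="([^"]+)" target="_blank">')

def extract_result_urls(html):
    """Return the result URLs in the order they appear on the page."""
    return BAIDU_ENTRY.findall(html)
```

One such compiled pattern per engine (Google, Sogou, etc.) suffices, since each engine's result pages are script-generated and therefore uniform.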
Step 4. Evaluation of search engine results against the standard answers
Given the result entries for the queries of S obtained in step 3, and the standard answers for the queries of S annotated in step 2, the query performance of each search engine is evaluated. The main evaluation metrics are the following:
1. Average precision (AP): applied to both navigational and informational query evaluation.

    AP = (1/K) * sum over i = 1..K of Precision(i)

Average precision measures the overall performance of a search engine (covering both informational and navigational retrieval). In the formula, K is the number of standard answers, and Precision(i) is the precision of the returned results at the point where the i-th answer is found (number of results matching standard answers / total number of results returned so far). For example, if a query has 2 standard answers, returned at positions 3 and 5, then the system's AP for that query is 0.5 * (1/3 + 2/5) = 36.67%. Averaging AP over all user queries gives the mean average precision; this metric can be used to evaluate either of the two classes of query topics.
2. Reciprocal rank (RR): applied to navigational query evaluation.

    RR = 1 / Rank(1), where Rank(1) is the rank at which the first standard answer appears

The reciprocal rank is the inverse of the rank at which the first standard answer appears; the metric is mainly used for evaluating navigational retrieval. Note that it strongly rewards returning a standard answer near the top: if the answer is returned first, RR = 100%; if it is returned second, RR drops to 50%. In addition, when there is only one standard answer, RR = AP.
3. Precision of the top 10 results (Precision@10): applied to informational query evaluation.

    Precision@10 = (number of standard answers among the top 10 results) / 10

Precision@10 is the precision of the first 10 returned results. Its practical motivation is to see how precise the first result page returned by the engine is (since most engines return 10 results per page); it is relatively well suited to evaluating informational and transactional retrieval.
Using the above three metrics, both the absolute performance of each engine on different query types and cross-engine comparisons can be produced, realizing the performance evaluation of search engines.
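The three metrics of step 4 can be sketched as follows; `answer_ranks` holds the 1-based positions at which standard answers appear in the engine's result list, and `k` is the total number of standard answers (function names are assumptions).

```python
# Sketch of step 4's evaluation metrics: AP, RR, and Precision@10.

def average_precision(answer_ranks, k):
    """AP = (1/K) * sum of Precision(i) at each found answer, where
    Precision(i) = (answers found so far) / (results returned so far)."""
    found = sorted(answer_ranks)
    return sum((i + 1) / rank for i, rank in enumerate(found)) / k

def reciprocal_rank(answer_ranks):
    """RR = 1 / rank of the first standard answer."""
    return 1.0 / min(answer_ranks)

def precision_at_10(answer_ranks):
    """Share of the top 10 results that are standard answers."""
    return sum(1 for r in answer_ranks if r <= 10) / 10.0
```

The worked example in the text checks out: answers at positions 3 and 5 with K = 2 give AP = 0.5 * (1/3 + 2/5) ≈ 36.67%, and a single-answer query gives RR = AP as noted above.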
To verify the validity and reliability of the present invention, we carried out correlation tests of the performance evaluation.
On operational efficiency: with the program running on hardware with a 1.8 GHz CPU, 1 GB of memory and a 100 Mbps LAN, the computer needs about 2 hours to process 400 queries when performing a search-engine performance evaluation. This is a great improvement over the original manual evaluation method, which required several weeks to several months before performance feedback could be given.
On the correctness of the evaluation: by comparison with a set of manually annotated results (81 informational queries, 152 navigational queries and their corresponding standard answers), the accuracy of the automatic annotation is as follows: 72% for automatic annotation of informational queries, and 91% for automatic annotation of navigational queries. Table 2 lists part of the annotation results:
Table 2: part annotation results
(Table 2 is reproduced as an image in the original patent document.)
Using the February 2006 search-engine user log obtained from the Sogou company, we compared the performance of several well-known Chinese search engines and found that, for both query types, the automatic evaluation results are essentially identical to the manual evaluation results:
Table 3: the comparison of manual evaluation result and the automatic evaluation result of search engine
(Table 3 is reproduced as an image in the original patent document.)
Moreover, this evaluation is also essentially consistent with the market-research results on user experience published by an authoritative institution (the numerical relations are essentially identical; only the relative ordering of Google and Sogou differs slightly):
Table 4: comparison of the market-survey results with the automatic search-engine evaluation results

"First-choice search engine of new users" ranking from the CNNIC China search-engine market survey report | Automatic evaluation using log-marked answers (engine / mean average precision)
Baidu | Baidu / 0.206
Sogou | Google / 0.185
Google | Sogou / 0.179
Sina | Sina / 0.135
Other (3721) | Other (Zhongsou) / 0.133
The present invention can automatically discover and extract, from search-engine log data, user queries suitable for automatic search-engine evaluation, classify these queries and automatically mark their answers, and then, by crawling internet data, realize the automatic evaluation of search engines. The model structure and parameters are simple and the algorithmic complexity is low; good performance was obtained on the experimental test data, essentially consistent both with manual search-engine evaluation results and with the market survey of an authoritative institution. This shows that the present invention generalizes and adapts well, that its evaluation of search-engine performance is objective, reliable and comprehensive, and that it has good application prospects.
Description of drawings
Fig. 1. Flow chart of the automatic search-engine evaluation method;
Fig. 2. Organization of the preprocessed log;
Fig. 3. Flow chart of the query classification algorithm;
Fig. 4. Flow chart of the selection of the query set to be evaluated;
Fig. 5. Automatic marking of query answers:
5a. Answer marking for navigational queries; 5b. Answer marking for informational queries.
Embodiment
Figure 1 describes the flow of this method. The present invention applies broadly to evaluating the performance of various search engines, but for convenience of description the following takes as an example the evaluation of the Baidu search engine's retrieval performance using the search-engine logs of the Sogou website, and describes the above method in detail:
1. data pre-service
The log used covers all queries on the Sogou search engine during the 28 days from February 1 to February 28, 2006. It contains 45,745,985 non-empty queries in total, of which 4,345,557 are distinct non-empty queries. The information contained in the log is:
Table 5: information items contained in the Sogou search-engine log

Field | Recorded content
query | the query submitted by the user
URL | the result address clicked by the user
time | the date and time at which the user's click occurred
rank | the rank of this URL in the returned results
order | the ordinal number of this click in the user's click sequence
id | the user identification number assigned automatically by the system
submitter information | browser information, computer information
The above log contains enough information items for automatic search-engine evaluation, so this log can be used to carry out the performance evaluation of each Chinese search engine.
Preprocessing of the search-engine log comprises: converting the original log to a unified encoding (the log is generally recorded in UTF-8 and must be uniformly converted to the GBK Chinese encoding for unified analysis); filtering useless information (keeping only the items required for automatic evaluation); and uniformly computing, for each query, its "user query volume", "top-5 results satisfaction rate" and "first-2-clicks satisfaction rate", and the "user click concentration" of each result URL corresponding to this query.
The preprocessed search-engine log is unified into the format of Figure 2, recording in turn the query word; its "user query volume", "top-5 results satisfaction rate" and "first-2-clicks satisfaction rate"; and the URLs of the N results clicked by users for this query word, together with their corresponding user click concentrations.
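The per-query record just described can be sketched as a simple data structure (the field names are illustrative; the patent specifies only the information items, not a concrete layout):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class QueryRecord:
    """One entry of the preprocessed log, per Figure 2 (names illustrative)."""
    query: str
    query_volume: int                  # "user query volume"
    top5_satisfaction: float           # "top-5 results satisfaction rate"
    top2_click_satisfaction: float     # "first-2-clicks satisfaction rate"
    # clicked result URL -> its "user click concentration"
    concentrations: Dict[str, float] = field(default_factory=dict)

rec = QueryRecord("tsinghua", 120, 0.85, 0.95,
                  {"http://www.tsinghua.edu.cn": 0.9})
```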
2. query set screening to be evaluated
User queries can be screened following the steps of Figure 4 to pick out the query set Q suitable for automatic search-engine evaluation; within this, the query-classification operation is carried out according to the decision-tree scheme of Figure 3.
Its concrete steps are:
1. Each query appearing in the log is first screened by its user query volume: if its total number of occurrences is below 50, the query is considered to lack sufficient macroscopic user-click behaviour information and cannot be used for automatic search-engine evaluation. Analysis of the Sogou log we used shows that more than 30,000 queries were issued more than 100 times, and the users' clicks on these queries account for about 70% of all clicks over this period. This agrees with earlier research findings: in search engines, a small number of queries are issued repeatedly and occupy most of the search-engine service time.
2. After screening by query volume, queries are classified according to the decision-tree scheme of Fig. 3. Because a navigational query has a single target page, and search engines generally handle navigational queries rather well (for 80% of such queries the correct result is returned first), it is to be expected that both the "first-2-clicks satisfaction rate" and the "top-5 results satisfaction rate" of navigational queries are high; it is therefore reasonable to classify queries scoring high on these two criteria as navigational. Furthermore, since navigational queries are less ambiguous and different users search for essentially the same target, their "user click concentration" is naturally also high. Combining the three features in decision-tree form yields the classification method of Figure 3: once a query is input, it can be classified as a navigational or an informational query according to these three features. According to our evaluation against manual annotations, both the classification accuracy and the recall of this algorithm exceed 80%, which satisfies the needs of the subsequent performance-evaluation algorithm well (as shown in Table 6).
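A minimal sketch of the Fig. 3 decision tree, using the thresholds given in the claims (the exact ordering of the tests inside the patented tree is an assumption):

```python
def classify_query(top5_satisfaction, top2_click_satisfaction,
                   max_click_concentration):
    """Classify one query as 'navigational' or 'informational'."""
    # claim 2: a very high / very low click concentration overrides
    if max_click_concentration > 0.5:
        return "navigational"
    if max_click_concentration < 0.2:
        return "informational"
    # claim 1, step 1.5: satisfaction-rate tests
    if 0.6 <= top5_satisfaction <= 1.0:
        return "navigational"
    if top2_click_satisfaction >= 0.9:
        return "navigational"
    return "informational"

classify_query(0.7, 0.95, 0.3)   # -> "navigational"
```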
Table 6: the performance of inquiry sorting algorithm
(Table 6 is reproduced as an image in the original patent document.)
3. After classification, user queries are further screened according to their kind, in order to control the number of answer pages per query and to select queries whose answer pages are relatively concentrated. For navigational queries, the target's "user click concentration" always exceeds 0.5 and there is generally only one answer page, so the page with the maximum "user click concentration" can simply be taken as the answer page. For informational queries the number of answer pages must be controlled: according to existing research on informational retrieval, a typical query of this type has about 4-5 answer pages, so we require that the sum of the "user click concentrations" of the 5 results with the highest concentration exceeds 0.8. That is, only when more than 80% of the users' clicks concentrate on these 5 (or fewer) results do we consider the answer pages of this informational query concentrated enough for it to be used in search-engine evaluation.
Through the above 3 steps, the query set to be evaluated can be filtered out. After screening, 2,637 informational queries and 793 navigational queries from the one-month user log enter the query set to be evaluated.
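The screening steps above can be sketched as follows (assuming the per-query statistics have already been computed from the log):

```python
def passes_screening(query_volume, concentrations, query_type):
    """Return True if the query enters the evaluation set.
    `concentrations` is the list of per-result user click
    concentrations for this query."""
    if query_volume < 50:                      # step 1: enough users?
        return False
    if query_type == "informational":          # step 3: concentrated answers?
        top5 = sorted(concentrations, reverse=True)[:5]
        return sum(top5) > 0.8
    return True                                # navigational queries pass
```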
3. the automatic mark of the corresponding answer of user inquiring
The automatic marking of navigational-query answers follows the flow of Fig. 5a. For a navigational query word Q, the marking process is exactly the process of picking out the page that is the focus of users' clicks. By the classification method, a navigational query has one and only one result page whose "user click concentration" exceeds 0.5, so this focus-selection process reduces to finding the page whose "user click concentration" exceeds 0.5; once this page is found, the algorithm terminates.
The automatic marking of informational-query answers follows the flow of Fig. 5b. By the screening method, an informational query is selected only if the "user click concentration" sum of its 5 most-concentrated results exceeds 0.8. This means that if we take the top N results whose concentration sum exceeds 0.8, then N is necessarily at most 5, which guarantees that we choose at most 5 pages as the standard-answer pages of an informational query.
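A sketch of the two marking procedures of Fig. 5 (the `concentrations` dict, mapping result URL to user click concentration, is a hypothetical input format):

```python
def mark_answers(concentrations, query_type):
    """Pick the standard-answer page(s) for one screened query."""
    ranked = sorted(concentrations.items(),
                    key=lambda kv: kv[1], reverse=True)
    if query_type == "navigational":
        # Fig. 5a: the single page with concentration > 0.5
        return [ranked[0][0]]
    # Fig. 5b: smallest prefix whose concentration sum exceeds 0.8;
    # the screening step guarantees this needs at most 5 pages
    answers, total = [], 0.0
    for url, conc in ranked:
        answers.append(url)
        total += conc
        if total > 0.8:
            break
    return answers
```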
The rationale for using "user click concentration" for answer marking is this: "user click concentration" records the degree to which a page attracts users' attention, and pages with larger values are the users' click focus (hot spots) for a given query. The macroscopic behaviour of massive numbers of search-engine users largely reflects both the content quality of a page and its semantic relevance to the query; a page that becomes a user focus must therefore either have high-quality content or be strongly relevant to the current query.
After automatic answer marking, each of the 793 navigational queries has one and only one standard answer, while the 2,637 informational queries were marked with 9,558 answers in total, i.e. about 3.6 answers per query.
4. rating of merit of search engine
Through the above steps, we have selected the query set used for evaluation and marked the standard-answer pages corresponding to these queries. Considering the actual processing capacity of the computer and network system and the reliability of the evaluation, about 1/6 of these queries can be chosen for the final search-engine evaluation run.
For each query to be evaluated, the search engine's query results can be obtained as follows:
1. Grab the engine's result page for the query. From the form of the engine's web service, the URL of the result page for the query under evaluation can be generated automatically, thereby realizing the grabbing of the page. For instance, the result-page URL of the Baidu engine is "http://www.baidu.com/baidu?wd=query word"; replacing "query word" with the query under evaluation is enough to grab the result page.
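The pattern-substitution step can be sketched as follows (the Baidu URL pattern is the one quoted above; the `{query}` placeholder and GBK percent-encoding of the query word are assumptions of this sketch):

```python
from urllib.parse import quote

def result_page_url(template, query):
    """Substitute the query word into an engine's result-page URL pattern."""
    return template.replace("{query}", quote(query, encoding="gbk"))

BAIDU = "http://www.baidu.com/baidu?wd={query}"
url = result_page_url(BAIDU, "tsinghua")
# "http://www.baidu.com/baidu?wd=tsinghua"
```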
2. From the result page the engine returns, extract the result URLs according to the page's organization. Because the result pages of search engines are generated automatically by scripts, the organization rule of the query results can be found from the HTML text, and this rule can then be used to extract the results. For example, the Baidu engine records each query result in the following form:
<td class=f><a href="Query Result URL" target="_blank"><font size="3">
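The rule-based extraction can be sketched with a regular expression built from the markup above (the pattern is illustrative; each engine's page template must be inspected and the expression adapted accordingly):

```python
import re

# Matches the Baidu result markup quoted above and captures the URL
RESULT_RE = re.compile(r'<td class=f>\s*<a href="([^"]+)"\s+target="_blank">')

def extract_result_urls(html):
    """Return the ranked list of result URLs found in one result page."""
    return RESULT_RE.findall(html)

page = ('<td class=f><a href="http://example.com/a" target="_blank">'
        '<font size="3">result A</font>')
# extract_result_urls(page) -> ["http://example.com/a"]
```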
3. Evaluate the result sequences returned by the different engines against the standard answers: mean average precision (MAP) is used to evaluate overall performance, mean reciprocal rank (MRR) is used to evaluate navigational-query performance, and top-10 precision (P@10) is used to evaluate informational-query performance.
Following the above steps, the automatic evaluation of search-engine performance is realized: the objective, reliable behaviour of massive numbers of search-engine users is used to evaluate the retrieval performance of search engines.

Claims (4)

1. An automatic processing method for search-engine performance evaluation, characterized in that the method comprises the following steps in order:
Step 1: screening and classification of the evaluation query set. The search-engine service provider obtains the search-engine user log through the engine's web service; the log records the following items in turn: the query Q submitted by the user; the result address URL clicked by the user for this query; the rank of that URL in the returned results, computed by the search engine according to its degree of relevance to the user query under a unified standard; and the unique user identification number ID assigned automatically by the system when a given user uses the search engine. Then proceed as follows:
The pre-service of step 1.1 data
Step 1.1.1: the search-engine internet service provider performs user-log code conversion, converting the server's recorded encoding from URL-encoded format into the GBK national-standard Chinese character encoding;
Step 1.1.2: string-matching techniques are used to filter the redundant and noisy information of the user-query process, and the content of the user log is organized into content-item strings comprising the query submitted by the user, the result address clicked for this query, the rank of the result address in the returned results (computed by the search engine according to its degree of relevance to the user query), and the unique identification number automatically assigned to the user by the search-engine system;
Step 1.2: extract the "top-N results satisfaction rate":
top-N results satisfaction rate = (number of users who, for query Q, clicked only results in the top N) / (total number of users issuing query Q)
Its value lies between 0 and 1, where N is a preset value;
"total number of users issuing query Q" is obtained by counting the distinct IDs under query Q;
"number of users who clicked only the top-N results for query Q" is obtained from the ranks recorded for the distinct IDs under query Q;
Step 1.3: extract the "first-N-clicks satisfaction rate":
first-N-clicks satisfaction rate = (number of users who clicked at most N times for query Q) / (total number of users issuing query Q)
Its value lies between 0 and 1, where N is a preset value;
"number of users who clicked at most N times for query Q" is obtained by counting those users, among the distinct IDs under query Q, whose number of clicks is at most N;
Step 1.4: extract the "user click concentration":
user click concentration of a result = (number of user clicks on this result for query Q) / (total number of user clicks for query Q)
Its value lies between 0 and 1;
Step 1.5: classification of the queries to be evaluated:
If the "top-5 results satisfaction rate" of Q lies between 0.6 and 1.0, then Q is a "navigational query";
If the "top-5 results satisfaction rate" of Q lies between 0 and 0.6 but the "first-2-clicks satisfaction rate" lies between 0.9 and 1, then Q is a "navigational query", i.e. a query whose target is unique;
Otherwise, Q is an "informational query", i.e. a query whose target is not unique;
Step 1.6: determine the query set S used for evaluation, and form the standard answers:
If a query Q is issued by different users fewer than 50 times in the search-engine user log, it is excluded from S;
If a query Q is an informational query and the sum of the "user click concentrations" of its "five most-concentrated query results" is less than 0.8, it is excluded from S;
Step 2: automatic answer marking for navigational and informational queries:
For a navigational query, the web page with the maximum "user click concentration" is its standard answer;
For an informational query, the first M web pages in descending order of "user click concentration" are its standard answer, where M satisfies: starting from the page with the maximum "user click concentration", the concentration sum of the first M pages exceeds 0.8, while the concentration sum of the first M-1 pages is below 0.8;
Step 3: grabbing and filtering of search-engine results:
Step 3.1: grab the search-engine result page for a given query word. For each query word Q in the query set S to be evaluated, picked out in step 1, the engine's result page is grabbed so that the engine's result entries for Q can be further obtained. The steps, in order, are:
Step 3.1.1: select an internet web-page capture program;
Step 3.1.2: according to the query category, generate by pattern substitution the URL of the search-engine result page corresponding to the query; the search engine records this query within that URL;
Step 3.1.3: invoke the web-page capture program of step 3.1.1 to automatically grab and save the result page corresponding to each query word in the query set S to be evaluated;
Step 3.2: extraction of the result entries from the search-engine result page, comprising the following steps in order:
Step 3.2.1: find the HTML (Hypertext Markup Language) text in the script that forms the engine's result page;
Step 3.2.2: run a number of sample queries against the different search engines in a browser to obtain the result entries of the sample queries;
Step 3.2.3: by pattern matching, use the correspondence between the sample-query result entries and the HTML text of the result page to obtain a program that automatically generates the query results from the search-engine result page;
Step 3.2.4: with the program obtained in step 3.2.3, obtain the corresponding query-result entries for each query word in the query set to be evaluated;
Step 4: evaluate the search-engine results against the standard answers obtained in step 1.6; the evaluation metrics used are as follows:
A. Average precision AP, applied simultaneously to navigational and informational query evaluation, evaluates the engine's overall performance:
AP = (1/K) * Σ_{i=1..K} Precision(i),
where K is the number of standard answers;
B. Reciprocal rank RR, used for navigational-query evaluation:
RR = 1 / Rank(1), where Rank(1) is the rank at which the 1st standard answer appears; RR is the reciprocal of that rank;
C. Top-10 precision, used for informational-query evaluation, written Precision@10:
Precision@10 = (number of standard-answer results among the top 10 returned results) / 10
Precision@10 represents the precision of the first 10 results the engine returns; since the first results page returned by most engines contains 10 results, Precision@10 also represents the precision of the engine's first page of results.
2. The automatic processing method for search-engine performance evaluation according to claim 1, characterized in that, in the classification of queries to be evaluated of step 1.5:
If the "user click concentration" value of Q lies between 0.5 and 1.0, Q is judged a "navigational query"; if it lies between 0 and 0.2, Q is judged an "informational query"; otherwise, the preliminary classification of query Q remains unchanged.
3. The automatic processing method for search-engine performance evaluation according to claim 1, characterized in that, in determining the evaluation query set S in step 1.6, 300-500 queries not excluded by any of the stated restrictive conditions are selected to enter the set S.
4. The automatic processing method for search-engine performance evaluation according to claim 1, characterized in that, for the navigational queries of step 2, the "user click concentration" of the target exceeds 0.5, so the web page with the maximum "user click concentration" exists and is unique.
CNB200610144289XA 2006-12-01 2006-12-01 Automatization processing method of rating of merit of search engine Active CN100440224C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200610144289XA CN100440224C (en) 2006-12-01 2006-12-01 Automatization processing method of rating of merit of search engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200610144289XA CN100440224C (en) 2006-12-01 2006-12-01 Automatization processing method of rating of merit of search engine

Publications (2)

Publication Number Publication Date
CN1963816A CN1963816A (en) 2007-05-16
CN100440224C true CN100440224C (en) 2008-12-03

Family

ID=38082869

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200610144289XA Active CN100440224C (en) 2006-12-01 2006-12-01 Automatization processing method of rating of merit of search engine

Country Status (1)

Country Link
CN (1) CN100440224C (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146113B (en) * 2007-10-11 2010-08-11 清华大学 Dynamic configuration system and method for network service performance
CN101188521B (en) * 2007-12-05 2010-07-14 北京金山软件有限公司 A method for digging user behavior data and website server
KR100843544B1 (en) * 2008-03-24 2008-07-04 방용정 Method for generating connection statistics of respective user connected with website
CN101551800B (en) * 2008-03-31 2012-02-15 富士通株式会社 Marked information generation device, inquiry unit and sharing system
CN101789018B (en) * 2010-02-09 2013-03-13 清华大学 Method and device for retrieving webpage
US20110314011A1 (en) * 2010-06-18 2011-12-22 Microsoft Corporation Automatically generating training data
CN102314462A (en) * 2010-06-30 2012-01-11 北京搜狗科技发展有限公司 Method and system for obtaining navigation result on input method platform
CN102314461B (en) * 2010-06-30 2015-03-11 北京搜狗科技发展有限公司 Navigation prompt method and system
CN102521350B (en) * 2011-12-12 2014-07-16 浙江大学 Selection method of distributed information retrieval sets based on historical click data
CN103297286B (en) * 2012-02-23 2016-12-14 百度在线网络技术(北京)有限公司 The test system of the reliability of distributed type assemblies, method and apparatus
CN104050197B (en) * 2013-03-15 2018-08-17 腾讯科技(深圳)有限公司 A kind of information retrieval system evaluating method and device
CN103235796B (en) * 2013-04-07 2019-12-24 北京百度网讯科技有限公司 Search method and system based on user click behavior
US11636120B2 (en) * 2014-11-21 2023-04-25 Microsoft Technology Licensing, Llc Offline evaluation of ranking functions
CN104699825B (en) * 2015-03-30 2016-10-05 北京奇虎科技有限公司 The balancing method of Performance of Search Engine and device
CN104699830B (en) * 2015-03-30 2017-04-12 北京奇虎科技有限公司 Method and device for evaluating search engine ordering algorithm effectiveness
CN105573887B (en) * 2015-12-14 2018-07-13 合一网络技术(北京)有限公司 The method for evaluating quality and device of search engine
CN107704467B (en) * 2016-08-09 2021-08-24 百度在线网络技术(北京)有限公司 Search quality evaluation method and device
CN107273404A (en) * 2017-04-26 2017-10-20 努比亚技术有限公司 Appraisal procedure, device and the computer-readable recording medium of search engine
CN107688595B (en) * 2017-05-10 2019-03-15 平安科技(深圳)有限公司 Information retrieval Accuracy Evaluation, device and computer readable storage medium
CN107239399B (en) * 2017-05-27 2020-06-05 北京京东尚科信息技术有限公司 Index generation method, device and system for testing and readable storage medium
CN107609020B (en) * 2017-08-07 2020-06-05 北京京东尚科信息技术有限公司 Log classification method and device based on labels
CN108305197A (en) * 2018-01-29 2018-07-20 广州源创网络科技有限公司 A kind of data statistical approach and system
US11682029B2 (en) 2018-03-23 2023-06-20 Lexisnexis, A Division Of Reed Elsevier Inc. Systems and methods for scoring user reactions to a software program
CN111294275B (en) * 2020-02-26 2022-10-14 上海云鱼智能科技有限公司 User information indexing method, device, server and storage medium of IM tool
CN113312537A (en) * 2021-06-22 2021-08-27 中山大学 Evaluation index calculation method for service reliability of search engine

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040220914A1 (en) * 2003-05-02 2004-11-04 Dominic Cheung Content performance assessment optimization for search listings in wide area network searches
CN1682216A (en) * 2002-09-13 2005-10-12 奥弗图尔服务公司 Automated processing of appropriateness determination of content for search listings in wide area network searches

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1682216A (en) * 2002-09-13 2005-10-12 奥弗图尔服务公司 Automated processing of appropriateness determination of content for search listings in wide area network searches
US20040220914A1 (en) * 2003-05-02 2004-11-04 Dominic Cheung Content performance assessment optimization for search listings in wide area network searches

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A connection mathematical model for comprehensive evaluation of search-engine performance and its application. Wang Xinfan. Journal of Zhuzhou Institute of Technology, Vol. 20, No. 2. 2006
A connection mathematical model for comprehensive evaluation of search-engine performance and its application. Wang Xinfan. Journal of Zhuzhou Institute of Technology, Vol. 20, No. 2. 2006 *
Research on web data cleansing for information-retrieval needs. Liu Yiqun, Zhang Min, Ma Shaoping. Journal of Chinese Information Processing, Vol. 20, No. 3. 2006
Research on web data cleansing for information-retrieval needs. Liu Yiqun, Zhang Min, Ma Shaoping. Journal of Chinese Information Processing, Vol. 20, No. 3. 2006 *

Also Published As

Publication number Publication date
CN1963816A (en) 2007-05-16

Similar Documents

Publication Publication Date Title
CN100440224C (en) Automatization processing method of rating of merit of search engine
CN100507920C (en) Search engine retrieving result reordering method based on user behavior information
CN100507918C (en) Automatic positioning method of network key resource page
CN103020164B (en) Semantic search method based on multi-semantic analysis and personalized sequencing
US8473473B2 (en) Object oriented data and metadata based search
CN101320375B (en) Digital book search method based on user click action
CN101908071B (en) Method and device thereof for improving search efficiency of search engine
CN101382954B (en) Method and system for providing web site collection name
CN102760151B (en) Implementation method of open source software acquisition and searching system
CN102314443B (en) The modification method of search engine and system
CN103914478A (en) Webpage training method and system and webpage prediction method and system
CN111708740A (en) Mass search query log calculation analysis system based on cloud platform
CN101609450A (en) Web page classification method based on training set
CN106126648B (en) It is a kind of based on the distributed merchandise news crawler method redo log
CN102722501B (en) Search engine and realization method thereof
WO2007021386A2 (en) Analysis and transformation tools for strctured and unstructured data
CN103365839A (en) Recommendation search method and device for search engines
CN103116635B (en) Field-oriented method and system for collecting invisible web resources
CN101710318A (en) Knowledge intelligent acquiring system of vegetable supply chains
KR101801257B1 (en) Text-Mining Application Technique for Productive Construction Document Management
CN101281519A (en) Method for evaluating network resource value and application of searching engine field
CN105095175A (en) Method and device for obtaining truncated web title
KR100557874B1 (en) Method of scientific information analysis and media that can record computer program thereof
Pearce-Moses et al. An Arizona Model for preservation and access of Web documents
Liu REMOTE USERS'OPAC SEARCHING HABITS: A COMPARATIVE CASE STUDY THROUGH WEB TRANSACTION LOG ANALYSIS.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20090807

Address after: Beijing 100084-82 mailbox: 100084

Co-patentee after: Sogo Science-Technology Development Co., Ltd., Beijing

Patentee after: Tsinghua University

Address before: Beijing 100084-82 mailbox: 100084

Patentee before: Tsinghua University

CB03 Change of inventor or designer information

Inventor after: Sun Maosong

Inventor after: Liu Yiqun

Inventor after: Zhang Min

Inventor after: Jin Yijiang

Inventor after: Ma Shaoping

Inventor after: Ru Liyun

Inventor after: Tong Zijian

Inventor before: Liu Yiqun

Inventor before: Zhang Min

Inventor before: Jin Yijiang

Inventor before: Ma Shaoping

Inventor before: Ru Liyun

Inventor before: Tong Zijian

COR Change of bibliographic data