CN111782982A - Method and device for sorting search results and computer-readable storage medium - Google Patents


Publication number
CN111782982A
CN111782982A
Authority
CN
China
Prior art keywords
input features
input
feature
machine learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910419029.6A
Other languages
Chinese (zh)
Inventor
邱德军
任恺
刘燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910419029.6A priority Critical patent/CN111782982A/en
Publication of CN111782982A publication Critical patent/CN111782982A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • G06Q30/0625Directed, with specific intent or strategy

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a method and an apparatus for ranking search results and a computer-readable storage medium, in the field of computer technology. The method comprises: determining a corresponding machine learning model according to a received request to rank search results; determining, according to the machine learning model, the input features that need to be generated; determining whether to store the input features according to whether they can be reused by other machine learning models; and ranking the search results with the machine learning model according to the input features. The technical solution of the disclosure can improve processing efficiency.

Description

Method and device for sorting search results and computer-readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for sorting search results and a computer-readable storage medium.
Background
In the face of massive network resources, the search function provides a portal through which a user can reach any desired target. In the field of e-commerce, a user can obtain a list of desired item information through the search function. It is therefore important that search results be presented to the user ranked by importance.
In the related art, the required features are generated separately for each machine learning model, and ranking is then performed using the plurality of machine learning models.
Disclosure of Invention
The inventors of the present disclosure found the following problem in the above related art: the same features need to be generated repeatedly during the search ranking process, which wastes online computing resources and lowers processing efficiency.
In view of this, the present disclosure provides a technical solution for ranking search results that can improve processing efficiency.
According to some embodiments of the present disclosure, there is provided a method of ranking search results, including: determining a corresponding machine learning model according to a received request to rank search results; determining, according to the machine learning model, the input features that need to be generated; determining whether to store the input features according to whether they can be reused by other machine learning models; and ranking the search results with the machine learning model according to the input features.
In some embodiments, it is determined whether the computational cost of the input features is greater than a threshold; if the computational cost is greater than the threshold, it is determined that the input features are to be stored.
In some embodiments, the length of time for which the input features are stored is determined according to the scope in which they can be reused.
In some embodiments, if the input features can be reused within the lifecycle of the corresponding feature generation operator, the input features are stored for that lifecycle; if the input features can be reused within the current ranking process, they are stored for the current ranking process.
In some embodiments, an identifier of each input feature is determined from the machine learning model, and the corresponding feature generation operator is called according to that identifier to generate the input feature.
In some embodiments, there are a plurality of input features; a common intermediate quantity used to generate the plurality of input features is calculated, and the plurality of input features are generated from the common intermediate quantity.
In some embodiments, related entity samples are determined according to the ranking request, the feature set of each entity sample containing one or more of the input features; input features common to a plurality of the entity samples are generated, and the input features unique to each entity sample are generated separately.
In some embodiments, if the ranking request satisfies a predetermined condition, the input features are stored for use in training the corresponding machine learning model.
In some embodiments, the feature generation operator corresponding to the input feature is called as a first feature generation operator; if the first feature generation operator needs to rely on other input features to generate the input feature, a second feature generation operator corresponding to those other input features is called to generate them; and the input feature is then generated by the first feature generation operator from the other input features.
In some embodiments, if the feature generation operator corresponding to the input feature requires only static data as input, an instance of the feature generation operator is created globally by a software entity.
In some embodiments, the software entity corresponding to the feature generation operator is determined according to the registration information of the feature generation operator; if an instance of the feature generation operator exists, the software entity reuses that instance to generate the input feature; if no instance exists, the software entity creates an instance of the feature generation operator to generate the input feature.
In some embodiments, the input features are converted into the format required by the machine learning model, and the search results are ranked by the machine learning model according to the converted input features.
According to further embodiments of the present disclosure, there is provided an apparatus for ranking search results, including: a determining unit configured to determine a corresponding machine learning model according to a received request to rank search results, determine the input features to be generated according to the machine learning model, and determine whether to store the input features according to whether they can be reused by other machine learning models; and a ranking unit configured to rank the search results with the machine learning model according to the input features.
According to still other embodiments of the present disclosure, there is provided an apparatus for ranking search results, including: a processor configured to determine a corresponding machine learning model according to a received request to rank search results, determine the input features to be generated according to the machine learning model, determine whether to store the input features according to whether they can be reused by other machine learning models, and rank the search results with the machine learning model according to the input features; and a memory for storing the input features.
According to still further embodiments of the present disclosure, there is provided an apparatus for ranking search results, including: a memory; and a processor coupled to the memory, the processor being configured to perform the method of ranking search results of any of the above embodiments based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of ranking search results of any of the above embodiments.
In the above embodiments, input features that can be reused by multiple machine learning models are cached, avoiding the repeated generation of features during ranking. This saves online computing resources and thereby improves processing efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flow diagram of some embodiments of a method of ranking search results of the present disclosure;
FIG. 2 illustrates a flow diagram of some embodiments of the present disclosure to generate input features;
FIG. 3 illustrates a flow diagram of further embodiments of the present disclosure to generate input features;
FIG. 4 illustrates a flow diagram of yet other embodiments of the present disclosure for generating input features;
FIG. 5 illustrates a flow diagram of some embodiments of an example creation method of a feature generation operator of the present disclosure;
FIG. 6 illustrates a block diagram of some embodiments of an apparatus for ranking search results of the present disclosure;
FIG. 7 illustrates a block diagram of still further embodiments of an apparatus for ranking search results of the present disclosure;
FIG. 8 illustrates a block diagram of still further embodiments of an apparatus for ranking search results of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the portions shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but should, where appropriate, be considered part of the specification.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Fig. 1 illustrates a flow diagram of some embodiments of a method of ranking search results of the present disclosure.
As shown in fig. 1, the method includes: step 110, determining the machine learning model required for ranking; step 120, determining the input features to be generated; step 130, determining whether to store the input features; and step 140, ranking the search results.
In step 110, the corresponding machine learning model is determined according to the received request to rank search results. For example, different machine learning models may be employed to complete the ranking for different ranking requests. Each ranking may be completed by invoking multiple machine learning models, and each machine learning model requires at least one input feature to complete the ranking.
In step 120, the input features that need to be generated are determined from the machine learning model. For example, after the required input features are determined, they may be generated by the corresponding feature generation operators.
In some embodiments, a feature generation operator may be a software entity in the framework that computes a corresponding feature from raw data, to serve as input to a machine learning model. For example, the raw data may be the sales data of an item over a period of time, and the feature generated by the operator may be a time-decayed sales score for the item. The raw data may also be the description text of an item, and the generated feature may be a feature vector corresponding to that text.
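As an illustration of the time-decay example above, the following is a minimal Python sketch of such a feature generation operator; the function name and the half-life parameter are hypothetical illustrations, not part of the disclosure:

```python
import math

def time_decay_score(sales_by_day, half_life_days=7.0):
    """Compute a time-decayed sales score: recent sales count more.

    sales_by_day[i] is the sales count i days ago (index 0 = today).
    The half-life controls how quickly old sales stop mattering.
    """
    decay = math.log(2) / half_life_days
    return sum(count * math.exp(-decay * age)
               for age, count in enumerate(sales_by_day))
```

With this weighting, an item whose sales happened today scores higher than one with the same sales several days ago.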
In some embodiments, the input features may be generated by the steps of FIG. 2.
FIG. 2 illustrates a flow diagram of some embodiments of the present disclosure to generate input features.
As shown in fig. 2, the process includes: step 210, determining the identification of the input features; and step 220, generating input features.
In step 210, an identifier for each input feature is determined from the machine learning model. For example, a unique number may be assigned to each input feature as its identifier.
In step 220, the corresponding feature generation operator is called according to the identifier of the input feature to generate it. For example, a feature management module may be configured to look up the corresponding feature generation operator by the number of the requested input feature.
This is equivalent to providing a uniform interface for all feature generation operators: the details of feature generation are hidden, and each operator only needs to implement the uniform interface, which improves system processing efficiency.
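The identifier-to-operator lookup described above can be sketched as follows in Python; the class and method names are hypothetical, chosen only to illustrate the uniform interface:

```python
class FeatureOperator:
    """Uniform interface that every feature generation operator implements."""
    def generate(self, raw_data):
        raise NotImplementedError

class TotalSalesOperator(FeatureOperator):
    """Hypothetical operator: total sales over a raw daily-sales list."""
    def generate(self, raw_data):
        return sum(raw_data)

class FeatureRegistry:
    """Plays the role of the feature management module: maps a feature
    identifier to its registered operator and hides generation details."""
    def __init__(self):
        self._operators = {}

    def register(self, feature_id, operator):
        self._operators[feature_id] = operator

    def generate(self, feature_id, raw_data):
        # Look up the operator by identifier and call the uniform interface.
        return self._operators[feature_id].generate(raw_data)
```

A caller only ever needs the feature identifier; new operators are added by registering them, without changing the calling code.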
In some embodiments, a feature generation operator may pre-generate the input features. For example, when a plurality of input features are required, a common intermediate quantity used to generate them may be computed once, and the plurality of input features may then be generated from that common intermediate quantity.
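A minimal sketch of the shared-intermediate idea, with hypothetical feature names: the total is computed once and reused by two derived features rather than being recomputed for each.

```python
def generate_sales_features(daily_sales):
    """Generate several features from one shared intermediate quantity."""
    total = sum(daily_sales)  # shared intermediate, computed exactly once
    n = len(daily_sales)
    avg = total / n if n else 0.0
    last_day_share = daily_sales[-1] / total if total else 0.0
    return {"total_sales": total,
            "avg_daily_sales": avg,
            "last_day_share": last_day_share}
```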
In some embodiments, pre-generation may also be achieved by the steps in fig. 3.
FIG. 3 illustrates a flow diagram of further embodiments of the present disclosure for generating input features.
As shown in fig. 3, the process includes: step 310, determining a related entity sample; step 320, generating common input features; and step 330, generating unique input features.
In step 310, the related entity samples are determined according to the ranking request; the feature set of each entity sample includes one or more of the input features required by the machine learning model.
In some embodiments, the ranking request is to rank the search results for the keyword "tops", and the required input features comprise input feature 1, input feature 2, and input feature 3. The related entity samples may include "shirt", "jacket", and the like. The feature set of "shirt" includes input feature 1 and input feature 2, and the feature set of "jacket" includes input feature 1 and input feature 3.
In step 320, the input features common to a plurality of entity samples are generated. For example, since the common input feature is input feature 1, the feature generation operator corresponding to input feature 1 may be called first to generate it.
In step 330, the unique input features of each entity sample are generated. For example, the unique input feature of "shirt" is input feature 2, and the unique input feature of "jacket" is input feature 3; the feature generation operators of input feature 2 and input feature 3 may be invoked to generate them.
Pre-generating features reduces the repeated work of generating features for each sample individually, thereby improving system processing efficiency.
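The common-then-unique generation of steps 310 through 330 can be sketched as follows; the function and feature names ("f1", "shirt", and so on) are hypothetical, mirroring the example above. A call counter checks that the shared feature is generated only once:

```python
def pregenerate_features(samples, operators, raw_data):
    """Generate features shared by all samples once, then per-sample features.

    samples:   {sample_name: set of required feature ids}
    operators: {feature_id: callable(raw_data) -> feature value}
    """
    common_ids = set.intersection(*samples.values()) if samples else set()
    common = {fid: operators[fid](raw_data) for fid in common_ids}  # once
    features = {}
    for name, ids in samples.items():
        feats = dict(common)
        for fid in ids - common_ids:  # sample-specific features
            feats[fid] = operators[fid](raw_data)
        features[name] = feats
    return features

# Hypothetical example mirroring the "shirt"/"jacket" samples above.
calls = {"f1": 0}
def f1(_):
    calls["f1"] += 1
    return 1.0
operators = {"f1": f1, "f2": lambda _: 2.0, "f3": lambda _: 3.0}
samples = {"shirt": {"f1", "f2"}, "jacket": {"f1", "f3"}}
features = pregenerate_features(samples, operators, raw_data=None)
```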
In some embodiments, a feature generation operator may need to rely on other input features to generate the required input feature; this case can be handled by the steps in FIG. 4.
FIG. 4 illustrates a flow diagram of yet other embodiments of the present disclosure for generating input features.
As shown in fig. 4, the process includes: step 410, calling a first feature generation operator; step 420, calling a second feature generation operator; and step 430, generating input features.
In step 410, a feature generation operator corresponding to the input feature is called as a first feature generation operator.
In step 420, in the case that the first feature generation operator needs to generate the input feature depending on other input features, a second feature generation operator corresponding to other input features is called to generate other input features.
In some embodiments, the identity of the other input features may be determined, and the corresponding second feature generation operator may be invoked according to the identity.
In step 430, the input features are generated using the first feature generation operator based on the other input features. For example, data cleansing may be performed after the input features are generated.
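The dependency handling of steps 410 through 430 can be sketched as a recursive lookup: dependencies are generated (and cached) before the requested feature. The function, operator, and feature names here are hypothetical illustrations:

```python
def generate_with_deps(feature_id, operators, deps, raw_data, cache=None):
    """Generate `feature_id`, first generating any features it depends on.

    operators: {feature_id: callable(raw_data, dep_values) -> value}
    deps:      {feature_id: tuple of feature ids it depends on}
    """
    cache = {} if cache is None else cache
    if feature_id in cache:
        return cache[feature_id]
    dep_values = {d: generate_with_deps(d, operators, deps, raw_data, cache)
                  for d in deps.get(feature_id, ())}
    value = operators[feature_id](raw_data, dep_values)
    cache[feature_id] = value
    return value

# Hypothetical example: "avg_sales" depends on "total_sales".
operators = {
    "total_sales": lambda data, _: sum(data),
    "avg_sales": lambda data, dv: dv["total_sales"] / len(data),
}
deps = {"avg_sales": ("total_sales",)}
```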
In some embodiments, after determining the input features that need to be generated, the sorting may be performed by other steps in fig. 1.
In step 130, whether to store the input features is determined according to whether the input features can be reused by other machine learning models.
In some embodiments, the input features may be generated first and, if they can be reused by other machine learning models, then stored so that those models can reuse them.
In some embodiments, the input features that need to be generated may be determined after the machine learning model is determined; whether to store them is determined according to whether they can be reused; the input features are then generated, and those previously determined to need storing are stored.
In some embodiments, it may be determined whether the computational cost of an input feature is greater than a threshold. If the computational cost is greater than the threshold, the input feature is stored in a cache for reuse, so as to avoid computing it repeatedly.
In some embodiments, the length of time for which an input feature is stored may be determined according to the scope in which it can be reused. For example, if an input feature can be reused within the lifecycle of the corresponding feature generation operator, it is stored for that lifecycle; if it can be reused within the current ranking process, it is stored for the current ranking process.
In this way, whether a generated input feature is stored, and in which cache level, can be decided according to the characteristics of that feature, striking a balance between processing speed and storage space. For example, the cache levels may include caching for the duration of a ranking request, caching for the entire lifecycle of the feature generation operator, and no caching at all.
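The cache levels just described can be sketched as a small two-level cache in Python; the class and policy names are hypothetical illustrations of the request-scoped, lifecycle-scoped, and no-cache options:

```python
from enum import Enum

class CachePolicy(Enum):
    NONE = "none"            # cheap feature: recompute every time
    REQUEST = "request"      # reusable within the current ranking request
    LIFECYCLE = "lifecycle"  # reusable for the operator's whole lifecycle

class FeatureCache:
    """Two cache levels with different lifetimes, plus 'no caching'."""
    def __init__(self):
        self._request = {}    # cleared when the ranking request ends
        self._lifecycle = {}  # lives as long as the feature generation operator

    def put(self, feature_id, value, policy):
        if policy is CachePolicy.REQUEST:
            self._request[feature_id] = value
        elif policy is CachePolicy.LIFECYCLE:
            self._lifecycle[feature_id] = value
        # CachePolicy.NONE: deliberately not stored anywhere

    def get(self, feature_id):
        if feature_id in self._request:
            return self._request[feature_id]
        return self._lifecycle.get(feature_id)

    def end_request(self):
        self._request.clear()  # request-scoped entries expire here
```

Request-scoped entries disappear after `end_request`, while lifecycle-scoped entries survive across requests.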
In some embodiments, a software entity may be provided in the system as a feature operator factory for creating instances of feature generation operators. A feature operator factory can create multiple instances of a feature generation operator.
In some embodiments, if a feature generation operator requires only static data as input, its instance is created globally by the software entity, which improves system processing efficiency. For example, if an operator requires only static data that does not change with the ranking request, such as a user's attributes (name, address, gender, and so on), a single instance can be created globally for that operator and reused.
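A minimal sketch of such a factory, with hypothetical class names: operators that consume only static data get one shared global instance, while others get a fresh instance each time.

```python
class UserProfileOperator:
    """Hypothetical operator over static user attributes (name, address, gender)."""
    pass

class OperatorFactory:
    """Creates operator instances; operators needing only static data
    get a single global instance that is reused across ranking requests."""
    def __init__(self, operator_cls, static_input_only):
        self._operator_cls = operator_cls
        self._static_input_only = static_input_only
        self._global_instance = operator_cls() if static_input_only else None

    def create(self):
        if self._static_input_only:
            return self._global_instance  # reuse the single global instance
        return self._operator_cls()       # fresh instance per request
```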
Once the input features have been generated and stored, the ranking proceeds through the remaining steps in FIG. 1.
In step 140, the search results are ranked using a machine learning model according to the input features.
In some embodiments, an instance of the feature generation operator may be created by the steps in FIG. 5.
FIG. 5 illustrates a flow diagram of some embodiments of an example creation method of a feature generation operator of the present disclosure.
As shown in fig. 5, the method includes: step 510, determining the corresponding software entity; step 520, judging whether an instance exists; step 530, reusing the existing instance to generate the input features; and step 540, creating a new instance to generate the input features.
In step 510, the software entity corresponding to the feature generation operator is determined according to the registration information of the feature generation operator.
In some embodiments, during the system initialization phase, each feature generation operator may register itself with the feature management module; the registration information may include the features the operator can generate, the features it depends on, the corresponding feature operator factory, and so on.
In step 520, it is determined whether an instance of the feature generation operator exists. In the case that an instance exists, perform step 530; in the case that an instance does not exist, step 540 is performed.
In step 530, the software entity reuses the existing instance of the feature generation operator to generate the input features. For example, if the feature operator factory created the instance globally, that instance can be invoked directly.
In step 540, an instance of a feature generation operator is generated with the software entity to generate the input feature.
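The get-or-create flow of steps 510 through 540 can be sketched as follows; the dictionaries standing in for the registration table and the instance cache, and all names, are hypothetical illustrations:

```python
def get_or_create_instance(feature_id, registrations, instances):
    """Return a live operator instance for `feature_id`, creating one if needed.

    registrations: {feature_id: factory callable} filled in at system init
    instances:     {feature_id: live instance} shared cache of instances
    """
    if feature_id in instances:          # step 530: reuse the existing instance
        return instances[feature_id]
    factory = registrations[feature_id]  # step 510: look up the registered factory
    instances[feature_id] = factory()    # step 540: create a new instance
    return instances[feature_id]

# Hypothetical registration made during system initialization.
created = {"count": 0}
def make_operator():
    created["count"] += 1
    return object()
registrations = {"f1": make_operator}
instances = {}
```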
In some embodiments, if the ranking request conforms to a configured storage policy, the input features are stored for use in training the corresponding machine learning model. For example, a feature collection module may be provided in the system to collect, in a uniform manner, the features generated and used during ranking, and then send the collected features to a feature storage end.
In some embodiments, different storage policies may be configured according to different attributes of the ranking request (e.g., the user's location or identity information, or various attributes of the item). For example, a storage policy may be configured so that features generated for search ranking requests from Beijing users are stored. In this case, if the user who initiated the ranking request is located in Beijing, the features generated during ranking are stored.
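Such storage policies can be modeled as predicates over the request's attributes; the following sketch, with hypothetical names and a hypothetical `user_city` attribute, mirrors the Beijing example above:

```python
def should_collect(request, policies):
    """Return True if any configured storage policy matches the request."""
    return any(policy(request) for policy in policies)

# Hypothetical policy from the example above: collect features for Beijing users.
beijing_policy = lambda request: request.get("user_city") == "Beijing"
policies = [beijing_policy]
```

Adding a new collection rule then only requires appending another predicate, not touching any machine learning model.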
In the above embodiments, the feature collector provides a mechanism for controlling which of the large amount of feature data generated during search ranking is collected. Based on the characteristics of each request, the mechanism can decide whether to collect the features generated for that ranking request, how many features to collect, and so on. Thus, while the authenticity of the training data is preserved, there is no need to configure a separate collection strategy for each machine learning model, which saves resources.
In some embodiments, the input features are converted into the format required by the machine learning model, and the search results are then ranked by the machine learning model according to the converted input features. For example, if the machine learning model requires a 256-dimensional real vector as input, the generated input features need to be converted into such a vector.
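One plausible conversion, sketched below under the assumption that features are scalar values laid out in a fixed order (the function name and layout are hypothetical): missing features become zeros and the vector is padded or truncated to the model's required dimension.

```python
def to_model_input(features, feature_order, dim=256):
    """Flatten a feature dict into the fixed-length real vector a model expects.

    Features are laid out in `feature_order`; missing ones become 0.0, and
    the result is truncated or zero-padded to exactly `dim` components.
    """
    vector = [float(features.get(fid, 0.0)) for fid in feature_order]
    return vector[:dim] + [0.0] * max(0, dim - len(vector))
```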
In the above embodiments, input features that can be reused by multiple machine learning models are cached, avoiding the repeated generation of features during ranking. This saves online computing resources and thereby improves processing efficiency.
Fig. 6 illustrates a block diagram of some embodiments of an apparatus for ranking search results of the present disclosure.
As shown in fig. 6, the sorting apparatus 6 includes a determining unit 61 and a sorting unit 64.
The determining unit 61 determines the corresponding machine learning model according to the received request to rank search results, determines the input features to be generated according to the machine learning model, and determines whether to store the input features according to whether they can be reused by other machine learning models; the ranking unit 64 ranks the search results with the machine learning model according to the input features.
In some embodiments, ranking unit 64 converts the input features into a format required by the machine learning model; the ranking unit 64 ranks the search results using a machine learning model according to the converted input features.
In some embodiments, the determination unit 61 determines to store the input feature in a case where it is judged that the calculation cost of the input feature is larger than the threshold value.
In some embodiments, the determination unit 61 determines the length of time for which the input features are stored according to the scope in which they can be reused.
In some embodiments, if the input features can be reused within the lifecycle of the corresponding feature generation operator, the determination unit 61 determines to store them for that lifecycle; if the input features can be reused within the current ranking process, it stores them for the current ranking process.
In some embodiments, the sorting apparatus 6 further comprises a generating unit 62. The determination unit 61 determines the identity of the input feature according to the machine learning model; the generating unit 62 invokes the corresponding feature generation operator to generate the input feature according to the identifier of the input feature.
In some embodiments, the generation unit 62 calculates a common intermediate quantity for generating the plurality of input features; the generating unit 62 generates a plurality of input features from the common intermediate quantity.
In some embodiments, the determining unit 61 determines, according to the sorting request, related entity samples, where the feature set of each entity sample includes one or more of the plurality of input features; the generating unit 62 generates a common input feature of the plurality of entity samples; unique input features are generated for each entity sample, respectively.
In the above embodiments, input features that can be reused by multiple machine learning models are cached, avoiding the repeated generation of features during ranking. This saves online computing resources and thereby improves processing efficiency.
In some embodiments, the sorting apparatus 6 further comprises a storage unit 63 for storing the input features that the determining unit 61 determines need to be stored. If the ranking request satisfies a predetermined condition, the storage unit 63 stores the input features for use in training the corresponding machine learning model.
In some embodiments, the generation unit 62 invokes a feature generation operator corresponding to the input feature as the first feature generation operator; the generating unit 62 calls a second feature generating operator corresponding to the other input features to generate the other input features under the condition that the first feature generating operator needs to generate the input features depending on the other input features; the generating unit 62 generates the input feature using the first feature generation operator based on the other input features.
In some embodiments, the generation unit 62 creates instances of feature generation operators on a global scale with software entities in the case that the feature generation operator corresponding to the input feature requires only static data as input.
In some embodiments, the determining unit 61 determines the software entity corresponding to the feature generation operator according to the registration information of the feature generation operator; the generation unit 62 multiplexes the instances of the feature generation operators with the software entities to generate the input features in the presence of the instances of the feature generation operators; the generation unit 62 generates instances of feature generation operators with software entities to generate input features in the absence of instances of feature generation operators.
FIG. 7 illustrates a block diagram of further embodiments of an apparatus for ranking search results of the present disclosure.
As shown in FIG. 7, the search result ranking apparatus 7 includes a memory 71 and a processor 72 coupled to the memory 71.
The processor 72 determines the corresponding machine learning model according to a received request to rank search results, determines the input features to be generated according to the machine learning model, determines whether to store the input features according to whether they can be reused by other machine learning models, and ranks the search results using the machine learning model based on the input features. The memory 71 stores the input features.
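An end-to-end sketch of that flow, under stated assumptions (the model table, feature names, scoring weights, and the reusability set are all invented for illustration): pick the model for the request, generate its input features, store only the reusable ones, then score and rank.

```python
# Hypothetical end-to-end sketch of the described flow. The model registry,
# feature names, scoring function, and REUSABLE set are illustrative only.

MODELS = {
    "default": {"features": ["price", "sales"],
                "score": lambda f: 0.3 * f["sales"] - 0.1 * f["price"]},
}
REUSABLE = {"sales"}          # features other models could also consume
feature_store = {}            # stands in for the memory holding stored features

def rank(request, items):
    model = MODELS[request.get("model", "default")]
    scored = []
    for item in items:
        # Generate the input features this model needs.
        feats = {name: item[name] for name in model["features"]}
        # Store only features reusable by other models.
        for name in feats:
            if name in REUSABLE:
                feature_store[(item["id"], name)] = feats[name]
        scored.append((model["score"](feats), item["id"]))
    return [item_id for _, item_id in sorted(scored, reverse=True)]

items = [{"id": "a", "price": 10, "sales": 5},
         {"id": "b", "price": 10, "sales": 9}]
assert rank({"model": "default"}, items) == ["b", "a"]
```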
In some embodiments, the processor 72 is configured to perform a method of ranking search results in any of the embodiments of the present disclosure based on instructions stored in the memory 71.
The memory 71 may include, for example, system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, a database, and other programs.
FIG. 8 illustrates a block diagram of still further embodiments of an apparatus for ranking search results of the present disclosure.
As shown in FIG. 8, the search result ranking apparatus 8 of this embodiment includes a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to execute the method of ranking search results in any of the embodiments described above based on instructions stored in the memory 810.
The memory 810 may include, for example, system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, and other programs.
The search result ranking apparatus 8 may further comprise an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, and 850, as well as the memory 810 and the processor 820, may be connected, for example, by a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card or a USB flash drive.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
So far, the ranking method of search results, the ranking apparatus of search results, and the computer-readable storage medium according to the present disclosure have been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (16)

1. A method of ranking search results, comprising:
determining a corresponding machine learning model according to a received request to rank search results;
determining input features to be generated according to the machine learning model;
determining whether to store the input features according to whether the input features can be reused by other machine learning models;
and ranking the search results using the machine learning model according to the input features.
2. The ranking method of claim 1, wherein the determining whether to store the input features comprises:
judging whether the computation cost of the input features is greater than a threshold;
and determining to store the input features if the computation cost is greater than the threshold.
3. The ranking method of claim 1, further comprising:
determining how long to store the input features according to the range over which the input features can be reused.
4. The ranking method of claim 3, wherein the determining how long to store the input features comprises:
storing the input features for the lifecycle of the corresponding feature generation operator if the input features can be reused within that lifecycle;
and storing the input features for the current ranking process if the input features can be reused within the current ranking process.
5. The ranking method of claim 1, further comprising:
determining an identifier of the input features according to the machine learning model;
and invoking a corresponding feature generation operator to generate the input features according to the identifier of the input features.
6. The ranking method of claim 1, wherein the input features are a plurality of input features;
the ranking method further comprising:
computing a common intermediate quantity for generating the plurality of input features;
and generating the plurality of input features from the common intermediate quantity.
7. The ranking method of claim 1, wherein the input features are a plurality of input features;
the ranking method further comprising:
determining related entity samples according to the ranking request, wherein the feature set of each entity sample comprises one or more of the input features;
generating input features common to the plurality of entity samples;
and generating, for each of the entity samples, its unique input features.
8. The ranking method according to any one of claims 1-7, further comprising:
storing the input features for training the corresponding machine learning model when the ranking request satisfies a predetermined condition.
9. The ranking method according to any one of claims 1-7, further comprising:
invoking the feature generation operator corresponding to the input features as a first feature generation operator;
invoking second feature generation operators corresponding to other input features to generate the other input features when the first feature generation operator depends on the other input features to generate the input features;
and generating the input features using the first feature generation operator according to the other input features.
10. The ranking method according to any one of claims 1-7, further comprising:
creating a globally scoped instance of the feature generation operator with a software entity when the feature generation operator corresponding to the input features requires only static data as input.
11. The ranking method of claim 10, further comprising:
determining the software entity corresponding to the feature generation operator according to the registration information of the feature generation operator;
reusing, with the software entity, an existing instance of the feature generation operator to generate the input features when an instance of the feature generation operator exists;
and creating, with the software entity, an instance of the feature generation operator to generate the input features when no instance of the feature generation operator exists.
12. The ranking method of any of claims 1-7, wherein ranking the search results using the machine learning model comprises:
converting the input features into the format required by the machine learning model;
and ranking the search results using the machine learning model according to the converted input features.
13. An apparatus for ranking search results, comprising:
a determining unit configured to determine a corresponding machine learning model according to a received request to rank search results, determine input features to be generated according to the machine learning model, and determine whether to store the input features according to whether the input features can be reused by other machine learning models;
and a ranking unit configured to rank the search results using the machine learning model according to the input features.
14. An apparatus for ranking search results, comprising:
a processor configured to determine a corresponding machine learning model according to a received request to rank search results, determine input features to be generated according to the machine learning model, determine whether to store the input features according to whether the input features can be reused by other machine learning models, and rank the search results using the machine learning model according to the input features;
and a memory configured to store the input features.
15. An apparatus for ranking search results, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of ranking search results of any of claims 1-12 based on instructions stored in the memory.
16. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of ranking search results of any one of claims 1-12.
CN201910419029.6A 2019-05-20 2019-05-20 Method and device for sorting search results and computer-readable storage medium Pending CN111782982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910419029.6A CN111782982A (en) 2019-05-20 2019-05-20 Method and device for sorting search results and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN111782982A (en) 2020-10-16

Family

ID=72755556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910419029.6A Pending CN111782982A (en) 2019-05-20 2019-05-20 Method and device for sorting search results and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111782982A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140156744A1 (en) * 2012-11-30 2014-06-05 Ming Hua Updating features based on user actions in online systems
CN106484766A (en) * 2016-09-07 2017-03-08 北京百度网讯科技有限公司 Searching method based on artificial intelligence and device
CN108334627A (en) * 2018-02-12 2018-07-27 北京百度网讯科技有限公司 Searching method, device and the computer equipment of new media content
CN108416028A (en) * 2018-03-09 2018-08-17 北京百度网讯科技有限公司 A kind of method, apparatus and server of search content resource
US20180349382A1 (en) * 2017-06-04 2018-12-06 Apple Inc. Methods and systems using linear expressions for machine learning models to rank search results
CN109299344A (en) * 2018-10-26 2019-02-01 Oppo广东移动通信有限公司 The generation method of order models, the sort method of search result, device and equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination