CN117807428A - Object characteristic value recognition model training method, object recognition method and device - Google Patents


Info

Publication number
CN117807428A
CN117807428A (application CN202211176728.0A)
Authority
CN
China
Prior art keywords
value
feature
object feature
recognition
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211176728.0A
Other languages
Chinese (zh)
Inventor
田明杨 (Tian Mingyang)
白冰 (Bai Bing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202211176728.0A
Publication of CN117807428A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose an object feature value recognition model training method, an object recognition method, and an object recognition device. One embodiment of the method comprises the following steps: acquiring the initial object features of each object corresponding to a target circulation service to obtain an initial object feature set; constructing a first object feature sample group and a second object feature sample group according to the initial object feature set; determining an initial object feature value recognition model based on a set model output type; and training the initial object feature value recognition model based on the first object feature sample group and the second object feature sample group to obtain a trained object feature value recognition model. This embodiment relates to digital marketing; it reduces deviation in the model's prediction results and improves the model's robustness.

Description

Object characteristic value recognition model training method, object recognition method and device
Technical Field
Embodiments of the present disclosure relate to the field of computers, and in particular to an object feature value recognition model training method, an object recognition method, and an object recognition device.
Background
A target circulation service generally refers to a service, offered by an online shopping platform, for accelerating the circulation of items; for example, it may be a marketing campaign service. Currently, incremental prediction for a target circulation service (the increment in goods sold or turnover brought by a marketing campaign) is generally done as follows: train a deep model on a single feature, and use the trained deep model to make predictions for the target circulation service.
However, this approach generally has the following technical problem: because the deep model is trained on a single feature and the influence of other features is not considered, the trained model's prediction results are biased and the model's robustness is low.
The information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This section is intended to introduce, in simplified form, concepts that are described further in the detailed description below. It is not intended to identify key or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an object feature value recognition model training method, an object recognition method, and a corresponding apparatus, electronic device, computer-readable medium, and program product, to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an object feature value recognition model training method, the method comprising: acquiring the initial object features of each object corresponding to a target circulation service to obtain an initial object feature set, wherein the initial object features in the initial object feature set include: object portrait features, value circulation features, and interest item features; constructing a first object feature sample group and a second object feature sample group according to the initial object feature set, wherein a first object feature sample in the first object feature sample group includes a first object feature, and a second object feature sample in the second object feature sample group includes a second object feature; determining an initial object feature value recognition model based on a set model output type; and training the initial object feature value recognition model based on the first object feature sample group and the second object feature sample group to obtain a trained object feature value recognition model.
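As a rough illustration (not the patented implementation — the function names, the split logic, and the trivial mean-predictor stand-in are all assumptions), the four training steps of the first aspect can be sketched as:

```python
# Illustrative sketch only: the patent does not specify the model or the split,
# so a trivial mean predictor stands in for the recognition model.

def build_sample_groups(initial_features, labels, first_count):
    """Pair each initial object feature with its set feature value label,
    then split into a first (smaller) and second (larger) sample group."""
    samples = list(zip(initial_features, labels))
    return samples[:first_count], samples[first_count:]

def choose_initial_model(output_type):
    """Determine an initial recognition model from the set model output type."""
    if output_type != "regression":
        raise ValueError("only a regression stand-in is sketched here")
    return {"mean": 0.0}

def train(model, first_group, second_group):
    """'Train' the stand-in model on both sample groups (fit the mean label)."""
    all_labels = [label for _features, label in first_group + second_group]
    model["mean"] = sum(all_labels) / len(all_labels)
    return model

first, second = build_sample_groups(
    [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]],
    [1.0, 2.0, 3.0, 4.0],
    first_count=1,
)
model = train(choose_initial_model("regression"), first, second)
print(model["mean"])  # 2.5
```

In practice the recognition model would be a learned model whose architecture depends on the set output type; the sketch only fixes the shape of the data flow.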
Optionally, the constructing a first object feature sample group and a second object feature sample group according to the initial object feature set includes: dividing the initial object feature set into a first initial object feature set and a second initial object feature set; constructing the first object feature sample group according to the first initial object feature set; and constructing the second object feature sample group according to the second initial object feature set.
Optionally, the constructing the first object feature sample group according to the first initial object feature set includes: selecting a first number of first initial object features from the first initial object feature set as a first object feature group; and, for each first object feature in the first object feature group, combining the first object feature with the set first object feature value label corresponding to the first object feature into a first object feature sample.
Optionally, the constructing the second object feature sample group according to the second initial object feature set includes: selecting a second number of second initial object features from the second initial object feature set as a second object feature group, wherein the second number is greater than the first number; and, for each second object feature in the second object feature group, combining the second object feature with the set second object feature value label corresponding to the second object feature into a second object feature sample.
In a second aspect, some embodiments of the present disclosure provide an object recognition method, the method comprising: acquiring a first object feature sample group and a second object feature sample group, wherein a first object feature sample in the first object feature sample group includes a first object feature, and a second object feature sample in the second object feature sample group includes a second object feature; inputting the first object feature sample group into a pre-trained object feature value recognition model to obtain a first object recognition feature value group, wherein a first object feature sample in the first object feature sample group corresponds to a first object recognition feature value in the first object recognition feature value group, and the object feature value recognition model is generated by the method described in any implementation of the first aspect; inputting the second object feature sample group into the object feature value recognition model to obtain a second object recognition feature value group, wherein a second object feature sample in the second object feature sample group corresponds to a second object recognition feature value in the second object recognition feature value group; and determining a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
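The scoring part of this flow can be reduced to running an arbitrary callable over each sample group; a minimal sketch (using `sum` as a stand-in scorer — the function names and the toy model are assumptions, not the disclosed model):

```python
def recognize_values(model, sample_group):
    """Run the recognition model over a sample group; each sample is a
    (features, label) pair and the model maps features to a feature value."""
    return [model(features) for features, _label in sample_group]

toy_model = sum  # stand-in scorer: sum of the feature vector
first_group = [([1, 2], 3.0), ([2, 3], 5.0)]
second_group = [([1, 1], 2.0)]
print(recognize_values(toy_model, first_group))   # [3, 5]
print(recognize_values(toy_model, second_group))  # [2]
```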
Optionally, the determining a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group includes: for each first object recognition feature value in the first object recognition feature value group, performing the following processing steps: selecting, from the second object recognition feature value group, at least one second object recognition feature value corresponding to the first object recognition feature value as a candidate second object recognition feature value group; and determining each object corresponding to the candidate second object recognition feature value group as a candidate object; combining the determined candidate objects into a candidate object set; performing de-duplication processing on the candidate object set to generate a de-duplicated candidate object set as a first candidate object set; and determining the target object group according to the first candidate object set and the first object recognition feature value group.
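A sketch of the candidate-selection and de-duplication steps above. The text only says the second values "correspond to" a first value, so the closeness rule (a tolerance on the absolute difference) and all names are assumptions:

```python
def first_candidate_object_set(first_values, second_scored, tol=0.5):
    """second_scored: (object_id, second recognition value) pairs.
    For each first recognition value, collect objects whose second value
    lies within `tol` of it, then de-duplicate while preserving order."""
    candidates = []
    for first_value in first_values:
        for obj, second_value in second_scored:
            if abs(second_value - first_value) <= tol:
                candidates.append(obj)
    # de-duplication step, yielding the first candidate object set
    return list(dict.fromkeys(candidates))

print(first_candidate_object_set([3.0], [("u1", 2.8), ("u2", 4.0), ("u1", 3.2)]))  # ['u1']
```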
Optionally, the determining the target object group according to the first candidate object set and the first object recognition feature value group includes: determining a second object recognition feature value variance and a second object recognition feature value mean according to each second object recognition feature value corresponding to the first candidate object set; determining a first object recognition feature value variance and a first object recognition feature value mean corresponding to the first object recognition feature value group; determining a second object feature value variance and a second object feature value mean according to each second object feature value label corresponding to the first candidate object set; determining a first object feature value variance and a first object feature value mean corresponding to the first object feature sample group; generating an object recognition feature standard deviation according to the first object recognition feature value variance, the first object recognition feature value mean, the second object recognition feature value variance, and the second object recognition feature value mean; generating an object feature standard deviation according to the first object feature value variance, the first object feature value mean, the second object feature value variance, and the second object feature value mean; and determining the first candidate object set as the target object group in response to determining that the object recognition feature standard deviation is less than or equal to the object feature standard deviation.
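The comparison above hinges on two scalar scores built from (mean, variance) pairs. The exact formula is not given at this level of the disclosure, so the sketch below uses one plausible choice — the gap between group means scaled by their pooled standard deviation — purely as an assumption:

```python
import math

def mean_var(values):
    """Population mean and variance of a list of values."""
    m = sum(values) / len(values)
    return m, sum((v - m) ** 2 for v in values) / len(values)

def gap_score(mean_a, var_a, mean_b, var_b):
    """Assumed 'standard deviation' score: |mean gap| scaled by the
    pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt((var_a + var_b) / 2)
    return abs(mean_a - mean_b) / pooled_sd if pooled_sd else abs(mean_a - mean_b)

def accept_candidates(first_pred, cand_pred, first_labels, cand_labels):
    """Keep the first candidate object set as the target object group when
    the prediction-side gap does not exceed the label-side gap."""
    rec_score = gap_score(*mean_var(first_pred), *mean_var(cand_pred))
    lab_score = gap_score(*mean_var(first_labels), *mean_var(cand_labels))
    return rec_score <= lab_score
```

Any other statistic built from the same four (mean, variance) pairs would fit the claim language equally well; the point of the sketch is only the acceptance test at the end.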
Optionally, the method further comprises: determining each object corresponding to the first object feature sample group as a first object group; for each preset time period in a preset time period group, performing the following processing steps: acquiring the target service value of each first object in the first object group within the preset time period as a first service value to obtain a first service value group; acquiring the target service value of each target object in the target object group within the preset time period as a second service value to obtain a second service value group; determining the sum of the first service values included in the first service value group as a first service total value; determining the sum of the second service values included in the second service value group as a second service total value; and determining the difference between the first service total value and the second service total value as a service increase value; determining the average of the determined service increase values as a target service increase value; and adjusting the target circulation service according to the target service increase value.
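The uplift computation in this step reduces to per-period sums and an average; a sketch under the assumption that per-period service values arrive as plain lists (names are illustrative):

```python
def target_service_increment(first_values_by_period, target_values_by_period):
    """For each preset time period, the service increase value is
    (total of the first group) - (total of the target group); the target
    service increase value is the average over all periods."""
    increments = [
        sum(first) - sum(target)
        for first, target in zip(first_values_by_period, target_values_by_period)
    ]
    return sum(increments) / len(increments)

# Two periods: increments are (10+20)-5 = 25 and 30-(10+10) = 10
print(target_service_increment([[10, 20], [30]], [[5], [10, 10]]))  # 17.5
```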
In a third aspect, some embodiments of the present disclosure provide an object feature value recognition model training apparatus, the apparatus comprising: an acquiring unit configured to acquire the initial object features of each object corresponding to a target circulation service to obtain an initial object feature set, wherein the initial object features in the initial object feature set include: object portrait features, value circulation features, and interest item features; a construction unit configured to construct a first object feature sample group and a second object feature sample group according to the initial object feature set, wherein a first object feature sample in the first object feature sample group includes a first object feature and a second object feature sample in the second object feature sample group includes a second object feature; a determining unit configured to determine an initial object feature value recognition model based on a set model output type; and a training unit configured to train the initial object feature value recognition model based on the first object feature sample group and the second object feature sample group to obtain a trained object feature value recognition model.
Optionally, the construction unit is further configured to: divide the initial object feature set into a first initial object feature set and a second initial object feature set; construct the first object feature sample group according to the first initial object feature set; and construct the second object feature sample group according to the second initial object feature set.
Optionally, the construction unit is further configured to: select a first number of first initial object features from the first initial object feature set as a first object feature group; and, for each first object feature in the first object feature group, combine the first object feature with the set first object feature value label corresponding to the first object feature into a first object feature sample.
Optionally, the construction unit is further configured to: select a second number of second initial object features from the second initial object feature set as a second object feature group, wherein the second number is greater than the first number; and, for each second object feature in the second object feature group, combine the second object feature with the set second object feature value label corresponding to the second object feature into a second object feature sample.
In a fourth aspect, some embodiments of the present disclosure provide an object recognition apparatus, the apparatus comprising: an acquisition unit configured to acquire a first object feature sample group and a second object feature sample group, wherein a first object feature sample in the first object feature sample group includes a first object feature and a second object feature sample in the second object feature sample group includes a second object feature; a first input unit configured to input the first object feature sample group into a pre-trained object feature value recognition model to obtain a first object recognition feature value group, wherein a first object feature sample in the first object feature sample group corresponds to a first object recognition feature value in the first object recognition feature value group, and the object feature value recognition model is generated by the method described in any implementation of the first aspect; a second input unit configured to input the second object feature sample group into the object feature value recognition model to obtain a second object recognition feature value group, wherein a second object feature sample in the second object feature sample group corresponds to a second object recognition feature value in the second object recognition feature value group; and a determining unit configured to determine a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
Optionally, the determining unit is further configured to: for each first object recognition feature value in the first object recognition feature value group, perform the following processing steps: selecting, from the second object recognition feature value group, at least one second object recognition feature value corresponding to the first object recognition feature value as a candidate second object recognition feature value group; and determining each object corresponding to the candidate second object recognition feature value group as a candidate object; combine the determined candidate objects into a candidate object set; perform de-duplication processing on the candidate object set to generate a de-duplicated candidate object set as a first candidate object set; and determine the target object group according to the first candidate object set and the first object recognition feature value group.
Optionally, the determining unit is further configured to: determine a second object recognition feature value variance and a second object recognition feature value mean according to each second object recognition feature value corresponding to the first candidate object set; determine a first object recognition feature value variance and a first object recognition feature value mean corresponding to the first object recognition feature value group; determine a second object feature value variance and a second object feature value mean according to each second object feature value label corresponding to the first candidate object set; determine a first object feature value variance and a first object feature value mean corresponding to the first object feature sample group; generate an object recognition feature standard deviation according to the first object recognition feature value variance, the first object recognition feature value mean, the second object recognition feature value variance, and the second object recognition feature value mean; generate an object feature standard deviation according to the first object feature value variance, the first object feature value mean, the second object feature value variance, and the second object feature value mean; and determine the first candidate object set as the target object group in response to determining that the object recognition feature standard deviation is less than or equal to the object feature standard deviation.
Optionally, the object recognition apparatus further comprises: an object determining unit configured to determine each object corresponding to the first object feature sample group as a first object group; a service value processing unit configured to perform, for each preset time period in a preset time period group, the following processing steps: acquiring the target service value of each first object in the first object group within the preset time period as a first service value to obtain a first service value group; acquiring the target service value of each target object in the target object group within the preset time period as a second service value to obtain a second service value group; determining the sum of the first service values included in the first service value group as a first service total value; determining the sum of the second service values included in the second service value group as a second service total value; and determining the difference between the first service total value and the second service total value as a service increase value; an increase value determining unit configured to determine the average of the determined service increase values as a target service increase value; and a service adjustment unit configured to adjust the target circulation service according to the target service increase value.
In a fifth aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first or second aspects above.
In a sixth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements the method described in any of the implementations of the first or second aspects above.
In a seventh aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the method described in any of the implementations of the first or second aspects above.
The above embodiments of the present disclosure have the following beneficial effects: the object feature value recognition model training method of some embodiments of the present disclosure reduces deviation in the model's prediction results and improves the model's robustness. Specifically, the reason the robustness of related models is low is that a deep model is trained on a single feature, without considering the influence of other features, so the trained model's predictions are biased. Based on this, in the object feature value recognition model training method of some embodiments of the present disclosure, first, the initial object features of each object corresponding to a target circulation service are acquired to obtain an initial object feature set, where the initial object features include object portrait features, value circulation features, and interest item features. Training the model with multiple features thus reduces deviation in the model's predictions. Then, a first object feature sample group and a second object feature sample group are constructed from the initial object feature set, so the model is trained on two different sample groups, improving the reliability of its predictions. Next, an initial object feature value recognition model is determined based on the set model output type, so the trained model better fits the service requirements and its recognition accuracy is improved. Finally, the initial object feature value recognition model is trained based on the first object feature sample group and the second object feature sample group to obtain a trained object feature value recognition model.
Thus, the accuracy of the trained model's predictions is improved, the deviation of the prediction results is reduced, and the robustness of the model is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of an object feature value recognition model training method according to some embodiments of the present disclosure;
FIG. 2 is a schematic illustration of an application scenario of an object recognition method according to some embodiments of the present disclosure;
FIG. 3 is a flow chart of some embodiments of an object feature value recognition model training method according to the present disclosure;
FIG. 4 is a flow chart of some embodiments of an object recognition method according to the present disclosure;
FIG. 5 is a schematic diagram of the structure of some embodiments of an object feature value recognition model training apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of some embodiments of an object recognition device according to the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "one", and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before operations involved in the present disclosure, such as the collection, storage, and use of a user's personal information (e.g., initial object features), are performed, the relevant organization or individual should fulfill its obligations, including conducting a personal information security impact assessment, fulfilling the duty to inform the personal information subject, obtaining the prior authorized consent of the personal information subject, and complying with relevant laws and regulations.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an object feature value recognition model training method according to some embodiments of the present disclosure.
The application scenario of fig. 1 includes: a computing device 100, a first object feature sample group 101, a second object feature sample group 102, an initial object feature value recognition model 103, and an object feature value recognition model 104. The first object feature sample group 101 includes a first object feature sample 1011, and the second object feature sample group 102 includes a second object feature sample 1021.
The first object feature sample 1011 may include: object portrait features, value circulation features, and interest item features. Here, the object portrait features may include features such as name, age, and gender. For example, the object portrait features may be "name: Zhang XX; age: 26; gender: male". The value circulation features may refer to behavioral features, such as browsing and purchasing, for items involved in a certain marketing campaign. For example, the value circulation features may include "marketing campaign A: browse count, purchase frequency", and the like. The interest item features may refer to features of items favored by the object (i.e., the user). For example, the interest item features may be item features such as "cosmetics, digital products, household appliances". The second object feature sample 1021 may likewise include object portrait features, value circulation features, and interest item features. For example, the object portrait features may be "name: Li XX; age: 25; gender: female". The value circulation features may be "marketing campaign A: browse count, not purchased", and the like, where "not purchased" may indicate that the user did not purchase an item corresponding to the target circulation service. The interest item features may be item features such as "cosmetics, digital products".
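The three feature families in these examples could be held in a simple record like the following; the field names and example values are illustrative assumptions, not the patent's schema:

```python
from dataclasses import dataclass, field

@dataclass
class InitialObjectFeature:
    """One user's initial object features, as illustrated for fig. 1."""
    portrait: dict = field(default_factory=dict)    # e.g. name / age / gender
    value_flow: dict = field(default_factory=dict)  # per-campaign browse/purchase behavior
    interests: list = field(default_factory=list)   # favored item categories

sample = InitialObjectFeature(
    portrait={"name": "Zhang XX", "age": 26, "gender": "male"},
    value_flow={"campaign_A": {"browse_count": 12, "purchased": True}},
    interests=["cosmetics", "digital products", "household appliances"],
)
print(sample.portrait["age"])  # 26
```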
In practice, first, the computing device 100 may acquire the initial object features of each object corresponding to the target circulation service to obtain an initial object feature set, where the initial object features in the initial object feature set include: object portrait features, value circulation features, and interest item features. The computing device 100 may then construct the first object feature sample group 101 and the second object feature sample group 102 from the initial object feature set, where the first object feature sample 1011 in the first object feature sample group 101 includes a first object feature and the second object feature sample 1021 in the second object feature sample group 102 includes a second object feature. The computing device 100 may then determine an initial object feature value recognition model 103 based on the set model output type. Finally, the computing device 100 may train the initial object feature value recognition model 103 based on the first object feature sample group 101 and the second object feature sample group 102 to obtain a trained object feature value recognition model 104.
Fig. 2 is a schematic diagram of one application scenario of an object recognition method according to some embodiments of the present disclosure.
In the application scenario of fig. 2, first, the computing device 200 may acquire a first object feature sample group 201 and a second object feature sample group 202, where the first object feature samples in the first object feature sample group 201 include first object features and the second object feature samples in the second object feature sample group 202 include second object features. The computing device 200 may then input the first object feature sample group 201 into a pre-trained object feature value recognition model 203 to obtain a first object recognition feature value group 204, where each first object feature sample in the first object feature sample group 201 corresponds to a first object recognition feature value in the first object recognition feature value group 204, and the object feature value recognition model 203 is generated by the method described in any implementation of the first aspect. Next, the computing device 200 may input the second object feature sample group 202 into the object feature value recognition model 203 to obtain a second object recognition feature value group 205, where each second object feature sample in the second object feature sample group 202 corresponds to a second object recognition feature value in the second object recognition feature value group 205. Finally, the computing device 200 may determine the target object group 206 corresponding to the second object feature sample group 202 according to the first object recognition feature value group 204 and the second object recognition feature value group 205.
It should be noted that the computing device 200 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or as a single server or a single terminal device. When the computing device is software, it may be installed in the hardware devices listed above, and may be implemented as a plurality of software modules (for example, for providing distributed services) or as a single software module. No particular limitation is made here.
It should be understood that the number of computing devices in fig. 2 is merely illustrative. There may be any number of computing devices, as desired for an implementation.
With continued reference to fig. 3, a flow 300 of some embodiments of an object feature value recognition model training method according to the present disclosure is shown. The object characteristic value recognition model training method comprises the following steps:
step 301, obtaining an initial object feature set of each object corresponding to the target circulation service.
In some embodiments, the execution body of the object feature value recognition model training method (for example, the computing device 100 in fig. 1) may obtain, through a wired or wireless connection, an initial object feature set of each object corresponding to the target circulation service from the terminal device. Wherein, the initial object features in the initial object feature set include: object representation features, value flow features, and interest item features. Here, the target circulation service may refer to a service launched by the e-commerce platform for accelerating the circulation of articles. For example, the target circulation service may refer to a marketing campaign service. An object may refer to a user shopping on the e-commerce platform; that is, the object may be used to characterize the user. The initial object features may refer to user features that have not been data-cleaned. The object representation feature may refer to a portrait feature of the user. The value flow feature may refer to a feature describing the flow of value generated by the user's shopping on the e-commerce platform. The interest item feature may refer to a feature of an item the user is interested in. For example, the object representation features may include "age" and "gender". The value flow features may include "number of browses", "frequency of purchases", and "total value of purchased items (total amount of consumption)". The interest item features may include "item category" and "item name".
Step 302, constructing a first object feature sample set and a second object feature sample set according to the initial object feature set.
In some embodiments, the executing entity may construct the first object feature sample set and the second object feature sample set according to the initial object feature set.
In practice, the execution subject may construct the first object feature sample set and the second object feature sample set by:
first, for each initial object feature in the initial object feature set, determining an object identifier of the object corresponding to the initial object feature. The object identifier may be an identifier used by the e-commerce platform to distinguish different types of users.
And a second step of determining each initial object feature in the initial object feature set whose corresponding object identifier is the first object identifier as a first object feature group. Here, the first object identifier may represent a user who wants to acquire an item corresponding to the target circulation service. For example, the first object identifier may characterize that the user has received a discount coupon (coupon) for an item corresponding to the target circulation service.
And a third step of determining each initial object feature in the initial object feature set whose corresponding object identifier is the second object identifier as a second object feature group. The second object identifier may represent a user not interested in the item corresponding to the target circulation service. For example, the second object identifier may characterize that the user has not received a discount coupon (coupon) for an item corresponding to the target circulation service.
Fourth, for each first object feature in the first object feature group, the first object feature is spliced with a set first object feature value label corresponding to the first object feature to form a first object feature sample. Here, the first object feature value tag may represent an acquired feature value of an item corresponding to the target circulation service by the user. For example, the first object characteristic value tag may represent a purchase amount or a consumption amount.
And fifthly, for each second object feature in the second object feature group, splicing the second object feature with a set second object feature value label corresponding to the second object feature into a second object feature sample. Here, the second object feature value tag may represent an acquired feature value of an item corresponding to the target circulation service by the user. For example, the second object characteristic value tag may represent a purchase amount or a consumption amount.
In some optional implementations of some embodiments, the executing entity may construct the first object feature sample set and the second object feature sample set by:
the first step is to divide the initial object feature set into a first initial object feature set and a second initial object feature set. In practice, first, the execution body may divide each initial object feature of the object identifier corresponding to the initial object feature set as the first object identifier into a first initial object feature group. Then, the execution body may divide each initial object feature of the object identifier corresponding to the initial object feature set as the second object identifier into a second initial object feature group.
And a second step of constructing a first object feature sample set according to the first initial object feature set.
In practice, the second step may comprise the sub-steps of:
a first sub-step of selecting a first number of first initial object features from the first initial object feature group as the first object feature group. The setting of the first number is not limited here. In practice, the first number of first initial object features may be randomly selected from the first initial object feature group.
And a second sub-step of combining the first object feature and a set first object feature value label corresponding to the first object feature into a first object feature sample for each first object feature in the first object feature group. The first object feature value tag may represent an acquired feature value of an object (user) for an item corresponding to the target circulation service. For example, the first object characteristic value tag may represent an item purchase amount or a consumption amount. Here, combining may refer to stitching.
And thirdly, constructing a second object feature sample set according to the second initial object feature set.
In practice, the third step may comprise the sub-steps of:
And a first sub-step of selecting a second number of second initial object features from the second initial object feature group as the second object feature group. Wherein the second number is greater than the first number. Here, the second number may be a preset multiple of the first number; for example, the second number may be 10-20 times the first number. In practice, the second number of second initial object features may be randomly selected from the second initial object feature group.
And a second sub-step of combining, for each second object feature in the second object feature group, the second object feature with a set second object feature value label corresponding to the second object feature as a second object feature sample. The second object feature value tag may represent an acquired feature value of the object (user) for the object corresponding to the target circulation service. For example, the second object characteristic value tag may represent an item purchase amount or a consumption amount. Here, combining may refer to stitching.
It should be noted that, before the first object feature sample set and the second object feature sample set are constructed, the initial object feature set needs to be normalized. Here, the normalization processing may refer to removing abnormal initial object features from the initial object feature set. For example, an abnormal initial object feature may be a blank initial object feature.
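The construction described above — splitting the initial features by object identifier, randomly sampling a first number and a larger second number, and combining each feature with its value label — can be sketched in Python. The data layout and all names below are illustrative assumptions, not taken from the patent:

```python
import random

def build_sample_groups(initial_features, first_count, multiple=10, seed=0):
    """Split features by object identifier, subsample, and attach value labels.

    `initial_features` is an assumed layout: a list of dicts with keys
    "identifier" (1 for objects matching the first object identifier,
    2 for the second), "feature" (a cleaned feature vector) and
    "value_label" (e.g. a purchase or consumption amount).
    """
    rng = random.Random(seed)
    first_pool = [f for f in initial_features if f["identifier"] == 1]
    second_pool = [f for f in initial_features if f["identifier"] == 2]

    # First group: a first number of randomly chosen first-identifier objects.
    first_group = rng.sample(first_pool, min(first_count, len(first_pool)))
    # Second group: a preset multiple (e.g. 10-20x) of second-identifier objects.
    second_count = min(first_count * multiple, len(second_pool))
    second_group = rng.sample(second_pool, second_count)

    # Combine each object feature with its value label into a sample.
    first_samples = [(f["feature"], f["value_label"]) for f in first_group]
    second_samples = [(f["feature"], f["value_label"]) for f in second_group]
    return first_samples, second_samples
```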
Step 303, determining an initial object characteristic value identification model based on the set model output type.
In some embodiments, the executing entity may determine the initial object feature value recognition model based on the set model output type. In practice, the execution subject may select, from the respective candidate initial neural network models, the initial neural network model matching the model output type as the initial object feature value recognition model. Here, the model output type may be the result type that the model is set to output. For example, when the result output by the model represents incremental data (e.g., the user's consumption amount or amount of purchased items), a regression model should be adopted for training. When the result output by the model represents change-type data (e.g., the ratio of the user's purchase count to browse count for the item corresponding to the target circulation service), a classification model should be adopted for training. For example, the determined initial object feature value recognition model may be an initial regression NN (neural network) model.
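The dispatch on the set model output type can be sketched as a small selection function; the type names and the returned model identifiers are assumptions for illustration:

```python
def select_initial_model(model_output_type):
    """Pick an initial model family from the configured output type.

    "incremental" covers results such as a consumption amount; "change"
    covers ratio-type results such as purchases / browses. Both the type
    names and the returned identifiers are illustrative, not from the text.
    """
    dispatch = {
        "incremental": "regression_nn",   # incremental data -> regression model
        "change": "classification_nn",    # change-type data -> classification model
    }
    if model_output_type not in dispatch:
        raise ValueError(f"unknown model output type: {model_output_type}")
    return dispatch[model_output_type]
```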
Step 304, training the initial object feature value recognition model based on the first object feature sample set and the second object feature sample set to obtain a trained object feature value recognition model.
In some embodiments, the execution body may train the initial object feature value recognition model based on the first object feature sample set and the second object feature sample set, to obtain a trained object feature value recognition model.
In practice, based on the first object feature sample set and the second object feature sample set, the execution subject may train the initial object feature value recognition model to obtain a trained object feature value recognition model by:
first, determining the network structure of the initial object feature value recognition model and initializing the network parameters of the initial object feature value recognition model.
And a second step of training the initial object feature value recognition model by a deep learning method. Specifically, both the first object feature samples in the first object feature sample set and the second object feature samples in the second object feature sample set are taken as training samples; the first object features included in the first object feature sample set and the second object features included in the second object feature sample set are taken as inputs to the initial object feature value recognition model; and the first object feature value tags included in the first object feature sample set and the second object feature value tags included in the second object feature sample set are taken as the desired outputs of the initial object feature value recognition model.
And thirdly, taking the initial object characteristic value recognition model obtained through training as the object characteristic value recognition model after training.
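A minimal stand-in for this training procedure, substituting a plain linear model fitted by batch gradient descent for the unspecified regression NN (the loss, optimizer, and architecture are not given in the text, so everything below is assumed):

```python
def train_value_model(first_samples, second_samples, lr=0.05, epochs=1000):
    """Fit a linear stand-in for the object feature value recognition model.

    Each sample is (feature_vector, value_label); both sample groups are
    pooled into one training set, mirroring the joint training above."""
    samples = first_samples + second_samples
    dim = len(samples[0][0])
    weights, bias = [0.0] * dim, 0.0
    n = len(samples)
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * dim, 0.0
        for features, label in samples:
            # Squared-error gradient for one sample.
            error = sum(w * x for w, x in zip(weights, features)) + bias - label
            for i, x in enumerate(features):
                grad_w[i] += error * x
            grad_b += error
        weights = [w - lr * g / n for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b / n
    return weights, bias
```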
The above embodiments of the present disclosure have the following beneficial effects: the object feature value recognition model training method of some embodiments of the present disclosure reduces the deviation of the model prediction result and improves the robustness of the model. In particular, the reason why the robustness of related models is low is that: a deep model is trained with a single feature, without considering the influence of other features, so that the prediction result of the trained model is biased and the robustness of the model is low. Based on this, in the object feature value recognition model training method of some embodiments of the present disclosure, first, the initial object features of each object corresponding to the target circulation service are acquired to obtain an initial object feature set. Wherein, the initial object features in the initial object feature set include: object representation features, value flow features, and interest item features. Thus, by training the model with a plurality of features, the deviation of the model prediction result is reduced. Then, a first object feature sample set and a second object feature sample set are constructed according to the initial object feature set. Therefore, the model is trained on two different sample groups, which improves the reliability of the model prediction result. Then, an initial object feature value recognition model is determined based on the set model output type. Therefore, the trained model better fits the service requirement, improving the accuracy of model recognition. Finally, the initial object feature value recognition model is trained based on the first object feature sample set and the second object feature sample set to obtain a trained object feature value recognition model.
Therefore, the accuracy of the result prediction of the trained model is improved, the deviation of the result of the model prediction is reduced, and the robustness of the model is improved.
With further reference to fig. 4, some embodiments of an object recognition method according to the present disclosure are shown. The object identification method comprises the following steps:
step 401, acquiring a first object feature sample set and a second object feature sample set.
In some embodiments, the execution subject of the object recognition method (e.g., computing device 200 in fig. 2) may obtain the first object feature sample set and the second object feature sample set from the terminal device through a wired or wireless connection. The first object feature samples in the first object feature sample set include first object features and first object feature value tags, and the second object feature samples in the second object feature sample set include second object features and second object feature value tags. The first object features may include object representation features, value flow features, and interest item features; the second object features may likewise include object representation features, value flow features, and interest item features. The object representation feature may refer to a portrait feature of the user. The value flow feature may refer to a feature describing the flow of value generated by the user's shopping on the e-commerce platform. The interest item feature may refer to a feature of an item the user is interested in. For example, the object representation features may include "age" and "gender". The value flow features may include "number of browses", "frequency of purchases", and "total value of purchased items". The interest item features may include "item category" and "item name". Here, each first object feature sample in the first object feature sample set corresponds to an object (user), and each second object feature sample in the second object feature sample set corresponds to an object (user). The first object feature value tag may represent the user's acquired feature value for the item corresponding to the target circulation service. For example, the first object feature value tag may represent a purchase amount or a consumption amount.
Here, the target circulation service may refer to a service for accelerating the circulation of the article, which is pushed out by the electronic commerce platform. For example, the target flow service may refer to a marketing campaign service. Here, the second object feature value tag may represent an acquired feature value of an item corresponding to the target circulation service by the user. For example, the second object characteristic value tag may represent a purchase amount or a consumption amount.
Step 402, inputting the first object feature sample set into a pre-trained object feature value recognition model to obtain a first object recognition feature value set.
In some embodiments, the executing body may input the first object feature sample set into a pre-trained object feature value recognition model to obtain a first object recognition feature value set. Wherein the first object feature samples in the first object feature sample set correspond to the first object recognition feature values in the first object recognition feature value set, and the object feature value recognition model is generated by an object feature value recognition model training method according to some embodiments of the present disclosure. In practice, the executing body may input the first object feature included in each first object feature sample in the first object feature sample set into the pre-trained object feature value recognition model to generate a first object recognition feature value, so as to obtain the first object recognition feature value set. For example, the pre-trained object feature value recognition model may be a trained regression NN (neural network) model.
Step 403, inputting the second object feature sample set into the object feature value recognition model to obtain a second object recognition feature value set.
In some embodiments, the executing body may input the second object feature sample set into the object feature value recognition model to obtain a second object recognition feature value set. Wherein the second object feature samples in the second object feature sample set correspond to the second object recognition feature values in the second object recognition feature value set. In practice, the executing body may input the second object feature included in each second object feature sample in the second object feature sample set into the object feature value recognition model to generate a second object recognition feature value, so as to obtain the second object recognition feature value set.
Step 404, determining a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
In some embodiments, the execution body may determine a target object group corresponding to the second object feature sample group according to the first object identification feature value group and the second object identification feature value group.
In practice, according to the first object recognition feature value set and the second object recognition feature value set, the execution body may determine a target object set corresponding to the second object feature sample set by:
A first step of executing the following processing steps for each first object recognition feature value in the first object recognition feature value group:
a first sub-step of selecting, from the second object recognition feature value group, at least one second object recognition feature value identical to the first object recognition feature value as a candidate second object recognition feature value group;
and a second sub-step of determining each object corresponding to the candidate second object recognition feature value group as a candidate object group.
And a second step of merging the determined candidate object groups into a candidate object set.
And thirdly, performing de-duplication processing on the candidate object set to generate a de-duplication candidate object set as a target object group.
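The matching and de-duplication steps above can be sketched as follows; the dictionary layout mapping object ids to recognized feature values is an assumed representation:

```python
def select_target_objects(first_values, second_values):
    """Determine the target object group by exact recognition-value matching.

    `first_values` / `second_values` map object ids to the recognized
    feature values of the first and second groups (an assumed layout).
    For each first-group value, the second-group objects with an identical
    recognized value are collected; the set performs the de-duplication."""
    target = set()
    for v1 in first_values.values():
        for obj_id, v2 in second_values.items():
            if v2 == v1:
                target.add(obj_id)
    return target
```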
In some optional implementations of some embodiments, according to the first object recognition feature value set and the second object recognition feature value set, the execution body may further determine a target object set corresponding to the second object feature sample set by:
a first step of executing the following processing steps for each first object recognition feature value in the first object recognition feature value group:
a first sub-step of selecting, from the second object recognition feature value group, at least one second object recognition feature value corresponding to the first object recognition feature value as a candidate second object recognition feature value group. That is, the difference between each candidate second object recognition feature value in the candidate second object recognition feature value group and the first object recognition feature value is less than or equal to a preset difference. Here, the setting of the preset difference is not limited.
And a second sub-step of determining each object corresponding to the candidate second object recognition feature value group as a candidate object group.
And a second step of merging the determined candidate object groups into a candidate object set.
And thirdly, performing de-duplication processing on the candidate object set to generate a de-duplication candidate object set as a first candidate object set.
And step four, determining a target object group according to the first candidate object set and the first object identification characteristic value group.
In practice, the fourth step may comprise the sub-steps of:
and a first sub-step of determining a second object recognition characteristic value variance and a second object recognition characteristic value mean according to each second object recognition characteristic value corresponding to the first candidate object set. In practice, first, the variance of each second object recognition feature value corresponding to the above-described first candidate object set may be determined as the second object recognition feature value variance. Then, an average value of the respective second object recognition feature values corresponding to the first candidate object set may be determined as a second object recognition feature value average value.
And a second sub-step of determining a first object recognition feature value variance and a first object recognition feature value mean corresponding to the first object recognition feature value set. In practice, first, the variance of each first object recognition feature value included in the above-described first object recognition feature value group may be determined as the first object recognition feature value variance. Then, an average value of the respective first object recognition feature values included in the above-described first object recognition feature value group may be determined as a first object recognition feature value average value.
And a third sub-step of determining a second object eigenvalue variance and a second object eigenvalue mean according to each second object eigenvalue label corresponding to the first candidate object set. In practice, first, the variance of each numerical value represented by each of the above-described second object feature value tags may be determined as the second object feature value variance. Then, the average value of the values represented by the second object feature value tags may be determined as a second object feature value average value.
And a fourth sub-step of determining a first object feature value variance and a first object feature value mean corresponding to the first object feature sample set. In practice, first, the variance of the numerical value represented by each first object feature value label corresponding to the first object feature sample set is determined as the first object feature value variance. And then, determining the average value of the values represented by the first object feature value labels corresponding to the first object feature sample group as a first object feature value average value.
And a fifth sub-step of generating an object recognition feature standardized mean difference based on the first object recognition feature value variance, the first object recognition feature value mean, the second object recognition feature value variance and the second object recognition feature value mean. In practice, first, the difference between the first object recognition feature value mean and the second object recognition feature value mean is determined as the object recognition feature value mean difference. Then, the square root of one half of the sum of the first object recognition feature value variance and the second object recognition feature value variance is taken as the pooled object recognition feature value standard deviation. Finally, the ratio of the object recognition feature value mean difference to the pooled object recognition feature value standard deviation is determined as the object recognition feature standardized mean difference.
And a sixth sub-step of generating an object feature standardized mean difference according to the first object feature value variance, the first object feature value mean, the second object feature value variance and the second object feature value mean. In practice, first, the difference between the first object feature value mean and the second object feature value mean is determined as the object feature value mean difference. Then, the square root of one half of the sum of the first object feature value variance and the second object feature value variance is taken as the pooled object feature value standard deviation. Finally, the ratio of the object feature value mean difference to the pooled object feature value standard deviation is determined as the object feature standardized mean difference.
A seventh sub-step of determining, in response to determining that the object recognition feature standardized mean difference is less than or equal to the object feature standardized mean difference, the first candidate object set as the target object group.
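The fifth to seventh sub-steps amount to comparing two standardized mean differences; a sketch under that reading (population variance and a literal less-than-or-equal comparison, both assumptions where the text is silent):

```python
import math

def mean(values):
    return sum(values) / len(values)

def variance(values):
    # Population variance, an assumed choice.
    m = mean(values)
    return sum((x - m) ** 2 for x in values) / len(values)

def standardized_mean_diff(group_a, group_b):
    # (mean_a - mean_b) / sqrt((var_a + var_b) / 2)
    pooled = math.sqrt((variance(group_a) + variance(group_b)) / 2)
    return (mean(group_a) - mean(group_b)) / pooled

def accept_candidate_set(rec_first, rec_second, label_first, label_second):
    # Keep the first candidate object set as the target object group only when
    # the recognition-value difference does not exceed the label-value one.
    return (standardized_mean_diff(rec_first, rec_second)
            <= standardized_mean_diff(label_first, label_second))
```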
Optionally, each object corresponding to the first object feature sample set is determined as a first object set.
In some embodiments, the execution body may determine each object corresponding to the first object feature sample set as the first object set.
Optionally, for each preset time period in the set of preset time periods, the following processing steps are performed:
the first step is to obtain a target service value of each first object in the first object group in the preset time period as a first service value, and obtain a first service value group. In practice, the executing body may acquire, from the terminal device, the target service value of each first object in the first object group in the preset time period as the first service value by using a wired connection or a wireless connection manner, so as to obtain the first service value group. Here, the target service value may refer to an acquired feature value of an object corresponding to the target circulation service by the user. For example, the target business value may represent an item purchase amount or a consumption amount.
And a second step of obtaining a target service value of each target object in the target object group in the preset time period as a second service value to obtain a second service value group. In practice, the executing body may acquire, from the terminal device, the target service value of each target object in the target object group in the preset time period as the second service value by using a wired connection or a wireless connection manner, so as to obtain the second service value group.
And thirdly, determining the sum of the first service values included in the first service value group as a first service total value.
And fourth, determining the sum of the second service values included in the second service value group as a second service total value.
And fifthly, determining a difference value between the first service total value and the second service total value as a service increasing value.
It should be noted that the preset time period in the preset time period group may be a preset future time period. The duration of the preset time period is not limited. For example, the duration of the preset time period may be 5 days.
Therefore, target service values in a plurality of time periods can be acquired, and cross verification is performed so as to improve the confidence of the service increasing value.
Optionally, an average value of the determined individual service increment values is determined as the target service increment value.
In some embodiments, the executing entity may determine an average value of the determined service increment values as the target service increment value.
Optionally, according to the target service increasing value, performing service adjustment on the target circulation service.
In some embodiments, the executing entity may perform service adjustment on the target circulation service according to the target service increment value. In practice, in response to the target service increment value being smaller than or equal to the preset increment value, the inventory reserve of the articles corresponding to the target circulation service can be reduced.
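The per-period increment computation and the inventory adjustment can be sketched together; the nested-list layout and the 0.9 reduction factor are illustrative assumptions:

```python
def target_service_increment(first_values_by_period, target_values_by_period):
    """Average, over the preset time periods, of (total first-group value -
    total target-group value); the nested-list layout is assumed."""
    increments = [
        sum(first_vals) - sum(target_vals)
        for first_vals, target_vals in zip(first_values_by_period,
                                           target_values_by_period)
    ]
    return sum(increments) / len(increments)

def adjust_inventory(increment, preset_increment, stock, factor=0.9):
    # When the increment does not exceed the preset value, reduce the
    # inventory reserve; the 0.9 reduction factor is illustrative.
    return stock * factor if increment <= preset_increment else stock
```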
As can be seen from fig. 4, the process 400 in some embodiments corresponding to fig. 4 can screen out objects with smaller differences based on two different sets of samples. Therefore, the incremental data brought by the target circulation business can be more accurately determined according to the selected objects. Furthermore, the business adjustment of the target circulation business is facilitated, so that the dispatching efficiency of the objects corresponding to the target circulation business is improved.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an object feature value recognition model training apparatus, which correspond to those method embodiments shown in fig. 3, and which are particularly applicable to various electronic devices.
As shown in fig. 5, the object feature value recognition model training apparatus 500 of some embodiments includes: an acquisition unit 501, a construction unit 502, a determination unit 503, and a training unit 504. The acquiring unit 501 is configured to acquire an initial object feature of each object corresponding to the target circulation service, and obtain an initial object feature set, where the initial object feature in the initial object feature set includes: object representation features, value flow features, and interest item features; a construction unit 502 configured to construct a first object feature sample group and a second object feature sample group according to the initial object feature set, wherein a first object feature sample in the first object feature sample group includes a first object feature and a first object feature value tag, and a second object feature sample in the second object feature sample group includes a second object feature and a second object feature value tag; a determining unit 503 configured to determine an initial object feature value identification model based on the set model output type; training unit 504 is configured to train the initial object feature value recognition model based on the first object feature sample set and the second object feature sample set, and obtain a trained object feature value recognition model.
Optionally, the construction unit 502 is further configured to: dividing the initial object feature set into a first initial object feature set and a second initial object feature set; constructing a first object feature sample set according to the first initial object feature set; and constructing a second object characteristic sample set according to the second initial object characteristic set.
Optionally, the construction unit 502 is further configured to: selecting a first number of first initial object features from the first initial object feature set as a first object feature set; for each first object feature in the first object feature group, combining the first object feature with a set first object feature value tag corresponding to the first object feature into a first object feature sample.
Optionally, the construction unit 502 is further configured to: selecting a second number of second initial object features from the second initial object feature set as a second object feature set, wherein the second number is greater than the first number; and combining the second object feature with a set second object feature value label corresponding to the second object feature for each second object feature in the second object feature group to form a second object feature sample.
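The two optional construction steps above (split the initial feature set, draw a first number and a larger second number of features, pair each with its set value label) can be sketched as follows. The label function is a hypothetical stand-in, since the patent leaves the labelling scheme unspecified.

```python
import random

def construct_sample_groups(initial_features, first_count, second_count,
                            label_fn, seed=0):
    """Sketch of the construction unit: split the initial object feature set
    into two halves, then draw first_count / second_count features and pair
    each with its set object feature value label via label_fn."""
    if second_count <= first_count:
        # The second number must be greater than the first number.
        raise ValueError("second_count must exceed first_count")
    rng = random.Random(seed)
    shuffled = list(initial_features)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    first_initial, second_initial = shuffled[:half], shuffled[half:]
    first_group = [(f, label_fn(f)) for f in first_initial[:first_count]]
    second_group = [(f, label_fn(f)) for f in second_initial[:second_count]]
    return first_group, second_group
```

The even split and the random shuffle are assumptions; the patent only requires that the set be divided and that the second sample group be the larger of the two.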
It will be appreciated that the elements described in the object feature value recognition model training apparatus 500 correspond to the respective steps in the method described with reference to fig. 3. Thus, the operations, features and advantages described above for the method are equally applicable to the object feature value recognition model training apparatus 500 and the units contained therein, and are not described herein.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an object recognition apparatus, which correspond to those method embodiments shown in fig. 4, and which are particularly applicable in various electronic devices.
As shown in fig. 6, an object recognition apparatus 600 of some embodiments includes: an acquisition unit 601, a first input unit 602, a second input unit 603, and a determination unit 604. The acquisition unit 601 is configured to acquire a first object feature sample group and a second object feature sample group, where a first object feature sample in the first object feature sample group includes a first object feature, and a second object feature sample in the second object feature sample group includes a second object feature. The first input unit 602 is configured to input the first object feature sample group into a pre-trained object feature value recognition model to obtain a first object recognition feature value group, where a first object feature sample in the first object feature sample group corresponds to a first object recognition feature value in the first object recognition feature value group, and the object feature value recognition model is generated by an object feature value recognition model training method according to some embodiments of the present disclosure. The second input unit 603 is configured to input the second object feature sample group into the object feature value recognition model to obtain a second object recognition feature value group, where a second object feature sample in the second object feature sample group corresponds to a second object recognition feature value in the second object recognition feature value group. The determination unit 604 is configured to determine a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
Optionally, the determination unit 604 is further configured to: for each first object recognition feature value in the first object recognition feature value group, perform the following processing steps: selecting at least one second object recognition feature value corresponding to the first object recognition feature value from the second object recognition feature value group as a candidate second object recognition feature value group; determining each object corresponding to the candidate second object recognition feature value group as a candidate object group; combining the determined candidate objects into a candidate object set; performing de-duplication processing on the candidate object set to generate a de-duplicated candidate object set as a first candidate object set; and determining a target object group according to the first candidate object set and the first object recognition feature value group.
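The matching and de-duplication steps above can be sketched as follows. The "corresponding" relation between a first and a second recognition feature value is not defined in the text, so a hypothetical absolute-difference tolerance is used here as an assumption.

```python
def select_target_candidates(first_values, second_values, tol=0.1):
    """Sketch of the candidate-selection step.

    first_values: iterable of first object recognition feature values.
    second_values: dict mapping object id -> second object recognition
    feature value. Objects whose value lies within tol of some first value
    are collected, then de-duplicated into the first candidate object set.
    """
    candidates = []
    for v1 in first_values:
        for obj, v2 in second_values.items():
            if abs(v1 - v2) <= tol:  # hypothetical correspondence criterion
                candidates.append(obj)
    # De-duplication while preserving first-seen order.
    seen, first_candidate_set = set(), []
    for obj in candidates:
        if obj not in seen:
            seen.add(obj)
            first_candidate_set.append(obj)
    return first_candidate_set
```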
Optionally, the determination unit 604 is further configured to: determine a second object recognition feature value variance and a second object recognition feature value mean according to each second object recognition feature value corresponding to the first candidate object set; determine a first object recognition feature value variance and a first object recognition feature value mean corresponding to the first object recognition feature value group; determine a second object feature value variance and a second object feature value mean according to each second object feature value label corresponding to the first candidate object set; determine a first object feature value variance and a first object feature value mean corresponding to the first object feature sample group; generate an object recognition feature standard deviation according to the first object recognition feature value variance, the first object recognition feature value mean, the second object recognition feature value variance, and the second object recognition feature value mean; generate an object feature standard deviation according to the first object feature value variance, the first object feature value mean, the second object feature value variance, and the second object feature value mean; and determine the first candidate object set as the target object group in response to determining that the object recognition feature standard deviation is less than or equal to the object feature standard deviation.
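The acceptance test above combines two means and two variances into a single "standard deviation" statistic for each side and keeps the candidate set when the prediction-side statistic does not exceed the label-side one. The exact formula is not given in the text; one plausible reading, sketched here as an assumption, is a standardized mean difference with a pooled variance.

```python
import math

def standardized_mean_difference(mean_a, var_a, mean_b, var_b):
    """Hypothetical reading of the generated statistic: the gap between two
    group means, scaled by the pooled standard deviation."""
    pooled = math.sqrt((var_a + var_b) / 2) or 1.0  # guard zero variance
    return abs(mean_a - mean_b) / pooled

def accept_candidates(recognition_stat, label_stat):
    # Keep the first candidate set only when the recognition-side gap does
    # not exceed the label-side gap.
    return recognition_stat <= label_stat
```

Under this reading, the check verifies that the model's predictions separate the two groups no more than their ground-truth value labels already do, which matches the stated goal of screening out objects with smaller differences.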
Optionally, the object recognition apparatus 600 further includes: an object determination unit configured to determine each object corresponding to the first object feature sample group as a first object group; a service value processing unit configured to perform the following processing steps for each preset time period in the preset time period group: acquiring a target service value of each first object in the first object group in the preset time period as a first service value to obtain a first service value group; acquiring a target service value of each target object in the target object group in the preset time period as a second service value to obtain a second service value group; determining the sum of the first service values included in the first service value group as a first service total value; determining the sum of the second service values included in the second service value group as a second service total value; and determining a difference value between the first service total value and the second service total value as a service increase value; an increase value determination unit configured to determine an average value of the determined service increase values as a target service increase value; and a service adjustment unit configured to perform service adjustment on the target circulation service according to the target service increase value.
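The per-period computation above reduces to a few sums and one average. A minimal sketch, with hypothetical list-of-lists inputs (one inner list of service values per preset time period):

```python
def target_service_increase(first_values_by_period, second_values_by_period):
    """Sketch of the service-increase computation: for each preset time
    period, subtract the target object group's total service value from the
    first object group's total, then average the per-period differences."""
    increases = [
        sum(first) - sum(second)
        for first, second in zip(first_values_by_period, second_values_by_period)
    ]
    return sum(increases) / len(increases)
```

The resulting target service increase value is then used to adjust the target circulation service; the adjustment policy itself is outside this sketch.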
It will be appreciated that the elements recited in the object recognition device 600 correspond to the various steps in the method described with reference to fig. 4. Thus, the operations, features and advantages described above with respect to the method are equally applicable to the object recognition device 600 and the units contained therein, and are not described here again.
Referring now to FIG. 7, a schematic diagram of a structure of an electronic device (e.g., computing device 100 in FIG. 1 or computing device 200 in FIG. 2) 700 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is only one example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processor, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 7 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 709, or from storage 708, or from ROM 702. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 701.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtaining initial object characteristics of each object corresponding to the target circulation service to obtain an initial object characteristic set, wherein the initial object characteristics in the initial object characteristic set comprise: object representation features, value flow features, and interest item features; constructing a first object feature sample group and a second object feature sample group according to the initial object feature set, wherein a first object feature sample in the first object feature sample group comprises a first object feature, and a second object feature sample in the second object feature sample group comprises a second object feature; determining an initial object characteristic value identification model based on the set model output type; and training the initial object feature value recognition model based on the first object feature sample set and the second object feature sample set to obtain a trained object feature value recognition model.
Or cause the electronic device to: acquire a first object feature sample group and a second object feature sample group, wherein a first object feature sample in the first object feature sample group comprises a first object feature, and a second object feature sample in the second object feature sample group comprises a second object feature; input the first object feature sample group into a pre-trained object feature value recognition model to obtain a first object recognition feature value group, wherein a first object feature sample in the first object feature sample group corresponds to a first object recognition feature value in the first object recognition feature value group, and the object feature value recognition model is generated by an object feature value recognition model training method according to some embodiments of the present disclosure; input the second object feature sample group into the object feature value recognition model to obtain a second object recognition feature value group, wherein a second object feature sample in the second object feature sample group corresponds to a second object recognition feature value in the second object recognition feature value group; and determine a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, a construction unit, a determination unit, and a training unit. The names of these units do not constitute a limitation of the unit itself in some cases, and for example, the determination unit may also be described as "a unit that determines an initial object feature value identification model based on a set model output type".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements any of the object feature value recognition model training methods described above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (13)

1. An object feature value recognition model training method, comprising:
obtaining initial object characteristics of each object corresponding to the target circulation service to obtain an initial object characteristic set, wherein the initial object characteristics in the initial object characteristic set comprise: object representation features, value flow features, and interest item features;
constructing a first object feature sample group and a second object feature sample group according to the initial object feature set, wherein a first object feature sample in the first object feature sample group comprises a first object feature, and a second object feature sample in the second object feature sample group comprises a second object feature;
determining an initial object feature value recognition model based on the set model output type;
and training the initial object feature value recognition model based on the first object feature sample set and the second object feature sample set to obtain a trained object feature value recognition model.
2. The method of claim 1, wherein the constructing a first object feature sample set and a second object feature sample set from the initial object feature set comprises:
dividing the initial object feature set into a first initial object feature set and a second initial object feature set;
constructing a first object feature sample set according to the first initial object feature set;
and constructing a second object feature sample set according to the second initial object feature set.
3. The method of claim 2, wherein said constructing a first set of object feature samples from said first initial set of object features comprises:
selecting a first number of first initial object features from the first initial object feature set as a first object feature set;
and combining the first object feature and a set first object feature value label corresponding to the first object feature into a first object feature sample for each first object feature in the first object feature group.
4. A method according to claim 3, wherein said constructing a second set of object feature samples from said second initial set of object features comprises:
selecting a second number of second initial object features from the second initial object feature set as a second object feature set, wherein the second number is greater than the first number;
and combining the second object feature with a set second object feature value label corresponding to the second object feature into a second object feature sample for each second object feature in the second object feature group.
5. An object recognition method, comprising:
acquiring a first object feature sample set and a second object feature sample set, wherein a first object feature sample in the first object feature sample set comprises a first object feature and a second object feature sample in the second object feature sample set comprises a second object feature;
inputting the first object feature sample set into a pre-trained object feature value recognition model to obtain a first object recognition feature value set, wherein a first object feature sample in the first object feature sample set corresponds to a first object recognition feature value in the first object recognition feature value set, and the object feature value recognition model is generated by the method according to any one of claims 1-4;
inputting the second object feature sample set into the object feature value recognition model to obtain a second object recognition feature value set, wherein a second object feature sample in the second object feature sample set corresponds to a second object recognition feature value in the second object recognition feature value set;
and determining a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
6. The method of claim 5, wherein the determining a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group comprises:
for each first object recognition feature value in the first object recognition feature value group, performing the following processing steps:
selecting at least one second object recognition feature value corresponding to the first object recognition feature value from the second object recognition feature value group as a candidate second object recognition feature value group;
determining each object corresponding to the candidate second object recognition feature value group as a candidate object group;
combining the determined candidate objects into a candidate object set;
performing de-duplication processing on the candidate object set to generate a de-duplicated candidate object set as a first candidate object set;
and determining a target object group according to the first candidate object set and the first object recognition feature value group.
7. The method of claim 6, wherein the determining a target object group according to the first candidate object set and the first object recognition feature value group comprises:
determining a second object recognition feature value variance and a second object recognition feature value mean according to each second object recognition feature value corresponding to the first candidate object set;
determining a first object recognition feature value variance and a first object recognition feature value mean corresponding to the first object recognition feature value group;
determining a second object feature value variance and a second object feature value mean according to each second object feature value label corresponding to the first candidate object set;
determining a first object feature value variance and a first object feature value mean corresponding to the first object feature sample group;
generating an object recognition feature standard deviation according to the first object recognition feature value variance, the first object recognition feature value mean, the second object recognition feature value variance, and the second object recognition feature value mean;
generating an object feature standard deviation according to the first object feature value variance, the first object feature value mean, the second object feature value variance, and the second object feature value mean;
and determining the first candidate object set as the target object group in response to determining that the object recognition feature standard deviation is less than or equal to the object feature standard deviation.
8. The method of claim 5, wherein the method further comprises:
determining each object corresponding to the first object characteristic sample group as a first object group;
for each preset time period in the set of preset time periods, performing the following processing steps:
acquiring a target service value of each first object in the first object group in the preset time period as a first service value to obtain a first service value group;
acquiring a target service value of each target object in the target object group in the preset time period as a second service value, and obtaining a second service value group;
determining the sum of the first service values included in the first service value group as a first service total value;
determining the sum of the second service values included in the second service value group as a second service total value;
determining a difference value between the first service total value and the second service total value as a service increase value;
determining an average value of the determined service increase values as a target service increase value;
and performing service adjustment on the target circulation service according to the target service increase value.
9. An object feature value recognition model training apparatus, comprising:
the acquiring unit is configured to acquire initial object characteristics of each object corresponding to the target circulation service to obtain an initial object characteristic set, wherein the initial object characteristics in the initial object characteristic set comprise: object representation features, value flow features, and interest item features;
a construction unit configured to construct a first object feature sample set and a second object feature sample set from the initial object feature set, wherein a first object feature sample in the first object feature sample set comprises a first object feature and a second object feature sample in the second object feature sample set comprises a second object feature;
a determining unit configured to determine an initial object feature value recognition model based on the set model output type;
and the training unit is configured to train the initial object feature value recognition model based on the first object feature sample group and the second object feature sample group to obtain a trained object feature value recognition model.
10. An object recognition apparatus comprising:
an acquisition unit configured to acquire a first object feature sample group and a second object feature sample group, wherein a first object feature sample in the first object feature sample group includes a first object feature and a second object feature sample in the second object feature sample group includes a second object feature;
a first input unit configured to input the first object feature sample set into a pre-trained object feature value recognition model to obtain a first object recognition feature value set, wherein a first object feature sample in the first object feature sample set corresponds to a first object recognition feature value in the first object recognition feature value set, and the object feature value recognition model is generated by the method according to any one of claims 1-4;
a second input unit configured to input the second object feature sample set into the object feature value recognition model to obtain a second object recognition feature value set, wherein a second object feature sample in the second object feature sample set corresponds to a second object recognition feature value in the second object recognition feature value set;
and a determining unit configured to determine a target object group corresponding to the second object feature sample group according to the first object recognition feature value group and the second object recognition feature value group.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4 or 5-8.
12. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4 or 5-8.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-4 or 5-8.
CN202211176728.0A 2022-09-26 2022-09-26 Object characteristic value recognition model training method, object recognition method and device Pending CN117807428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211176728.0A CN117807428A (en) 2022-09-26 2022-09-26 Object characteristic value recognition model training method, object recognition method and device

Publications (1)

Publication Number Publication Date
CN117807428A true CN117807428A (en) 2024-04-02

Family

ID=90428623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211176728.0A Pending CN117807428A (en) 2022-09-26 2022-09-26 Object characteristic value recognition model training method, object recognition method and device

Country Status (1)

Country Link
CN (1) CN117807428A (en)

Similar Documents

Publication Publication Date Title
CN111199459B (en) Commodity recommendation method, commodity recommendation device, electronic equipment and storage medium
US10290040B1 (en) Discovering cross-category latent features
CN109803008B (en) Method and apparatus for displaying information
CN110619078B (en) Method and device for pushing information
CN113610582A (en) Advertisement recommendation method and device, storage medium and electronic equipment
CN112836128A (en) Information recommendation method, device, equipment and storage medium
CN113822734A (en) Method and apparatus for generating information
CN113763077A (en) Method and apparatus for detecting false trade orders
CN116109374A (en) Resource bit display method, device, electronic equipment and computer readable medium
CN113450167A (en) Commodity recommendation method and device
CN111787042A (en) Method and device for pushing information
CN112860999B (en) Information recommendation method, device, equipment and storage medium
US20240078585A1 (en) Method and apparatus for sharing information
CN114926234A (en) Article information pushing method and device, electronic equipment and computer readable medium
CN117807428A (en) Object characteristic value recognition model training method, object recognition method and device
CN113516524B (en) Method and device for pushing information
CN111767290B (en) Method and apparatus for updating user portraits
CN113554493A (en) Interactive ordering method, device, electronic equipment and computer readable medium
CN113793167A (en) Method and apparatus for generating information
CN111897951A (en) Method and apparatus for generating information
CN111339432A (en) Recommendation method and device of electronic object and electronic equipment
CN111563797A (en) House source information processing method and device, readable medium and electronic equipment
CN111309230A (en) Information display method and device, electronic equipment and computer readable storage medium
CN113177174B (en) Feature construction method, content display method and related device
CN116911954B (en) Method and device for recommending items based on interests and popularity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination