CN111506753A - Recommendation method and device, electronic equipment and readable storage medium - Google Patents

Recommendation method and device, electronic equipment and readable storage medium

Info

Publication number
CN111506753A
Authority
CN
China
Prior art keywords
training
quality
image
layer
image sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010158274.9A
Other languages
Chinese (zh)
Other versions
CN111506753B (en)
Inventor
信峥
董健
王永康
王兴星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Liangxin Technology Co ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010158274.9A
Publication of CN111506753A
Application granted
Publication of CN111506753B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/535Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The disclosure provides a recommendation method, a recommendation apparatus, an electronic device and a readable storage medium. The method comprises the following steps: performing first training on a first quality prediction model with a first image sample; inputting a second image sample into a second quality prediction model to obtain training image information and a second training score of the second image sample, where the second quality prediction model shares a processing layer and a first output layer with the first quality prediction model, the second quality prediction model further comprises a second output layer, and the output of the processing layer has noise added before serving as the input of the second output layer; performing second training on the second quality prediction model according to the loss value of the second image sample against the training image information and the loss value of the second sample score against the second training score of the second image sample; and predicting the quality score of an image to be recommended with the first quality prediction model obtained by the second training, and recommending the image to be recommended according to the quality score. The present disclosure can improve the accuracy of recommendation.

Description

Recommendation method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of personalized recommendation technologies, and in particular, to a recommendation method, an apparatus, an electronic device, and a readable storage medium.
Background
In the technical field of personalized image recommendation, the quality of images needs to be evaluated so that higher-quality images can be recommended to users preferentially. The quality of an image relates to its definition, its content, and its other inherent properties.
In the prior art, a pre-trained quality prediction model is used to predict image quality. Such a model usually needs a large number of image samples for training, and because these samples are collected from networks and applications, they carry interference information, i.e., noise. Noisy image samples lower the accuracy of the trained quality prediction model, so the accuracy of the resulting recommendations is poor.
Disclosure of Invention
The present disclosure provides a recommendation method, an apparatus, an electronic device, and a readable storage medium, in which a second training is performed on a second quality prediction model after a first training is performed on a first quality prediction model. Because the second training restores training image information from the noise-added output of the processing layer, and determines a loss value from the difference between the training image information and the second image sample, the first quality prediction model effectively receives a second, noise-aware round of training. This helps improve the accuracy of the first quality prediction model and, in turn, the accuracy of recommendation.
According to a first aspect of the present disclosure, there is provided a recommendation method, the method comprising:
carrying out first training on a first quality estimation model by adopting a first image sample, wherein the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer;
inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality estimation model and the first quality estimation model obtained by the first training share the processing layer and the first output layer, the second quality estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
performing second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample;
and estimating the quality score of the image to be recommended by adopting the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score.
According to a second aspect of the present disclosure, there is provided a recommendation apparatus, the apparatus comprising:
the first training module is used for performing first training on a first quality estimation model by adopting a first image sample, and the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer;
the second input module is used for inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality estimation model and the first quality estimation model obtained by the first training share the processing layer and the first output layer, the second quality estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
the second training module is used for carrying out second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample;
and the quality score prediction module is used for predicting the quality score of the image to be recommended by adopting the first quality prediction model obtained by the second training and recommending the image to be recommended according to the quality score.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the aforementioned recommendation method when executing the program.
According to a fourth aspect of the present disclosure, there is provided a readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned recommendation method.
The disclosure provides a recommendation method, a recommendation apparatus, an electronic device and a readable storage medium. The method may first perform a first training on a first quality estimation model with a first image sample; then input a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, where the second quality estimation model shares a processing layer and a first output layer with the first quality estimation model, the second quality estimation model further comprises a second output layer, and the output of the processing layer has noise added before serving as the input of the second output layer; then perform a second training on the second quality estimation model according to the loss value of the second image sample against the training image information and the loss value of the second sample score against the second training score of the second image sample; and finally predict the quality score of an image to be recommended with the first quality estimation model obtained by the second training, and recommend the image to be recommended according to the quality score. Because the second training restores the training image information from the noise-added output and determines a loss value from the difference between the training image information and the second image sample, the first quality estimation model receives a second, noise-aware round of training after the first training, which helps improve the accuracy of the first quality estimation model and, in turn, the accuracy of recommendation.
Drawings
In order to describe the technical solutions of the present disclosure more clearly, the drawings needed in the description are briefly introduced below. The drawings described below illustrate only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of the steps of a recommendation method of the present disclosure;
FIG. 2 is a schematic diagram illustrating a second quality prediction model according to the present disclosure;
FIG. 3 illustrates another schematic structural diagram of a second quality prediction model of the present disclosure;
FIG. 4 shows a block diagram of a recommendation device of the present disclosure;
fig. 5 shows a block diagram of an electronic device of the present disclosure.
Detailed Description
The technical solutions of the present disclosure are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments derived by those skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
Referring to fig. 1, a flow chart of the steps of the recommendation method of the present disclosure is shown, specifically as follows:
step 101, performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer.
The first quality prediction model may be any deep learning model that outputs a prediction value. It includes a processing layer and a first output layer: the processing layer performs linear or nonlinear operations on the input information, and the first output layer then outputs a prediction value based on the information output by the processing layer. Accordingly, the first training inputs the first image sample into the first quality prediction model to obtain a quality score for the first image sample, and adjusts the model so that this quality score approximates the sample score of the first image sample.
In an embodiment of the present disclosure, the quality score may be a numerical representation of quality in various dimensions, such as CTR (Click Through Rate), sharpness of the image, and the like.
It is to be understood that the first training is a first training of the first quality prediction model.
Step 102, inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality estimation model and the first quality estimation model obtained by the first training share the processing layer and the first output layer, the second quality estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample.
The second quality prediction model contains the first quality prediction model. As shown in the schematic structural diagram of fig. 2, the second quality prediction model includes the processing layer and the first output layer of the first quality prediction model, and during training the first output layer predicts the quality score of the second image sample, called the second training score of the second image sample. In addition, the second quality prediction model adds a second output layer, which during training restores an image from the noise-added representation to obtain the training image information; the second output layer is thus roughly the inverse operation of the processing layer.
In an embodiment of the present disclosure, the noise may be any type of noise, may be random noise, or may be noise conforming to a distribution function such as a normal distribution or an average distribution. The embodiments of the present disclosure do not impose limitations thereon.
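As an illustrative sketch (the disclosure allows any noise type), zero-mean Gaussian noise could be added to the processing layer's output as follows; the flat-vector layout, the standard deviation, and the seed are assumptions for demonstration, not taken from the patent text:

```python
import random

def add_gaussian_noise(hidden, std=0.1, seed=None):
    """Add zero-mean Gaussian noise to each element of the processing
    layer's output vector. std and seed are illustrative parameters."""
    rng = random.Random(seed)
    return [h + rng.gauss(0.0, std) for h in hidden]

hidden = [0.5, -1.2, 3.0]
noisy = add_gaussian_noise(hidden, std=0.1, seed=42)
```

Any other distribution mentioned in the text (e.g., uniform noise) could be substituted by swapping the sampling call.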
It is to be understood that the second image sample is an image used for training the second quality prediction model, and the first image sample is an image used for training the first quality prediction model, and the second image sample may be the same as or different from the first image sample.
Step 103, performing a second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample.
Specifically, the loss value of the second image sample against the training image information represents the difference between the second image sample and the training image information, and the loss value of the second sample score against the second training score represents the difference between the second sample score and the second training score: the larger the difference, the larger the loss value, and vice versa. The loss values can be calculated with existing loss functions selected according to actual requirements.
Based on these loss values, the second quality prediction model is trained as follows: determine a comprehensive loss value from the two loss values, then adjust the parameters of the second quality prediction model along the gradient of the comprehensive loss value with respect to those parameters, so that the comprehensive loss value after the next iteration is smaller than that of the current iteration, until the comprehensive loss value stops decreasing over multiple iterations.
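The stopping rule above (iterate until the comprehensive loss value no longer decreases over multiple iterations) can be sketched as a simple convergence check; the `patience` and `tol` values are illustrative assumptions:

```python
def has_converged(loss_history, patience=3, tol=1e-6):
    """Return True once the comprehensive loss has not improved by more
    than tol over the last `patience` iterations (the stopping rule
    described above)."""
    if len(loss_history) <= patience:
        return False
    recent_best = min(loss_history[-patience:])   # best loss in the recent window
    earlier_best = min(loss_history[:-patience])  # best loss before the window
    return recent_best >= earlier_best - tol
```

A training loop would append the comprehensive loss of each iteration to `loss_history` and stop when `has_converged` returns True.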
For example, if the prediction accuracy of the quality score is desired to be higher, a larger weight may be set for the loss values of the second sample score and the second training score, and a smaller weight may be set for the loss values of the second image sample and the training image information.
It should be noted that, the training of the second quality prediction model is after the training of the first quality prediction model, that is: after the first quality prediction model is trained according to step 101, the parameters of the processing layer in the first quality prediction model are used as the initial parameters of the processing layer in the second quality prediction model, the parameters of the first output layer in the first quality prediction model are used as the initial parameters of the first output layer in the second quality prediction model, and the initial parameters of the second output layer in the second quality prediction model can be set according to empirical values or set randomly.
It is understood that the second training is the training of the second quality prediction model, which is equivalent to the second training of the first quality prediction model by combining the noise and the second output layer.
And 104, estimating the quality score of the image to be recommended by using the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score.
The image to be recommended can be any image provided by the personalized recommendation platform or an image related to a search word input by the user. The embodiments of the present disclosure do not impose limitations thereon.
The process of predicting the quality score of the image to be recommended is as follows: and inputting the image to be recommended into a first quality pre-estimation model, and outputting the quality score of the image to be recommended by a first output layer of the first quality pre-estimation model.
In practical applications, recommendation is generally performed over a plurality of images to be recommended: the quality score of each image is predicted, and the images are either sorted in descending order of score and recommended to the user, or filtered so that only images with a quality score greater than or equal to a preset threshold are recommended to the user.
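Both recommendation strategies above can be sketched as follows; the `(image_id, score)` pair layout is an assumption for illustration:

```python
def rank_for_recommendation(scored_images, threshold=None):
    """Sort candidate images by predicted quality score in descending
    order; optionally keep only those at or above a score threshold."""
    ranked = sorted(scored_images, key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        ranked = [(img, s) for img, s in ranked if s >= threshold]
    return ranked

candidates = [("img_a", 0.2), ("img_b", 0.9), ("img_c", 0.5)]
top = rank_for_recommendation(candidates, threshold=0.4)
```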
Optionally, in another embodiment of the present disclosure, step 103 includes sub-steps A1 to A5:
Sub-step A1, determining a first loss value of the second image sample according to the second image sample and the training image information, and determining a second loss value of the second image sample according to the second sample score and the second training score of the second image sample.
The first loss value is a loss value of the second image sample and the training image information, and the second loss value is a loss value of the second sample score and the second training score.
Sub-step A2, inputting the first loss value of the second image sample into a monotonically decreasing function to obtain the weight of the second image sample.
The monotonically decreasing function converts the first loss value into a weight and guarantees the following relationship: the larger the first loss value, the smaller the weight; the smaller the first loss value, the larger the weight. The monotonically decreasing function may be any function that decreases over values greater than 0, for example an exponential function, which yields the following weight for the second image sample:
W_i = e^{-LOSS1_i} (1)
where W_i is the weight of the i-th second image sample and LOSS1_i is the first loss value of the i-th second image sample.
Sub-step A3, weighting the second loss value of the second image sample with the weight of the second image sample to obtain a weighted loss value of the second image sample.
In one embodiment of the present disclosure, the weighted loss value may be calculated with reference to the following formula:
WLOSS_i = W_i · LOSS2_i (2)
where WLOSS_i is the weighted loss value of the i-th second image sample and LOSS2_i is the second loss value of the i-th second image sample.
Sub-step A4, determining a comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample.
In one embodiment of the present disclosure, the comprehensive loss value may be calculated with reference to the following formula:
TLOSS_i = WLOSS_i + LOSS1_i (3)
where TLOSS_i is the comprehensive loss value of the i-th second image sample.
Sub-step A5, performing a second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
It should be noted that, in general, a large number of second image samples are used to perform second training on the second quality estimation model, for example, in each iteration, a large number of second image samples may be input into the second quality estimation model, the comprehensive loss value of each second image sample is determined, the comprehensive loss values of the second image samples are averaged to obtain an average loss value, and the second quality estimation model is subjected to second training according to the average loss value. Wherein, the average loss value can be calculated according to the following formula:
ALOSS = (1/I) · Σ_{i=1}^{I} TLOSS_i (4)
where ALOSS is the average loss value and I is the number of second image samples used per iteration. The loss functions for the first loss value LOSS1_i and the second loss value LOSS2_i can be selected according to the actual application requirements. For example, the first loss value LOSS1_i can be calculated with the following sum-of-squares loss function:
LOSS1_i = Σ_{n=1}^{N} (CH_{i,n} − CH′_{i,n})² (5)
where N is the number of pixels contained in the second image sample (the second image sample and the training image information contain the same number of pixels), CH_{i,n} is the value of the n-th pixel in the i-th second image sample, and CH′_{i,n} is the value of the n-th pixel in the training image information.
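A minimal sketch of this sum-of-squares first loss over flattened pixel values (the flat-list layout is an assumption for illustration):

```python
def first_loss(sample_pixels, restored_pixels):
    """LOSS1: sum of squared differences between the pixels of the
    second image sample and the restored training image information."""
    if len(sample_pixels) != len(restored_pixels):
        raise ValueError("pixel counts must match")
    return sum((a - b) ** 2 for a, b in zip(sample_pixels, restored_pixels))
```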
As another example, the second loss value LOSS2_i can be calculated with the following cross-entropy loss function:
LOSS2_i = −[y_i · log(y′_i) + (1 − y_i) · log(1 − y′_i)] (6)
where y_i is the second sample score corresponding to the i-th second image sample and y′_i is the second training score of the i-th second image sample.
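The cross-entropy second loss in formula (6) can be sketched directly; the `eps` clamp is an added numerical guard against `log(0)`, not part of the patent text:

```python
import math

def second_loss(sample_score, training_score, eps=1e-12):
    """Binary cross-entropy between the second sample score y and the
    second training score y' (formula (6)); eps keeps log() finite."""
    p = min(max(training_score, eps), 1.0 - eps)
    return -(sample_score * math.log(p)
             + (1.0 - sample_score) * math.log(1.0 - p))
```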
It can be understood that the process of performing the second training on the second quality estimation model according to the average loss value is as follows: and adjusting the parameters of the second quality prediction model according to the gradient of the average loss value to the parameters of the second quality prediction model, so that the average loss value after the next iteration is smaller than that of the current iteration until the average loss value is not continuously reduced in multiple iterations.
In the embodiments of the present disclosure, the first loss value is used to weight the second loss value, which reduces the influence of noise on the comprehensive loss value and helps improve the accuracy of the comprehensive loss value.
Optionally, in another embodiment of the present disclosure, the second image sample is replaced with second feature information of a commercial product, where the second feature information includes a second discrete feature and a second continuous feature, and the training image information is replaced with third feature information of the commercial product, where the third feature information includes a third discrete feature and a third continuous feature. In this embodiment, sub-step A1 includes sub-steps B1 to B3:
and a substep B1 of inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain a discrete loss value of the second feature information.
The discrete loss function calculates the loss value for the data of the discrete value, and the accuracy is high.
And a substep B2 of inputting the second continuous characteristic and the third continuous characteristic into a continuous loss function to obtain a continuous loss value of the second characteristic information.
The continuous loss function calculates the loss value for the continuously-valued data, and the accuracy is high.
And a sub-step B3 of determining a first loss value of the second characteristic information based on the discrete loss value and the continuous loss value.
In particular, the first loss value may be a sum of a discrete loss value and a continuous loss value.
Embodiments of the present disclosure may calculate discrete loss values and continuous loss values for discrete features and continuous features, respectively, which may help to improve the accuracy of the first loss value.
Optionally, in another embodiment of the present disclosure, the first image sample is replaced with first feature information of the commercial product, where the first feature information corresponds to a first sample score. In this embodiment, step 101 includes sub-steps C1 to C3:
and a substep C1, inputting the first feature information to a first quality estimation model, and obtaining a first training score of the first feature information.
Wherein the first training score is a quality score of the first feature information.
Sub-step C2, determining a loss value of the first feature information according to the first training score of the first feature information and the first sample score of the first feature information.
The loss value of the first feature information may adopt any existing loss function, such as a cross entropy loss function, a sum of squares loss function, an absolute value loss function, and the like.
Sub-step C3, performing a first training on the first quality estimation model according to the loss value of the first feature information.
Specifically, the parameters of the first quality prediction model are adjusted through the gradient of the loss value of the first characteristic information to the parameters of the first quality prediction model, so that the loss value of the first characteristic information after the next iteration is smaller than the loss value of the current iteration.
The embodiment of the disclosure may perform first training on the first quality prediction model by using the first characteristic information to implement second training of the second quality prediction model on the basis of the first quality prediction model.
Optionally, in another embodiment of the present disclosure, the first quality prediction model is a DNN model and the processing layer includes an input layer and a hidden layer: the input of the DNN model is the input of the input layer, the output of the input layer is the input of the hidden layer, the output of the hidden layer is the input of the first output layer, and the output of the first output layer is the output of the first quality prediction model.
A DNN (Deep Neural Network) model is a commonly used deep learning model whose main structure includes an input layer, several hidden layers, and an output layer. In the schematic structural diagram of the second quality prediction model shown in fig. 3, the DNN it contains is the first quality prediction model.
Embodiments of the present disclosure may employ a commonly used DNN model as the first quality prediction model.
Optionally, in another embodiment of the present disclosure, the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the hidden layer, an input of the second quality prediction model is used as an input of the input layer, an output of the input layer is used as an input of the hidden layer, an output of the hidden layer is used as an input of the denoising layer after noise is added, and an output of the first output layer is an output of the second quality prediction model.
The denoising layer removes noise from the noisy representation so as to restore the image. The Denoising layer of the second quality prediction model shown in fig. 3 is the second output layer; it outputs the restored image, i.e., the training image information, during training. The processing layer and the Denoising layer in fig. 3 together form a DAE (Denoising AutoEncoder) network.
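A toy sketch of this two-headed structure (a shared processing layer feeding the scoring head directly, and the denoising head after noise is added). The layer computations here are stand-in stubs for illustration, not the patent's actual DNN or DAE:

```python
import random

class SecondQualityModelSketch:
    """Shared processing layer feeding a first output layer (quality
    score) directly and a second output layer (denoising) after noise."""

    def __init__(self, noise_std=0.1, seed=0):
        self.rng = random.Random(seed)
        self.noise_std = noise_std

    def processing(self, x):            # shared processing layer (stub)
        return [v * 0.5 for v in x]

    def first_output(self, h):          # quality-score head (stub)
        return sum(h) / len(h)

    def second_output(self, h_noisy):   # denoising head: rough inverse of processing
        return [v * 2.0 for v in h_noisy]

    def forward(self, x):
        h = self.processing(x)
        score = self.first_output(h)                                   # clean path
        h_noisy = [v + self.rng.gauss(0.0, self.noise_std) for v in h] # add noise
        restored = self.second_output(h_noisy)                         # noisy path
        return restored, score

model = SecondQualityModelSketch(noise_std=0.0)  # no noise, for a sanity check
restored, score = model.forward([1.0, 3.0])
```

With the noise disabled, the denoising head exactly inverts the processing stub, illustrating the "restore the input" objective of the second output layer.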
Optionally, in another embodiment of the present disclosure, the discrete loss function is a sum of squares loss function, and the continuous loss function is a cross entropy loss function.
In one embodiment of the present disclosure, when the discrete loss function is a sum of squares loss function, the following discrete loss values may be obtained:
DLOSS_i = Σ_{n1=1}^{N1} (D_{i,n1} − D′_{i,n1})² (7)
where DLOSS_i is the discrete loss value of the i-th second feature information, N1 is the number of discrete features contained in each piece of second feature information, D_{i,n1} is the n1-th discrete feature in the i-th second feature information, and D′_{i,n1} is the n1-th discrete feature in the i-th third feature information.
When the continuous loss function is a cross-entropy loss function, the following continuous loss values can be obtained:
CLOSS_i = -\sum_{n2=1}^{N2} \left[ C_{i,n2} \log C'_{i,n2} + (1 - C_{i,n2}) \log(1 - C'_{i,n2}) \right]
where CLOSS_i is the continuous loss value of the ith second feature information, N2 is the number of continuous features contained in each piece of second feature information, C_{i,n2} is the n2-th continuous feature in the ith second feature information, and C'_{i,n2} is the n2-th continuous feature in the ith third feature information.
Embodiments of the present disclosure may use a sum of squares loss function to calculate discrete loss values and a cross entropy loss function to calculate continuous loss values.
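Under the convention stated here (sum of squares for the discrete features, cross entropy for the continuous features), the per-sample loss values DLOSS_i and CLOSS_i might be computed as in the following sketch; treating feature values as lying in [0, 1] and the `eps` clipping constant are illustrative assumptions:

```python
import numpy as np

def discrete_loss(d, d_restored):
    """DLOSS_i: sum-of-squares loss over the N1 discrete features of one
    second-feature-information sample versus its restored (third) features."""
    d, d_restored = np.asarray(d, float), np.asarray(d_restored, float)
    return float(np.sum((d - d_restored) ** 2))

def continuous_loss(c, c_restored, eps=1e-7):
    """CLOSS_i: cross-entropy loss over the N2 continuous features,
    assuming values scaled to [0, 1]; eps avoids log(0)."""
    c = np.asarray(c, float)
    p = np.clip(np.asarray(c_restored, float), eps, 1.0 - eps)
    return float(-np.sum(c * np.log(p) + (1.0 - c) * np.log(1.0 - p)))

dloss = discrete_loss([1.0, 0.0, 1.0], [0.9, 0.1, 0.8])
closs = continuous_loss([0.5, 0.2], [0.6, 0.25])
```

Both values are per-sample scalars, matching the per-i formulas above; the first loss value of the sample is then determined from the two together.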
In summary, the present disclosure provides a recommendation method, including: carrying out first training on a first quality estimation model by adopting a first image sample, wherein the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer; inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality pre-estimation model and the first quality pre-estimation model obtained by the first training share the processing layer and the first output layer, the second quality pre-estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample; performing second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample; and estimating the quality score of the image to be recommended by adopting the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score. 
In the present disclosure, after the first training is performed on the first quality estimation model, a second training is performed on the second quality estimation model. During the second training, the training image information is restored from the noise-added representation, and the loss value is determined by combining the difference between the training image information and the second image sample. The first quality estimation model is thereby further trained in combination with noise, which improves the accuracy of the first quality estimation model and, in turn, the recommendation accuracy.
Referring to fig. 4, a block diagram of the recommendation device of the present disclosure is shown, specifically as follows:
the first training module 201 is configured to perform first training on a first quality prediction model by using a first image sample, where the first quality prediction model includes: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer.
A second input module 202, configured to input a second image sample to a second quality estimation model, so as to obtain training image information and a second training score of the second image sample; the second quality estimation model and the first quality estimation model obtained by the first training share the processing layer and the first output layer, the second quality estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample.
The second training module 203 is configured to perform a second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample.
And the quality score prediction module 204 is configured to predict a quality score of the image to be recommended by using the first quality prediction model obtained through the second training, and recommend the image to be recommended according to the quality score.
Optionally, in another embodiment of the present disclosure, the second training module 203 includes a loss value determination sub-module, a weight calculation sub-module, a loss value weighting sub-module, a comprehensive loss value calculation sub-module, and a second training sub-module:
and the loss value determining submodule is used for determining a first loss value of the second image sample according to the second image sample and the training image information, and determining a second loss value of the second image sample according to a second sample score and a second training score of the second image sample.
And the weight calculation submodule is used for inputting the first loss value of the second image sample into a monotone decreasing function to obtain the weight of the second image sample.
And the loss value weighting submodule is used for weighting the second loss value of the second image sample by adopting the weight of the second image sample to obtain the weighted loss value of the second image sample.
And the comprehensive loss value calculation sub-module is used for determining the comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample.
And the second training submodule is used for carrying out second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
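The chain of sub-modules above (first loss value → monotonically decreasing weight → weighted second loss → comprehensive loss) might be sketched as follows; the patent does not fix a particular decreasing function, so exp(-x) is an assumed choice:

```python
import math

def composite_loss(first_loss, second_loss):
    """Weight the score loss by a monotonically decreasing function of the
    reconstruction loss: samples the denoising head restores poorly (large
    first_loss, i.e. likely noisy samples) contribute less score loss."""
    weight = math.exp(-first_loss)    # assumed monotonically decreasing function
    weighted = weight * second_loss   # weighted second loss value
    return weighted + first_loss      # comprehensive loss value

total = composite_loss(first_loss=0.2, second_loss=1.0)
```

Any positive monotonically decreasing function would serve; exp(-x) simply keeps the weight in (0, 1].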
Optionally, in another embodiment of the present disclosure, the second image sample is replaced with second feature information of the commodity, the second feature information includes a second discrete feature and a second continuous feature, the training image information is replaced with third feature information of the commodity, the third feature information includes a third discrete feature and a third continuous feature, and the loss value determination sub-module includes a discrete loss value determination unit, a continuous loss value determination unit, and a first loss value determination unit:
and the discrete loss value determining unit is used for inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain a discrete loss value of the second feature information.
And a continuous loss value determination unit, configured to input the second continuous characteristic and the third continuous characteristic into a continuous loss function, so as to obtain a continuous loss value of the second characteristic information.
A first loss value determining unit, configured to determine a first loss value of the second feature information according to the discrete loss value and the continuous loss value.
Optionally, in another embodiment of the present disclosure, the first image sample is replaced with first feature information of a commodity, the first feature information corresponds to a first sample score, and the first training module includes a first training score prediction sub-module, a first loss value determination sub-module, and a first training sub-module:
And the first training score prediction sub-module is used for inputting the first feature information into the first quality prediction model to obtain a first training score of the first feature information.
And the first loss value determination sub-module is used for determining the loss value of the first feature information according to the first training score of the first feature information and the first sample score of the first feature information.
And the first training sub-module is used for performing the first training on the first quality prediction model according to the loss value of the first feature information.
Optionally, in another embodiment of the present disclosure, the first quality prediction model is a DNN model, the processing layer includes an input layer and a hidden layer, an input of the DNN model is an input of the input layer, an output of the input layer is an input of the hidden layer, an output of the hidden layer is an input of the first output layer, and an output of the first output layer is an output of the first quality prediction model.
Optionally, in another embodiment of the present disclosure, the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the hidden layer, an input of the second quality prediction model is used as an input of the input layer, an output of the input layer is used as an input of the hidden layer, an output of the hidden layer is used as an input of the denoising layer after noise is added, and an output of the first output layer is an output of the second quality prediction model.
Optionally, in another embodiment of the present disclosure, the discrete loss function is a sum of squares loss function, and the continuous loss function is a cross entropy loss function.
In summary, the present disclosure provides a recommendation apparatus, the apparatus comprising: the first training module is used for performing first training on a first quality estimation model by adopting a first image sample, and the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer; the second input module is used for inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality pre-estimation model and the first quality pre-estimation model obtained by the first training share the processing layer and the first output layer, the second quality pre-estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample; the second training module is used for carrying out second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample; and the quality score prediction module is used for predicting the quality score of the image to be recommended by adopting the first quality prediction model obtained by the second training and recommending the image to be recommended according to the quality score. 
In the present disclosure, after the first training is performed on the first quality estimation model, a second training is performed on the second quality estimation model. During the second training, the training image information is restored from the noise-added representation, and the loss value is determined by combining the difference between the training image information and the second image sample. The first quality estimation model is thereby further trained in combination with noise, which improves the accuracy of the first quality estimation model and, in turn, the recommendation accuracy.
The embodiments of the apparatus of the present disclosure may refer to the detailed description of the embodiments of the method, which is not repeated herein.
The present disclosure also provides an electronic device, referring to fig. 5, including: a processor 301, a memory 302 and a computer program 3021 stored on the memory 302 and executable on the processor; the processor 301 implements the recommendation method of the foregoing embodiments when executing the program.
The present disclosure also provides a readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the recommendation method of the aforementioned embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, this disclosure is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the present disclosure as described herein, and any descriptions above of specific languages are provided for disclosure of enablement and best mode of the present disclosure.
In the description provided herein, numerous specific details are set forth. It can be appreciated, however, that the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a recommendation device in accordance with the present disclosure. The present disclosure may also be embodied as an apparatus or device program for performing a portion or all of the methods described herein. Such programs implementing the present disclosure may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A recommendation method, characterized in that the method comprises:
carrying out first training on a first quality estimation model by adopting a first image sample, wherein the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer;
inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality pre-estimation model and the first quality pre-estimation model obtained by the first training share the processing layer and the first output layer, the second quality pre-estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
performing second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample;
and estimating the quality score of the image to be recommended by adopting the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score.
2. The method of claim 1, wherein the step of performing the second training on the second quality prediction model according to the loss values of the second image sample and the training image information and the loss values of the second sample score and the second training score of the second image sample comprises:
determining a first loss value of the second image sample according to the second image sample and the training image information, and determining a second loss value of the second image sample according to a second sample score and a second training score of the second image sample;
inputting the first loss value of the second image sample into a monotonically decreasing function to obtain the weight of the second image sample;
weighting a second loss value of the second image sample by adopting the weight of the second image sample to obtain a weighted loss value of the second image sample;
determining a comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample;
and performing second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
3. The method of claim 2, wherein the second image sample is replaced with second feature information of a commodity, the second feature information includes a second discrete feature and a second continuous feature, the training image information is replaced with third feature information of the commodity, the third feature information includes a third discrete feature and a third continuous feature, and the step of determining the first loss value of the second feature information according to the second discrete feature, the second continuous feature, the third discrete feature and the third continuous feature comprises:
inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain a discrete loss value of the second feature information;
inputting the second continuous characteristic and the third continuous characteristic into a continuous loss function to obtain a continuous loss value of the second characteristic information;
and determining a first loss value of the second characteristic information according to the discrete loss value and the continuous loss value.
4. The method according to any one of claims 1 to 3, wherein the first image sample is replaced with first feature information of a commodity, the first feature information corresponds to a first sample score, and the step of performing first training on the first quality estimation model by using the first feature information comprises:
inputting the first characteristic information into a first quality estimation model to obtain a first training score of the first characteristic information;
determining a loss value of the first feature information according to a first training score of the first feature information and a first sample score of the first feature information;
and performing first training on the first quality estimation model through the loss value of the first characteristic information.
5. The method of claim 1, wherein the first quality prediction model is a DNN model, wherein the processing layer comprises an input layer and a hidden layer, wherein an input of the DNN model is an input of the input layer, wherein an output of the input layer is an input of the hidden layer, wherein an output of the hidden layer is an input of the first output layer, and wherein an output of the first output layer is an output of the first quality prediction model.
6. The method of claim 5, wherein the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the hidden layer, the input of the second quality prediction model is used as the input of the input layer, the output of the input layer is used as the input of the hidden layer, the output of the hidden layer is used as the input of the denoising layer after noise is added, and the output of the first output layer is the output of the second quality prediction model.
7. The method of claim 3, wherein the discrete loss function is a sum of squares loss function and the continuous loss function is a cross-entropy loss function.
8. A recommendation device, characterized in that the device comprises:
the first training module is used for performing first training on a first quality estimation model by adopting a first image sample, and the first quality estimation model comprises: a processing layer and a first output layer, an output of the processing layer being an input to the first output layer;
the second input module is used for inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality pre-estimation model and the first quality pre-estimation model obtained by the first training share the processing layer and the first output layer, the second quality pre-estimation model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
the second training module is used for carrying out second training on the second quality estimation model according to the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample;
and the quality score prediction module is used for predicting the quality score of the image to be recommended by adopting the first quality prediction model obtained by the second training and recommending the image to be recommended according to the quality score.
9. An electronic device, comprising:
processor, memory and computer program stored on the memory and executable on the processor, characterized in that the processor implements the recommendation method according to any of claims 1-7 when executing the program.
10. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the recommendation method according to any of method claims 1-7.
CN202010158274.9A 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium Active CN111506753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158274.9A CN111506753B (en) 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN111506753A true CN111506753A (en) 2020-08-07
CN111506753B CN111506753B (en) 2023-09-12

Family

ID=71877665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158274.9A Active CN111506753B (en) 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111506753B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121733A1 (en) * 2016-10-27 2018-05-03 Microsoft Technology Licensing, Llc Reducing computational overhead via predictions of subjective quality of automated image sequence processing
CN109002792A (en) * 2018-07-12 2018-12-14 西安电子科技大学 SAR image change detection based on layering multi-model metric learning
CN109308696A (en) * 2018-09-14 2019-02-05 西安电子科技大学 Non-reference picture quality appraisement method based on hierarchy characteristic converged network
CN110766052A (en) * 2019-09-20 2020-02-07 北京三快在线科技有限公司 Image display method, evaluation model generation device and electronic equipment
CN110807757A (en) * 2019-08-14 2020-02-18 腾讯科技(深圳)有限公司 Image quality evaluation method and device based on artificial intelligence and computer equipment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633425A (en) * 2021-03-11 2021-04-09 腾讯科技(深圳)有限公司 Image classification method and device
CN112633425B (en) * 2021-03-11 2021-05-11 腾讯科技(深圳)有限公司 Image classification method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220324

Address after: 571924 Room 302, building b-20, Hainan Ecological Software Park, west of Meilun South Road, Laocheng town economic and Technological Development Zone, Chengmai County, Hainan Province

Applicant after: Hainan Liangxin Technology Co.,Ltd.

Address before: 100083 2106-030, 9 North Fourth Ring Road, Haidian District, Beijing.

Applicant before: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.

GR01 Patent grant