CN106874922B - Method and device for determining service parameters

Info

Publication number
CN106874922B
Authority
CN
China
Prior art keywords
image sample
sample
layer
label
partial image
Prior art date
Legal status
Active
Application number
CN201510922446.4A
Other languages
Chinese (zh)
Other versions
CN106874922A (en
Inventor
吴振国
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510922446.4A priority Critical patent/CN106874922B/en
Publication of CN106874922A publication Critical patent/CN106874922A/en
Application granted granted Critical
Publication of CN106874922B publication Critical patent/CN106874922B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Abstract

The invention discloses a method for determining service parameters. The method comprises the following steps: acquiring an application avatar of a user whose service parameters are to be determined; classifying the application avatar with a convolutional neural network (CNN) model having an image classification function to obtain a plurality of category labels of the application avatar; and determining a numerical value corresponding to each of the plurality of category labels to obtain an avatar value of the application avatar, where the avatar value participates in determining the service parameters. The method for determining service parameters provided by the embodiments of the invention can determine the service parameters from the application avatar of a user, which improves the universality of service parameter determination and of service popularization.

Description

Method and device for determining service parameters
Technical Field
The invention relates to the field of internet technology, and in particular to a method for determining service parameters, a method for establishing an image classification model, and corresponding devices.
Background
Currently, many services depend directly on service parameters, and those parameters directly determine whether an application for the service succeeds. When allocating a service to a user, a service provider evaluates whether to do so according to the user's existing service parameters.
However, the people who have service parameter records with a service provider account for only a small fraction of the general population. Most people have no such records, the service provider cannot assess them, and the service it provides is therefore difficult to popularize widely.
Disclosure of Invention
In order to solve the problem that the service parameters of most people cannot be obtained in the prior art, embodiments of the present invention provide a method for determining service parameters, which can determine the service parameters according to an application avatar of a user, thereby improving the universality of service parameter determination and the universality of service popularization. The embodiment of the invention also provides a corresponding device.
A first aspect of the present invention provides a method for determining service parameters, including:
acquiring an application head portrait of a user with service parameters to be determined;
classifying the application avatar by adopting a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of class labels of the application avatar;
determining a numerical value corresponding to each category label in the plurality of category labels to obtain an avatar numerical value of the application avatar, wherein the avatar numerical value is used for participating in determining the service parameter;
the CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users.
The second aspect of the present invention provides a method for establishing an image classification model, comprising:
acquiring a total image sample and an initial Convolutional Neural Network (CNN) model, wherein the total image sample is an application head portrait of a large number of users;
performing layer-by-layer training on the initial CNN model based on unsupervised learning by adopting a Restricted Boltzmann Machine (RBM) and the total image sample to obtain an initial characteristic weight for image classification;
fine-tuning the initial characteristic weight by using a first partial image sample to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample.
A third aspect of the present invention provides an apparatus for determining service parameters, including:
the acquiring unit is used for acquiring an application head portrait of a user with service parameters to be determined;
the classification unit is used for classifying the application head portrait acquired by the acquisition unit by adopting a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of class labels of the application head portrait;
the determining unit is used for determining a numerical value corresponding to each category label in the plurality of category labels obtained after the classification by the classifying unit to obtain an avatar numerical value of the application avatar, and the avatar numerical value is used for participating in determining the service parameters;
the CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users.
A fourth aspect of the present invention provides an apparatus for creating an image classification model, including:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a total image sample and an initial Convolutional Neural Network (CNN) model, and the total image sample is an application head portrait of a large number of users;
the computing unit is used for performing layer-by-layer training on the initial CNN model based on unsupervised learning by adopting a limited Boltzmann machine (RBM) and the total image sample acquired by the acquiring unit to obtain an initial characteristic weight for image classification;
the adjusting unit is used for finely adjusting the initial characteristic weight value obtained by the calculating unit by using a first partial image sample to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample.
In the embodiment of the invention, the application avatar of a user whose service parameters are to be determined is acquired; the application avatar is classified with a convolutional neural network (CNN) model having an image classification function to obtain a plurality of category labels of the application avatar; and a numerical value corresponding to each of the plurality of category labels is determined to obtain an avatar value of the application avatar, where the avatar value participates in determining the service parameters. The CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users. Compared with the prior art, in which the service parameters of most people cannot be obtained, the method for determining service parameters provided by the embodiment of the invention can determine the service parameters from the application avatar of a user, which improves the universality of service parameter determination and of service popularization. For example, the credibility of most people cannot be evaluated in the prior art, but the scheme provided by the present application can evaluate the credibility reflected by a user's application avatar to obtain the user's credit score, which broadens the coverage of user credit scoring.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an embodiment of a method for building an image classification model according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a CNN model in an embodiment of the present invention;
FIG. 3 is a schematic diagram of another embodiment of a method for building an image classification model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of a method for determining service parameters in the embodiment of the present invention;
fig. 5 is a schematic diagram of an embodiment of an apparatus for determining a service parameter in an embodiment of the present invention;
fig. 6 is a schematic diagram of another embodiment of the apparatus for determining service parameters in the embodiment of the present invention;
FIG. 7 is a schematic diagram of an embodiment of a method for building an image classification model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another embodiment of a method for building an image classification model according to an embodiment of the present invention;
fig. 9 is a schematic diagram of another embodiment of the apparatus for determining service parameters in the embodiment of the present invention;
fig. 10 is a schematic diagram of another embodiment of the method for building the image classification model in the embodiment of the present invention.
Detailed description of the preferred embodiments
The embodiment of the invention provides a method for determining service parameters, which can evaluate a user's credibility from the user's application avatar and thereby improve the universality of user credibility evaluation. Detailed descriptions follow.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Before describing the embodiments of the present invention, first, terms related to the embodiments of the present invention are introduced:
convolutional Neural Networks (CNN) model: a deep learning model extracts features through convolution operation, and convolution kernels are obtained through automatic machine learning.
A Restricted Boltzmann Machine (RBM), a neural network model, trains parameters by minimizing an energy function, which can pre-train a deep learning model.
Single label classification: a classification method computes a label for each sample, wherein the label belongs to a mutually exclusive set of labels.
Multi-label classification: a classification method computes multiple labels simultaneously for each sample.
Unsupervised learning is a training method with training samples but no training labels.
Supervised learning is adopted, namely a training method comprises training samples and training labels.
Semi-supervised learning: one training method has training samples, but only a portion has training labels.
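To make these terms concrete, here is a small Python illustration of single-label versus multi-label output and of a semi-supervised sample split; the label names and file names are invented for the example and do not come from the patent:

```python
# Hypothetical labels, only to illustrate single- vs multi-label classification.
single_label = "cartoon"                       # one mutually exclusive class per sample
multi_label = {"content": "cartoon",           # one value per major class ...
               "sharpness": "blurred"}         # ... several labels for the same sample

# Semi-supervised setting: every sample has an image, but only some have labels.
dataset = [
    {"image": "avatar_001.png", "labels": multi_label},  # labeled sample
    {"image": "avatar_002.png", "labels": None},         # unlabeled sample
]
```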
With the widespread use of various social applications, a user's application avatar can be chosen as the basis for analyzing the user's credit situation while avoiding privacy concerns, for example the QQ avatar, the WeChat avatar, the Paibao avatar or the microblog avatar. An application avatar is visible to everyone and does not involve the user's privacy, and because it is set by the user according to the user's interests, hobbies and subjective wishes, it is an active action that reflects the user's psychology to a certain extent. It should be noted that the solution of the embodiment of the present invention is not limited to social applications; any publicly visible user application avatar can be used in the embodiment of the present invention.
In the embodiment of the invention, the user's credibility is evaluated by mining the user's application avatar, i.e. the user is given a credit score. Of course, many factors may determine the user's credit score; it is not limited to the application avatar alone. The application avatar may be only one item participating in the user's credit score, and a reliable credit score may be obtained by comprehensively processing the credibility assessment results of all the factors.
In the embodiment of the invention, the service parameter determined on the basis of the user's application avatar is in fact a parameter that reflects the user's reliability, and it can be understood as the user's credit score. Credibility assessment and user credit score differ only in expression in the embodiment of the invention; the underlying principle is the same.
The RBM-based unsupervised deep training process of the CNN may be as follows: image samples are input into the CNN model and trained layer by layer, where the output of the previous layer serves as the input of the next layer; during the layer-by-layer training the input itself is used as the label, and the training of each layer is completed by minimizing the reconstruction error.
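The patent gives no code for this step, so the following is a minimal NumPy sketch of one-step contrastive divergence (CD-1) for a binary RBM, which is the standard way such layer-wise pretraining is carried out; the function name, hyper-parameters and binary-unit assumption are illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(v_data, n_hidden, lr=0.01, epochs=10, seed=0):
    """CD-1 pretraining of one layer: v_data is (n_samples, n_visible),
    i.e. the activations coming out of the previous layer."""
    rng = np.random.default_rng(seed)
    n_visible = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)          # visible bias
    b_h = np.zeros(n_hidden)           # hidden bias
    for _ in range(epochs):
        for v0 in v_data:
            p_h0 = sigmoid(v0 @ W + b_h)                      # positive phase
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            p_v1 = sigmoid(h0 @ W.T + b_v)                    # reconstruction
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # Approximate gradient step; in the patent's terms, the layer is
            # trained by reducing the reconstruction error of its own input.
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_v += lr * (v0 - p_v1)
            b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h
```

The learned W of each layer can serve as that layer's initial feature weights, and its hidden activations become the visible data for training the next layer.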
The supervised training process may be as follows: the samples and the sample labels are used as the input of the model, and training is completed by optimizing an objective function, after which the model can output the label of a target.
Unsupervised training does not require labeling the images, but the trained result carries no label information and is inaccurate; supervised training requires labeling a large number of images, which makes the work cumbersome and error-prone.
In view of the above conventional methods and their disadvantages, embodiments of the present invention provide a method for establishing an image classification model based on application avatars, and a method for determining service parameters based on the image classification model.
Referring to fig. 1, an embodiment of the method for creating an image classification model according to the embodiment of the present invention includes:
101. obtaining a total image sample and an initial Convolutional Neural Network (CNN) model, wherein the total image sample is an application head portrait of a large number of users.
102. And performing layer-by-layer training on the initial CNN model based on unsupervised learning by adopting a Restricted Boltzmann Machine (RBM) and the total image sample to obtain an initial characteristic weight for image classification.
The initial feature weight in the embodiment of the invention is a quantized value of the image feature.
103. Fine-tuning the initial characteristic weight by using a first partial image sample to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample.
The fine tuning process is a process of establishing a corresponding relationship between the initial feature weight and the category label.
There may be two fine-tuning methods. In the first method, fine-tuning the initial feature weights with the first partial image sample may include:
performing supervised learning layer-by-layer training on the first partial image sample, and extracting a feature weight in the first partial image sample and a class label of the first partial image sample;
and establishing association between the initial characteristic weight and the class label of the first partial image sample to obtain a CNN model with an image classification function.
In this scheme, category labels are attached to only a small part of the image samples in the total image sample, which reduces the labeling workload and solves the problem that image classification cannot be achieved when no labels are available at all.
In the second method, when the initial feature weights are fine-tuned with the first partial image sample, the method may further include:
performing supervised learning layer-by-layer training on a second partial image sample, and extracting a feature weight in the second partial image sample and a class label of the second partial image sample, wherein the class label of the second partial image sample is a class label with the highest probability of output from an output layer in the initial CNN model, and the second partial image sample is a residual image sample excluding the first partial image sample from the total image sample;
and establishing association between the initial characteristic weight and the class label of the second partial image sample.
In this scheme, category labels are attached to only a small part of the image samples in the total image sample, which reduces the labeling workload; for the unlabeled image samples in the total image sample, the category label with the highest probability at the output layer is used as the image's category label, which further improves classification accuracy compared with relying only on the small set of labeled image samples.
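A minimal sketch of this pseudo-labeling step, assuming a one-hot label vector; the class count and probabilities are invented for illustration:

```python
import numpy as np

def pseudo_label(output_probs):
    """Use the highest-probability output-layer class as the label of an
    unlabeled sample (the one-hot encoding is an assumption)."""
    y = np.zeros_like(output_probs)
    y[np.argmax(output_probs)] = 1.0
    return y

# Output layer of the initial CNN for one unlabeled avatar (three hypothetical classes).
f_x = np.array([0.1, 0.7, 0.2])
print(pseudo_label(f_x))   # -> [0. 1. 0.]
```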
In addition, considering that application avatars are mostly self-portrait photos uploaded by users, photos that should belong to the same category may be assigned to different categories because of different brightness. Therefore, in the embodiment of the present invention, a brightness normalization layer is set in the CNN model; for example, as shown in fig. 2, a brightness normalization layer is added before the Softmax layer. Thus, before fine-tuning the initial feature weights with the first partial image sample, the method may further include:
performing brightness normalization processing on the first partial image sample;
the fine-tuning the initial feature weights by using the first partial image samples may include:
and fine-tuning the initial characteristic weight by using the first partial image sample after the brightness normalization processing.
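The specification gives the normalization layer's forward formula only as an equation image, so the sketch below uses one common stand-in (dividing by the mean luminance); it is an assumption, not the patented formula:

```python
import numpy as np

def luminance_normalize(image, eps=1e-6):
    """Hedged sketch: scale pixel intensities by the image's mean luminance so
    that photos of the same scene under different lighting look alike.  This
    specific formula is an assumption, not the formula claimed in the patent."""
    image = image.astype(np.float64)
    return image / (image.mean() + eps)

# Usage: normalize the labeled first partial image sample before fine-tuning.
avatar = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3))
normalized = luminance_normalize(avatar)
```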
The process of establishing the image classification model in the embodiment of the present invention is further described below with reference to fig. 3:
201. Establish an initial CNN model, then perform unsupervised layer-by-layer training of the initial CNN model with the RBM on the original total image sample to obtain initial feature weights for image classification.
202. Randomly extract a small set of image samples A and a large set of image samples B from the original total image sample.
203. For the small set of image samples A, multi-label annotation can be carried out with self-defined category labels; multi-label annotation means that one sample is given several category labels at the same time.
In this method, category labels useful for credit scoring are customized from empirical knowledge. When selecting the labels, a set of major-class labels {A, B, C, …} can be defined, where each major class contains several minor classes; for example, the minor-class label set of major class A is {a1, a2, a3, …}, and so on.
For example, major class A may be the type of image content and major class B may be image pixel quality; other major-class labels are possible but are not listed here. In the minor-class set of A, a1 may be animal, a2 may be person, and a3 may be cartoon; the minor classes of B may be blurred and sharp. The minor classes of the other major classes are not listed here.
Multi-label annotation of sample A is then carried out with the self-defined category labels. The label set of each sample must cover all major classes, but for each major class it has one and only one minor class. For example, the labels of an image sample may be A-a1, B-b1, …, i.e. an animal for the image content type and blurred image pixels, where "…" indicates that other label types are not listed.
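One possible way to represent such a major/minor label hierarchy and to check a multi-label annotation, with label names taken from the example above; the data layout itself is an assumption:

```python
# Hypothetical label hierarchy: each sample needs exactly one minor class per major class.
LABEL_SETS = {
    "A_content": ["a1_animal", "a2_person", "a3_cartoon"],
    "B_pixels":  ["b1_blurred", "b2_sharp"],
}

def is_valid_annotation(annotation):
    """A multi-label annotation must cover every major class exactly once."""
    return (set(annotation) == set(LABEL_SETS) and
            all(annotation[major] in minors for major, minors in LABEL_SETS.items()))

sample_a_labels = {"A_content": "a1_animal", "B_pixels": "b1_blurred"}
assert is_valid_annotation(sample_a_labels)
```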
204. On the basis of the initial feature weights, carry out fine-tuning based on supervised learning with the unlabeled image sample B and the labeled image sample A, where the unlabeled sample B uses the class with the highest probability at the output layer as its label; the trained model obtained in this way is the CNN model with the image classification function.
That is, the label of the unlabeled sample B is
y'_i = \begin{cases} 1, & i = \arg\max_j f'_j(x) \\ 0, & \text{otherwise} \end{cases}
where f'(x) is the output of the output layer and y'_i is the ith label of the unlabeled sample B. In the fine-tuning stage, the cost function of the CNN is as follows:
C = \sum_{m=1}^{n} \sum_{i=1}^{S} L\left(y_i^m, f_i^m\right) + \lambda \sum_{m=1}^{n'} \sum_{i=1}^{S} L\left(y_i'^m, f_i'^m\right)
where C is the overall cost function, n is the total number of labeled image samples A, S is the number of labels, λ is the balance factor of labeled and unlabeled samples, n' is the total number of unlabeled samples B, y_i^m is the ith label of the mth sample A, f_i^m is the ith output of the output layer for the mth sample A, y_i'^m is the ith label of the mth sample B, f_i'^m is the ith output of the output layer for the mth sample B, and L is the cross entropy. The CNN model is shown in fig. 2.
Considering that a large number of application avatar pictures are taken by the users themselves with a mobile phone or webcam, pictures of the same scene may be misclassified into different categories by the Softmax layer merely because of different illumination. To overcome this problem, as shown in fig. 2, a luminance normalization layer (the Luminance Norm layer) is added before the Softmax layer.
The forward formula of the Luminance Norm layer is given as an equation image in the original publication.
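A small NumPy sketch of the reconstructed cost above, assuming binary cross entropy per label position and the plain-sum form of the formula; the matrix layout, function names and value of λ are illustrative assumptions:

```python
import numpy as np

def cross_entropy(y, f, eps=1e-12):
    """L(y, f): binary cross entropy per label position."""
    f = np.clip(f, eps, 1.0 - eps)
    return -(y * np.log(f) + (1.0 - y) * np.log(1.0 - f))

def fine_tune_cost(Y_a, F_a, Y_b, F_b, lam=0.5):
    """Overall cost C: sum over labeled samples A plus lam times the sum over
    pseudo-labeled samples B, following the variable definitions above.
    Shapes: (n, S) for A and (n', S) for B; lam is the balance factor."""
    return cross_entropy(Y_a, F_a).sum() + lam * cross_entropy(Y_b, F_b).sum()

Y_a = np.array([[1.0, 0.0], [0.0, 1.0]])   # labels of two labeled samples, S = 2
F_a = np.array([[0.9, 0.2], [0.1, 0.8]])   # output-layer values for those samples
Y_b = np.array([[0.0, 1.0]])               # pseudo-label of one unlabeled sample
F_b = np.array([[0.3, 0.7]])
print(fine_tune_cost(Y_a, F_a, Y_b, F_b))
```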
for the trained CNN model with the image classification function, only the application head portrait of the user needs to be input when the business parameters are determined.
The process of determining the user traffic parameters based on the CNN model with image classification function can be understood with reference to the embodiment shown in fig. 4.
Referring to fig. 4, an embodiment of the method for determining a service parameter according to the embodiment of the present invention includes:
301. and acquiring an application head portrait of the user with the service parameters to be determined.
302. And classifying the application avatar by adopting a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of class labels of the application avatar.
The CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users.
303. And determining a numerical value corresponding to each category label in the plurality of category labels to obtain an avatar numerical value of the application avatar, wherein the avatar numerical value is used for participating in determining the service parameter.
Each category label may correspond to a predetermined value. For example, inside the image content label, animal is defined as 0, person as 1 and cartoon as 2; inside the pixel label, sharp is defined as 0 and blurred as 1. The values of other category labels may be defined similarly.
Thus, for an application avatar a series of values can be obtained, e.g. 3 for the image content label, 1 for the pixel label, and so on, so that the score values of the application avatar are determined in order: 3, 1, ….
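A small sketch of this label-to-value mapping, using the example values defined in the text; the data layout and function name are assumptions:

```python
# Mapping from category labels to the predetermined numeric values in the text
# (animal=0, person=1, cartoon=2; sharp=0, blurred=1).
LABEL_VALUES = {
    "content": {"animal": 0, "person": 1, "cartoon": 2},
    "pixels":  {"sharp": 0, "blurred": 1},
}

def avatar_values(predicted_labels):
    """Turn the CNN's multi-label prediction into the avatar value vector
    that later participates in determining the service parameter."""
    return [LABEL_VALUES[major][minor] for major, minor in predicted_labels.items()]

print(avatar_values({"content": "cartoon", "pixels": "blurred"}))  # -> [2, 1]
```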
In the embodiment of the invention, the application avatar of a user whose service parameters are to be determined is acquired; the application avatar is classified with a convolutional neural network (CNN) model having an image classification function to obtain a plurality of category labels of the application avatar; and a numerical value corresponding to each of the plurality of category labels is determined to obtain an avatar value of the application avatar, where the avatar value participates in determining the service parameters. The CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users. Compared with the prior art, in which the service parameters of most people cannot be obtained, the method for determining service parameters provided by the embodiment of the invention can determine the service parameters from the application avatar of a user, which improves the universality of service parameter determination and of service popularization. For example, the credibility of most people cannot be evaluated in the prior art, but the scheme provided by the present application can evaluate the credibility reflected by a user's application avatar to obtain the user's credit score, which broadens the coverage of user credit scoring.
Optionally, on the basis of the embodiment corresponding to fig. 1, in an optional embodiment of the method for determining a service parameter provided in the embodiment of the present invention, the classifying the application avatar by using a convolutional neural network CNN model with an image classification function to obtain a plurality of class labels of the application avatar may include:
calculating to obtain a plurality of characteristic values of the application head portrait by adopting a Convolutional Neural Network (CNN) model with an image classification function;
and determining the category label corresponding to each characteristic value according to the plurality of characteristic values and the pre-established corresponding relation between the characteristic weight and the category label to obtain the plurality of category labels of the application avatar.
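One simple, hypothetical reading of this correspondence step: keep a stored feature weight per category label and return the label whose weight is closest to the computed feature value. The stored weights and the nearest-match rule below are assumptions made purely for illustration:

```python
import numpy as np

# Hypothetical pre-established correspondence between feature weights and labels.
WEIGHT_TO_LABEL = {0.15: "animal", 0.45: "person", 0.80: "cartoon"}

def label_for_feature(feature_value):
    """Pick the category label whose stored feature weight is closest
    to the feature value computed by the CNN model."""
    weights = np.array(list(WEIGHT_TO_LABEL))
    nearest = weights[np.argmin(np.abs(weights - feature_value))]
    return WEIGHT_TO_LABEL[float(nearest)]

print(label_for_feature(0.78))   # -> "cartoon"
```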
Optionally, on the basis of the embodiment corresponding to fig. 1, in an optional embodiment of the method for determining a service parameter provided in the embodiment of the present invention, after determining a numerical value corresponding to each of the plurality of category labels and obtaining a score numerical value of the application avatar, the method may further include:
and outputting the scoring value of the application avatar to a credit scoring total model so that the scoring value of the application avatar is used for the credit scoring of the user.
In the embodiment of the invention, the score values of the application avatar, e.g. 3, 1, …, are input into the credit score total model, and the total credit score of the user is obtained.
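A hedged sketch of how the avatar values might enter the credit score total model; the patent does not disclose that model, so the aggregation rule and weight below are invented purely for illustration:

```python
def credit_score_total_model(avatar_values, other_factor_scores, avatar_weight=0.2):
    """Hypothetical total model: a weighted combination of the avatar value
    vector and the scores of the other credit factors."""
    avatar_part = sum(avatar_values)
    other_part = sum(other_factor_scores)
    return avatar_weight * avatar_part + (1.0 - avatar_weight) * other_part

print(credit_score_total_model([3, 1], [70.0, 85.0]))   # combined credit score
```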
Referring to fig. 5, an embodiment of the apparatus 40 for determining a service parameter provided in the embodiment of the present invention includes:
an obtaining unit 401, configured to obtain an application avatar of a user with a service parameter to be determined;
a classification unit 402, configured to classify the application avatar acquired by the acquisition unit 401 by using a convolutional neural network CNN model with an image classification function, so as to obtain a plurality of class labels of the application avatar;
the CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users;
a determining unit 403, configured to determine a value corresponding to each of the plurality of category labels obtained after the classification by the classifying unit 402, to obtain an avatar value of the application avatar, where the avatar value is used to participate in determining the service parameter.
In the embodiment of the invention, the obtaining unit 401 obtains an application avatar of a user whose service parameters are to be determined; the classification unit 402 classifies the application avatar acquired by the obtaining unit 401 with a convolutional neural network (CNN) model having an image classification function to obtain a plurality of category labels of the application avatar; and the determining unit 403 determines a value corresponding to each of the plurality of category labels obtained by the classification unit 402 to obtain an avatar value of the application avatar, where the avatar value participates in determining the service parameters. The CNN model with the image classification function is obtained by first performing unsupervised layer-by-layer training of an initial CNN model with a restricted Boltzmann machine (RBM) and a total image sample, and then, after category labels are attached to a small number of image samples in the total image sample, fine-tuning with those labeled image samples, where the total image sample consists of the application avatars of a large number of users. Compared with the prior art, in which the service parameters of most people cannot be obtained, the apparatus for determining service parameters provided by the embodiment of the invention can determine the service parameters from the application avatar of a user, which improves the universality of service parameter determination and of service popularization. For example, the credibility of most people cannot be evaluated in the prior art, but the scheme provided by the present application can evaluate the credibility reflected by a user's application avatar to obtain the user's credit score, which broadens the coverage of user credit scoring.
Optionally, on the basis of the embodiment corresponding to fig. 5, in an optional embodiment of the apparatus 40 for determining service parameters provided in the embodiment of the present invention,
the classifying unit 402 is configured to:
calculating to obtain a plurality of characteristic values of the application head portrait by adopting a Convolutional Neural Network (CNN) model with an image classification function;
and determining the category label corresponding to each characteristic value according to the plurality of characteristic values and the pre-established corresponding relation between the characteristic weight and the category label to obtain the plurality of category labels of the application avatar.
Optionally, on the basis of the embodiment corresponding to fig. 5, referring to fig. 6, in another optional embodiment of the apparatus 40 for determining service parameters provided in the embodiment of the present invention, the apparatus 40 further includes:
an output unit 404, configured to output the score value of the application avatar determined by the determination unit 403 to a credit score total model, so that the score value of the application avatar is used for the credit score of the user.
The embodiment or the optional embodiment of the apparatus 40 for determining a service parameter corresponding to fig. 5 or fig. 6 can be understood with reference to the related description of fig. 4, and will not be repeated herein.
Referring to fig. 7, an embodiment of the apparatus 50 for establishing an image classification model according to the present invention includes:
an obtaining unit 501, configured to obtain a total image sample and an initial convolutional neural network CNN model, where the total image sample is an application avatar of a large number of users;
a calculating unit 502, configured to perform layer-by-layer training on the initial CNN model based on unsupervised learning by using the restricted boltzmann machine RBM and the total image sample acquired by the acquiring unit 501, so as to obtain an initial feature weight for image classification;
an adjusting unit 503, configured to fine-tune the initial feature weight obtained by the calculating unit 502 by using a first partial image sample, to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample.
In the embodiment of the present invention, the obtaining unit 501 obtains a total image sample and an initial convolutional neural network (CNN) model, where the total image sample consists of the application avatars of a large number of users; the calculating unit 502 performs unsupervised layer-by-layer training of the initial CNN model with the restricted Boltzmann machine (RBM) and the total image sample acquired by the obtaining unit 501 to obtain initial feature weights for image classification; and the adjusting unit 503 fine-tunes the initial feature weights obtained by the calculating unit 502 with a first partial image sample to obtain a CNN model with an image classification function. The first partial image sample is a small part of the total image sample, and category labels are attached to it after it is extracted from the total image sample. The apparatus 50 for establishing an image classification model provided by the embodiment of the invention attaches category labels to only a small part of the image samples in the total image sample, which reduces the labeling workload and solves the problem that image classification cannot be achieved when no labels are available at all.
Alternatively, on the basis of the embodiment corresponding to fig. 7, in a first optional embodiment of the apparatus 50 for establishing an image classification model provided by the embodiment of the present invention,
the adjusting unit 503 is configured to:
performing supervised learning layer-by-layer training on the first partial image sample, and extracting a feature weight in the first partial image sample and a class label of the first partial image sample;
and establishing association between the initial characteristic weight and the class label of the first partial image sample to obtain a CNN model with an image classification function.
Alternatively, on the basis of the embodiment corresponding to fig. 7, in a second alternative embodiment of the apparatus 50 for establishing an image classification model provided by the embodiment of the present invention,
the adjusting unit 503 is further configured to:
performing supervised learning layer-by-layer training on a second partial image sample, and extracting a feature weight in the second partial image sample and a class label of the second partial image sample, wherein the class label of the second partial image sample is a class label with the highest probability of output from an output layer in the initial CNN model, and the second partial image sample is a residual image sample excluding the first partial image sample from the total image sample;
and establishing association between the initial characteristic weight and the class label of the second partial image sample.
Alternatively, on the basis of the above-mentioned embodiment corresponding to fig. 7 and the first or second one of the apparatuses 50, referring to fig. 8, in a third alternative embodiment of the apparatus 50 for establishing an image classification model according to the embodiment of the present invention, the apparatus 50 further includes a brightness processing unit 504,
the brightness processing unit 504 is configured to perform brightness normalization processing on the first partial image sample;
the adjusting unit 503 is configured to fine tune the initial feature weight by using the first partial image sample after the luminance normalization processing by the luminance processing unit 504.
The embodiment or any optional embodiment of the apparatus 50 for establishing an image classification model corresponding to fig. 7 or fig. 8 can be understood with reference to the related descriptions in fig. 1 to fig. 3, and will not be repeated herein.
Fig. 9 is a schematic structural diagram of an apparatus 40 for determining service parameters according to an embodiment of the present invention. The apparatus 40 includes a processor 410, a memory 450 and an input/output (I/O) device 430. The memory 450 may include read-only memory and random-access memory, and provides operational instructions and data to the processor 410. A portion of the memory 450 may also include non-volatile random-access memory (NVRAM).
In some embodiments, memory 450 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
In the embodiment of the present invention, by calling the operation instructions stored in the memory 450 (the operation instructions may be stored in an operating system), the processor 410 performs the following operations:
acquiring an application head portrait of a user with service parameters to be determined;
classifying the application avatar by adopting a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of class labels of the application avatar;
determining a numerical value corresponding to each category label in the plurality of category labels to obtain an avatar numerical value of the application avatar, wherein the avatar numerical value is used for participating in determining the service parameter;
the CNN model with the image classification function is obtained by performing unsupervised learning-based layer-by-layer training on an initial CNN model by using a Restricted Boltzmann Machine (RBM) and a total image sample, and then fine-tuning by using a small amount of image samples with attached category labels after a small amount of image samples in the total image samples are attached with the category labels, wherein the total image sample is an application head portrait of a large amount of users.
The device 40 for determining the service parameters provided by the embodiment of the invention can determine the service parameters according to the application avatar of the user, thereby improving the service parameter determination universality and the service popularization universality. For example: the credibility of most people cannot be evaluated in the prior art, but the scheme provided by the application can evaluate the credibility of the application head portrait of the user to obtain the credit score of the user, so that the popularity of the credit score of the user is improved.
The processor 410 controls the operation of the apparatus 40 for determining service parameters; the processor 410 may also be referred to as a central processing unit (CPU). The memory 450 may include read-only memory and random-access memory, and provides instructions and data to the processor 410. A portion of the memory 450 may also include non-volatile random-access memory (NVRAM). In a specific application, the components of the apparatus 40 for determining service parameters are coupled together by a bus system 520, where the bus system 520 may include a power bus, a control bus and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are all designated in the figure as the bus system 520.
The method disclosed in the above embodiments of the present invention may be applied to the processor 410 or implemented by the processor 410. The processor 410 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 410. The processor 410 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM or registers. The storage medium is located in the memory 450; the processor 410 reads the information in the memory 450 and performs the steps of the above method in combination with its hardware.
Optionally, the processor 410 is configured to:
calculating to obtain a plurality of characteristic values of the application head portrait by adopting a Convolutional Neural Network (CNN) model with an image classification function;
and determining the category label corresponding to each characteristic value according to the plurality of characteristic values and the pre-established corresponding relation between the characteristic weight and the category label to obtain the plurality of category labels of the application avatar.
Optionally, the input/output I/O device 430 is further configured to output the score value of the application avatar to a credit score total model, so that the score value of the application avatar is used for the credit score of the user.
Fig. 10 is a schematic structural diagram of an apparatus 50 for establishing an image classification model according to an embodiment of the present invention. The apparatus 50 includes a processor 510, a memory 550 and an input/output (I/O) device 530. The memory 550 may include read-only memory and random-access memory, and provides operating instructions and data to the processor 510. A portion of the memory 550 may also include non-volatile random-access memory (NVRAM).
In some embodiments, memory 550 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
In the embodiment of the present invention, by calling the operation instructions stored in the memory 550 (the operation instructions may be stored in an operating system), the processor 510 performs the following operations:
acquiring a total image sample and an initial Convolutional Neural Network (CNN) model, wherein the total image sample is an application head portrait of a large number of users;
performing layer-by-layer training on the initial CNN model based on unsupervised learning by adopting a Restricted Boltzmann Machine (RBM) and the total image sample to obtain an initial characteristic weight for image classification;
fine-tuning the initial characteristic weight by using a first partial image sample to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample.
The device 50 for establishing the image classification model provided by the embodiment of the invention only attaches the classification label to a small part of image samples in the total image samples, can reduce the complexity of work, and can solve the problem that the image classification can not be realized under the condition of no label at all.
The processor 510 controls the operation of the apparatus 50 for establishing an image classification model; the processor 510 may also be referred to as a central processing unit (CPU). The memory 550 may include read-only memory and random-access memory, and provides instructions and data to the processor 510. A portion of the memory 550 may also include non-volatile random-access memory (NVRAM). In a specific application, the components of the apparatus 50 for establishing an image classification model are coupled together by a bus system 520, where the bus system 520 may include a power bus, a control bus and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are all designated in the figure as the bus system 520.
The method disclosed in the above embodiments of the present invention may be applied to the processor 510 or implemented by the processor 510. The processor 510 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 510. The processor 510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM or registers. The storage medium is located in the memory 550; the processor 510 reads the information in the memory 550 and performs the steps of the above method in combination with its hardware.
Optionally, processor 510 is configured to:
performing supervised learning layer-by-layer training on the first partial image sample, and extracting a feature weight in the first partial image sample and a class label of the first partial image sample;
and establishing association between the initial characteristic weight and the class label of the first partial image sample to obtain a CNN model with an image classification function.
Optionally, processor 510 is configured to:
performing supervised learning layer-by-layer training on a second partial image sample, and extracting a feature weight in the second partial image sample and a class label of the second partial image sample, wherein the class label of the second partial image sample is a class label with the highest probability of output from an output layer in the initial CNN model, and the second partial image sample is a residual image sample excluding the first partial image sample from the total image sample;
and establishing association between the initial characteristic weight and the class label of the second partial image sample.
Optionally, the processor 510 is further configured to:
performing brightness normalization processing on the first partial image sample;
and fine-tuning the initial characteristic weight by using the first partial image sample after the brightness normalization processing.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
The method for determining service parameters, the method for establishing an image classification model and the corresponding devices provided by the embodiments of the invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

1. A method for determining a service parameter, comprising:
acquiring an application head portrait of a user with service parameters to be determined;
classifying the application avatar by adopting a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of class labels of the application avatar; the application head portrait is set by the user according to the interest and hobbies of the user and is used for reflecting the psychology of the user; the plurality of category labels of the application head portrait at least comprise at least one of animals and cartoons;
determining a numerical value corresponding to each category label in the plurality of category labels to obtain an avatar numerical value of the application avatar, wherein the avatar numerical value is used for participating in determining the service parameter, the service parameter is a user credit score, and the application avatar is one of a plurality of determination factors of the user credit score;
the CNN model with the image classification function is obtained by performing layer-by-layer training on an initial CNN model based on unsupervised learning by adopting a Restricted Boltzmann Machine (RBM) and a total image sample, attaching class labels to a small number of image samples in the total image sample, and performing fine adjustment on the small number of image samples with the attached class labels and the residual image sample with the class label with the highest probability after taking the class label with the highest probability output by an output layer of the initial CNN model as the class label of the residual image sample, wherein the total image sample is an application head portrait of a large number of users;
the step of using the class label with the highest probability output by the initial CNN model output layer as the class label of the remaining image sample is implemented by using the following formula:
y'_i =
\begin{cases}
1, & i = \arg\max_{i'} f'_{i'}(x) \\
0, & \text{otherwise}
\end{cases}

wherein f'(x) is the output of the output layer and y'_i is the ith label of the unlabeled sample B; in the fine-tuning stage, the cost function of the CNN is as follows:

C = \frac{1}{n} \sum_{m=1}^{n} \sum_{i=1}^{S} L\left(y^m_i, f^m_i\right) + \lambda \frac{1}{n'} \sum_{m=1}^{n'} \sum_{i=1}^{S} L\left(y'^m_i, f'^m_i\right)

where C is the overall cost function, n is the total number of labeled image samples A, S is the number of labels, λ is the balance factor between labeled and unlabeled samples, n' is the total number of unlabeled samples B, y^m_i is the ith label of the mth sample A, f^m_i is the ith output of the output layer for the mth sample A, y'^m_i is the ith label of the mth sample B, f'^m_i is the ith output of the output layer for the mth sample B, and L is the cross entropy;
the Softmax layer of the CNN model with the image classification function further comprises a brightness normalization layer, wherein the brightness normalization layer is used for avoiding misclassification by the Softmax layer caused by different illumination of images of the same scene, and the forward formula of the brightness normalization layer is as follows:
[forward formula of the brightness normalization layer, shown only as image FDA0002974476210000021 in the original]
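To make the two formulas in claim 1 concrete, the following Python sketch (an illustration under the definitions above, not the patented implementation; the function names are chosen for this example) assigns each unlabeled sample B the one-hot label of its highest-probability output and evaluates the combined cost C with the balance factor λ. Here f_a and f_b stand for the output-layer probabilities of the labeled samples A and the unlabeled samples B, and y_a for the one-hot labels attached to A.

import numpy as np

def pseudo_labels(f_b: np.ndarray) -> np.ndarray:
    # y'_i = 1 for the arg-max output of the output layer, 0 otherwise.
    y_b = np.zeros_like(f_b, dtype=np.float64)
    y_b[np.arange(len(f_b)), f_b.argmax(axis=1)] = 1.0
    return y_b

def cross_entropy(y: np.ndarray, f: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # L(y, f) summed over the S labels for each sample.
    return -(y * np.log(f + eps)).sum(axis=1)

def combined_cost(y_a: np.ndarray, f_a: np.ndarray, f_b: np.ndarray, lam: float) -> float:
    # C = (1/n) * sum over labeled samples A + lambda * (1/n') * sum over unlabeled samples B.
    y_b = pseudo_labels(f_b)
    return float(cross_entropy(y_a, f_a).mean() + lam * cross_entropy(y_b, f_b).mean())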
2. The method according to claim 1, wherein the classifying the application avatar by using a Convolutional Neural Network (CNN) model with an image classification function to obtain a plurality of category labels of the application avatar comprises:
calculating a plurality of feature values of the application avatar by using a Convolutional Neural Network (CNN) model with an image classification function;
and determining the category label corresponding to each feature value according to the plurality of feature values and the pre-established correspondence between feature weights and category labels, to obtain the plurality of category labels of the application avatar.
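A toy sketch of how the category labels of claims 1-2 could be converted into the avatar value: each label is mapped to a number through a pre-established table and the numbers are aggregated. The table contents and the aggregation by summation are illustrative assumptions; the claims only state that each category label corresponds to a numerical value that participates in determining the credit score.

# Hypothetical correspondence between category labels and numerical values.
LABEL_VALUES = {"animal": 3.0, "cartoon": 2.0, "scenery": 1.5, "portrait": 1.0}

def avatar_value(category_labels: list[str]) -> float:
    # Sum the values of all predicted category labels into one avatar value
    # (aggregation by summation is an assumption made for this example).
    return sum(LABEL_VALUES.get(label, 0.0) for label in category_labels)

print(avatar_value(["animal", "cartoon"]))  # 5.0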
3. A method of establishing an image classification model, comprising:
acquiring a total image sample and an initial Convolutional Neural Network (CNN) model, wherein the total image sample consists of application avatars of a large number of users;
performing unsupervised layer-by-layer training on the initial CNN model by using a Restricted Boltzmann Machine (RBM) and the total image sample to obtain an initial feature weight for image classification;
fine-tuning the initial feature weight by using a first partial image sample, and fine-tuning the initial feature weight by using a second partial image sample, to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample; the second partial image sample is the remaining image sample of the total image sample other than the first partial image sample, and the class label of the second partial image sample is the class label output with the highest probability by the output layer of the initial CNN model;
the CNN model with the image classification function is used for determining a plurality of category labels of an application avatar according to the application avatar of a user, the category labels are used for determining an avatar value of the application avatar, the avatar value is used in determining a service parameter, the service parameter is a user credit score, and the application avatar is one of a plurality of determination factors of the user credit score; the application avatar is set by the user according to the user's interests and hobbies and reflects the user's psychology; the plurality of category labels of the application avatar comprise at least one of animals and cartoons;
the CNN model with the image classification function is obtained by: performing unsupervised layer-by-layer training on an initial CNN model by using a Restricted Boltzmann Machine (RBM) and a total image sample; attaching class labels to a small number of image samples in the total image sample; taking the class label output with the highest probability by the output layer of the initial CNN model as the class label of the remaining image samples; and fine-tuning with both the small number of labeled image samples and the remaining image samples carrying the highest-probability class labels;
the step of using the class label with the highest probability output by the initial CNN model output layer as the class label of the remaining image sample is implemented by using the following formula:
y'_i =
\begin{cases}
1, & i = \arg\max_{i'} f'_{i'}(x) \\
0, & \text{otherwise}
\end{cases}

wherein f'(x) is the output of the output layer and y'_i is the ith label of the unlabeled sample B; in the fine-tuning stage, the cost function of the CNN is as follows:

C = \frac{1}{n} \sum_{m=1}^{n} \sum_{i=1}^{S} L\left(y^m_i, f^m_i\right) + \lambda \frac{1}{n'} \sum_{m=1}^{n'} \sum_{i=1}^{S} L\left(y'^m_i, f'^m_i\right)

where C is the overall cost function, n is the total number of labeled image samples A, S is the number of labels, λ is the balance factor between labeled and unlabeled samples, n' is the total number of unlabeled samples B, y^m_i is the ith label of the mth sample A, f^m_i is the ith output of the output layer for the mth sample A, y'^m_i is the ith label of the mth sample B, f'^m_i is the ith output of the output layer for the mth sample B, and L is the cross entropy;
the Softmax layer of the CNN model with the image classification function further comprises a brightness normalization layer, wherein the brightness normalization layer is used for avoiding misclassification by the Softmax layer caused by different illumination of images of the same scene, and the forward formula of the brightness normalization layer is as follows:
[forward formula of the brightness normalization layer, shown only as image FDA0002974476210000035 in the original]
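The unsupervised layer-by-layer RBM training of claim 3 can be sketched with scikit-learn's BernoulliRBM: each layer is trained on the hidden representation produced by the previous layer, and the learned weight matrices serve as initial feature weights. The two-layer depth, layer sizes, and hyperparameters are arbitrary choices for illustration, and flattened pixel inputs are assumed rather than convolutional feature maps.

import numpy as np
from sklearn.neural_network import BernoulliRBM

def pretrain_rbm_stack(images: np.ndarray, layer_sizes=(256, 128)):
    # images: shape (n_samples, n_pixels), scaled to [0, 1].
    # Returns one weight matrix per layer, usable as initial feature weights.
    weights, hidden = [], images
    for n_hidden in layer_sizes:
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=10)
        rbm.fit(hidden)                    # unsupervised training of this layer
        weights.append(rbm.components_.T)  # (n_visible, n_hidden) weight matrix
        hidden = rbm.transform(hidden)     # representation fed to the next layer
    return weights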
4. The method of claim 3, wherein the fine-tuning of the initial feature weight by using the first partial image sample comprises:
performing supervised layer-by-layer training on the first partial image sample, and extracting a feature weight in the first partial image sample and a class label of the first partial image sample;
and establishing an association between the initial feature weight and the class label of the first partial image sample to obtain a CNN model with an image classification function.
5. The method of claim 3, wherein the fine-tuning of the initial feature weight by using the second partial image sample comprises:
performing supervised layer-by-layer training on the second partial image sample, and extracting a feature weight in the second partial image sample and a class label of the second partial image sample, wherein the class label of the second partial image sample is the class label output with the highest probability by the output layer of the initial CNN model, and the second partial image sample is the remaining image sample of the total image sample other than the first partial image sample;
and establishing an association between the initial feature weight and the class label of the second partial image sample.
6. The method according to any one of claims 3-5, wherein before the fine-tuning of the initial feature weight by using the first partial image sample, the method further comprises:
performing brightness normalization processing on the first partial image sample;
the fine-tuning of the initial feature weight by using the first partial image sample comprises:
and fine-tuning the initial feature weight by using the first partial image sample after the brightness normalization processing.
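Claims 4-6 can be read as the fine-tuning outline sketched below. It reuses the normalize_batch and pseudo_labels helpers from the earlier sketches and assumes a model object exposing hypothetical predict and train_step methods; the labeled/unlabeled split and the single combined training pass are simplifications, not the patented procedure.

import numpy as np

def fine_tune(model, total_images: np.ndarray, labels_a: np.ndarray, lam: float = 0.5):
    # total_images: the total image sample (application avatars of many users).
    # labels_a: one-hot class labels attached to a small first partial image sample.
    # normalize_batch and pseudo_labels are the helpers sketched earlier.
    n_labeled = len(labels_a)
    sample_a = normalize_batch(total_images[:n_labeled])  # claim 6: brightness-normalize A first
    sample_b = total_images[n_labeled:]                   # claim 5: the remaining image samples

    f_b = model.predict(sample_b)      # output-layer probabilities for the unlabeled samples
    labels_b = pseudo_labels(f_b)      # highest-probability label used as the label of B

    # Supervised layer-by-layer fine-tuning on A and B with the combined cost C.
    model.train_step(np.concatenate([sample_a, sample_b]),
                     np.concatenate([labels_a, labels_b]),
                     balance=lam)
    return model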
7. An apparatus for determining a service parameter, comprising:
the acquiring unit is used for acquiring an application avatar of a user whose service parameter is to be determined;
the classification unit is used for classifying, by using a Convolutional Neural Network (CNN) model with an image classification function, the application avatar acquired by the acquiring unit to obtain a plurality of category labels of the application avatar; the application avatar is set by the user according to the user's interests and hobbies and reflects the user's psychology; the plurality of category labels of the application avatar comprise at least one of animals and cartoons;
the determining unit is used for determining a numerical value corresponding to each category label in the plurality of category labels obtained by the classification unit to obtain an avatar value of the application avatar, wherein the avatar value is used in determining the service parameter, the service parameter is a user credit score, and the application avatar is one of a plurality of determination factors of the user credit score;
the CNN model with the image classification function is obtained by: performing unsupervised layer-by-layer training on an initial CNN model by using a Restricted Boltzmann Machine (RBM) and a total image sample; attaching class labels to a small number of image samples in the total image sample; taking the class label output with the highest probability by the output layer of the initial CNN model as the class label of the remaining image samples; and fine-tuning with both the small number of labeled image samples and the remaining image samples carrying the highest-probability class labels, wherein the total image sample consists of application avatars of a large number of users;
the step of using the class label with the highest probability output by the initial CNN model output layer as the class label of the remaining image sample is implemented by using the following formula:
y'_i =
\begin{cases}
1, & i = \arg\max_{i'} f'_{i'}(x) \\
0, & \text{otherwise}
\end{cases}

wherein f'(x) is the output of the output layer and y'_i is the ith label of the unlabeled sample B; in the fine-tuning stage, the cost function of the CNN is as follows:

C = \frac{1}{n} \sum_{m=1}^{n} \sum_{i=1}^{S} L\left(y^m_i, f^m_i\right) + \lambda \frac{1}{n'} \sum_{m=1}^{n'} \sum_{i=1}^{S} L\left(y'^m_i, f'^m_i\right)

where C is the overall cost function, n is the total number of labeled image samples A, S is the number of labels, λ is the balance factor between labeled and unlabeled samples, n' is the total number of unlabeled samples B, y^m_i is the ith label of the mth sample A, f^m_i is the ith output of the output layer for the mth sample A, y'^m_i is the ith label of the mth sample B, f'^m_i is the ith output of the output layer for the mth sample B, and L is the cross entropy;
the Softmax layer of the CNN model with the image classification function further comprises a brightness normalization layer, wherein the brightness normalization layer is used for avoiding misclassification by the Softmax layer caused by different illumination of images of the same scene, and the forward formula of the brightness normalization layer is as follows:
[forward formula of the brightness normalization layer, shown only as image FDA0002974476210000055 in the original]
8. The apparatus of claim 7,
the classification unit is configured to:
calculating a plurality of feature values of the application avatar by using a Convolutional Neural Network (CNN) model with an image classification function;
and determining the category label corresponding to each feature value according to the plurality of feature values and the pre-established correspondence between feature weights and category labels, to obtain the plurality of category labels of the application avatar.
9. An apparatus for establishing an image classification model, comprising:
the acquiring unit is used for acquiring a total image sample and an initial Convolutional Neural Network (CNN) model, wherein the total image sample consists of application avatars of a large number of users;
the computing unit is used for performing unsupervised layer-by-layer training on the initial CNN model by using a Restricted Boltzmann Machine (RBM) and the total image sample acquired by the acquiring unit to obtain an initial feature weight for image classification;
the adjusting unit is used for fine-tuning the initial feature weight obtained by the computing unit by using a first partial image sample, and fine-tuning the initial feature weight obtained by the computing unit by using a second partial image sample, to obtain a CNN model with an image classification function; the first partial image sample is a small part of the total image sample, and a category label is attached to the first partial image sample after the first partial image sample is extracted from the total image sample; the second partial image sample is the remaining image sample of the total image sample other than the first partial image sample, and the class label of the second partial image sample is the class label output with the highest probability by the output layer of the initial CNN model;
the CNN model with the image classification function is used for determining a plurality of category labels of an application avatar according to the application avatar of a user, the category labels are used for determining an avatar value of the application avatar, the avatar value is used in determining a service parameter, the service parameter is a user credit score, and the application avatar is one of a plurality of determination factors of the user credit score; the application avatar is set by the user according to the user's interests and hobbies and reflects the user's psychology; the plurality of category labels of the application avatar comprise at least one of animals and cartoons;
the CNN model with the image classification function is obtained by: performing unsupervised layer-by-layer training on an initial CNN model by using a Restricted Boltzmann Machine (RBM) and a total image sample; attaching class labels to a small number of image samples in the total image sample; taking the class label output with the highest probability by the output layer of the initial CNN model as the class label of the remaining image samples; and fine-tuning with both the small number of labeled image samples and the remaining image samples carrying the highest-probability class labels;
the step of using the class label with the highest probability output by the initial CNN model output layer as the class label of the remaining image sample is implemented by using the following formula:
y'_i =
\begin{cases}
1, & i = \arg\max_{i'} f'_{i'}(x) \\
0, & \text{otherwise}
\end{cases}

wherein f'(x) is the output of the output layer and y'_i is the ith label of the unlabeled sample B; in the fine-tuning stage, the cost function of the CNN is as follows:

C = \frac{1}{n} \sum_{m=1}^{n} \sum_{i=1}^{S} L\left(y^m_i, f^m_i\right) + \lambda \frac{1}{n'} \sum_{m=1}^{n'} \sum_{i=1}^{S} L\left(y'^m_i, f'^m_i\right)

where C is the overall cost function, n is the total number of labeled image samples A, S is the number of labels, λ is the balance factor between labeled and unlabeled samples, n' is the total number of unlabeled samples B, y^m_i is the ith label of the mth sample A, f^m_i is the ith output of the output layer for the mth sample A, y'^m_i is the ith label of the mth sample B, f'^m_i is the ith output of the output layer for the mth sample B, and L is the cross entropy;
the Softmax layer of the CNN model with the image classification function further comprises a brightness normalization layer, wherein the brightness normalization layer is used for avoiding misclassification by the Softmax layer caused by different illumination of images of the same scene, and the forward formula of the brightness normalization layer is as follows:
[forward formula of the brightness normalization layer, shown only as image FDA0002974476210000075 in the original]
10. The apparatus of claim 9,
the fine-tuning, by the adjusting unit, of the initial feature weight obtained by the computing unit by using the first partial image sample comprises:
performing supervised layer-by-layer training on the first partial image sample, and extracting a feature weight in the first partial image sample and a class label of the first partial image sample;
and establishing an association between the initial feature weight and the class label of the first partial image sample to obtain a CNN model with an image classification function.
11. The apparatus of claim 9,
the fine-tuning, by the adjusting unit, of the initial feature weight obtained by the computing unit by using the second partial image sample comprises:
performing supervised layer-by-layer training on the second partial image sample, and extracting a feature weight in the second partial image sample and a class label of the second partial image sample, wherein the class label of the second partial image sample is the class label output with the highest probability by the output layer of the initial CNN model, and the second partial image sample is the remaining image sample of the total image sample other than the first partial image sample;
and establishing an association between the initial feature weight and the class label of the second partial image sample.
12. The apparatus according to any one of claims 9-11, further comprising a brightness processing unit,
the brightness processing unit is used for carrying out brightness normalization processing on the first partial image sample;
and the adjusting unit is used for fine-tuning the initial feature weight by using the first partial image sample after the brightness normalization processing by the brightness processing unit.
13. An apparatus for determining a service parameter, comprising a processor and a memory;
the memory is used for storing software modules;
the processor is used for calling the software module stored in the memory to execute the method for determining the service parameter according to any one of claims 1-2.
14. An apparatus for establishing an image classification model, comprising a processor and a memory;
the memory is used for storing software modules;
the processor is used for calling the software module stored in the memory to execute the method for establishing the image classification model according to any one of claims 3-6.
15. A computer-readable storage medium, characterized in that a program is stored in the computer-readable storage medium, and the program, when invoked by a processor, executes the method for determining a service parameter according to any one of claims 1-2 or the method for establishing an image classification model according to any one of claims 3-6.
CN201510922446.4A 2015-12-11 2015-12-11 Method and device for determining service parameters Active CN106874922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510922446.4A CN106874922B (en) 2015-12-11 2015-12-11 Method and device for determining service parameters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510922446.4A CN106874922B (en) 2015-12-11 2015-12-11 Method and device for determining service parameters

Publications (2)

Publication Number Publication Date
CN106874922A CN106874922A (en) 2017-06-20
CN106874922B true CN106874922B (en) 2021-04-27

Family

ID=59178715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510922446.4A Active CN106874922B (en) 2015-12-11 2015-12-11 Method and device for determining service parameters

Country Status (1)

Country Link
CN (1) CN106874922B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171254A (en) * 2017-11-22 2018-06-15 北京达佳互联信息技术有限公司 Image tag determines method, apparatus and terminal
CN108647714A (en) * 2018-05-09 2018-10-12 平安普惠企业管理有限公司 Acquisition methods, terminal device and the medium of negative label weight
CN111127060B (en) * 2018-10-31 2023-08-08 百度在线网络技术(北京)有限公司 Method and device for determining popularization users of service
CN110232405A (en) * 2019-05-24 2019-09-13 东方银谷(北京)科技发展有限公司 Method and device for personal credit file
CN113177543B (en) * 2021-05-28 2024-01-23 平安国际智慧城市科技股份有限公司 Certificate identification method, device, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9065979B2 (en) * 2005-07-01 2015-06-23 The Invention Science Fund I, Llc Promotional placement in media works
CN103678323A (en) * 2012-09-03 2014-03-26 上海唐里信息技术有限公司 Friend recommendation method and system in SNS network
CN103634680B (en) * 2013-11-27 2017-09-15 青岛海信电器股份有限公司 The control method for playing back and device of a kind of intelligent television
CN104270525B (en) * 2014-09-28 2017-12-22 酷派软件技术(深圳)有限公司 Information processing method and information processor
CN104699770B (en) * 2015-03-02 2018-09-18 惠州Tcl移动通信有限公司 A kind of analysis based on mobile terminal obtains the method and system of user's character trait

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research Progress of Semi-Supervised Classification; Tan Kun; in: Research on Semi-Supervised Classification of Hyperspectral Remote Sensing Images; China University of Mining and Technology Press; 2014-01-31; pp. 8-10 *

Also Published As

Publication number Publication date
CN106874922A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
WO2020221278A1 (en) Video classification method and model training method and apparatus thereof, and electronic device
CN108710847B (en) Scene recognition method and device and electronic equipment
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
CN106874922B (en) Method and device for determining service parameters
CN109344884B (en) Media information classification method, method and device for training picture classification model
Kao et al. Visual aesthetic quality assessment with a regression model
KR102385463B1 (en) Facial feature extraction model training method, facial feature extraction method, apparatus, device and storage medium
CN111738357B (en) Junk picture identification method, device and equipment
US9906704B2 (en) Managing crowd sourced photography in a wireless network
CN111738243B (en) Method, device and equipment for selecting face image and storage medium
CN110263215B (en) Video emotion positioning method and system
CN107368827B (en) Character recognition method and device, user equipment and server
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
JP2018531543A6 (en) Managing cloud source photography in a wireless network
US11367196B2 (en) Image processing method, apparatus, and storage medium
CN113496208B (en) Video scene classification method and device, storage medium and terminal
JP5214679B2 (en) Learning apparatus, method and program
CN112084812A (en) Image processing method, image processing device, computer equipment and storage medium
CN113902944A (en) Model training and scene recognition method, device, equipment and medium
CN112995690A (en) Live content item identification method and device, electronic equipment and readable storage medium
WO2024041108A1 (en) Image correction model training method and apparatus, image correction method and apparatus, and computer device
US20230066331A1 (en) Method and system for automatically capturing and processing an image of a user
CN108596068B (en) Method and device for recognizing actions
Gomez-Nieto et al. Quality aware features for performance prediction and time reduction in video object tracking
CN113221690A (en) Video classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant