CN113657460A - Boosting-based attribute identification method and device - Google Patents

Boosting-based attribute identification method and device

Info

Publication number
CN113657460A
CN113657460A
Authority
CN
China
Prior art keywords
classifier
weight
strong
training sample
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110857789.2A
Other languages
Chinese (zh)
Inventor
孙腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yingpu Technology Co Ltd
Original Assignee
Shanghai Yingpu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yingpu Technology Co Ltd filed Critical Shanghai Yingpu Technology Co Ltd
Priority to CN202110857789.2A priority Critical patent/CN113657460A/en
Publication of CN113657460A publication Critical patent/CN113657460A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The application discloses a Boosting-based attribute identification method and device, and relates to attribute identification technology in IP monitoring networks. The method comprises the following steps: extracting a feature map of a training sample image in a training sample set by using hypercolumn; inputting the feature map into a Boosting-based attribute identification framework (BARF), which classifies the training sample image data according to a preset classification standard to obtain the ith weak classifier; increasing the sample weights of the samples wrongly classified by the ith weak classifier, and then training the (i+1)th weak classifier; constructing a strong classifier as the sum of all the weak classifiers weighted by their respective weights; and calculating the error rate of the strong classifier, judging from the error rate and the value of N whether to return to the training step, and, when the judgment result is negative, taking the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.

Description

Boosting-based attribute identification method and device
Technical Field
The present application relates to IP monitoring networks, and more particularly, to attribute identification techniques in IP monitoring networks.
Background
In recent years, IP monitoring networks have been expected to enable a variety of practical applications. To support the concept of the IP monitoring network, automatic attribute recognition systems have become promising intelligent management systems, and the intelligent management of IP monitoring networks is on a growing trend. Many recent studies have attempted to demonstrate, and have emphasized, the importance of automatic attribute identification in IP monitoring networks. Early studies trained a different classifier for each of several attributes (e.g., gender, expression, and age); over the past few years, deep Convolutional Neural Networks (CNNs) have come to be regarded as a general solution to most attribute recognition problems and have been used to predict facial attributes. To automatically identify the attributes of pedestrians, deep CNNs are implemented, namely AlexNet, GoogLeNet and ResNet. Each of the three deep learning models can predict attributes (e.g., gender and clothing).
For object detection, most methods are either machine learning based, such as those using Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG) features, or deep learning based, typically built on deep CNNs, such as the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO); YOLOv3, which is commonly used today, has the advantage of enabling end-to-end object detection without a specially defined function. Many methods therefore focus on using CNNs to process facial images, but few studies address the monitoring task and consider the attributes of the person as a whole, e.g., gender and clothing. The prior art lacks a mature ensemble learning method for the AlexNet, GoogLeNet and ResNet algorithms in CNNs, and the YOLO algorithm suffers from a large number of label rewriting phenomena, owing to its special grid prediction mode, when people are dense and differ little in size.
Disclosure of Invention
It is an object of the present application to overcome the above problems, or at least to partially solve or mitigate them.
According to an aspect of the present application, there is provided a Boosting-based attribute identification method, the method including:
a feature extraction step of extracting a feature map of a training sample image in a training sample set by using hypercolumn;
a training step of inputting the feature map of the training sample image into a Boosting-based attribute identification framework (BARF), the BARF classifying the training sample image data according to a preset classification standard to obtain the ith weak classifier, the initial value of i being 1;
a sample weight updating step of increasing, for the samples wrongly classified by the ith weak classifier, their sample weights so as to update the training sample set, executing the strong classifier construction step when the value of i reaches a preset value N, and otherwise letting i = i+1 and returning to the feature extraction step;
a strong classifier construction step, namely:
Strong_Classifier=∑(weight[i])*Classifier[i],
wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier;
and a calculating step of calculating the error rate of the strong classifier, judging from the error rate and the value of N whether to return to the training step, and, when the judgment result is negative, taking the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.
Optionally, the extracting of the feature map of the image to be classified by using hypercolumn includes:
extracting the feature map of the image to be classified according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
Optionally, the classifying, by the BARF, of the training sample image data according to a preset classification standard includes:
inputting the feature maps of the training sample images into AlexNet, GoogLeNet and ResNet respectively;
classifying, by AlexNet, GoogLeNet and ResNet, the input data according to the preset classification standard; and
counting the output results of AlexNet, GoogLeNet and ResNet to obtain a classification result.
Optionally, in the strong classifier constructing step, the weight of the ith weak classifier in the strong classifier is obtained by:
counting the number of correct and incorrect classifications of each weak classifier on each feature, and normalizing the statistics to obtain feature weights; and
training the ith weak classifier with the training sample images and the feature weights to obtain the classification error rate of the ith weak classifier, from which the weight of the ith weak classifier is calculated.
According to another aspect of the present application, there is provided a Boosting-based attribute identification apparatus, the apparatus including:
a feature extraction module configured to extract a feature map of the training sample images in the training sample set by using hypercolumn;
a training module configured to input the feature map of the training sample image into a BARF, the BARF classifying the training sample image data according to a preset classification standard to obtain the ith weak classifier, the initial value of i being 1;
a sample weight updating module configured to increase the sample weights of the samples misclassified by the ith weak classifier so as to update the training sample set, execute the strong classifier construction when the value of i reaches a preset value N, and otherwise let i = i+1 and return to feature extraction;
a strong classifier construction module configured to let Strong_Classifier = ∑ (weight[i]) * Classifier[i], wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier; and
a calculation module configured to calculate the error rate of the strong classifier, judge from the error rate and the value of N whether to return to training, and, when the judgment result is negative, take the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.
Optionally, the extracting of the feature map of the image to be classified by using hypercolumn includes:
extracting the feature map of the image to be classified according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
Optionally, the classifying, by the BARF, of the training sample image data according to a preset classification standard includes:
inputting the feature maps of the training sample images into AlexNet, GoogLeNet and ResNet respectively;
classifying, by AlexNet, GoogLeNet and ResNet, the input data according to the preset classification standard; and
counting the output results of AlexNet, GoogLeNet and ResNet to obtain a classification result.
Optionally, in the strong classifier construction module, the weight of the ith weak classifier in the strong classifier is obtained by:
counting the number of correct and incorrect classifications of each weak classifier on each feature, and normalizing the statistics to obtain the feature weights.
The Boosting-based attribute identification method and device mainly improve the classification strategy: objects (namely pedestrians) are classified according to attributes (such as gender and clothing) using a Boosting-based attribute identification framework (BARF), and an improved Poly-YOLO algorithm is used to solve the label rewriting problem. Because the three deep CNNs (namely AlexNet, GoogLeNet and ResNet) are combined by ensemble learning, the classification precision of the final strong classifier is higher than that of any of the three deep CNNs used alone; and, because the improved Poly-YOLO algorithm is adopted, the label rewriting problem of YOLOv3 can be solved.
Further, Poly-YOLO has 40% fewer parameters than YOLOv3 and therefore improves computational efficiency.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart diagram of a Boosting-based attribute identification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a BARF according to one embodiment of the present application;
FIG. 3 is a schematic structural diagram of a Boosting-based attribute identification apparatus according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a computing device according to one embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Fig. 1 is a schematic flow chart of a Boosting-based attribute identification method according to an embodiment of the present application. The Boosting-based attribute identification method may generally include the following steps S1 through S5.
Step S1, feature extraction: the training sample set is sampled with replacement, and the feature map of each sampled image is extracted by using hypercolumn; the specific method for extracting the feature map is as follows:
according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
the feature map of the image to be classified is extracted, wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
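The computation can be illustrated with a minimal NumPy sketch, given below, in which the conversion m(·) is approximated by a fixed random channel projection (in the method it would be a learned transformation, e.g. a 1×1 convolution) and u(·, ω) is nearest-neighbour upsampling; both choices are assumptions made only for illustration.

```python
import numpy as np

def m(feature_map, delta=64):
    # Stand-in for the channel conversion m(.): project c channels to delta
    # with a fixed random matrix (an assumption; in the method this would be
    # a learned map such as a 1x1 convolution).
    a, b, c = feature_map.shape
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((c, delta)) / np.sqrt(c)
    return feature_map @ proj                      # shape (a, b, delta)

def u(feature_map, omega):
    # Nearest-neighbour upsampling of an (a, b, delta) map by factor omega.
    return feature_map.repeat(omega, axis=0).repeat(omega, axis=1)

def hypercolumn(scales, delta=64):
    # O = sum_{i=1..n} u(m(O_i), 2^(i-1)); O_1 is the finest scale (factor 1).
    out = None
    for i, o_i in enumerate(scales, start=1):
        level = u(m(o_i, delta), 2 ** (i - 1))
        out = level if out is None else out + level
    return out

# Toy three-scale pyramid whose spatial size halves at each level.
pyramid = [np.ones((32, 32, 256)), np.ones((16, 16, 512)), np.ones((8, 8, 1024))]
print(hypercolumn(pyramid).shape)                  # -> (32, 32, 64)
```

All scales end up with the same spatial size and the same channel count δ, so the addition in the last step is well defined.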
Step S2, training
The feature map of a training sample image in the training sample set is input into the BARF, and the BARF classifies the training sample image data according to a preset classification standard to obtain the ith weak classifier; the initial value of i is 1.
the classification principle of the BARF is shown in fig. 2:
the sampling result is divided into three parts, which are input into AlexNet, GoogLeNet and ResNet respectively;
AlexNet, GoogLeNet and ResNet classify the input samples according to preset classification standards (such as gender, color, clothing and the like);
the output results of AlexNet, GoogLeNet and ResNet are tallied to obtain the classification result, as sketched below.
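The three-network vote just described can be sketched as follows; the predict interface of the network wrappers is hypothetical and stands in for whatever inference call the three CNNs expose.

```python
from collections import Counter

def barf_classify(feature_map, networks):
    # networks: hypothetical wrappers around AlexNet, GoogLeNet and ResNet,
    # each mapping a feature map to one attribute label (e.g. "male").
    votes = [net.predict(feature_map) for net in networks]
    # The label returned by the most networks is the BARF output.
    label, _ = Counter(votes).most_common(1)[0]
    return label
```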
Step S3, sample weight update
For the samples wrongly classified by the ith weak classifier, their sample weights are increased (the initial weights of all samples may be set equal) so as to update the training sample set. When the value of i reaches the preset value N, step S4 is executed; otherwise, i = i+1 and the flow returns to step S1, resampling and training the next weak classifier, until all N weak classifiers have been trained.
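The description fixes only that misclassified samples gain weight; the sketch below fills in that rule with the standard AdaBoost exponential re-weighting as an assumed concrete instance.

```python
import numpy as np

def update_sample_weights(weights, predictions, labels, alpha):
    # Raise the weight of samples the i-th weak classifier got wrong
    # (factor e^alpha), lower the rest, then renormalise to sum to 1.
    # The exponential rule is an AdaBoost-style assumption, not a formula
    # stated in this description.
    wrong = predictions != labels
    weights = weights * np.exp(np.where(wrong, alpha, -alpha))
    return weights / weights.sum()

# Example: the second of four equally weighted samples was misclassified.
w = np.full(4, 0.25)
print(update_sample_weights(w, np.array([1, -1, 1, 1]), np.array([1, 1, 1, 1]), alpha=0.5))
```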
Step S4, constructing strong classifier
Let Strong_Classifier = ∑ (weight[i]) * Classifier[i];
wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier.
weight[i] is calculated as follows: the number of correct and incorrect classifications of each weak classifier on each feature is counted, and the statistics are normalized to obtain the feature weights.
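The text specifies the counting and normalization but not how the feature weights collapse into the scalar weight[i]; the sketch below assumes they are folded into a single error rate that is then mapped through the usual AdaBoost formula 0.5·ln((1−e)/e). Both of those final steps are assumptions.

```python
import numpy as np

def classifier_weight(correct_counts, wrong_counts):
    # Per-feature tallies of correct / wrong decisions for one weak classifier.
    correct = np.asarray(correct_counts, dtype=float)
    wrong = np.asarray(wrong_counts, dtype=float)
    per_feature_error = wrong / (correct + wrong)
    # Normalised per-feature accuracies play the role of the "feature weights".
    acc = 1.0 - per_feature_error
    feature_weights = acc / acc.sum()
    # Assumed final step: collapse to a single error rate, then apply the
    # AdaBoost mapping 0.5 * ln((1 - e) / e).
    error = float(np.clip(np.sum(feature_weights * per_feature_error), 1e-12, 1 - 1e-12))
    return 0.5 * np.log((1.0 - error) / error)

# Example: a classifier that is mostly right receives a positive weight.
print(classifier_weight([90, 80, 70], [10, 20, 30]))
```

With this mapping, a weak classifier with a smaller classification error receives a larger weight, consistent with the description below.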
Step S5, calculating
Training samples are drawn again from the training sample set, their feature maps are extracted and input into the strong classifier, and the error rate of the strong classifier is determined from its output. From the error rate and the value of N it is judged whether the weak classifiers need to be retrained; when the judgment result is negative, the strong classifier is taken as the final attribute identification classifier for attribute classification of images to be classified.
In summary, in the Boosting-based attribute identification method of this embodiment, in the training phase each sample in the training sample set is given the same weight (the general case), the feature map of each training sample image is sent to the BARF shown in fig. 2, and the training sample images are classified according to certain classification criteria (for example, attributes of people such as the gender, color, or clothing of the target), yielding the 1st weak classifier Classifier[1]. The training sample set is then updated as follows: the samples misclassified in this round are counted and their weights are raised. The 2nd weak classifier Classifier[2] is then trained on a new resampling; in this round the sample weights influence the training error, so that Classifier[2] pays sufficient attention to the previously misclassified samples. The weights of the samples misclassified by Classifier[2] are increased in turn to update the training sample set again, and N weak classifiers Classifier[1] to Classifier[N] are trained in the same way. For each weak classifier Classifier[i], a Classifier[i+1] with better classification capability on its erroneous samples is thus obtained. Then the weights weight[1] to weight[N] of the N weak classifiers in the final strong classifier are determined (the weight of each weak classifier may also be determined right after its training is completed). From the weak classifiers and their weights the strong classifier Strong_Classifier = ∑ (weight[i]) * Classifier[i] (a linear combination of the weak classifiers) is obtained, its error rate is calculated, and, also taking the number of program iterations into account, it is judged whether to continue training weak classifiers. In the aggregation stage, these predictions are combined mathematically. Finally, when pedestrian attributes are identified, the feature map of an image collected by the camera is extracted and input into the strong classifier, which outputs the classification result. A self-contained sketch of the whole loop follows.
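To make the loop concrete, the following sketch deliberately replaces each BARF round with a weighted decision stump so the example runs stand-alone; labels are assumed to be encoded as ±1, and the weight formulas are the AdaBoost choices assumed above. None of these substitutions are part of the original method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stump(x, y, w):
    # Weighted decision stump: the best single-feature threshold test.
    # Stands in for one BARF round (three CNNs + vote), purely for illustration.
    best = (np.inf, None)
    for j in range(x.shape[1]):
        thr = np.median(x[:, j])
        for sign in (1.0, -1.0):
            pred = sign * np.sign(x[:, j] - thr)
            err = float(np.sum(w * (pred != y)))
            if err < best[0]:
                best = (err, (j, thr, sign))
    return best[1]

def stump_predict(stump, x):
    j, thr, sign = stump
    return sign * np.sign(x[:, j] - thr)

def train_strong(x, y, n_weak=10, max_error=0.1, max_rounds=5):
    n = len(y)                                   # labels y are +/-1
    for _ in range(max_rounds):
        w = np.full(n, 1.0 / n)                  # equal initial sample weights
        weak, alphas = [], []
        for _ in range(n_weak):
            idx = rng.choice(n, size=n, replace=True, p=w)   # step S1: resample
            stump = train_stump(x[idx], y[idx], np.full(n, 1.0 / n))
            pred = stump_predict(stump, x)       # step S2 stand-in
            err = min(max(float(np.sum(w * (pred != y))), 1e-12), 1 - 1e-12)
            alpha = 0.5 * np.log((1.0 - err) / err)
            w *= np.exp(np.where(pred != y, alpha, -alpha))  # step S3
            w /= w.sum()
            weak.append(stump)
            alphas.append(alpha)                 # weight[i] of step S4

        def strong(z):                           # linear combination of weak classifiers
            return np.sign(sum(a * stump_predict(s, z) for a, s in zip(alphas, weak)))

        if np.mean(strong(x) != y) <= max_error: # step S5: error check
            return strong
    return strong                                # give up after max_rounds
```

Resampling with replacement according to the current weights (step S1) lets an unweighted learner see the hard samples more often, which is equivalent in expectation to weighted training.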
In order to reduce the label rewriting rate of the Poly-YOLO object detection method, the size of the feature map is increased, and this embodiment improves the feature map capture method. This embodiment uses hypercolumn to compose multiple scale segments into a single-scale output. Suppose O is a feature map, the function u(·, ω) denotes upsampling the input image by a factor ω, and the function m(·) denotes a transformation that converts the form a × b × c to the form a × b × δ, where δ is a constant whose value can be determined according to the dimension. Further, let g(O_1, …, O_n) be an n-ary composition/aggregation function. The output feature map using hypercolumn can then be expressed as O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1))). Taking addition as the aggregation function, the formula can be rewritten as
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)).
Tests show that the size of the resulting output feature map meets the requirement and that the label rewriting rate is reduced.
Fig. 3 is a schematic structural diagram of a Boosting-based attribute identification apparatus according to an embodiment of the present application. The Boosting based attribute identification apparatus may generally include:
the feature extraction module 1 is configured to extract a feature map of a training sample image in a training sample set by using hypercolumn;
the training module 2 is configured to input the feature map of the training sample image into a BARF, and the BARF classifies the training sample image data according to a preset classification standard to obtain an ith weak classifier; the initial value of i is 1;
a sample weight updating module 3 configured to increase the sample weights of the samples misclassified by the ith weak classifier so as to update the training sample set, execute the strong classifier construction when the value of i reaches a preset value N, and otherwise let i = i+1 and return to feature extraction;
a strong classifier construction module 4 configured to let Strong_Classifier = ∑ (weight[i]) * Classifier[i], wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier; and
a calculation module 5 configured to calculate the error rate of the strong classifier, judge from the error rate and the value of N whether to return to training, and, when the judgment result is negative, take the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.
As a preferred embodiment of the present application, the extracting of the feature map of the image to be classified by using hypercolumn includes:
extracting the feature map of the image to be classified according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
As a preferred embodiment of the present application, the classifying, by the BARF, of the training sample image data according to a preset classification standard includes:
inputting the feature maps of the training sample images into AlexNet, GoogLeNet and ResNet respectively;
classifying, by AlexNet, GoogLeNet and ResNet, the input data according to the preset classification standard; and
counting the output results of AlexNet, GoogLeNet and ResNet to obtain a classification result.
As a preferred embodiment of the present application, in the strong classifier construction module, the weight of the ith weak classifier in the strong classifier is obtained by:
counting the number of correct and incorrect classifications of each weak classifier on each feature and normalizing the statistics to obtain the feature weights, and then training each weak classifier with the training sample image data and the feature weights to obtain its classification error rate, from which its weight is calculated. Each weak classifier thus has a corresponding weight, and a classifier with a smaller classification error receives a larger weight.
Embodiments of the present application also provide a computing device. Referring to fig. 4, the computing device comprises a memory 1120, a processor 1110, and a computer program stored in the memory 1120 and executable by the processor 1110; the computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements method steps 1131 for performing any of the methods according to the present application.
The embodiment of the application also provides a computer-readable storage medium. Referring to fig. 5, the computer-readable storage medium comprises a storage unit for program code, provided with a program 1131' for performing the steps of the method according to the present application, which program is executed by a processor.
The embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the method according to the present application.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed by a computer, the computer instructions produce, in whole or in part, the procedures or functions described in the embodiments of the application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium such as a random access memory, a read-only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape, a floppy disk, an optical disk, or any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A Boosting-based attribute identification method comprises the following steps:
a feature extraction step of extracting a feature map of a training sample image in a training sample set by using hypercolumn;
a training step of inputting the feature map of the training sample image into a BARF, the BARF classifying the training sample image data according to a preset classification standard to obtain the ith weak classifier, the initial value of i being 1;
a sample weight updating step of increasing, for the samples wrongly classified by the ith weak classifier, their sample weights so as to update the training sample set, executing the strong classifier construction step when the value of i reaches a preset value N, and otherwise letting i = i+1 and returning to the feature extraction step;
a strong classifier construction step, namely:
Strong_Classifier=∑(weight[i])*Classifier[i],
wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier; and
a calculating step of calculating the error rate of the strong classifier, judging from the error rate and the value of N whether to return to the training step, and, when the judgment result is negative, taking the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.
2. The method of claim 1, wherein the extracting of the feature map of the image to be classified by hypercolumn comprises:
extracting the feature map of the image to be classified according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
3. The method of claim 1, wherein the classifying, by the BARF, of the training sample image data according to a preset classification standard comprises:
inputting the feature maps of the training sample images into AlexNet, GoogLeNet and ResNet respectively;
classifying, by AlexNet, GoogLeNet and ResNet, the input data according to the preset classification standard; and
counting the output results of AlexNet, GoogLeNet and ResNet to obtain a classification result.
4. The method according to claim 1, wherein, in the strong classifier constructing step, the weight of the ith weak classifier in the strong classifier is obtained by:
counting the number of correct and incorrect classifications of each weak classifier on each feature, and normalizing the statistics to obtain feature weights; and
training the ith weak classifier with the training sample images and the feature weights to obtain the classification error rate of the ith weak classifier, from which the weight of the ith weak classifier is calculated.
5. A Boosting-based attribute identification apparatus, comprising:
a feature extraction module configured to extract a feature map of the training sample images in the training sample set by using hypercolumn;
a training module configured to input the feature map of the training sample image into a BARF, the BARF classifying the training sample image data according to a preset classification standard to obtain the ith weak classifier, the initial value of i being 1;
a sample weight updating module configured to increase the sample weights of the samples misclassified by the ith weak classifier so as to update the training sample set, execute the strong classifier construction when the value of i reaches a preset value N, and otherwise let i = i+1 and return to feature extraction;
a strong classifier construction module configured to let Strong_Classifier = ∑ (weight[i]) * Classifier[i], wherein Strong_Classifier represents the strong classifier, Classifier[i] represents the ith weak classifier, and weight[i] represents the weight of the ith weak classifier in the strong classifier; and
a calculation module configured to calculate the error rate of the strong classifier, judge from the error rate and the value of N whether to return to training, and, when the judgment result is negative, take the strong classifier as the final attribute identification classifier for attribute classification of images to be classified.
6. The apparatus of claim 5, wherein the extracting of the feature map of the image to be classified by hypercolumn comprises:
extracting the feature map of the image to be classified according to the formula
O = ∑_{i=1}^{n} u(m(O_i), 2^(i-1)),
wherein
O represents the feature map, and O = g(m(O_1), u(m(O_2), 2^1), …, u(m(O_n), 2^(n-1)));
m(·) represents a conversion function that converts an input image from the form a × b × c to the form a × b × δ, δ being a constant;
u(·, ω) denotes upsampling the input image by a factor ω.
7. The apparatus of claim 5, wherein the classifying, by the BARF, of the training sample image data according to a preset classification standard comprises:
inputting the feature maps of the training sample images into AlexNet, GoogLeNet and ResNet respectively;
classifying, by AlexNet, GoogLeNet and ResNet, the input data according to the preset classification standard; and
counting the output results of AlexNet, GoogLeNet and ResNet to obtain a classification result.
8. The apparatus of claim 6, wherein, in the strong classifier construction module, the weight of the ith weak classifier in the strong classifier is obtained by:
counting the number of correct and incorrect classifications of each weak classifier on each feature, and normalizing the statistics to obtain the feature weights.
CN202110857789.2A 2021-07-28 2021-07-28 Boosting-based attribute identification method and device Pending CN113657460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857789.2A CN113657460A (en) 2021-07-28 2021-07-28 Boosting-based attribute identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110857789.2A CN113657460A (en) 2021-07-28 2021-07-28 Boosting-based attribute identification method and device

Publications (1)

Publication Number Publication Date
CN113657460A (en) 2021-11-16

Family

ID=78490771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857789.2A Pending CN113657460A (en) 2021-07-28 2021-07-28 Boosting-based attribute identification method and device

Country Status (1)

Country Link
CN (1) CN113657460A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154441A1 (en) * 2013-12-02 2015-06-04 Huawei Technologies Co., Ltd. Method and apparatus for generating strong classifier for face detection
CN108765373A (en) * 2018-04-26 2018-11-06 西安工程大学 A kind of insulator exception automatic testing method based on integrated classifier on-line study
CN108596268A (en) * 2018-05-03 2018-09-28 湖南大学 A kind of data classification method
CN110706235A (en) * 2019-08-30 2020-01-17 华南农业大学 Far infrared pedestrian detection method based on two-stage cascade segmentation
CN112587129A (en) * 2020-12-01 2021-04-02 上海影谱科技有限公司 Human body action recognition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汐蟀: "mAP improved by 40%! Improved YOLOv3 (Poly-YOLO): faster and more accurate detection and instance segmentation", Retrieved from the Internet <URL:https://mp.weixin.qq.com/s/nvUhve8kcXHYZ9n61-EMVQ> *

Similar Documents

Publication Publication Date Title
CN111476284B (en) Image recognition model training and image recognition method and device and electronic equipment
JP6941123B2 (en) Cell annotation method and annotation system using adaptive additional learning
CN108681746B (en) Image identification method and device, electronic equipment and computer readable medium
CN112632980B (en) Enterprise classification method and system based on big data deep learning and electronic equipment
JP2019091443A (en) Open set recognition method and apparatus, and computer readable storage medium
CN110738247B (en) Fine-grained image classification method based on selective sparse sampling
JP6897749B2 (en) Learning methods, learning systems, and learning programs
CN111475622A (en) Text classification method, device, terminal and storage medium
CN113128478B (en) Model training method, pedestrian analysis method, device, equipment and storage medium
CN113221918B (en) Target detection method, training method and device of target detection model
WO2022199214A1 (en) Sample expansion method, training method and system, and sample learning system
CN111694954B (en) Image classification method and device and electronic equipment
WO2019167784A1 (en) Position specifying device, position specifying method, and computer program
CN111178196B (en) Cell classification method, device and equipment
CN110717407A (en) Human face recognition method, device and storage medium based on lip language password
Gao et al. An improved XGBoost based on weighted column subsampling for object classification
CN109657710B (en) Data screening method and device, server and storage medium
CN117218408A (en) Open world target detection method and device based on causal correction learning
CN110675382A (en) Aluminum electrolysis superheat degree identification method based on CNN-LapseLM
WO2022237065A1 (en) Classification model training method, video classification method, and related device
CN113657460A (en) Boosting-based attribute identification method and device
CN115203408A (en) Intelligent labeling method for multi-modal test data
CN114912502B (en) Double-mode deep semi-supervised emotion classification method based on expressions and voices
CN111625672B (en) Image processing method, image processing device, computer equipment and storage medium
Tennakoon et al. Deep multi-instance volumetric image classification with extreme value distributions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination