CN110929785B - Data classification method, device, terminal equipment and readable storage medium - Google Patents
- Publication number
- CN110929785B (application number CN201911150175.XA)
- Authority
- CN
- China
- Prior art keywords
- value
- sample data
- label
- preset
- tag
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application belongs to the technical field of machine learning and provides a data classification method, an apparatus, a terminal device, and a readable storage medium. The data classification method comprises the following steps: first, data to be classified is received; the data to be classified is then input into a trained classification model to obtain at least one data tag of the data to be classified. The classification model is trained according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of sample data in terms of the distance between them. During classification training, the number of samples per class need not be considered; the classification boundary is determined only from the difference between the predicted label and the preset label of the sample data. This effectively reduces data imbalance, so that the trained classification model can classify the data to be classified more accurately.
Description
Technical Field
The present application belongs to the technical field of machine learning, and in particular, relates to a data classification method, a data classification device, a terminal device, and a readable storage medium.
Background
Multi-label classification is one of the basic research tasks of machine learning. It aims to predict the multiple class labels that appear in each sample data item. When a classification model is trained, the types and number of class labels in each sample data item are not fixed, so the amount of data for an individual class label can differ greatly from that of other class labels among the predicted labels. The resulting data imbalance reduces the accuracy of the machine learning model.
In the prior art, to improve the situation of data imbalance, a convolutional neural network can be used to extract features from the sample data; the features are then linearly combined by a fully connected layer, classification probabilities are generated by a sigmoid function, and a focus loss (focal loss) function is used as the loss function for back-propagation. A classification model is trained in this way, and classification is performed with the trained model.
However, when a classification model is trained according to the prior art, the number of samples in each category differs. For a category with fewer samples, not enough sample data features can be extracted, and adjusting only the weights in the focus loss function cannot yield an accurate classification boundary. A certain degree of data imbalance therefore remains, which affects the accuracy of label classification.
Disclosure of Invention
The embodiments of the application provide a data classification method, a data classification apparatus, a terminal device, and a readable storage medium, to solve the prior-art problem that adjusting only the weights in the focus loss function fails to obtain an accurate classification boundary, leaves the data unbalanced to a certain extent, and affects the accuracy of label classification.
In a first aspect, an embodiment of the present application provides a data classification method, including:
First, data to be classified is received; the data to be classified is then input into a trained classification model to obtain at least one data tag of the data to be classified. The classification model is trained according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of sample data in terms of the distance between them.
In some implementations, the trained classification model is obtained as follows. At least one sample data item is obtained from a preset database, where each sample data item comprises at least one preset label. A predicted label for each sample data item is then obtained through a preset classification model, and the interval distance between the predicted label and the preset label of the sample data is obtained. A maximum interval focus loss value is calculated from the interval distance through an interval focus loss function; this value indicates the maximum value of the difference between the predicted label and the preset label of the sample data. Finally, the preset classification model is trained according to the maximum interval focus loss value to obtain the trained classification model.
It should be noted that the predictive label includes N classifications, where N is an integer greater than 1.
Correspondingly, acquiring the interval distance between the predicted tag and the preset tag of the sample data comprises: obtaining the interval distance between the i-th class predicted tag and the i-th class preset tag of the sample data according to the value of the i-th class predicted tag and the value of the i-th class preset tag, where i is an integer with 1 ≤ i ≤ N.
In still other implementations, the interval distance between the i-th class predicted tag and the i-th class preset tag of the sample data is obtained as follows: the value of the i-th class preset tag is subtracted from the value of the i-th class predicted tag, taking the absolute value, to obtain the absolute distance between the i-th class predicted tag and the i-th class preset tag; the absolute distance is then multiplied by a preset scaling factor to obtain the interval distance.
Optionally, calculating the maximum interval focus loss value through the interval focus loss function according to the interval distance comprises: adjusting the value range of the i-th class predicted label value according to the interval distance, the i-th class predicted label value, and the i-th class preset label value, to obtain a range-adjusted i-th class predicted label value; and obtaining the maximum interval focus loss value from the range-adjusted i-th class predicted label value and the interval focus loss function.
In still other implementations, adjusting the value range of the i-th class predicted label according to the interval distance, the i-th class predicted label value, and the i-th class preset label value comprises: multiplying the i-th class preset label value by two and subtracting one, to obtain the mapped i-th class preset label value; subtracting the product of the interval distance and the mapped i-th class preset label value from the i-th class predicted label value, to obtain the mapped i-th class predicted label value; and finally multiplying the mapped i-th class predicted label value by a preset range scaling factor, to obtain the range-adjusted i-th class predicted label value.
Optionally, obtaining the maximum interval focus loss value from the range-adjusted i-th class predicted label value and the interval focus loss function comprises: first performing binary classification on the range-adjusted i-th class predicted label value (the "second classification", e.g. via a sigmoid) to obtain the binary-classified i-th class predicted label value; and then obtaining the maximum interval focus loss value from the binary-classified i-th class predicted label value and the interval focus loss function.
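The sequence of operations in the implementations above — absolute distance, scaling, label mapping, range adjustment, binary classification, focal loss — can be sketched as follows. This is a minimal illustration under assumptions: the base loss is taken to be a standard binary focal loss, and the function and parameter names (`interval_focal_loss`, `lam`, `scale`, `gamma`) are illustrative, not from the patent.

```python
import numpy as np

def interval_focal_loss(y_pred, y_true, lam=4.0, scale=1.0, gamma=2.0, eps=1e-7):
    """Sketch of the interval focus loss described above (assumed form).

    y_pred: predicted label values in [0, 1], shape (N,) for N tag classes
    y_true: preset label values in [0, 1], shape (N,)
    lam:    preset scaling factor for the interval distance
    scale:  preset range scaling factor
    gamma:  focusing parameter of the focal loss (value assumed, not in the text)
    """
    # Absolute distance between the i-th class predicted and preset labels.
    abs_dist = np.abs(y_pred - y_true)
    # Interval distance: absolute distance times the preset scaling factor.
    margin = lam * abs_dist
    # Map the preset label value: multiply by two, subtract one (-> -1 or +1).
    mapped_true = 2.0 * y_true - 1.0
    # Subtract the product of interval distance and mapped preset label,
    # then apply the preset range scaling factor (range adjustment).
    adjusted = scale * (y_pred - margin * mapped_true)
    # "Second classification": squash the adjusted value into (0, 1).
    p = 1.0 / (1.0 + np.exp(-adjusted))
    # Standard binary focal loss on the adjusted probability.
    p_t = np.where(y_true >= 0.5, p, 1.0 - p)
    return float(np.sum(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```

With `lam > 0`, a prediction far from its preset label is pushed further from the decision boundary, so a hard-to-distinguish class incurs a sharply larger loss regardless of how many samples its class has — which is the stated point of determining the boundary from the prediction/label difference rather than from sample counts.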
In some implementations, the subject of execution of the data classification method is a terminal with image processing capabilities. The terminal may be an entity terminal, such as a desktop computer, a server, a notebook computer, a tablet computer, or the like, or may be a virtual terminal, such as a cloud server, cloud computing, or the like. It should be understood that the above execution subject is only an example and not necessarily the above terminal.
In a second aspect, an embodiment of the present application provides a data classification apparatus, comprising: a receiving module configured to receive data to be classified; and a classification module configured to input the data to be classified into a trained classification model to obtain at least one data tag of the data to be classified, wherein the classification model is trained according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of sample data in terms of the distance between them.
In some implementations, the apparatus further comprises a training module configured to obtain the trained classification model as follows: at least one sample data item is obtained from a preset database, where each sample data item comprises at least one preset label; a predicted label for each sample data item is then obtained through a preset classification model; the interval distance between the predicted label and the preset label of the sample data is obtained; a maximum interval focus loss value is calculated from the interval distance through an interval focus loss function, where the maximum interval focus loss value indicates the maximum value of the difference between the predicted label and the preset label; and finally the preset classification model is trained according to the maximum interval focus loss value to obtain the trained classification model.
It should be noted that the predictive label includes N classifications, where N is an integer greater than 1.
Correspondingly, the training module is specifically configured to obtain the interval distance between the i-th class predicted tag and the i-th class preset tag of the sample data according to the value of the i-th class predicted tag and the value of the i-th class preset tag, where i is an integer with 1 ≤ i ≤ N.
In still other implementations, the training module is specifically configured to subtract the value of the i-th class preset tag from the value of the i-th class predicted tag, taking the absolute value, to obtain the absolute distance between the i-th class predicted tag and the i-th class preset tag of the sample data, and to multiply the absolute distance by a preset scaling factor to obtain the interval distance between them.
Optionally, the training module is specifically configured to adjust the value range of the i-th class predicted tag value according to the interval distance, the i-th class predicted tag value, and the i-th class preset tag value, to obtain the range-adjusted i-th class predicted tag value, and to obtain the maximum interval focus loss value from the range-adjusted i-th class predicted tag value and the interval focus loss function.
In still other implementations, the training module is specifically configured to multiply the i-th class preset tag value by two and subtract one to obtain the mapped i-th class preset tag value, to subtract the product of the interval distance and the mapped i-th class preset tag value from the i-th class predicted tag value to obtain the mapped i-th class predicted tag value, and finally to multiply the mapped i-th class predicted tag value by a preset range scaling factor to obtain the range-adjusted i-th class predicted tag value.
Optionally, the training module is specifically configured to perform binary classification (the "second classification") on the range-adjusted i-th class predicted tag value to obtain the binary-classified i-th class predicted tag value, and then to obtain the maximum interval focus loss value from the binary-classified i-th class predicted tag value and the interval focus loss function.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method as provided in the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as provided in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product for causing a terminal device to carry out the method provided in the first aspect above when the computer program product is run on the terminal device.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiments of the application have the following beneficial effects. The received data to be classified is classified by the trained classification model to obtain at least one data tag of the data to be classified. The trained classification model is obtained by training a preset classification model according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of sample data in terms of the distance between them. Because the preset classification model is trained through the interval focus loss function, the number of samples need not be considered during classification training; the classification boundary is determined only from the difference between the predicted label and the preset label of the sample data. Data imbalance is thereby effectively reduced, and the trained classification model classifies the data to be classified more accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a data classification method according to an embodiment of the present application;
FIG. 2 is a flow chart of a data classification method according to an embodiment of the application;
FIG. 3 is a flow chart of a data classification method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a sample data tag in a data classification method according to an embodiment of the present application;
FIG. 5 is a flow chart of a data classification method according to another embodiment of the present application;
FIG. 6 is a flow chart of a data classification method according to another embodiment of the present application;
FIG. 7 is a flow chart of a data classification method according to another embodiment of the present application;
FIG. 8 is a flow chart of a data classification method according to another embodiment of the present application;
FIG. 9 is a schematic diagram of a data classifying device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a data classification device according to another embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in various places throughout this specification are not necessarily all referring to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The data classification method provided by the embodiments of the application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), security cameras, monitoring cameras, and the like. The embodiments of the application do not limit the specific type of the terminal device.
Fig. 1 shows a schematic diagram of an application scenario of the data classification method provided by the application. Referring to fig. 1, the scenario includes the image capturing device 11, the server 12, and the database 13. A communication connection is formed between the image capturing device 11 and the server 12, and between the server 12 and the database 13; the connection may be a wired network or a wireless network. The wireless network may include a wireless local area network (Wireless Local Area Networks, WLAN) (such as a Wi-Fi network), Bluetooth, ZigBee, a mobile communication network, near field communication (Near Field Communication, NFC), infrared (IR) technology, and other communication solutions. The wired network may include a fiber optic network, a telecommunications network, an intranet, etc., such as a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), a metropolitan area network (Metropolitan Area Network, MAN), a public switched telephone network (Public Switched Telephone Network, PSTN), etc. The types of wireless and wired networks are not limited herein.
By way of example only and not limitation, the image capture device 11 may include a tablet 111, a notebook 112, a desktop 113, a smartphone 114, a digital camera 115, a surveillance camera 116, etc. It obtains images by shooting live images with a camera, retrieving images stored on the device itself, or accessing a server, database, or the like that stores images, and forwards the images to the server 12.
For example, when the image acquisition apparatus 11 is an apparatus having a photographing function such as a smartphone 114, a digital camera 115, or a monitoring camera 116, a real-time image can be photographed by the camera and transmitted to the server 12.
When the image capturing device 11 is a tablet computer 111, a notebook computer 112, a desktop computer 113, or the like, the image stored therein may be transmitted to the server 12, and at this time, the image capturing device 11 and the server 12 may be two separate devices, that is, the server 12 is a cloud server, a rack server, a blade server, or the like; alternatively, the image capturing device 11 and the server 12 may be the same device, for example, the server 12 may be a virtual server running on the desktop computer 113, which is not limited herein.
Similarly, the database 13 may be implemented on the same device as the server 12, or may be implemented on a different device, which is a common way for those skilled in the art, and will not be described herein.
Fig. 2 shows a flow chart of a data classification method according to an embodiment of the present application, which can be applied to a terminal device in the above scenario, such as a tablet 111, a notebook 112, a desktop 113, a smart phone 114, a digital camera 115, or a monitoring camera 116, by way of example and not limitation.
Referring to fig. 2, the data classification method includes:
s21, receiving data to be classified.
It should be noted that the present application uses pictures as the data to be classified, but the type of data to be classified is not limited thereto. For example, the data to be classified may be video, text, audio, or other forms of data; in that case, the method needs to be adjusted correspondingly according to the data type, which is a common practice for those skilled in the art and will not be described here.
S22, inputting the data to be classified into the trained classification model to obtain at least one data tag of the data to be classified.
The classification model is trained according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of the sample data in terms of the distance between them.
It should be noted that if the difference between the predicted label of the sample data and the preset label of the sample data is small, the category is easy to distinguish, and the boundary of the category (i.e. the interval distance) can be set relatively close; otherwise, the category is difficult to distinguish, and the boundary must be set farther away to reduce the difficulty of distinguishing.
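This idea can be illustrated numerically. A minimal sketch, assuming the interval distance is simply the prediction/label difference times a preset scaling factor (the λ = 4 and the example values are taken from the worked example later in the description):

```python
# Interval distance as sketched from the text: the farther the prediction sits
# from the preset label, the larger the margin separating the class boundary.
def interval_distance(pred, preset, lam=4.0):
    return lam * abs(pred - preset)

easy = interval_distance(0.9, 1.0)  # prediction close to the preset label
hard = interval_distance(0.4, 1.0)  # prediction far from the preset label
assert easy < hard                  # the harder class gets the farther boundary
```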
In this implementation, the received data to be classified is classified by the trained classification model to obtain at least one data tag of the data to be classified. The trained classification model is obtained by training a preset classification model according to a distance focus loss function, which characterizes the difference between the predicted label and the preset label of sample data in terms of the distance between them. Because the preset classification model is trained through the interval focus loss function, the number of samples need not be considered during classification training; the classification boundary is determined only from the difference between the predicted label and the preset label. Data imbalance is thereby effectively reduced, and the trained classification model classifies the data to be classified more accurately.
Referring to fig. 3, in another embodiment of the data classification method, the trained classification model is trained as follows:
s31, acquiring at least one sample of data from a preset database.
Wherein each sample data includes at least one preset tag.
In some implementations, the preset database stores a plurality of sample data items and at least one category of preset label corresponding to each sample data item. For example, when a sample data item is a picture that contains both cats and dogs, the picture corresponds to two preset labels, namely "cat" and "dog".
The preset label can be represented by a vector containing N elements, wherein N is the number of label categories, N is an integer greater than 1, and the value range of each element is [0,1].
For example only and not limitation, referring to fig. 4, which shows a schematic diagram of a sample data tag, there are 4 tag categories in the sample data: square, circle, triangle, and diamond. In fig. 4, the square 15, the circle 16, and the triangle 17 are present, so the preset tag vector y of the sample data may be expressed as y = [1, 1, 1, 0].
The first element has a value of 1, indicating that the probability that the first class of tag (i.e. the square 15 tag) is present in the sample data is 100%; the second element has a value of 1, indicating that the probability that the second class of tag (i.e. the circle 16 tag) is present is 100%; the third element has a value of 1, indicating that the probability that the third class of tag (i.e. the triangle 17 tag) is present is 100%; and the fourth element has a value of 0, indicating that the probability that the fourth class of tag (i.e. the diamond tag) is present is 0%.
S32, obtaining a prediction label of each sample data through a preset classification model.
In some embodiments, the predicted tag may also be represented by a vector containing N elements. In general, for the same batch of sample data, the number of tag classes is constant, i.e. the predicted tag also contains N classes of tags. Referring to the example in S31 and fig. 4, the predicted tag vector of the sample data may be denoted ŷ, for example ŷ = [0.9, 0.7, 0.6, 0.8]. The first element has a value of 0.9, indicating that the probability that the first class of tag (i.e. the square 15 tag) is present in the sample data is 90%; the second element has a value of 0.7, indicating that the probability that the second class of tag (i.e. the circle 16 tag) is present is 70%; the third element has a value of 0.6, indicating that the probability that the third class of tag (i.e. the triangle 17 tag) is present is 60%; and the fourth element has a value of 0.8, indicating that the probability that the fourth class of tag (i.e. the diamond tag) is present is 80%.
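The two example tag vectors from S31 and S32 can be written out directly (class names taken from fig. 4; the numeric values are the ones given in the example):

```python
import numpy as np

# Preset tag vector from fig. 4: square, circle and triangle present, diamond absent.
y = np.array([1.0, 1.0, 1.0, 0.0])
# Predicted tag vector produced by the preset classification model.
y_hat = np.array([0.9, 0.7, 0.6, 0.8])

# Element i is the probability that the i-th class of tag is present.
for name, prob in zip(["square", "circle", "triangle", "diamond"], y_hat):
    print(f"P({name} tag present) = {prob:.0%}")
```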
S33, acquiring the interval distance between the predicted label of the sample data and the preset label of the sample data.
Referring to the predicted tag vector ŷ of the sample data given in S32 and the preset tag vector y given in S31: because ŷ is the prediction result obtained through the preset classification model, there is a certain difference between ŷ and y, and this difference is the interval distance between the predicted tag of the sample data and the preset tag of the sample data.
In some embodiments, the interval distance between the i-th class predicted label and the i-th class preset label of the sample data may be obtained from the value ŷ_i of the i-th class predicted label of the sample data and the value y_i of the i-th class preset label of the sample data, where i is an integer greater than or equal to 1 and less than or equal to N.
Referring to fig. 5, a method for obtaining a separation distance between an i-th type predictive tag and an i-th type preset tag of sample data may include:
s331, subtracting the value of the i-th type predictive label from the value of the i-th type preset label to obtain the absolute distance between the i-th type predictive label and the i-th type preset label of the sample data.
By way of example only and not limitation, referring to ŷ and y in S31 and S32, the value y_1 of the class-1 preset label is 1 and the value ŷ_1 of the class-1 predicted label is 0.9, so the absolute distance between the class-1 predicted label and the class-1 preset label of the sample data is |ŷ_1 − y_1| = |0.9 − 1| = 0.1.
And S332, multiplying the absolute distance by a preset scaling factor to obtain the interval distance between the i type predictive label and the i type preset label of the sample data.
In some embodiments, the preset scaling factor may be represented by λ; the interval distance is then m_i = λ · |ŷ_i − y_i|.
Since ŷ_i and y_i both take values in [0, 1], the absolute distance |ŷ_i − y_i| also lies in [0, 1]. The smaller the absolute distance of the i-th class, the easier the i-th class is to distinguish; the larger the absolute distance of the i-th class, the harder the i-th class is to distinguish.
However, when the absolute distance lies in [0, 1], its range of values is narrow, so it is difficult for the absolute distance to effectively reflect how easy a category is to distinguish. Multiplying the absolute distance by λ therefore makes it easier to judge whether a category is easy to distinguish. For example, referring to the examples in S31 and S32, y = (1, 1, 1, 0) and ŷ = (0.9, 0.7, 0.6, 0.8). Without scaling, the absolute distance of the second category is 0.3 and that of the third category is 0.4; both fall between "easy to distinguish" and "hard to distinguish". λ can be set to 4 and the absolute distances scaled: the absolute distance of the second category is enlarged from 0.3 to 1.2, and that of the third category from 0.4 to 1.6. The distance of each category from the midpoint 2 of the new value range [0, 4] is enlarged fourfold, so each category lies farther from the midpoint, making it easier to judge whether the category is easy to distinguish.
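The interval distance of S331-S332 can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function name `margin_distance` and the NumPy formulation are our own, and λ = 4 follows the example above.

```python
import numpy as np

def margin_distance(y_hat, y, lam=4.0):
    """Scaled per-class interval distance m_i = lambda * |y_hat_i - y_i| (S331-S332)."""
    return lam * np.abs(np.asarray(y_hat) - np.asarray(y))

y = np.array([1.0, 1.0, 1.0, 0.0])       # preset label vector from S31
y_hat = np.array([0.9, 0.7, 0.6, 0.8])   # predicted label vector from S32

m = margin_distance(y_hat, y, lam=4.0)
# absolute distances 0.1, 0.3, 0.4, 0.8 scale to 0.4, 1.2, 1.6, 3.2
```

With λ = 4 the distances spread over [0, 4] instead of [0, 1], matching the worked numbers in the paragraph above.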
In the embodiment, the absolute distance between the i-th type predicted tag and the i-th type preset tag of the sample data is amplified, and the amplified absolute distance is used as the interval distance between the i-th type predicted tag and the i-th type preset tag of the sample data, so that the difference between the predicted tag of the sample data and the preset tag of the sample data is amplified, the decision boundary is clearer, and the predicted tag of the sample data is more accurate when the predicted tag of the sample data is acquired. Meanwhile, as the absolute distance is obtained by subtracting the value of the i-th type predictive label from the value of the i-th type preset label, for each training, the absolute distance of the i-th type can be adaptively changed according to the value of the i-th type predictive label, so that the obtained interval distance is more accurate, the prediction of the i-th type predictive label is more accurate, and the prediction effect of the classification model is improved.
S34, calculating a maximum interval focus loss value according to the interval distance through an interval focus loss function.
The maximum interval focus loss value is used for indicating the maximum value of the difference between the predicted label of the sample data and the preset label of the sample data.
Referring to fig. 6, the maximum pitch focus loss value may be calculated in the following manner.
S341, adjusting the value range of the i-th type predictive label value according to the interval distance, the value of the i-th type predictive label and the value of the i-th type preset label to obtain the i-th type predictive label value after range adjustment.
In some embodiments, the range of the value of the i-th type predictive label is adjusted by using the distance between the values of the i-th type predictive label and the value of the i-th type preset label, and the change curvature of the output curve of the predictive label value can be adjusted on the basis of S33, so that the decision boundary of the value of the i-th type predictive label after the range adjustment is clearer, and the prediction effect of the classification model is improved.
Referring to fig. 7, adjusting a range of values of the i-th type predictive label value according to the interval distance, the i-th type predictive label value, and the i-th type preset label value to obtain a range-adjusted i-th type predictive label value may include:
s3411, multiplying the i-th preset label value by two and subtracting one to obtain a mapped i-th preset label value.
S3412, subtracting the product of the interval distance and the mapped i-th preset label value from the i-th predicted label value to obtain the mapped i-th predicted label value.
S3413, multiplying the mapped i-th type predictive label value by a preset range scaling factor to obtain the i-th type predictive label value with the adjusted range.
In some embodiments, the steps in S3411, S3412, and S3413 may be formulated, i.e., the range-adjusted i-th class predicted label value ŷ′_i is calculated as:

ŷ′_i = s · (ŷ_i − m_i · (2y_i − 1))
where s is the scale factor.
By way of example only, and not limitation, referring to the examples in S31 and S32, ŷ_i and y_i both take values in [0, 1]. The value of s can be set to 10; the range of ŷ′_i is then [−10m_i, 10 + 10m_i].
Relative to ŷ_i, when the classes and numbers of predicted labels are the same, ŷ′_i has a larger change curvature in the output curve of predicted label values, and the differences between predicted label values of different classes are larger, so the decision boundary of the i-th class predicted label is clearer.
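The three sub-steps S3411-S3413 reduce to the single formula ŷ′_i = s · (ŷ_i − m_i · (2y_i − 1)), which can be sketched as follows. The names and the NumPy formulation are illustrative, not from the patent; s = 10 and λ = 4 follow the examples above.

```python
import numpy as np

def range_adjust(y_hat, y, m, s=10.0):
    y_mapped = 2.0 * np.asarray(y) - 1.0        # S3411: map y_i in {0,1} to 2*y_i - 1 in {-1,+1}
    shifted = np.asarray(y_hat) - m * y_mapped  # S3412: y_hat_i - m_i * (2*y_i - 1)
    return s * shifted                          # S3413: scale by the range factor s

y = np.array([1.0, 1.0, 1.0, 0.0])
y_hat = np.array([0.9, 0.7, 0.6, 0.8])
m = 4.0 * np.abs(y_hat - y)                     # interval distances from S33 with lambda = 4
adj = range_adjust(y_hat, y, m, s=10.0)
# a positive-class prediction is pushed down by its margin, a negative-class
# prediction is pushed up, then both are stretched by s
```

Note how the hard fourth class (y = 0 but ŷ = 0.8) is pushed far from the decision boundary, which is what makes the subsequent loss focus on it.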
S342, obtaining a maximum pitch focus loss value according to the i-th type predictive label value and the pitch focus loss function after range adjustment.
The mode of obtaining the maximum interval focus loss value can be achieved through the following steps:
s3421, performing second classification on the i-th type predictive label value after the range adjustment to obtain the i-th type predictive label value after the second classification.
Since the decision boundary of the i-th class predictive label value after the range adjustment is quite clear, the i-th class predictive label value after the range adjustment needs to be classified into two classes to determine whether each type of label exists in the sample data.
There are various ways of two classification, such as using Sigmoid function, logistic regression, etc.
By way of example only and not limitation, when a Sigmoid function is used, the i-th class predicted label value after the two-class classification, p_i, can be expressed by the following formula:

p_i = Sigmoid(ŷ′_i) = 1 / (1 + e^(−ŷ′_i))
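A numerically stable Sigmoid applied to the range-adjusted values of the running example might look like this; an illustrative sketch, with the two-branch form (our choice, not the patent's) avoiding overflow for large-magnitude inputs:

```python
import numpy as np

def sigmoid(z):
    # stable evaluation of 1 / (1 + e^(-z)): use e^z / (1 + e^z) when z < 0
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

p = sigmoid(np.array([5.0, -5.0, -10.0, 40.0]))  # range-adjusted values from the running example
```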
s3422, obtaining a maximum interval focus loss value according to the i-th type predictive label value after the second classification and the interval focus loss function.
In some embodiments, the interval focal loss function for the i-th class takes a weighted binary focal-loss form:

L_i = −[ y_i · w_i^1 · (1 − p_i)^β · ln(p_i) + (1 − y_i) · w_i^0 · (p_i)^β · ln(1 − p_i) ]
Substituting the i-th class predicted label value after the two-class classification, i.e., p_i = Sigmoid(ŷ′_i), into this formula yields the maximum interval focal loss function.
wherein w_i^0 denotes the weight of the corresponding loss term when the predicted label of the i-th category does not exist in the sample data, and w_i^1 denotes the weight of the corresponding loss term when the predicted label of the i-th category does exist in the sample data; both weights are calculated from the preset parameters below.
α and β are preset parameters, in some embodiments, α=0.5, β=2, but not limited thereto.
And finally, calculating the maximum focus loss value of each category through a maximum interval focus loss function.
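Putting S3421-S3422 together, a per-class loss of the weighted binary focal-loss kind described here can be sketched as below. Hedges: the exact formulas for w_i^0 and w_i^1 do not survive in this text, so the sketch assumes w_i^1 = α and w_i^0 = 1 − α; the function name and the eps guard against log(0) are our own.

```python
import numpy as np

def max_margin_focal_loss(y_hat_adj, y, alpha=0.5, beta=2.0):
    """Per-class weighted focal loss on the range-adjusted predicted label values."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(y_hat_adj, dtype=float)))  # S3421: Sigmoid
    y = np.asarray(y, dtype=float)
    eps = 1e-12                                  # avoid log(0)
    w1, w0 = alpha, 1.0 - alpha                  # assumed weights w_i^1, w_i^0
    return -(y * w1 * (1.0 - p) ** beta * np.log(p + eps)
             + (1.0 - y) * w0 * p ** beta * np.log(1.0 - p + eps))

loss = max_margin_focal_loss([5.0, -5.0, -10.0, 40.0], [1.0, 1.0, 1.0, 0.0])
# well-predicted classes (index 0) contribute almost nothing; badly
# predicted classes (indices 2 and 3) dominate the loss
```

The (1 − p)^β and p^β factors are what make the loss "focal": easy classes are down-weighted so training concentrates on hard, ambiguous classes.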
S35, training a preset classification model according to the maximum interval focus loss value, and obtaining a trained classification model.
It should be noted that, the maximum interval focus loss value may be used to perform back propagation, iterate for multiple times, and train the preset classification model repeatedly, so as to finally obtain a trained classification model, and the specific training method is not limited herein.
Here, taking training of an automatic image classification model as an example, an application scenario of the data classification method provided by the application is described.
First, a large number of image samples are collected as sample data, which can be expressed as D = {⟨x_i, y_i⟩ | i = 1, 2, 3, …, n}, where x_i is an image sample and y_i is the set of category labels corresponding to the image sample.
A machine-learning classification model is then determined; here a convolutional neural network f_θ may be used, where θ denotes the parameters of the model.
Then, B image samples are input into the convolutional neural network f_θ, and the parameters θ of the convolutional neural network are updated by gradient descent:

θ ← θ − η · ∇_θ L

where η is the learning rate.
wherein L is the maximum interval focal loss value calculated by the maximum interval focal loss function provided by the present application.
The previous step is then iterated T times, until the model converges or L is smaller than a preset threshold, obtaining the trained classification model f_θ̂.
Finally, the image x to be predicted is input into the trained classification model f_θ̂, which outputs the multi-category label vector ŷ of the image to be predicted.
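The application-scenario loop above (predict, compute L, update θ, iterate T times) can be sketched with a toy model. Assumptions: a single linear layer with a Sigmoid stands in for the convolutional network f_θ, plain binary cross-entropy stands in for the maximum interval focal loss L (its gradient structure is analogous), and the learning rate η and iteration count T are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 32, 8, 4
x = rng.normal(size=(n_samples, n_features))                   # stand-in image features
y = (rng.random((n_samples, n_classes)) < 0.5).astype(float)   # multi-label targets

theta = np.zeros((n_features, n_classes))                      # model parameters

def predict(x, theta):
    # predicted label values y_hat in [0, 1] via Sigmoid of a linear layer
    return 1.0 / (1.0 + np.exp(-(x @ theta)))

def loss_and_grad(x, y, theta):
    p = predict(x, theta)
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    grad = x.T @ (p - y) / len(x)                              # gradient of L w.r.t. theta
    return loss, grad

eta, T = 0.5, 200                                              # assumed hyperparameters
l0, _ = loss_and_grad(x, y, theta)
for _ in range(T):                                             # iterate until convergence / L < threshold
    loss, grad = loss_and_grad(x, y, theta)
    theta -= eta * grad                                        # theta <- theta - eta * grad(L)
l_final, _ = loss_and_grad(x, y, theta)
```

Swapping the cross-entropy for the interval focal loss of S34 (with its margin and range-adjustment steps) would give the training procedure described in the embodiment; only the loss-and-gradient routine changes.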
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not limit the implementation process of the embodiments of the present application.
Corresponding to the data classification method described in the above embodiments, fig. 9 shows a schematic structural diagram of a data classification device according to an embodiment of the present application, and for convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 9, the apparatus includes: the receiving module 51 is configured to receive data to be classified. The classification module 52 is configured to input the data to be classified into a trained classification model to obtain at least one data tag of the data to be classified, where the classification model is trained according to a distance focus loss function, and the distance focus loss function is used to represent a difference between a predicted tag of sample data and a preset tag of sample data according to a distance between the predicted tag of sample data and the preset tag of sample data.
In some implementations, referring to fig. 10, the apparatus further includes a training module 53 for obtaining a trained classification model according to the following steps: at least one sample data is obtained from a preset database, wherein each sample data comprises at least one preset label. And then obtaining a prediction label of each sample data through a preset classification model. And obtaining the interval distance between the predicted label of the sample data and the preset label of the sample data. And calculating a maximum interval focus loss value according to the interval distance through an interval focus loss function, wherein the maximum interval focus loss value is used for indicating the maximum value of the difference between the predicted label of the sample data and the preset label of the sample data. And finally, training a preset classification model according to the maximum interval focus loss value, and obtaining a trained classification model.
It should be noted that the predictive label includes N classifications, where N is an integer greater than 1.
Correspondingly, the training module 53 is specifically configured to obtain, according to the value of the i-th type predicted tag of the sample data and the value of the i-th type preset tag of the sample data, a separation distance between the i-th type predicted tag and the i-th type preset tag of the sample data, where i is an integer greater than or equal to 1 and less than or equal to N.
In still other implementations, the training module 53 is specifically configured to subtract the value of the i-th type preset tag from the value of the i-th type predicted tag to obtain an absolute distance between the i-th type predicted tag and the i-th type preset tag of the sample data. And multiplying the absolute distance by a preset scaling factor to obtain the interval distance between the i-th type predictive label and the i-th type preset label of the sample data.
Optionally, the training module 53 is specifically configured to adjust a value range of the i-th type predicted tag value according to the interval distance, the i-th type predicted tag value, and the i-th type preset tag value, to obtain the i-th type predicted tag value after the range adjustment. And obtaining the maximum interval focus loss value according to the i-th type predictive label value and the interval focus loss function after the range adjustment.
In still other implementations, the training module 53 is specifically configured to multiply the i-th preset tag value by two and then subtract one to obtain the mapped i-th preset tag value. And subtracting the product of the interval distance and the mapped i-th preset label value from the i-th predicted label value to obtain the mapped i-th predicted label value. And finally multiplying the mapped i-th type predictive label value by a preset range scaling factor to obtain the i-th type predictive label value after range adjustment.
Optionally, the training module 53 is specifically configured to perform the second classification on the i-th class predicted tag value after the range adjustment, and obtain the i-th class predicted tag value after the second classification. And then obtaining the maximum interval focus loss value according to the i-th type predictive label value after the second classification and the interval focus loss function.
It should be noted that, because the content of information interaction and execution process between the above devices is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 11 shows a schematic structural diagram of a terminal device provided in an embodiment of the present application, and referring to fig. 11, the terminal device 6 includes:
a memory 62, a processor 61 and a computer program 63 stored in the memory 62 and executable on the processor 61, the steps of the various method embodiments described above being implemented when the processor 61 executes the computer program 63.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (7)
1. A method of classifying data, comprising:
receiving data to be classified, wherein the data to be classified refers to images, videos, texts or audios;
inputting the data to be classified into a trained classification model to obtain at least one data tag of the data to be classified, wherein the classification model is trained according to a distance focus loss function, and the distance focus loss function is used for representing the difference between a predicted tag of sample data and a preset tag of the sample data according to the distance between the predicted tag of the sample data and the preset tag of the sample data;
The training mode of the trained classification model is as follows:
acquiring at least one sample data from a preset database, wherein each sample data comprises at least one preset label;
obtaining a prediction label of each sample data through a preset classification model;
acquiring the interval distance between a predicted tag of the sample data and a preset tag of the sample data;
calculating a maximum interval focus loss value according to the interval distance through the interval focus loss function, wherein the maximum interval focus loss value is used for indicating the maximum value of the difference between the predicted label of the sample data and the preset label of the sample data;
training the preset classification model according to the maximum interval focus loss value to obtain a trained classification model;
the predictive tag includes N classifications, where N is an integer greater than 1; correspondingly, the interval distance between the predicted tag for acquiring the sample data and the preset tag for acquiring the sample data comprises the following steps:
obtaining the interval distance between the i-th type predictive tag and the i-th type preset tag of the sample data according to the value of the i-th type predictive tag of the sample data and the value of the i-th type preset tag of the sample data, wherein i is an integer which is more than or equal to 1 and less than or equal to N;
Calculating a maximum interval focus loss value according to the interval distance and the interval focus loss function, including:
according to the interval distance, the value of the i-th type predictive label and the value of the i-th type preset label, adjusting the value range of the i-th type predictive label value to obtain an i-th type predictive label value with the range adjusted;
and obtaining the maximum interval focus loss value according to the i-th type predictive label value after the range adjustment and the interval focus loss function.
2. The method according to claim 1, wherein the obtaining the interval distance between the i-th type predictive tag and the i-th type preset tag of the sample data according to the i-th type predictive tag value of the sample data and the i-th type preset tag value of the sample data comprises:
subtracting the value of the i-th type predictive label from the value of the i-th type preset label to obtain the absolute distance between the i-th type predictive label and the i-th type preset label of the sample data;
multiplying the absolute distance by a preset scaling factor to obtain the interval distance between the i type predictive label and the i type preset label of the sample data.
3. The method according to claim 1, wherein adjusting the range of values of the i-th type predictive label value according to the distance, the i-th type predictive label value, and the i-th type preset label value to obtain the range-adjusted i-th type predictive label value includes:
multiplying the i-th preset label value by two and subtracting one to obtain a mapped i-th preset label value;
subtracting the product of the interval distance and the mapped i-th preset label value from the i-th predicted label value to obtain a mapped i-th predicted label value;
multiplying the mapped i-th type predictive label value by a preset range scaling factor to obtain the range-adjusted i-th type predictive label value.
4. The method according to claim 1, wherein the obtaining the maximum pitch focus loss value according to the i-th type of predicted tag value after the range adjustment and the pitch focus loss function includes:
performing second classification on the i-th type predictive label value after the range adjustment to obtain the i-th type predictive label value after the second classification;
and obtaining the maximum interval focus loss value according to the i-th type predictive label value after the second classification and the interval focus loss function.
5. A data sorting apparatus, comprising:
the receiving module is used for receiving data to be classified, wherein the data to be classified refers to images, videos, texts or audios;
the classification module is used for inputting the data to be classified into a trained classification model to obtain at least one data tag of the data to be classified, wherein the classification model is trained according to a distance focus loss function, and the distance focus loss function is used for representing the difference between the predicted tag of the sample data and the preset tag of the sample data according to the distance between the predicted tag of the sample data and the preset tag of the sample data;
the data classification device further comprises a training module, wherein the training module is used for acquiring at least one sample data from a preset database, and each sample data comprises at least one preset label; obtaining a prediction label of each sample data through a preset classification model; acquiring the interval distance between a predicted tag of the sample data and a preset tag of the sample data; calculating a maximum interval focus loss value according to the interval distance through the interval focus loss function, wherein the maximum interval focus loss value is used for indicating the maximum value of the difference between the predicted label of the sample data and the preset label of the sample data; training the preset classification model according to the maximum interval focus loss value to obtain a trained classification model;
The predictive tag includes N classifications, where N is an integer greater than 1; correspondingly, the training module is specifically configured to obtain an interval distance between the i-th type predictive tag and the i-th type preset tag of the sample data according to the value of the i-th type predictive tag of the sample data and the value of the i-th type preset tag of the sample data, where i is an integer greater than or equal to 1 and less than or equal to N;
the training module is further specifically configured to adjust a value range of the i-th type predicted tag value according to the interval distance, the i-th type predicted tag value and the i-th type preset tag value, so as to obtain an i-th type predicted tag value after range adjustment; and obtaining the maximum interval focus loss value according to the i-th type predictive label value after the range adjustment and the interval focus loss function.
6. A computer terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 4 when executing the computer program.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150175.XA CN110929785B (en) | 2019-11-21 | 2019-11-21 | Data classification method, device, terminal equipment and readable storage medium |
PCT/CN2020/128856 WO2021098618A1 (en) | 2019-11-21 | 2020-11-13 | Data classification method and apparatus, terminal device and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150175.XA CN110929785B (en) | 2019-11-21 | 2019-11-21 | Data classification method, device, terminal equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110929785A CN110929785A (en) | 2020-03-27 |
CN110929785B true CN110929785B (en) | 2023-12-05 |
Family
ID=69850664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911150175.XA Active CN110929785B (en) | 2019-11-21 | 2019-11-21 | Data classification method, device, terminal equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110929785B (en) |
WO (1) | WO2021098618A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929785B (en) * | 2019-11-21 | 2023-12-05 | 中国科学院深圳先进技术研究院 | Data classification method, device, terminal equipment and readable storage medium |
CN111330871B (en) * | 2020-03-31 | 2023-03-28 | 新华三信息安全技术有限公司 | Quality classification method and device |
CN112054967A (en) * | 2020-08-07 | 2020-12-08 | 北京邮电大学 | Network traffic classification method and device, electronic equipment and storage medium |
CN112884569A (en) * | 2021-02-24 | 2021-06-01 | 中国工商银行股份有限公司 | Credit assessment model training method, device and equipment |
CN113807400B (en) * | 2021-08-17 | 2024-03-29 | 西安理工大学 | Hyperspectral image classification method, hyperspectral image classification system and hyperspectral image classification equipment based on attack resistance |
CN117633456B (en) * | 2023-11-17 | 2024-05-31 | 国网江苏省电力有限公司 | Marine wind power weather event identification method and device based on self-adaptive focus loss |
CN118262181B (en) * | 2024-05-29 | 2024-08-13 | 山东鲁能控制工程有限公司 | Automatic data processing system based on big data |
CN118297630A (en) * | 2024-06-06 | 2024-07-05 | 北京国能国源能源科技有限公司 | Electric power spot market big data analysis method and system based on artificial intelligence |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599913A (en) * | 2016-12-07 | 2017-04-26 | 重庆邮电大学 | Cluster-based multi-label imbalance biomedical data classification method |
CN109189767A (en) * | 2018-08-01 | 2019-01-11 | 北京三快在线科技有限公司 | Data processing method, device, electronic equipment and storage medium |
CN109635677A (en) * | 2018-11-23 | 2019-04-16 | 华南理工大学 | Combined failure diagnostic method and device based on multi-tag classification convolutional neural networks |
CN109816092A (en) * | 2018-12-13 | 2019-05-28 | 北京三快在线科技有限公司 | Deep neural network training method, device, electronic equipment and storage medium |
WO2019100723A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Method and device for training multi-label classification model |
WO2019100724A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Method and device for training multi-label classification model |
CN110147456A (en) * | 2019-04-12 | 2019-08-20 | 中国科学院深圳先进技术研究院 | Image classification method and device, readable storage medium, and terminal device |
CN110163252A (en) * | 2019-04-17 | 2019-08-23 | 平安科技(深圳)有限公司 | Data classification method and device, electronic equipment, storage medium |
CN110442722A (en) * | 2019-08-13 | 2019-11-12 | 北京金山数字娱乐科技有限公司 | Method and device for training classification model and method and device for data classification |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015070314A1 (en) * | 2013-11-13 | 2015-05-21 | Yong Liu | Supervised credit classifier with accounting ratios |
CN110413791A (en) * | 2019-08-05 | 2019-11-05 | 哈尔滨工业大学 | Text classification method based on a CNN-SVM-KNN combined model |
CN110929785B (en) * | 2019-11-21 | 2023-12-05 | 中国科学院深圳先进技术研究院 | Data classification method, device, terminal equipment and readable storage medium |
- 2019-11-21 CN CN201911150175.XA patent/CN110929785B/en active Active
- 2020-11-13 WO PCT/CN2020/128856 patent/WO2021098618A1/en active Application Filing
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599913A (en) * | 2016-12-07 | 2017-04-26 | 重庆邮电大学 | Cluster-based multi-label imbalance biomedical data classification method |
WO2019100723A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Method and device for training multi-label classification model |
WO2019100724A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Method and device for training multi-label classification model |
CN109840530A (en) * | 2017-11-24 | 2019-06-04 | 华为技术有限公司 | Method and apparatus for training a multi-label classification model |
CN109840531A (en) * | 2017-11-24 | 2019-06-04 | 华为技术有限公司 | Method and apparatus for training a multi-label classification model |
CN109189767A (en) * | 2018-08-01 | 2019-01-11 | 北京三快在线科技有限公司 | Data processing method, device, electronic equipment and storage medium |
CN109635677A (en) * | 2018-11-23 | 2019-04-16 | 华南理工大学 | Combined failure diagnostic method and device based on multi-tag classification convolutional neural networks |
CN109816092A (en) * | 2018-12-13 | 2019-05-28 | 北京三快在线科技有限公司 | Deep neural network training method, device, electronic equipment and storage medium |
CN110147456A (en) * | 2019-04-12 | 2019-08-20 | 中国科学院深圳先进技术研究院 | Image classification method and device, readable storage medium, and terminal device |
CN110163252A (en) * | 2019-04-17 | 2019-08-23 | 平安科技(深圳)有限公司 | Data classification method and device, electronic equipment, storage medium |
CN110442722A (en) * | 2019-08-13 | 2019-11-12 | 北京金山数字娱乐科技有限公司 | Method and device for training classification model and method and device for data classification |
Non-Patent Citations (2)
Title |
---|
Multi-label classification based on weighted SVM active learning; Liu Duanyang et al.; Computer Engineering; Vol. 37, No. 8; pp. 181-185 * |
Multi-label classification algorithm with convolutional neural networks based on label correlation; Jiang Junzhao et al.; Industrial Control Computer; Vol. 31, No. 7; pp. 105-109 * |
Also Published As
Publication number | Publication date |
---|---|
CN110929785A (en) | 2020-03-27 |
WO2021098618A1 (en) | 2021-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110929785B (en) | Data classification method, device, terminal equipment and readable storage medium | |
CN114424253B (en) | Model training method and device, storage medium and electronic equipment | |
US11238310B2 (en) | Training data acquisition method and device, server and storage medium | |
CN110956225B (en) | Contraband detection method and system, computing device and storage medium | |
CN109308490B (en) | Method and apparatus for generating information | |
CN110852881B (en) | Risk account identification method and device, electronic equipment and medium | |
US11468266B2 (en) | Target identification in large image data | |
CN112183166A (en) | Method and device for determining training sample and electronic equipment | |
CN113095346A (en) | Data labeling method and data labeling device | |
CN108197652A (en) | For generating the method and apparatus of information | |
US11429820B2 (en) | Methods for inter-camera recognition of individuals and their properties | |
CN115034315B (en) | Service processing method and device based on artificial intelligence, computer equipment and medium | |
US11562184B2 (en) | Image-based vehicle classification | |
CN111325181A (en) | State monitoring method and device, electronic equipment and storage medium | |
CN112270671B (en) | Image detection method, device, electronic equipment and storage medium | |
CN112101114B (en) | Video target detection method, device, equipment and storage medium | |
CN113822684B (en) | Black-market user identification model training method and device, electronic equipment and storage medium | |
CN112906810B (en) | Target detection method, electronic device, and storage medium | |
CN110399868B (en) | Coastal wetland bird detection method | |
CN113919361A (en) | Text classification method and device | |
CN111339952B (en) | Image classification method and device based on artificial intelligence and electronic equipment | |
US20200410245A1 (en) | Target model broker | |
CN117765348A (en) | Target detection model deployment method, target detection method and electronic equipment | |
US20210326645A1 (en) | Robust correlation of vehicle extents and locations when given noisy detections and limited field-of-view image frames | |
CN116977256A (en) | Training method, device, equipment and storage medium for defect detection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||