CN111242158A - Neural network training method, image processing method and device - Google Patents

Neural network training method, image processing method and device

Info

Publication number
CN111242158A
Authority
CN
China
Prior art keywords
training
neural network
training set
training data
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911233327.2A
Other languages
Chinese (zh)
Inventor
胡瀚涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201911233327.2A priority Critical patent/CN111242158A/en
Publication of CN111242158A publication Critical patent/CN111242158A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a neural network training method, an image processing method and an image processing device. The neural network training method comprises the following steps: obtaining a total training set, the total training set comprising training data of a plurality of classes, wherein each class comprises one or more training data; obtaining a head training set based on the number of training data included in each category, wherein the number of training data included in any category in the head training set is greater than the number of training data included in any category in the non-head training set; and adjusting parameters of the neural network based on the training data of the head training set and a first loss function, and adjusting parameters of the neural network based on the training data of the total training set and a second loss function, to complete training of the neural network. By adopting different training methods according to the number of training data in each class of the training set, the training quality is improved and the classification accuracy of the trained neural network is ensured.

Description

Neural network training method, image processing method and device
Technical Field
The present invention relates generally to the field of image recognition technology, and in particular, to a neural network training method, an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Training sets currently used in neural network training contain training data of a plurality of categories. To guarantee training quality and classification performance, the scale of the training set is often increased so that it covers more categories. As the scale increases, the training set exhibits a long-tail problem, namely: a few categories in the training set contain a large amount of training data (head data), while many categories contain only a small amount of training data (tail data).
If a classification loss function is adopted to train the neural network on such a data set, the excessive number of classes makes the classification loss function impose a heavy burden on computing resources. Meanwhile, under random sampling, classes with more training data are sampled more easily, so the trained neural network tends to predict samples as head data; the tail data, because of its small quantity and large noise influence, may have no effect, or even a negative effect, on the training of the classifier.
If the neural network is trained with the triplet loss function (Triplet Loss), although deployment of the triplet loss is not affected by the scale of the training set, the resulting precision is insufficient.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a neural network training method, an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a neural network training method, including: obtaining a total training set, the total training set comprising training data of a plurality of classes, wherein each class comprises one or more training data; obtaining a head training set based on the number of training data included in each category, wherein the number of training data included in any category in the head training set is more than the number of training data included in any category in the non-head training set; and adjusting parameters of the neural network based on the training data of the head training set and the first loss function, and adjusting parameters of the neural network based on the training data of the total training set and the second loss function to complete training of the neural network.
In one example, adjusting parameters of the neural network based on training data of the head training set and the first loss function includes: randomly sampling or PK sampling training data in the head training set to obtain a first sub-training set; performing feature extraction on training data in the first sub-training set through a neural network to obtain a first output result; adjusting a parameter of the neural network based on the first output result and a first loss function, wherein the first loss function is a classification loss function.
In one example, adjusting parameters of the neural network based on training data of the total training set and a second loss function includes: performing PK sampling on training data in the total training set to obtain a second sub-training set; extracting the features of the training data in the second sub-training set through a neural network to obtain a second output result; and adjusting parameters of the neural network based on the second output result and a second loss function, wherein the second loss function is a triplet loss function.
In one example, obtaining a head training set based on the number of training data included in each category includes: classifying the categories whose number of training data is greater than a preset threshold into the head training set.
In one example, obtaining a head training set based on the number of training data included in each category includes: dividing, according to a preset proportion or a preset number, the categories with the largest numbers of training data into the head training set.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing method including: acquiring an image; and carrying out image recognition through a neural network to obtain a classification result of the image, wherein the neural network is obtained by training through the neural network training method of the first aspect.
According to a third aspect of the embodiments of the present disclosure, there is provided a neural network training device, including: an acquisition module configured to acquire a total training set, the total training set including training data of a plurality of classes, wherein each class includes one or more training data; a dividing module configured to obtain a head training set based on the number of training data included in each category, wherein the number of training data included in any category in the head training set is greater than the number of training data included in any category in the non-head training set; and a training module configured to adjust parameters of the neural network based on the training data of the head training set and the first loss function, and to adjust parameters of the neural network based on the training data of the total training set and the second loss function, so as to complete the training of the neural network.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image processing apparatus comprising: the receiving module is used for acquiring an image; and the processing module is used for carrying out image recognition through a neural network to obtain a classification result of the image, wherein the neural network is obtained through training of the neural network training method of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus, wherein the electronic apparatus includes: a memory to store instructions; and a processor for invoking the memory-stored instructions to perform the neural network training method of the first aspect or the image processing method of the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the neural network training method of the first aspect or the image processing method of the second aspect.
According to the neural network training method, the image processing method and device, the electronic device, and the computer-readable storage medium of the present disclosure, multiple training methods, namely multiple sampling methods and multiple loss functions, are adopted according to the number of training data in each category of the training set, so that the training quality is improved and the classification accuracy of the trained neural network is ensured.
Drawings
The above and other objects, features and advantages of embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a diagram illustrating a neural network training method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a neural network training device according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an image processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an electronic device provided by an embodiment of the invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way.
It should be noted that although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present invention, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
At present, neural networks are applied in various fields, especially in the field of classification, where targets can be rapidly classified and identified by a neural network. As application scenarios become more complex and user demands grow, higher requirements are placed on the number of classification categories and the accuracy of the neural network, so the scale of the training set is enlarged. Enlarging the training set easily causes the long-tail problem mentioned above, so that neither a classification loss function nor a triplet loss function alone can achieve a good training effect.
The present disclosure provides a neural network training method 10, wherein the neural network may be a neural network for classification, and in some embodiments, the neural network may be a neural network for image recognition, such as a convolutional neural network. Fig. 1 is a schematic diagram of a neural network training method 10 according to an embodiment of the present disclosure, as shown in fig. 1, the method includes steps S11 to S13:
step S11, a total training set is obtained, the total training set including training data of a plurality of categories, wherein each category includes one or more training data.
A total training set containing all training data for training is obtained. The training data belong to different categories, and each category may include one or more training data. The number of training data in each category may differ, and in some cases the difference may be large: some common categories are easy to collect and therefore have a relatively large number of training data, while some rare categories are difficult to collect and therefore have a relatively small number of training data. In practice, the proportion of common categories among all categories is generally not high.
Step S12, obtaining a head training set based on the number of training data included in each category, wherein the number of training data included in any category in the head training set is greater than the number of training data included in any category in the non-head training set.
After the total training set is obtained, the categories are divided according to the number of training data in each category, and categories with a relatively large number of training data are placed in a head training set; the remaining categories may or may not be placed in a tail training set. The divided training sets can later be trained in different ways.
In one embodiment, step S12 may include: classifying the categories whose number of training data is greater than a preset threshold into the head training set. A threshold may be preset, and categories containing more training data than the preset threshold are classified into the head training set, that is, the head data are determined by the preset threshold. The method of this embodiment can ensure that every category in the head training set contains enough training data.
In another embodiment, step S12 may include: dividing, according to a preset proportion or a preset number, the categories with the largest numbers of training data into the head training set. In this embodiment, the number of training data in each category may be counted, and the categories with the largest numbers are classified as head data. For example, a preset number N may be determined according to the total number of classes, and the N classes with the most training data are divided into the head training set. Alternatively, a preset proportion, such as 20%, may be set, i.e., the top 20% of categories with the most training data are divided into the head training set. This approach can avoid too many or too few categories being classified into the head training set; a minimal sketch of such a split is given below.
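For illustration only, the following Python sketch implements the head/non-head split under the three variants just described (preset threshold, preset proportion, preset number); the function and parameter names (split_head_training_set, threshold, head_ratio, head_count) are assumptions and do not come from the patent.

```python
from collections import Counter

def split_head_training_set(samples, threshold=None, head_ratio=None, head_count=None):
    """Split (data, label) pairs into a head training set and the remaining (non-head) data.

    Exactly one of `threshold`, `head_ratio`, or `head_count` should be given,
    mirroring the three variants described above.
    """
    counts = Counter(label for _, label in samples)

    if threshold is not None:
        # Variant 1: every class with more than `threshold` samples is head data.
        head_classes = {c for c, n in counts.items() if n > threshold}
    else:
        # Variants 2 and 3: take the top classes by sample count.
        k = head_count if head_count is not None else max(1, int(len(counts) * head_ratio))
        head_classes = {c for c, _ in counts.most_common(k)}

    head, rest = [], []
    for item in samples:
        (head if item[1] in head_classes else rest).append(item)
    return head, rest

# Example: put the top 20% most-populated classes in the head training set.
# head_set, non_head_set = split_head_training_set(samples, head_ratio=0.2)
```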
Step S13, adjusting parameters of the neural network based on the training data of the head training set and the first loss function, and adjusting parameters of the neural network based on the training data of the total training set and the second loss function, so as to complete the training of the neural network.
Every category in the head training set contains relatively many training data, so when training on the head training set, the neural network can compute a loss value with the first loss function and adjust its parameters accordingly. At the same time, because the distribution of training data in the head training set differs from that in the total training set, when training on the total training set the neural network can compute a loss value with the second loss function and adjust its parameters accordingly. Training in different modes according to the amount of training data in each category of the training set makes the training more targeted and ensures the classification accuracy of the trained neural network.
In an embodiment, the adjusting parameters of the neural network based on the training data of the head training set and the first loss function in step S13 may include: randomly sampling or PK sampling training data in the head training set to obtain a first sub-training set; extracting the features of the training data in the first sub-training set through a neural network to obtain a first output result; adjusting a parameter of the neural network based on the first output result and a first loss function, wherein the first loss function is a classification loss function.
Because every class in the head training set includes relatively many training data, in each iteration the sub-training set for that iteration can be obtained by random sampling or PK sampling, where PK sampling refers to selecting P classes from the training set (in this embodiment, the head training set) and then randomly selecting K training data from each class, so that each iteration batch contains P × K training data. The sampled training data are passed through the neural network to obtain a first output result, the first output result is compared with the true classification results corresponding to the training data, a loss value is obtained through the classification loss function, and the parameters of the neural network are adjusted based on this loss value, as sketched below.
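A minimal PyTorch-style sketch of PK sampling and one such classification-loss iteration on the head training set; cross-entropy is used here as a generic classification loss, and the names pk_sample, head_step, backbone, and classifier are hypothetical, not taken from the patent.

```python
import random
import torch
import torch.nn.functional as F

def pk_sample(class_to_samples, P, K):
    """PK sampling: pick P classes, then K random samples from each class.

    `class_to_samples` is assumed to map integer class ids to lists of image tensors.
    """
    classes = random.sample(list(class_to_samples), P)
    batch, labels = [], []
    for c in classes:
        picks = random.choices(class_to_samples[c], k=K)  # with replacement, in case a class has fewer than K samples
        batch.extend(picks)
        labels.extend([c] * K)
    return torch.stack(batch), torch.tensor(labels)

def head_step(backbone, classifier, optimizer, head_class_to_samples, P=16, K=4):
    """One iteration on the head training set with the classification (first) loss."""
    images, labels = pk_sample(head_class_to_samples, P, K)  # could equally be a random batch
    features = backbone(images)             # feature extraction (first output result)
    logits = classifier(features)           # class scores for the head classes
    loss = F.cross_entropy(logits, labels)  # first loss function: classification loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```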
In another embodiment, the adjusting the parameters of the neural network based on the training data of the total training set and the second loss function in step S13 may include: performing PK sampling on training data in the total training set to obtain a second sub-training set; extracting the features of the training data in the second sub-training set through a neural network to obtain a second output result; and adjusting parameters of the neural network based on the second output result and a second loss function, wherein the second loss function is a triplet loss function.
Because the classes in the total training set contain widely different amounts of training data, and classes with only a small amount of data (tail data) are hard for the neural network to learn if trained with a classification loss function under random sampling, in this embodiment PK sampling is applied to all classes in the total training set to obtain a second sub-training set. The sampled training data are passed through the neural network to obtain a second output result, the second output result is compared with the true classification results corresponding to the training data, a loss value is obtained through the triplet loss function, and the parameters of the neural network are adjusted based on this loss value, ensuring that the tail data can also be used to train the neural network sufficiently.
The triplet loss function is built from triples consisting of an anchor sample (Anchor), a same-class sample (Positive), and a different-class sample (Negative), where the Positive is training data belonging to the same category as the Anchor and the Negative is training data of a different category. After the loss computed by this function is used to adjust the parameters, the distance between the Anchor and the Positive is minimized while the distance between the Anchor and the Negative is maximized. Moreover, the closer the distance (the higher the similarity) between the Negative and Anchor categories, the higher the training efficiency; a sketch of one triplet-loss iteration is given below.
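For illustration, a PyTorch-style sketch of one triplet-loss iteration on the total training set, reusing the pk_sample helper sketched above; batch-hard mining within the PK batch and the margin value are assumptions here (the patent only prescribes PK sampling and a triplet loss), as are the names batch_hard_triplet_loss and total_step.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """Batch-hard triplet loss over a PK-sampled batch.

    For each anchor, take the farthest positive and the nearest negative within
    the batch, and penalize when the positive is not closer than the negative
    by at least `margin`.
    """
    dist = torch.cdist(features, features)               # pairwise feature distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool)
    hardest_pos = (dist * (same & ~eye)).max(dim=1).values                # farthest positive per anchor
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values  # nearest negative per anchor
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def total_step(backbone, optimizer, total_class_to_samples, P=16, K=4):
    """One iteration on the total training set with the triplet (second) loss."""
    images, labels = pk_sample(total_class_to_samples, P, K)  # PK sampling over all classes
    features = backbone(images)                               # second output result
    loss = batch_hard_triplet_loss(features, labels)          # second loss function: triplet loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```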
The two training modes, namely training on the head training set with the classification loss function and training on the total training set with the triplet loss function, do not conflict with each other and can be carried out simultaneously. The classification loss function is applicable to both random sampling and PK sampling; however, if there are too many classification categories and many classes have too few examples, the training precision is not obviously improved or is even reduced, and deployment becomes difficult. The triplet loss function is only suitable for PK sampling, so its deployment is not affected by the scale of the training set, and its effect on the precision of the neural network model is positively correlated with that scale. Therefore, in the above embodiments of the present disclosure, the head training set is divided out of the total training set; the head training set is sampled randomly or by PK sampling and trained with the classification loss function, while the total training set is PK-sampled and trained with the triplet loss function. This avoids the drawbacks of training with the classification loss function alone or with the triplet loss function alone: the accuracy brought by the classification loss function under random sampling is exploited to the greatest extent while the negative effect of a large data set is avoided, and the large data set is fully used to train with the triplet loss function while the problem that triplets cannot be formed under random sampling is also avoided, so the advantage of the large data set is used to the greatest extent to strengthen feature learning. The training quality and training efficiency are thus guaranteed to the greatest extent, and the classification accuracy of the trained neural network is improved. A sketch of how the two training modes might be interleaved follows.
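One possible way to carry out the two modes simultaneously is simply to alternate them each iteration, as in the sketch below; the alternation strategy, optimizer, learning rate, and iteration count are assumptions rather than details given in the patent. head_set and total_set are class-id-to-samples dictionaries of the kind consumed by pk_sample above.

```python
import torch

def train(backbone, classifier, head_set, total_set, iterations=10000, lr=1e-3):
    """Joint training: classification loss on the head set, triplet loss on the total set."""
    optimizer = torch.optim.SGD(
        list(backbone.parameters()) + list(classifier.parameters()),
        lr=lr, momentum=0.9,
    )
    for it in range(iterations):
        cls_loss = head_step(backbone, classifier, optimizer, head_set)  # first loss, head training set
        tri_loss = total_step(backbone, optimizer, total_set)            # second loss, total training set
        if it % 100 == 0:
            print(f"iter {it}: classification loss {cls_loss:.4f}, triplet loss {tri_loss:.4f}")
```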
Based on the same inventive concept, the present disclosure further provides an image processing method 20, as shown in fig. 2, the image processing method 20 includes:
In step S21, an image is acquired, that is, the image to be recognized is obtained.
Step S22, performing image recognition through a neural network to obtain a classification result of the image, wherein the neural network is obtained through training with the neural network training method 10 of any of the above embodiments.
The neural network obtained with the neural network training method 10 can recognize the image more accurately.
Based on the same inventive concept, the disclosed embodiment provides a neural network training device 100, as shown in fig. 3, the neural network training device 100 includes: an obtaining module 110, configured to obtain a total training set, where the total training set includes training data of multiple categories, where each category includes one or more training data; a dividing module 120, configured to obtain a head training set based on the number of training data included in each category, where the number of training data included in any category in the head training set is greater than the number of training data included in any category in a non-head training set; and a training module 130, configured to adjust parameters of the neural network based on the training data of the head training set and the first loss function, and adjust parameters of the neural network based on the training data of the total training set and the second loss function, so as to complete training of the neural network.
In one example, training module 130 is further configured to: randomly sampling or PK sampling training data in the head training set to obtain a first sub-training set; performing feature extraction on training data in the first sub-training set through a neural network to obtain a first output result; adjusting a parameter of the neural network based on the first output result and a first loss function, wherein the first loss function is a classification loss function.
In one example, training module 130 is further configured to: performing PK sampling on training data in the total training set to obtain a second sub-training set; extracting the features of the training data in the second sub-training set through a neural network to obtain a second output result; and adjusting parameters of the neural network based on the second output result and a second loss function, wherein the second loss function is a triplet loss function.
In one example, the partitioning module 120 is further configured to: and classifying the categories of which the number of the training data is greater than a preset threshold into a head training set.
In one example, the partitioning module 120 is further configured to: and according to a preset proportion or a preset number, dividing a plurality of categories with the maximum number of training data into a head training set.
With respect to the neural network training device 100 in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, the disclosed embodiment provides an image processing apparatus 200, as shown in fig. 4, the image processing apparatus 200 including: a receiving module 210, configured to acquire an image; the processing module 220 is configured to perform image recognition through a neural network to obtain a classification result of the image, where the neural network is obtained by training through the neural network training method 10 according to any one of the embodiments.
With regard to the image processing apparatus 200 in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
As shown in fig. 5, one embodiment of the present invention provides an electronic device 40. The electronic device 40 includes a memory 410, a processor 420, and an Input/Output (I/O) interface 430. Memory 410, for storing instructions. And a processor 420 for calling the instructions stored in the memory 410 to execute the neural network training method or the image processing method according to the embodiment of the present invention. The processor 420 is connected to the memory 410 and the I/O interface 430, respectively, for example, via a bus system and/or other type of connection mechanism (not shown). The memory 410 may be used to store programs and data, including programs for a neural network training method or an image processing method according to an embodiment of the present invention, and the processor 420 may execute various functional applications of the electronic device 40 and data processing by executing the programs stored in the memory 410.
In the embodiment of the present invention, the processor 420 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA), and the processor 420 may be one or a combination of a Central Processing Unit (CPU) or other processing units with data processing capability and/or instruction execution capability.
Memory 410 in embodiments of the present invention may comprise one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile Memory may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The nonvolatile Memory may include, for example, a Read-only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like.
In the embodiment of the present invention, the I/O interface 430 may be used to receive input instructions (e.g., numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device 40, etc.), and may also output various information (e.g., images or sounds, etc.) to the outside. The I/O interface 430 may comprise one or more of a physical keyboard, function buttons (e.g., volume control buttons, switch buttons, etc.), a mouse, a joystick, a trackball, a microphone, a speaker, a touch panel, and the like.
In some embodiments, the invention provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present invention can be accomplished with standard programming techniques, using rule-based logic or other logic to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementation of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiments were chosen and described in order to explain the principles of the invention and its practical application to enable one skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A neural network training method, wherein the method comprises:
obtaining a total training set, the total training set comprising training data of a plurality of classes, wherein each of the classes comprises one or more of the training data;
obtaining a head training set based on the number of training data included in each category, wherein the number of training data included in any category in the head training set is greater than the number of training data included in any category in the non-head training set;
adjusting parameters of a neural network based on the training data of the head training set and a first loss function, and adjusting parameters of the neural network based on the training data of the total training set and a second loss function, to complete training of the neural network.
2. The neural network training method of claim 1, wherein the adjusting parameters of a neural network based on the training data of the head training set and a first loss function comprises:
randomly sampling or PK sampling the training data in the head training set to obtain a first sub-training set;
performing feature extraction on the training data in the first sub-training set through the neural network to obtain a first output result;
adjusting a parameter of the neural network based on the first output result and the first loss function, wherein the first loss function is a classification loss function.
3. The neural network training method of claim 1 or 2, wherein the adjusting parameters of the neural network based on the training data of the total training set and a second loss function comprises:
performing PK sampling on the training data in the total training set to obtain a second sub-training set;
performing feature extraction on the training data in the second sub-training set through the neural network to obtain a second output result;
adjusting a parameter of the neural network based on the second output result and the second loss function, wherein the second loss function is a triplet loss function.
4. The neural network training method of claim 1, wherein the deriving a head training set based on the amount of training data included in each of the classes comprises:
and classifying the categories of which the number of the training data is greater than a preset threshold into the head training set.
5. The neural network training method of claim 1, wherein the deriving a head training set based on the amount of training data included in each of the classes comprises:
and according to a preset proportion or a preset number, dividing the plurality of categories with the largest number of the training data into the head training set.
6. An image processing method, wherein the method comprises:
acquiring an image;
and performing image recognition through a neural network to obtain a classification result of the image, wherein the neural network is obtained by training through the neural network training method of any one of claims 1-5.
7. A neural network training apparatus, wherein the apparatus comprises:
an obtaining module configured to obtain a total training set, where the total training set includes training data of a plurality of classes, and each of the classes includes one or more of the training data;
a dividing module, configured to obtain a head training set based on a number of training data included in each of the categories, where a number of training data included in any one of the categories in the head training set is greater than a number of training data included in any one of the categories in the non-head training set;
and the training module is used for adjusting parameters of a neural network based on the training data of the head training set and a first loss function, and adjusting the parameters of the neural network based on the training data of the total training set and a second loss function, so as to complete the training of the neural network.
8. An image processing apparatus, wherein the apparatus comprises:
the receiving module is used for acquiring an image;
a processing module, configured to perform image recognition through a neural network to obtain a classification result of the image, where the neural network is obtained by training through the neural network training method according to any one of claims 1 to 5.
9. An electronic device, wherein the electronic device comprises:
a memory to store instructions; and
a processor for invoking the memory-stored instructions to perform the neural network training method of any one of claims 1-5 or the image processing method of claim 6.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the neural network training method of any one of claims 1-5 or the image processing method of claim 6.
CN201911233327.2A 2019-12-05 2019-12-05 Neural network training method, image processing method and device Pending CN111242158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911233327.2A CN111242158A (en) 2019-12-05 2019-12-05 Neural network training method, image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911233327.2A CN111242158A (en) 2019-12-05 2019-12-05 Neural network training method, image processing method and device

Publications (1)

Publication Number Publication Date
CN111242158A true CN111242158A (en) 2020-06-05

Family

ID=70877555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911233327.2A Pending CN111242158A (en) 2019-12-05 2019-12-05 Neural network training method, image processing method and device

Country Status (1)

Country Link
CN (1) CN111242158A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042123A1 (en) * 2020-08-25 2022-03-03 深圳思谋信息科技有限公司 Image recognition model generation method and apparatus, computer device and storage medium
CN113379059A (en) * 2021-06-10 2021-09-10 北京百度网讯科技有限公司 Model training method for quantum data classification and quantum data classification method
CN113392757A (en) * 2021-06-11 2021-09-14 恒睿(重庆)人工智能技术研究院有限公司 Method, device and medium for training human body detection model by using unbalanced data
CN113392757B (en) * 2021-06-11 2023-08-15 恒睿(重庆)人工智能技术研究院有限公司 Method, device and medium for training human body detection model by using unbalanced data

Similar Documents

Publication Publication Date Title
WO2022042123A1 (en) Image recognition model generation method and apparatus, computer device and storage medium
JP7266674B2 (en) Image classification model training method, image processing method and apparatus
CN111242158A (en) Neural network training method, image processing method and device
US9619753B2 (en) Data analysis system and method
CN111079841A (en) Training method and device for target recognition, computer equipment and storage medium
CN106850338B (en) Semantic analysis-based R +1 type application layer protocol identification method and device
CN112085701B (en) Face ambiguity detection method and device, terminal equipment and storage medium
US20180330273A1 (en) Adding Negative Classes for Training Classifier
CN111260032A (en) Neural network training method, image processing method and device
WO2021129121A1 (en) Table recognition method and device, and computer-readable storage medium
US20200082213A1 (en) Sample processing method and device
JP2021193615A (en) Quantum data processing method, quantum device, computing device, storage medium, and program
JP2017010554A (en) Curved line detection method and curved line detection device
CN112560545B (en) Method and device for identifying form direction and electronic equipment
CN108734127B (en) Age identification value adjusting method, age identification value adjusting device, age identification value adjusting equipment and storage medium
CN110807767A (en) Target image screening method and target image screening device
CN110717529B (en) Data sampling method and device
CN109754077B (en) Network model compression method and device of deep neural network and computer equipment
CN112529114A (en) Target information identification method based on GAN, electronic device and medium
CN111783812A (en) Method and device for identifying forbidden images and computer readable storage medium
US20170039484A1 (en) Generating negative classifier data based on positive classifier data
CN113762005A (en) Method, device, equipment and medium for training feature selection model and classifying objects
WO2016149937A1 (en) Neural network classification through decomposition
WO2022179382A1 (en) Object recognition method and apparatus, and device and medium
CN112766387B (en) Training data error correction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605