CN116416492B - Automatic data augmentation method based on feature adaptation - Google Patents

Automatic data augmentation method based on feature adaptation

Info

Publication number
CN116416492B
CN116416492B
Authority
CN
China
Prior art keywords
augmentation
image
strategy
training set
policy
Prior art date
Legal status
Active
Application number
CN202310271781.7A
Other languages
Chinese (zh)
Other versions
CN116416492A (en)
Inventor
刘敏
马云峰
唐毅
王耀南
卢继武
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202310271781.7A
Publication of CN116416492A
Application granted
Publication of CN116416492B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 Querying
    • G06F 16/535 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The embodiment of the application discloses an automatic data augmentation method based on feature adaptation, which comprises the following steps: constructing a training set and a verification set; constructing an augmentation strategy search space from a set of augmentation operations, with each image in the training set assigned its own augmentation strategy search space; within the search space of each image, iteratively searching for an augmentation strategy with a Bayesian-optimization-based search framework until the number of iterations reaches a preset number, yielding the optimal augmentation strategy for each image; establishing a neural network model; augmenting the training set with the optimal strategy of each image to obtain an augmented training set; training the neural network model on the augmented training set; and testing the trained model on the verification set, calculating its accuracy, and putting the trained model into practical application if the accuracy meets the requirements. The method replaces traditionally hand-designed augmentation strategies, saving both time and labor.

Description

Automatic data augmentation method based on feature adaptation
Technical Field
The application relates to the technical field of automatic data augmentation, and in particular to an automatic data augmentation method based on feature adaptation.
Background
In recent years, with the development of computer hardware and convolutional neural networks, deep learning has been widely applied in many fields, including autonomous driving, intelligent robotics, and video surveillance, and has achieved good results. For deep-learning-based methods, the quantity and quality of data are among the key factors affecting model accuracy, but obtaining large-scale, high-quality data requires manual labeling by many personnel with domain expertise, which is often time-consuming and expensive. In addition, in fields such as medicine and finance, it is often difficult to collect large-scale data because of patient and customer privacy.
Automatic data augmentation automatically searches for a suitable augmentation strategy and uses it to augment the original data set, so that large-scale augmented data can be generated rapidly and the data set effectively enlarged. Several automatic data augmentation methods have been proposed to search for augmentation strategies automatically, but they cannot adaptively adjust the strategy according to the characteristics of each image, so the searched strategy is not globally optimal. Using a suboptimal augmentation strategy often causes the augmented data to deviate from the distribution of the original training set, which affects the stability of the model.
Disclosure of Invention
Based on this, aiming at the existing problems, it is necessary to provide an automatic data augmentation method that adjusts the augmentation strategy automatically according to the characteristics of each image so that the searched strategy is globally optimal; in particular, an automatic data augmentation method based on feature adaptation.
The application provides an automatic data augmentation method based on feature adaptation, which comprises the following steps:
S1: constructing a training set and a verification set, both of which comprise images with labeling information;
S2: constructing an augmentation strategy search space from a set of augmentation operations, the search space containing augmentation strategies;
S3: assigning each image in the training set its own augmentation strategy search space; within the search space of each image, iteratively searching for an augmentation strategy with the Bayesian-optimization-based search framework until the number of iterations reaches a preset number, yielding the optimal augmentation strategy for each image;
S4: establishing a neural network model, and augmenting the training set with the optimal strategy of each image to obtain an augmented training set;
S5: training the neural network model on the augmented training set;
S6: testing the trained neural network model on the verification set and calculating its accuracy; if the accuracy meets the requirements, putting the trained model into practical application; otherwise, adjusting the neural network model according to the results and returning to step S5.
Preferably, the augmentation operations include color transformations, geometric transformations, and simulated-occlusion operations.
Preferably, in S2, all the augmentation operations are combined pairwise to obtain a number of augmentation strategies; these strategies form one search space, and the search space is copied N times to form the augmentation strategy search spaces. In the search space of each image, the probabilities with which the augmentation strategies are sampled sum to 1, expressed as

\(\sum_{j=1}^{M} p_j^i = 1, \quad i = 1, 2, \dots, K,\)

where M is the number of augmentation strategies in a search space; j indexes the j-th augmentation strategy; the policy parameter \(p_j^i\) is the probability that the j-th strategy is sampled in the search space of the i-th image; and K is the number of images, with K = N.
Preferably, in S3, the Bayesian-optimization-based augmentation strategy search framework comprises an image encoding module, a trained network model, and a Bayesian optimization module. The image encoding module encodes each image and ties the augmentation strategies to their image through the encoding; the trained network model computes the verification loss; and the Bayesian optimization module optimizes the policy parameters based on the verification loss.
Preferably, in S3, the optimal augmentation strategy of each image is obtained as follows:
S3.1: in the image encoding module, encoding each image in the training set and tying the augmentation strategies to their image through the encoding;
S3.2: for the first image in the training set, augmenting it with a sampled augmentation strategy to obtain a first augmented image;
S3.3: inputting the first augmented image into the trained network model to obtain the verification loss;
S3.4: in the Bayesian optimization module, optimizing the policy parameters according to the verification loss;
S3.5: resampling an augmentation strategy from the search space with the updated policy parameters, augmenting the first image with the resampled strategy to obtain a second augmented image, and returning to S3.3; S3.3-S3.5 are executed in a loop until the number of iterations reaches the preset number, ending the inner loop;
S3.6: after the inner loop ends, the optimal augmentation strategy search for the first image is complete;
S3.7: taking the second image in the training set, returning to S3.2, and executing S3.2-S3.6 to obtain the optimal augmentation strategy of the second image;
S3.8: taking the remaining images in the training set in turn and repeating S3.7 until the optimal augmentation strategy of every image in the training set has been found, ending the outer loop.
Preferably, S3.1 comprises:
S3.1.1: encoding each image in the training set with an independent code, so that each image has a unique code;
S3.1.2: assigning each augmentation strategy in the search space of an image the same code as that image, thereby tying the strategy to the image.
Preferably, in S3.3, the verification loss is calculated as

\(\mathcal{L}_{\mathrm{val}} = L_\gamma\!\left(f_\gamma\!\left(\tau\!\left(x_i;\, p^i\right)\right),\, y_i;\, \sigma_i\right),\)

where \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(L_\gamma(\cdot)\) is the loss function of the trained network model; \(f_\gamma\) is the trained network model; \(\tau\) is the sampled augmentation strategy; \(p^i\) are the policy parameters corresponding to image \(x_i\); \(x_i\) is the i-th image in the training set; \(y_i\) is its labeling information; and \(\sigma_i\) is its code.
Preferably, in S3.4, a Bayesian optimization method is used to optimize the policy parameters according to the verification loss.
Preferably, the policy parameters are optimized as follows:
S3.4.1: constructing a proxy (acquisition) function with the expected improvement (EI) function:

\(a\!\left(p^i\right) = \mathbb{E}_{\mathcal{L}_{\mathrm{val}} \sim P\left(\mathcal{L}_{\mathrm{val}} \mid p^i\right)}\!\left[\max\!\left(\mathcal{L}_{\mathrm{val}}^{*} - \mathcal{L}_{\mathrm{val}} - \epsilon,\ 0\right)\right],\)

where \(a(p^i)\) is the proxy function of the policy parameters \(p^i\); \(\mathcal{L}_{\mathrm{val}}^{*}\) is the best verification loss observed so far, taken over the values of \(\mathcal{L}_{\mathrm{val}}\); \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(\epsilon\) is a hyperparameter; and \(P(\mathcal{L}_{\mathrm{val}} \mid p^i)\) is a probability distribution model fitted to the historical sampling pairs;
S3.4.2: maximizing the proxy function to obtain the updated policy parameters:

\(p^i_{\mathrm{new}} = \arg\max_{p^i} a\!\left(p^i\right),\)

where \(p^i_{\mathrm{new}}\) are the updated policy parameters.
Preferably, the number of augmentation strategies is 136².
The beneficial effects are that: unlike existing automatic data augmentation methods, which search for one suitable augmentation strategy for the entire data set, the method provided by the application searches for the optimal augmentation strategy of each image in the data set according to that image's own characteristics. It replaces traditionally hand-designed augmentation strategies, saving time and labor, and by customizing the optimal strategy for each image it avoids generating outlier data and improves stability.
Drawings
Exemplary embodiments of the present application may be more fully understood by reference to the following drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application together with its embodiments and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a flow chart of a method provided according to an exemplary embodiment of the present application.
Fig. 2 is a flow chart of a method for searching for an optimal augmentation strategy for each image according to an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
In addition, the terms "first" and "second" etc. are used to distinguish different objects and are not used to describe a particular order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides an automatic data augmentation method based on characteristic self-adaption, which is described below with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart illustrating a feature-adaptation-based automatic data augmentation method according to some embodiments of the present application, the method may include the following steps:
S1: constructing a training set and a verification set, both of which comprise images with labeling information;
S2: constructing an augmentation strategy search space from a set of augmentation operations, the search space containing augmentation strategies;
In this embodiment, the augmentation operations include, but are not limited to, color transformations, geometric transformations, and simulated-occlusion operations.
Combining 136 augmentation operations pairwise yields 136² ≈ 1.8×10⁴ augmentation strategies; these strategies constitute one search space, and the search space is copied N times to form the augmentation strategy search spaces. In the search space of each image, the probabilities with which the augmentation strategies are sampled sum to 1, expressed as

\(\sum_{j=1}^{M} p_j^i = 1, \quad i = 1, 2, \dots, K,\)

where M is the number of augmentation strategies in a search space; j indexes the j-th augmentation strategy; the policy parameter \(p_j^i\) is the probability that the j-th strategy is sampled in the search space of the i-th image; and K is the number of images, with K = N. The larger the policy parameter \(p_j^i\), the greater the probability that the j-th augmentation strategy is sampled, and vice versa; during the subsequent strategy search, better augmentation strategies are given larger policy parameters.
S3: assigning each image in the training set its own augmentation strategy search space; within the search space of each image, iteratively searching for an augmentation strategy with the Bayesian-optimization-based search framework until the number of iterations reaches a preset number, yielding the optimal augmentation strategy for each image;
First, the Bayesian-optimization-based augmentation strategy search framework comprises an image encoding module, a trained network model, and a Bayesian optimization module. The image encoding module encodes each image and ties the augmentation strategies to their image through the encoding; the trained network model computes the verification loss; and the Bayesian optimization module optimizes the policy parameters based on the verification loss.
Second, the optimal augmentation strategy of each image is searched with this framework; the process, shown in fig. 2, comprises the following steps:
S3.1: in the image encoding module, encoding each image in the training set and tying the augmentation strategies to their image through the encoding;
Specifically, S3.1 comprises:
S3.1.1: encoding each image in the training set with an independent code, so that each image has a unique code;
S3.1.2: assigning each augmentation strategy in the search space of an image the same code as that image, thereby tying the strategy to the image.
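A minimal sketch of this encoding step, assuming a simple integer code per image and dictionary bookkeeping (the data structures are illustrative; the patent does not prescribe them):

```python
def encode_training_set(images, strategies):
    """Give each image a unique code and tag its private copy of the
    search space with the same code, tying strategies to their image."""
    table = {}
    for code, image in enumerate(images):  # the code plays the role of sigma_i
        table[code] = {
            "image": image,
            "strategies": [{"code": code, "ops": s} for s in strategies],
        }
    return table

table = encode_training_set(
    images=["img_0", "img_1"],
    strategies=[("rotate", "cutout"), ("shear", "equalize")],
)
```

Each image thus carries its own tagged copy of the search space, so a sampled strategy can always be traced back to the image it was optimized for.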
S3.2: for the first image in the training set, augmenting it with a sampled augmentation strategy to obtain a first augmented image;
S3.3: inputting the first augmented image into the trained network model to obtain the verification loss;
The verification loss is calculated as

\(\mathcal{L}_{\mathrm{val}} = L_\gamma\!\left(f_\gamma\!\left(\tau\!\left(x_i;\, p^i\right)\right),\, y_i;\, \sigma_i\right),\)

where \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(L_\gamma(\cdot)\) is the loss function of the trained network model; \(f_\gamma\) is the trained network model; \(\tau\) is the sampled augmentation strategy; \(p^i\) are the policy parameters corresponding to image \(x_i\); \(x_i\) is the i-th image in the training set; \(y_i\) is its labeling information; and \(\sigma_i\) is its code.
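The sampling-and-scoring step can be sketched as below, with toy stand-ins for the trained network's loss and for the augmentation itself (all callables are hypothetical placeholders, not the patent's implementation):

```python
import numpy as np

def verification_loss(model_loss, apply_strategy, strategies, probs, x_i, y_i, rng):
    """Sample a strategy tau according to the policy parameters p^i, augment
    x_i with it, and score the result with the trained network's loss."""
    j = rng.choice(len(strategies), p=probs)  # sample under the current p^i
    x_aug = apply_strategy(strategies[j], x_i)
    return model_loss(x_aug, y_i), j

rng = np.random.default_rng(0)
loss, picked = verification_loss(
    model_loss=lambda x, y: abs(x - y),  # stand-in for L_gamma(f_gamma(.))
    apply_strategy=lambda s, x: s * x,   # stand-in augmentation: scale the input
    strategies=[0.5, 1.0, 2.0],
    probs=[0.2, 0.5, 0.3],
    x_i=2.0,
    y_i=1.0,
    rng=rng,
)
```

Each (strategy, loss) pair produced this way is one of the "historical sampling pairs" fed to the Bayesian optimization module in the next step.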
S3.4: in the Bayesian optimization module, optimizing the policy parameters according to the verification loss with a Bayesian optimization method; the process comprises the following steps:
S3.4.1: constructing a proxy (acquisition) function with the expected improvement (EI) function:

\(a\!\left(p^i\right) = \mathbb{E}_{\mathcal{L}_{\mathrm{val}} \sim P\left(\mathcal{L}_{\mathrm{val}} \mid p^i\right)}\!\left[\max\!\left(\mathcal{L}_{\mathrm{val}}^{*} - \mathcal{L}_{\mathrm{val}} - \epsilon,\ 0\right)\right],\)

where \(a(p^i)\) is the proxy function of the policy parameters \(p^i\); \(\mathcal{L}_{\mathrm{val}}^{*}\) is the best verification loss observed so far, taken over the values of \(\mathcal{L}_{\mathrm{val}}\); \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(\epsilon\) is a hyperparameter; and \(P(\mathcal{L}_{\mathrm{val}} \mid p^i)\) is a probability distribution model fitted to the historical sampling pairs;
S3.4.2: maximizing the proxy function to obtain the updated policy parameters:

\(p^i_{\mathrm{new}} = \arg\max_{p^i} a\!\left(p^i\right),\)

where \(p^i_{\mathrm{new}}\) are the updated policy parameters.
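Under a Gaussian surrogate for \(P(\mathcal{L}_{\mathrm{val}} \mid p^i)\), the expected-improvement proxy has a closed form. A sketch follows; the Gaussian model and the candidate grid are assumptions, since the patent does not specify the surrogate:

```python
import math

def expected_improvement(mu, sigma, best_loss, eps=0.01):
    """EI for minimizing the verification loss, assuming
    L_val ~ N(mu, sigma^2): E[max(best_loss - eps - L_val, 0)]."""
    if sigma <= 0.0:
        return max(best_loss - eps - mu, 0.0)
    z = (best_loss - eps - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (best_loss - eps - mu) * cdf + sigma * pdf

# Maximizing the proxy over hypothetical candidate parameter settings:
candidates = [(0.9, 0.10), (0.5, 0.30), (0.7, 0.05)]  # (mu, sigma) per setting
best_idx = max(range(len(candidates)),
               key=lambda k: expected_improvement(*candidates[k], best_loss=0.8))
# The low-mean, high-uncertainty candidate wins here: best_idx == 1
```

EI naturally trades off exploitation (low predicted loss) against exploration (high surrogate uncertainty), which is why the middle candidate is preferred in this toy grid.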
S3.5: resampling an augmentation strategy from the search space with the updated policy parameters, augmenting the first image with the resampled strategy to obtain a second augmented image, and returning to S3.3; S3.3-S3.5 are executed in a loop until the number of iterations reaches the preset number, ending the inner loop;
In this embodiment, the preset number of iterations may be chosen according to actual requirements.
S3.6: after the inner loop ends, the optimal augmentation strategy search for the first image is complete;
S3.7: taking the second image in the training set, returning to S3.2, and executing S3.2-S3.6 to obtain the optimal augmentation strategy of the second image;
S3.8: taking the remaining images in the training set in turn and repeating S3.7 until the optimal augmentation strategy of every image in the training set has been found, ending the outer loop.
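Putting S3.2-S3.8 together, the two nested loops might be organized like this (the helper callables stand in for the modules described above and are not the patent's implementation):

```python
def search_optimal_strategies(images, n_iters, sample_strategy,
                              verification_loss, update_params, init_params):
    """Outer loop over images, inner loop of sample -> score -> update;
    returns the lowest-loss strategy found for each image."""
    best = {}
    for i, image in enumerate(images):          # outer loop (S3.7-S3.8)
        params = init_params()
        history = []
        for _ in range(n_iters):                # inner loop (S3.3-S3.5)
            strategy = sample_strategy(params)  # sample under current p^i
            loss = verification_loss(image, strategy)
            history.append((strategy, loss))
            params = update_params(params, history)  # Bayesian update (S3.4)
        best[i] = min(history, key=lambda pair: pair[1])[0]
    return best

# Deterministic toy run: strategies are numbers, loss is distance to the image.
strategies = [0, 1, 2]
best = search_optimal_strategies(
    images=[2, 0],
    n_iters=3,
    sample_strategy=lambda p: strategies[p % 3],
    verification_loss=lambda img, s: abs(img - s),
    update_params=lambda p, h: p + 1,
    init_params=lambda: 0,
)
# best == {0: 2, 1: 0}: each image keeps the strategy with the lowest loss
```

Because every image gets a fresh parameter vector and its own history, the search is fully per-image, matching the feature-adaptive design of the method.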
S4: establishing a neural network model, and augmenting the training set with the optimal strategy of each image to obtain an augmented training set;
S5: training the neural network model on the augmented training set;
S6: testing the trained neural network model on the verification set and calculating its accuracy; if the accuracy meets the requirements, putting the trained model into practical application; otherwise, adjusting the neural network model according to the results and returning to step S5.
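Steps S4-S6 amount to an augment, train, gate-on-accuracy cycle. One hedged way to organize it, with the model, trainer, and evaluator as hypothetical stand-ins:

```python
def augment_train_validate(train_set, best_strategy, apply_strategy,
                           train, evaluate, target_accuracy):
    """Augment each image with its own optimal strategy (S4), train on the
    augmented set (S5), and deploy only if the verification accuracy meets
    the target (S6)."""
    augmented = [(apply_strategy(best_strategy[i], x), y)
                 for i, (x, y) in enumerate(train_set)]
    model = train(augmented)
    accuracy = evaluate(model)
    return model, accuracy, accuracy >= target_accuracy

# Toy stand-ins: "training" just memorizes the augmented samples.
model, acc, deploy = augment_train_validate(
    train_set=[("a", 0), ("b", 1)],
    best_strategy={0: "flip", 1: "crop"},
    apply_strategy=lambda s, x: f"{s}({x})",
    train=lambda data: dict(data),
    evaluate=lambda m: 0.95,
    target_accuracy=0.9,
)
```

If `deploy` is false, the model would be adjusted and retrained on the same augmented set, mirroring the return to step S5.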
The method provided by the embodiment has the following beneficial effects:
1. the constructed augmentation strategy search space can well reserve better operation in the strategy search stage, and inhibit worse operation so as to ensure the quality of the augmentation image, avoid the generation of outlier data and improve the detection precision of the neural network model;
2. the Bayesian optimization-based strategy search framework can improve the search speed of the strategy while ensuring the accuracy, so that the strategy search time of automatic data augmentation is greatly shortened;
3. the method has a large practical application value, the augmentation strategy searched by the method can be applied to the field of pedestrian detection, the blocked pedestrians in the actual scene can be well detected, and the performance of the pedestrian detector in the actual scene is improved.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the present application. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can be modified, or some or all of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application and are intended to be included within the scope of the appended claims and description.

Claims (7)

1. An automatic data augmentation method based on feature adaptation, comprising:
S1: constructing a training set and a verification set, both of which comprise images with labeling information;
S2: constructing an augmentation strategy search space from a set of augmentation operations, the search space containing augmentation strategies;
S3: assigning each image in the training set one of the augmentation strategy search spaces; within the search space of each image, iteratively searching for an augmentation strategy with the Bayesian-optimization-based search framework until the number of iterations reaches a preset number, yielding the optimal augmentation strategy for each image;
the Bayesian-optimization-based augmentation strategy search framework comprises an image encoding module, a trained network model, and a Bayesian optimization module; the image encoding module encodes each image and ties the augmentation strategies to their image through the encoding; the trained network model computes the verification loss; the Bayesian optimization module optimizes the policy parameters based on the verification loss;
the optimal augmentation strategy of each image is obtained as follows:
S3.1: in the image encoding module, encoding each image in the training set and tying the augmentation strategies to their image through the encoding;
S3.2: for the first image in the training set, augmenting it with a sampled augmentation strategy to obtain a first augmented image;
S3.3: inputting the first augmented image into the trained network model to obtain the verification loss;
S3.4: in the Bayesian optimization module, optimizing the policy parameters according to the verification loss;
the policy parameters are optimized as follows:
S3.4.1: constructing a proxy (acquisition) function with the expected improvement (EI) function:

\(a\!\left(p^i\right) = \mathbb{E}_{\mathcal{L}_{\mathrm{val}} \sim P\left(\mathcal{L}_{\mathrm{val}} \mid p^i\right)}\!\left[\max\!\left(\mathcal{L}_{\mathrm{val}}^{*} - \mathcal{L}_{\mathrm{val}} - \epsilon,\ 0\right)\right],\)

where \(a(p^i)\) is the proxy function of the policy parameters \(p^i\); \(\mathcal{L}_{\mathrm{val}}^{*}\) is the best verification loss observed so far, taken over the values of \(\mathcal{L}_{\mathrm{val}}\); \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(\epsilon\) is a hyperparameter; and \(P(\mathcal{L}_{\mathrm{val}} \mid p^i)\) is a probability distribution model fitted to the historical sampling pairs;
S3.4.2: maximizing the proxy function to obtain the updated policy parameters:

\(p^i_{\mathrm{new}} = \arg\max_{p^i} a\!\left(p^i\right),\)

where \(p^i_{\mathrm{new}}\) are the updated policy parameters;
S3.5: resampling an augmentation strategy from the search space with the updated policy parameters, augmenting the first image with the resampled strategy to obtain a second augmented image, and returning to S3.3; S3.3-S3.5 are executed in a loop until the number of iterations reaches the preset number, ending the inner loop;
S3.6: after the inner loop ends, the optimal augmentation strategy search for the first image is complete;
S3.7: taking the second image in the training set, returning to S3.2, and executing S3.2-S3.6 to obtain the optimal augmentation strategy of the second image;
S3.8: taking the remaining images in the training set in turn and repeating S3.7 until the optimal augmentation strategy of every image in the training set has been found, ending the outer loop;
S4: establishing a neural network model, and augmenting the training set with the optimal strategy of each image to obtain an augmented training set;
S5: training the neural network model on the augmented training set;
S6: testing the trained neural network model on the verification set and calculating its accuracy; if the accuracy meets the requirements, putting the trained model into practical application; otherwise, adjusting the neural network model according to the results and returning to step S5.
2. The automatic data augmentation method based on feature adaptation of claim 1, wherein the augmentation operations comprise color transformations, geometric transformations, and simulated-occlusion operations.
3. The automatic data augmentation method based on feature adaptation of claim 2, wherein in S2 all the augmentation operations are combined pairwise to obtain a number of augmentation strategies; these strategies form one search space, and the search space is copied N times to form the augmentation strategy search spaces; in the search space of each image, the probabilities with which the augmentation strategies are sampled sum to 1, expressed as

\(\sum_{j=1}^{M} p_j^i = 1, \quad i = 1, 2, \dots, K,\)

where M is the number of augmentation strategies in a search space; j indexes the j-th augmentation strategy; the policy parameter \(p_j^i\) is the probability that the j-th strategy is sampled in the search space of the i-th image; and K is the number of images, with K = N.
4. The automatic data augmentation method based on feature adaptation of claim 1, wherein S3.1 comprises:
S3.1.1: encoding each image in the training set with an independent code, so that each image has a unique code;
S3.1.2: assigning each augmentation strategy in the search space of an image the same code as that image, thereby tying the strategy to the image.
5. The automatic data augmentation method based on feature adaptation of claim 1, wherein in S3.3 the verification loss is calculated as

\(\mathcal{L}_{\mathrm{val}} = L_\gamma\!\left(f_\gamma\!\left(\tau\!\left(x_i;\, p^i\right)\right),\, y_i;\, \sigma_i\right),\)

where \(\mathcal{L}_{\mathrm{val}}\) is the verification loss; \(L_\gamma(\cdot)\) is the loss function of the trained network model; \(f_\gamma\) is the trained network model; \(\tau\) is the sampled augmentation strategy; \(p^i\) are the policy parameters corresponding to image \(x_i\); \(x_i\) is the i-th image in the training set; \(y_i\) is its labeling information; and \(\sigma_i\) is its code.
6. The automatic data augmentation method based on feature adaptation of claim 5, wherein in S3.4 the policy parameters are optimized according to the verification loss with a Bayesian optimization method.
7. The automatic data augmentation method based on feature adaptation of claim 3, wherein the number of augmentation strategies is 136².
CN202310271781.7A 2023-03-20 2023-03-20 Automatic data augmentation method based on feature adaptation Active CN116416492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310271781.7A CN116416492B (en) 2023-03-20 2023-03-20 Automatic data augmentation method based on feature adaptation


Publications (2)

Publication Number Publication Date
CN116416492A CN116416492A (en) 2023-07-11
CN116416492B true CN116416492B (en) 2023-12-01

Family

ID=87057566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310271781.7A Active CN116416492B (en) 2023-03-20 2023-03-20 Automatic data augmentation method based on characteristic self-adaption

Country Status (1)

Country Link
CN (1) CN116416492B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807109A (en) * 2019-11-08 2020-02-18 北京金山云网络技术有限公司 Data enhancement strategy generation method, data enhancement method and device
CN111275129A (en) * 2020-02-17 2020-06-12 平安科技(深圳)有限公司 Method and system for selecting image data augmentation strategy
CN111882492A (en) * 2020-06-18 2020-11-03 天津中科智能识别产业技术研究院有限公司 Method for automatically enhancing image data
KR20210033235A (en) * 2019-09-18 2021-03-26 주식회사카카오브레인 Data augmentation method and apparatus, and computer program
CN112580720A (en) * 2020-12-18 2021-03-30 华为技术有限公司 Model training method and device
CN112686282A (en) * 2020-12-11 2021-04-20 天津中科智能识别产业技术研究院有限公司 Target detection method based on self-learning data
CN113569726A (en) * 2021-07-27 2021-10-29 湖南大学 Pedestrian detection method combining automatic data amplification and loss function search
WO2021248068A1 (en) * 2020-06-05 2021-12-09 Google Llc Machine learning algorithm search with symbolic programming
CN113822444A (en) * 2021-02-09 2021-12-21 日本电气株式会社 Method, apparatus and computer-readable storage medium for model training and data processing
CN114693935A (en) * 2022-04-15 2022-07-01 湖南大学 Medical image segmentation method based on automatic data augmentation
CN114926701A (en) * 2021-02-01 2022-08-19 北京图森智途科技有限公司 Model training method, target detection method and related equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AutoPedestrian: An Automatic Data Augmentation and Loss Function Search Scheme for Pedestrian Detection; Yi Tang et al.; IEEE Transactions on Image Processing; 1-14 *
Application of artificial intelligence to cavity aerodynamic/acoustic characteristic prediction and control parameter optimization; Wu Junqiang et al.; Journal of Experiments in Fluid Mechanics; Vol. 36, No. 3; 33-43 *
Fast detection algorithm for apparent bridge defects based on improved YOLO dual networks; Peng Yunuo et al.; Acta Automatica Sinica; Vol. 48, No. 4; 1018-1032 *
Invariance cross-domain person re-identification based on automatic data augmentation; Hu Yan; China Master's Theses Full-text Database (Information Science and Technology); I138-2693 *
KNN algorithm with density-Canopy enhanced clustering and deep features; Shen Xueli et al.; Journal of Frontiers of Computer Science and Technology; Vol. 15, No. 7; 1289-1301 *
A survey of Bayesian optimization methods for hyperparameter estimation; Li Yaru et al.; Computer Science; 86-92 *

Also Published As

Publication number Publication date
CN116416492A (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN108399428B (en) Triple loss function design method based on trace ratio criterion
EP3869406A1 (en) Two-dimensional code generation method and apparatus, storage medium and electronic device
CN113905391B (en) Integrated learning network traffic prediction method, system, equipment, terminal and medium
CN109726794A Attention-based image generation neural network
CN109872374A Optimization method, device, storage medium and terminal for image semantic segmentation
WO2021129086A1 Traffic prediction method, device, and storage medium
CN105046659B PSF estimation method for simple-lens computational imaging based on sparse representation
CN110472688A Method and device for image description, and training method and device for an image description model
CN109886343B (en) Image classification method and device, equipment and storage medium
CN114898121B (en) Automatic generation method for concrete dam defect image description based on graph attention network
CN116230154A (en) Chest X-ray diagnosis report generation method based on memory strengthening transducer
JP2020087432A (en) Artificial intelligence-based manufacturing part design
CN116486076A (en) Remote sensing image semantic segmentation method, system, equipment and storage medium
CN116416492B (en) Automatic data augmentation method based on characteristic self-adaption
Song et al. Siamese-discriminant deep reinforcement learning for solving jigsaw puzzles with large eroded gaps
CN111079826A (en) SLAM and image processing fused construction progress real-time identification method
CN112070777B (en) Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning
CN110991279B (en) Document Image Analysis and Recognition Method and System
US11113606B2 (en) Learning method, learning device, program, and recording medium
CN115063374A (en) Model training method, face image quality scoring method, electronic device and storage medium
CN114741548A (en) Mulberry leaf disease and insect pest detection method based on small sample learning
CN112396042A (en) Real-time updated target detection method and system, and computer-readable storage medium
CN113034473A (en) Lung inflammation image target detection method based on Tiny-YOLOv3
CN110795591A (en) Image retrieval method based on discrete gradient back propagation
CN117274750B (en) Knowledge distillation semi-automatic visual labeling method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant