CN115496960A - Sample generation method, target detection model training method, target detection method and system - Google Patents
Sample generation method, target detection model training method, target detection method and system
- Publication number: CN115496960A
- Application number: CN202211099128.9A
- Authority: CN (China)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/08: Neural networks; learning methods
- G06V10/20: Image preprocessing
- G06V10/764: Image or video recognition using classification, e.g. of video objects
- G06V10/82: Image or video recognition using neural networks
Abstract
The invention discloses a sample generation method, a target detection model training method, and a target detection method and system. The sample generation method comprises the following steps: 1, training a target detection model on an original sample set; 2, acquiring a candidate sample set comprising N candidate sample pictures to be detected; 3, outputting, based on the candidate sample set and the target detection model, the target information at each location on each candidate sample picture; 4, screening target sample pictures from the candidate sample set based on the output of step 3, and determining the target positions and category information at each location on every target sample picture; and 5, outputting each target sample picture, together with the target positions and category information at each location on it, as a training sample. The invention uses an unlabeled candidate sample set to automatically generate samples for training the target detection model, greatly reducing labor cost; through continuous iterative learning, the adaptability of the target detection model to data in the online workflow keeps improving, raising the accuracy of the target detection results.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a sample generation method, a target detection model training method, and a target detection method and system.
Background
In real scenes, target detection must often be performed continuously on various objects, such as detecting steel plates and parts on a factory assembly line, a continuous stream of vehicles on a road, or pedestrians on a sidewalk.
Existing target detection methods mainly study optimization on static data sets: target detection models are trained on a fixed data set. As a result, existing target detection models have low detection accuracy on continuously changing data, and improving them still relies on manual work to continuously identify and label newly acquired images to generate training samples, after which the model must be retrained and redeployed.
Generating samples by manual identification and labeling consumes a large amount of manpower and material resources, and both the labeling efficiency and the model training efficiency are low. In addition, omissions and subjectivity are inevitable in the manual labeling process, which affects the accuracy of the target detection results.
Disclosure of Invention
To solve the prior-art problem that training a new target detection model requires manually labeling a large number of samples, the invention provides a sample generation method, a target detection model training method, and a target detection method and system, which can continuously and automatically generate training samples for the target detection model, greatly reduce labor cost, and continuously improve the accuracy of the target detection results.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a sample generation method is characterized by comprising the following steps:
step 1, training a target detection model on an original sample set, wherein the original sample set comprises a plurality of original sample pictures, and the output of the target detection model is the target detection information at each location on each original sample picture;
step 2, acquiring a candidate sample set, wherein the candidate sample set comprises N candidate sample pictures to be detected;
step 3, outputting the target information at each location on each candidate sample picture based on the candidate sample set and the target detection model;
step 4, screening target sample pictures from the candidate sample set based on the output of step 3, and determining the target positions and category information at each location on every target sample picture;
and step 5, outputting each target sample picture, together with the target positions and category information at each location on it, as a training sample.
By means of the method, training samples for the target detection model are automatically generated from an unlabeled candidate sample set, greatly reducing labor cost while continuously improving the accuracy of the target detection results.
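The five-step scheme above can be sketched in code. The function and interface names below are illustrative assumptions, not the patent's reference implementation; the detector is modeled as a callable returning (box, class-probability) pairs, and the two thresholds correspond to the first and second preset thresholds introduced later in the description:

```python
def generate_training_samples(model, candidate_pictures, t1=0.1, t2=0.9):
    """Steps 2-5: pseudo-label unlabeled candidate pictures with `model`.

    `model(picture)` is assumed to return a list of (box, class_prob) pairs.
    Pictures with any detection whose probability falls between t1 and t2
    are considered ambiguous and skipped; the rest become self-labeled
    training samples.
    """
    samples = []
    for picture in candidate_pictures:       # step 3: run the detector
        detections = model(picture)
        ambiguous = [p for _, p in detections if t1 <= p <= t2]
        if ambiguous:                        # step 4: drop unclear pictures
            continue
        # keep high-confidence boxes as object labels; low-confidence
        # locations are treated as background and carry no box
        labels = [(box, p) for box, p in detections if p > t2]
        samples.append((picture, labels))    # step 5: emit (picture, labels)
    return samples
```

A picture whose only detection scores 0.5 would thus be discarded, while one scoring 0.95 becomes a self-labeled sample.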
As a preferred mode, the target detection information at each location on a picture comprises whether an object to be detected is present at that location and the object class probability. In step 3, for each candidate sample picture, outputting the target information at each location on the picture comprises:
step 301, applying a plurality of reversible transformations (e.g., adjusting illumination, rotating, scaling, flipping) to the candidate sample picture to obtain m different pictures P;
step 302, inputting the m different pictures P into the target detection model, and outputting, for each location on each picture P, whether an object to be detected is present and the object class probability;
step 303, calculating, based on the output of step 302, the target positions and object class probabilities at each location on the candidate sample picture corresponding to the m pictures P;
and step 304, obtaining the final target positions and object class probability information at each location on the candidate sample picture based on the calculation results of step 303.
By means of the method, during sample labeling the candidate sample picture is augmented into several pictures, the target detection information for each picture is inferred with the existing detection model, and the target detection information on the original candidate sample picture is finally determined by jointly considering the information from all pictures, which improves labeling accuracy.
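The augment-predict-invert-average loop of steps 301 to 304 can be sketched as follows. This is a minimal illustration under assumed interfaces (the detector returns a per-location probability grid, and a horizontal flip stands in for the full set of reversible transformations):

```python
def flip_h(grid):
    """Reversible transform: mirror each row (flip_h is its own inverse)."""
    return [list(reversed(row)) for row in grid]

def tta_probabilities(predict, picture):
    """Steps 301-304 in miniature: predict on the original and on a flipped
    copy, map the flipped prediction back to original coordinates, then
    average the per-location object class probabilities."""
    preds = [
        predict(picture),                  # identity transform
        flip_h(predict(flip_h(picture)))   # predict on flipped input, invert
    ]
    rows, cols = len(picture), len(picture[0])
    return [[sum(p[r][c] for p in preds) / len(preds) for c in range(cols)]
            for r in range(rows)]
```

A real system would add rotations, scalings, and illumination changes to the transform set, each paired with its inverse before averaging.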
As a preferable mode, the step 4 includes:
step 401, using the object class probabilities at each location on the candidate sample picture output in step 303 as confidence scores;
step 402, setting a first preset threshold and a second preset threshold, the first threshold being much smaller than the second; candidate sample pictures whose confidence lies between the first and second preset thresholds are regarded as uninformative samples and are temporarily discarded; candidate sample pictures whose confidence is below the first preset threshold or above the second preset threshold are retained as target sample pictures to be output; on each retained picture, locations with confidence below the first preset threshold are set as background, and locations with confidence above the second preset threshold are set as target objects and given an object bounding box;
and step 403, acquiring the object class probability at the position corresponding to each object bounding box on each picture P, and outputting the maximum of these probabilities as the object class probability at that bounding box.
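The dual-threshold screening of steps 401 and 402 can be sketched as below. The data shapes are illustrative assumptions (a mapping from location to averaged confidence score); the threshold defaults are placeholders, not values prescribed by the patent:

```python
def screen_locations(scores, t1=0.1, t2=0.9):
    """Label each location by its confidence score, and report whether the
    picture should be kept: it is discarded (keep=False) as soon as any
    location falls in the ambiguous band [t1, t2]."""
    labels = {}
    keep = True
    for loc, score in scores.items():
        if score < t1:
            labels[loc] = "background"   # confidently empty location
        elif score > t2:
            labels[loc] = "object"       # a bounding box would be placed here
        else:
            keep = False                 # ambiguous location: drop the picture
    return keep, labels
```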
In a preferred embodiment, in the step 303, the object class probabilities of the respective locations on each picture P are averaged to obtain the object class probability of the respective locations on the candidate sample picture.
As a preferable mode, in step 2, the N candidate sample pictures of the area to be detected are obtained by continuously acquiring real-time images of the area to be detected. As another preferable mode, N candidate sample pictures of the region to be detected are obtained from the history storage image set of the region to be detected.
Preferably, the augmentation processing of step 301 includes position flipping, picture brightness and contrast changes, reduction, or enlargement.
Based on the same inventive concept, the invention also provides a target detection model training method, which is characterized in that the training sample generated by the sample generation method is used for training the target detection model to obtain an updated target detection model. The method can continuously improve the adaptability of the target detection model to the data in the online workflow and improve the target detection accuracy through continuous iterative learning.
Preferably, the present invention further provides another method for training a target detection model, which is characterized in that an original sample set and training samples generated by the sample generation method are used to train the target detection model, so as to obtain an updated target detection model.
Based on the same inventive concept, the invention also provides a target detection model, which is characterized in that the target detection model carries out continuous self-learning updating through the target detection model training method.
Based on the same inventive concept, the invention also provides a target detection method, which is characterized in that the target detection model is utilized to carry out target detection on the picture to be detected.
Based on the same inventive concept, the invention also provides a target detection system, which is characterized by comprising an image acquisition unit, a model training unit and the target detection model, wherein:
an image acquisition unit: the image acquisition system is used for acquiring a picture to be detected, wherein one part of the picture to be detected is used as a candidate sample picture for generating a training sample set to train and update a target detection model, and the other part of the picture to be detected is used for identifying by the target detection model to output a target detection result;
a model training unit: for training the target detection model based on the generated training sample set to update the target detection model.
Compared with the prior art, the method utilizes the label-free candidate sample set to automatically generate the sample for training the target detection model, thereby greatly reducing the labor cost; meanwhile, the target detection method and the system continuously improve the adaptability of the target detection model to data in the online workflow and improve the accuracy of the target detection result through continuous iterative learning.
Drawings
Fig. 1 is a layout diagram of an image capturing unit according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a training method of a target detection model according to an embodiment of the present invention.
In fig. 1, 1 is an image acquisition unit, 101 is an online acquisition camera, 102 is a local storage, 103 is a data storage center, and 104 is a data transmission module.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments. It is to be understood that the described embodiments are merely exemplary of a portion of the invention and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the foregoing description and drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprising" and "having," and any variations thereof, in the description and claims of this invention are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention automatically generates detection target boxes for continuously input picture data and automatically trains and updates the model, thereby achieving continuous self-learning on a continuous data stream.
In one embodiment, the present invention provides a sample generation method comprising the steps of:
step 1, training a target detection model f on an original sample set, wherein the original sample set comprises a plurality of original sample pictures;
step 2, obtaining a candidate sample set X = {x_1, x_2, ..., x_N}, wherein the candidate sample set comprises N candidate sample pictures to be detected, N is an integer, and x_i denotes the i-th candidate sample picture to be detected;
step 3, based on the candidate sample set X and the target detection model f, predicting and outputting the target information at each location on each candidate sample picture x_i;
step 4, based on the prediction output of step 3, screening target sample pictures from the candidate sample set X and determining the target positions and category information at each location on every target sample picture;
and step 5, outputting each target sample picture, together with the target positions and category information at each location on it, as a training sample.
The invention uses the unlabeled candidate sample set to automatically generate samples for training the target detection model, greatly reducing labor cost and continuously improving the accuracy of the target detection results.
In some embodiments, the target detection information at each location on a picture includes whether an object to be detected is present at that location and the object class probability. In step 3, for each candidate sample picture x_i, outputting the target information at each location on x_i comprises:
step 301, applying augmentation to the candidate sample picture x_i to obtain m differently transformed pictures P_1, ..., P_m, wherein m is an integer and P_j denotes the picture obtained from x_i under the j-th transformation;
step 302, inputting the m differently transformed pictures P_j into the target detection model f, predicting the detection result of each picture P_j with f, and outputting, for each location on each picture P_j, whether an object to be detected is present and the object class probability, thereby obtaining an object class probability matrix and an object position matrix for each P_j;
step 303, based on the output of step 302, applying to the m transformed pictures the inverse of the corresponding processing operations of step 301 so that the predictions are restored to original-image positions, averaging the m prediction results, and outputting, for each location on the candidate sample picture x_i, whether an object to be detected is present and the object class probability, together with the object bounding-box position, thereby obtaining the most accurate prediction result.
In the sample labeling process, the candidate sample picture is processed into several differently transformed pictures, the target detection information at each location of every transformed picture is obtained, and the target detection information on the original candidate sample picture is finally determined by jointly considering the information from every transformation, improving the sample labeling accuracy.
In some embodiments, the step 4 comprises:
step 401, using the object class probability at each location on the candidate sample picture output in step 303 as a confidence score, and setting a first preset threshold t1 and a second preset threshold t2, where t1 < t2;
step 402, retaining candidate sample pictures whose confidence is below the first preset threshold t1 or above the second preset threshold t2 as target sample pictures to be output; setting locations on the candidate sample picture whose confidence is below the first preset threshold as background and locations whose confidence is above the second preset threshold as target objects with an object bounding box; and discarding candidate sample pictures whose confidence lies between the first and second preset thresholds. In particular, if any location on a candidate sample picture has a confidence between t1 and t2, that picture is rejected as an ambiguous sample; on each remaining candidate sample picture, every location below the first preset threshold t1 is set as background and every location above the second preset threshold t2 as foreground, the corresponding prediction is set as an object bounding box, and finally the predictions of all retained samples are fused to generate a self-labeled data set;
and step 403, acquiring the object class probability at the position corresponding to each object bounding box on each picture P, and outputting the maximum of these probabilities as the object class probability at that bounding box.
In some embodiments, in the step 303, the object class probabilities of the positions on each picture P are averaged to serve as the object class probabilities of the positions on the candidate sample picture.
In some embodiments, in step 2, N candidate sample pictures of the area to be detected are obtained by continuously acquiring real-time images of the area to be detected. In other embodiments, N candidate sample pictures of the region to be detected are obtained from the historical stored image set of the region to be detected.
In some embodiments, the augmentation processing includes position flipping, picture brightness and contrast changes, reduction, or enlargement. For example, each candidate sample picture x_i is flipped left-right, flipped up-down, reduced to 0.5 times the original size, and enlarged to 1.5 times the original size, yielding several pictures including the original candidate sample picture; the model f then predicts the target detection results of these pictures in parallel, giving, for each corresponding location on each picture, whether an object to be detected is present and the object class probability. In this embodiment, it is subsequently necessary to map these predictions back to the corresponding locations on the original picture and average the object class probabilities at corresponding positions, obtaining the most accurate averaged prediction probability.
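Mapping a prediction back to original-image positions, as the embodiment above requires, means inverting each geometric transform on the predicted box coordinates. The sketch below covers the transforms named above (left-right flip, 0.5x reduction, 1.5x enlargement); the (x1, y1, x2, y2) box format and the transform names are assumptions for illustration:

```python
def invert_box(box, transform, width):
    """Map a box predicted on a transformed picture back to the original.

    Photometric transforms (brightness, contrast) leave coordinates
    unchanged, so they fall through to the identity case.
    """
    x1, y1, x2, y2 = box
    if transform == "hflip":        # mirror around the vertical axis
        return (width - x2, y1, width - x1, y2)
    if transform == "scale_0.5":    # picture was shrunk to half size
        return (x1 * 2, y1 * 2, x2 * 2, y2 * 2)
    if transform == "scale_1.5":    # picture was enlarged 1.5x
        return (x1 / 1.5, y1 / 1.5, x2 / 1.5, y2 / 1.5)
    return box                      # identity / photometric transforms
```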
In some embodiments, the present invention provides a target detection model training method, which trains a target detection model by using a training sample generated by the sample generation method to obtain an updated target detection model. According to the invention, through continuous iterative learning, the adaptability of the target detection model to data in an online workflow can be continuously improved, and the target detection accuracy is improved.
In a more preferred embodiment, as shown in Fig. 2, the invention also provides another target detection model training method, which trains the target detection model f with both a manually labeled data set (the original sample set) and the self-labeled training samples generated by the sample generation method, obtaining an updated target detection model. In some embodiments, the manually labeled data set and the self-labeled training sample set are fused by 1:1 sampling, and the target detection model is trained continuously. Meanwhile, methods such as cropping, scaling, color change, brightness change, mosaic, and picture mixing can be adopted to enhance picture diversity, strengthen the fusion of the original sample set and the newly self-labeled sample set, and improve the model's joint learning over the two data sets. The fused sample set is used to continue training the target detection model until convergence, yielding a new model f'; the new model f' replaces the original model f and is deployed to the production environment, after which steps 2 to 5 are executed again to obtain newly auto-labeled samples and retrain and update the model, realizing continuous self-learning of the model until its loss converges.
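The 1:1 fused sampling described above can be sketched as a batch generator that draws half of every batch from each set. This covers batch assembly only; the detector training step itself is out of scope, and the function name is an illustrative assumption:

```python
import random

def mixed_batches(manual_set, self_labeled_set, batch_size, num_batches, seed=0):
    """Yield training batches that mix the manually labeled set and the
    self-labeled set 1:1 (half of each batch from each set)."""
    rng = random.Random(seed)       # fixed seed for reproducible sampling
    half = batch_size // 2
    for _ in range(num_batches):
        batch = (rng.sample(manual_set, half)
                 + rng.sample(self_labeled_set, half))
        rng.shuffle(batch)          # interleave the two sources
        yield batch
```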
Methods for fusing pictures such as cropping, scaling, color change, brightness change, mosaic, and picture mixing belong to the prior art and are not described further here; this does not affect the understanding and implementation of the invention by those skilled in the art.
In some embodiments, the invention further provides a target detection model, and the target detection model is continuously updated by self-learning through the target detection model training method. And after the target detection model is trained and updated, deploying the updated target detection model to a production environment.
In some embodiments, the present invention further provides a target detection method, which performs target detection on a picture to be detected by using the target detection model.
In some embodiments, the present invention further provides an object detection system, which includes an image acquisition unit, a model training unit and the object detection model, wherein:
an image acquisition unit: the image acquisition device is used for acquiring images to be detected, one part of the images to be detected is used as candidate sample images for generating a training sample set to train and update a target detection model, and the other part of the images to be detected is used for identifying by means of the target detection model to output a target detection result.
A model training unit: for training the target detection model based on the generated training sample set to update the target detection model.
In some embodiments, the object detection system further comprises an online model deployment module for deploying the image acquisition unit, the model training unit and the object detection model at desired locations.
In some embodiments, the image acquisition unit employs a data collection unit responsible for retrieving the picture data in the continuous data stream. As shown in fig. 1, in this embodiment, the image acquisition unit 1 includes an online capture camera 101, a local storage 102, a data storage center 103, and a data transmission module 104. In fig. 1, the hollow arrow indicates the direction in which workpieces are continuously fed during production, and the diamond and cross shapes represent different types of target workpieces to be identified.
The image acquisition unit 1 works as follows:
first, the on-line capture camera 101 takes a picture of a region to be detected, and after the picture is taken, the picture is stored in the local storage 102 (e.g., a local disk).
Then, the data transmission module 104 reads the pictures from the local storage 102 and transmits them over the network to the data storage center 103, which stores the received picture data, denoted X, according to its reception time.
The picture data X stored in the data storage center 103 can serve as the candidate sample pictures x_i, which after automatic labeling are finally used as training samples; it can also serve as the initial manually labeled data set.
The invention is applicable to object detection scenes with a continuous stream of samples: it can make full use of unlabeled samples on a production line to automatically generate labels, continuously and cyclically self-update the sample set and the target detection model, greatly reduce labor cost, continuously improve the model's adaptability to data in the online workflow, and keep optimizing the model to fit new data and improve the accuracy of the target detection results. The invention is particularly suitable for scenes with continuously arriving data, such as workpiece detection on a factory assembly line (e.g., steel plate and part detection), vehicle detection on roads, and pedestrian detection on sidewalks.
As shown in table 1, the experimental comparative effects are as follows:
Production-line data from a certain factory was used as the experimental object, with 5000 historical manually labeled pictures as the starting original sample set; part of the data collected online in March, April, and May was used as the test set. The initial target detection model (baseline) achieved a mean average precision over all classes (MAP) of only 94.1. After adding manually labeled data, the MAP rose to 95. After continuously training the target detection model with the method of the invention, the MAP rose to 97.87, far better than the original baseline model and better than the detection model obtained with a large number of manually labeled samples. These results show that the method significantly improves the all-class mean average precision.
TABLE 1 Experimental comparison results (MAP: mean average precision over all classes)
Model                                        MAP
Baseline model                               94.1
Baseline + additional manually labeled data  95
Continuous self-training (this method)       97.87
While the embodiments of the present invention have been described in connection with the drawings, the present invention is not limited to the above-described embodiments, which are intended to be illustrative rather than restrictive, and many modifications may be made by one skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method of generating a sample, comprising the steps of:
step 1, training based on an original sample set to obtain a target detection model, wherein the original sample set comprises a plurality of original sample pictures, and the output of the target detection model is target detection information at each position on each original sample picture;
step 2, acquiring a candidate sample set, wherein the candidate sample set comprises N candidate sample pictures to be detected;
step 3, outputting the target information at each location on each candidate sample picture based on the candidate sample set and the target detection model;
step 4, based on the output information of the step 3, screening out target sample pictures from the candidate sample set and determining target positions and category information of all parts on all the target sample pictures;
and 5, outputting the target sample picture and the target position and category information of each position on the picture as a training sample.
2. The sample generation method according to claim 1, wherein the target detection information at each position on the picture includes whether an object is to be detected at each position on the picture and an object class probability;
in the step 3, for each candidate sample picture, the process of outputting the target information of each place on the candidate sample picture includes:
step 301, performing multiple reversible transformation processes on candidate sample pictures to obtain m different pictures P;
step 302, inputting the m different pictures P into a target detection model, and outputting the target to be detected and the target class probability of each part on each picture P;
step 303, calculating target positions and target category probabilities of all places on the candidate sample pictures corresponding to the m pictures P based on the output result of the step 302;
and step 304, obtaining final target positions and target category probability information of each part on the candidate sample picture based on the calculation result of the step 303.
3. The sample generation method according to claim 2, wherein step 4 comprises:
step 401, taking the object class probability at each location on the candidate sample picture output in step 303 as a confidence score;
step 402, retaining as target sample pictures to be output all candidate sample pictures whose confidence scores are each either below a first preset threshold or above a second preset threshold; marking locations whose confidence is below the first preset threshold as background, marking locations whose confidence is above the second preset threshold as target objects and setting object bounding boxes for them, and discarding any candidate sample picture with a confidence between the first and second preset thresholds;
step 403, acquiring the object class probabilities at the positions corresponding to each object bounding box across the m pictures P, and outputting the maximum of these values as the object class probability at that bounding box.
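The screening in steps 401 to 403 can be sketched for a single candidate picture as follows, assuming its detections have already been grouped per bounding box across the m views; the thresholds 0.3 and 0.8 are illustrative defaults, not values from the claims:

```python
def screen_picture(per_box_probs, low_thr=0.3, high_thr=0.8):
    """Steps 401-403 for one candidate picture. per_box_probs maps each
    bounding box to its class probabilities across the m views. Returns
    {box: probability} for confident objects, or None if the picture is
    ambiguous and must be discarded."""
    labels = {}
    for box, probs in per_box_probs.items():
        confidence = sum(probs) / len(probs)   # step 401: confidence score
        if low_thr <= confidence <= high_thr:  # step 402: ambiguous -> discard
            return None
        if confidence > high_thr:
            # step 403: report the maximum probability across the m views
            labels[box] = max(probs)
        # confidence < low_thr: confident background, no box is emitted
    return labels
```

Discarding whole pictures that contain any ambiguous location keeps the generated labels clean at the cost of throwing away some data.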
4. The sample generation method according to claim 2 or 3, wherein in step 303, the object class probabilities at each location on the m pictures P are averaged to obtain the object class probability at the corresponding location on the candidate sample picture.
5. The sample generation method according to any one of claims 1 to 3, wherein in step 2, the N candidate sample pictures of the region to be detected are obtained by continuously acquiring real-time images of the region to be detected, or are obtained from a set of historically stored images of the region to be detected.
6. The sample generation method according to claim 2 or 3, wherein in step 301, the reversible transformations include positional flipping, brightness/contrast adjustment, reduction, or enlargement of the picture.
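The transformations named in claim 6 are reversible in the sense required by step 301: each can be undone, so detections made on a transformed view map back onto the original picture. A toy illustration with an image as a nested list of pixel values (the helper names are hypothetical):

```python
def hflip(image):
    """Horizontal flip: its own inverse, hflip(hflip(x)) == x."""
    return [row[::-1] for row in image]

def adjust_brightness_contrast(image, alpha=1.0, beta=0):
    """Linear brightness/contrast change: pixel' = alpha * pixel + beta.
    Reversible for alpha != 0 via (1/alpha, -beta/alpha)."""
    return [[alpha * px + beta for px in row] for row in image]

def invert_box_hflip(box, width):
    """Map a box (x1, y1, x2, y2) detected on the flipped image back
    onto the original image of the given width."""
    x1, y1, x2, y2 = box
    return (width - x2, y1, width - x1, y2)
```

Scaling works the same way: a box found on a picture resized by factor s maps back by dividing its coordinates by s.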
7. A target detection model training method, characterized in that
a target detection model is trained using the training samples generated by the sample generation method of any one of claims 1 to 6 to obtain an updated target detection model;
or the target detection model is trained using both the original sample set and the training samples generated by the sample generation method of any one of claims 1 to 6 to obtain the updated target detection model.
8. A target detection model, characterized in that the target detection model is continuously updated through self-learning by the target detection model training method of claim 7.
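Claims 7 and 8 together describe a continuous self-learning loop: each round pseudo-labels a fresh batch of candidate pictures and retrains, optionally mixing in the original labeled set. A minimal sketch with hypothetical `train` and `pseudo_label` callables standing in for the model fitting and the claims 1 to 6 pipeline:

```python
def continuous_self_learning(train, pseudo_label, original_set,
                             candidate_batches, use_original=True):
    """Claims 7-8: repeatedly pseudo-label a new batch of candidates and
    retrain, so the detection model keeps updating itself over time."""
    model = train(original_set)                 # initial model (claim 1, step 1)
    for batch in candidate_batches:
        generated = pseudo_label(model, batch)  # claims 1-6: new training samples
        # claim 7: retrain on generated samples, optionally plus the originals
        data = list(original_set) + generated if use_original else generated
        model = train(data)
    return model
```

Mixing in the original set guards against the model drifting away from its human-verified labels as rounds accumulate.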
9. A target detection method, characterized in that the target detection model of claim 8 is used to perform target detection on a picture to be detected.
10. A target detection system comprising an image acquisition unit, a model training unit and the target detection model of claim 8, wherein:
the image acquisition unit is configured to acquire pictures to be detected, one part of which serves as candidate sample pictures for generating a training sample set to train and update the target detection model, while the other part is recognized by the target detection model to output target detection results;
the model training unit is configured to train the target detection model on the generated training sample set so as to update the target detection model.
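The image acquisition unit of claim 10 splits the acquired pictures into two streams: some become candidate sample pictures for training-sample generation, and the rest go straight to detection. A trivial sketch; the split fraction and the front-of-list split rule are assumptions, since the claim does not specify how the split is made:

```python
def route_pictures(pictures, sample_fraction=0.1):
    """Claim 10's image acquisition unit: divert a fraction of the
    acquired pictures to sample generation, send the rest to detection."""
    cut = int(len(pictures) * sample_fraction)
    candidates, to_detect = pictures[:cut], pictures[cut:]
    return candidates, to_detect
```

In a deployed system the split could equally be random or time-based; the claim only requires that both streams exist.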
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211099128.9A CN115496960A (en) | 2022-09-09 | 2022-09-09 | Sample generation method, target detection model training method, target detection method and system |
CN202310210125.6A CN116152606A (en) | 2022-09-09 | 2023-03-07 | Sample generation method, target detection model training, target detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115496960A true CN115496960A (en) | 2022-12-20 |
Family
ID=84467545
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211099128.9A Withdrawn CN115496960A (en) | 2022-09-09 | 2022-09-09 | Sample generation method, target detection model training method, target detection method and system |
CN202310210125.6A Pending CN116152606A (en) | 2022-09-09 | 2023-03-07 | Sample generation method, target detection model training, target detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN115496960A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117198551A (en) * | 2023-11-08 | 2023-12-08 | 天津医科大学第二医院 | Kidney function deterioration pre-judging system based on big data analysis |
CN117198551B (en) * | 2023-11-08 | 2024-01-30 | 天津医科大学第二医院 | Kidney function deterioration pre-judging system based on big data analysis |
Also Published As
Publication number | Publication date |
---|---|
CN116152606A (en) | 2023-05-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20221220 |