CN109740676B - Object detection and migration method based on similar targets - Google Patents

Object detection and migration method based on similar targets

Info

Publication number
CN109740676B
CN109740676B (application number CN201910012144.1A)
Authority
CN
China
Prior art keywords
detected
target
similar
training
detector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910012144.1A
Other languages
Chinese (zh)
Other versions
CN109740676A (en
Inventor
周雪
徐雨亭
邹见效
徐红兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910012144.1A priority Critical patent/CN109740676B/en
Publication of CN109740676A publication Critical patent/CN109740676A/en
Application granted granted Critical
Publication of CN109740676B publication Critical patent/CN109740676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an object detection migration method based on similar targets. First, categories similar to the object to be detected are screened out of an existing image data set, and the images belonging to those categories are used as training samples to train a two-stage object detection model, giving a similar-target detector. The similar-target detector is then fine-tuned with accurately labeled samples containing the object to be detected, giving a preliminary target detector. Finally, the preliminary target detector is fine-tuned with weakly labeled samples containing the object to be detected, giving the final target detector. By determining similar target categories for the object to be detected, the method obtains abundant training samples, migrates the object detector across domains, reduces the number of labeled samples required for the object to be detected, and makes object detection migration practical.

Description

Object detection and migration method based on similar targets
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an object detection and migration method based on similar targets.
Background
In recent years, emerging industries such as unmanned supermarkets, intelligent warehousing and autonomous driving have been bringing a new revolution to social productivity. One of the main driving forces behind this revolution is the rapid development of computer vision: computers can now identify objects in video images efficiently, making intelligent and unmanned production and daily life increasingly possible. Object detection is a key link in the recognition and understanding of video images.
Amid the research boom in object detection, the academic community has produced excellent deep-learning-based methods. Ren Shaoqing et al. proposed a two-stage detection model: a region proposal network (RPN) first extracts region candidate boxes, and a region-based convolutional network (R-CNN) then classifies and localizes the objects in the candidate boxes. The specific algorithm is described in: Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in Neural Information Processing Systems, 2015.
Subsequently, Dai Jifeng et al. proposed the Region-based Fully Convolutional Network (R-FCN) detection model; the specific algorithm is described in: Dai J, Li Y, He K, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." 2016. R-FCN uses position-sensitive score maps to handle the tension between translation invariance in classification and translation variance in detection, so the network can be computed fully convolutionally over the whole image. This effectively reduces both the training time and the detection time of the network model, and the model uses a residual network (ResNet) as its feature extractor. Compared with Faster R-CNN, R-FCN improves detection accuracy and reduces detection time on the general-purpose detection benchmark Pascal VOC, and it is better suited to task migration.
However, such object detection methods require a large number of accurately labeled samples for training. Whether it is Pascal VOC (20 classes), Microsoft COCO (80 classes) or ILSVRC (200 classes), these data sets contain only a very small number of common real-world categories. For uncommon object categories outside these data sets, such as safety helmets, labeled samples are scarce and labeling is very expensive, and such fully supervised methods are not applicable.
Many researchers have therefore turned to weakly supervised learning, semi-supervised learning, transfer learning and related methods. Aiming at the difficulty of training a network with few samples, the LSTD model adds two loss terms on top of Faster R-CNN: Transfer Knowledge (TK) and Background Depression (BD). TK guides the training of the network with detection knowledge from the source data set, and BD makes the network focus on target regions by suppressing its outputs on background regions. The specific algorithm is described in: Chen H, Wang Y, Wang G, et al. "LSTD: A Low-Shot Transfer Detector for Object Detection." 2018.
Another method, LSDA, converts an image classifier into an object detector and is suited to the case where some classes in the detection task are fully annotated while others are only weakly annotated. LSDA migrates the detectors of the fully annotated classes to the weakly annotated classes, so that the final detector performs well on all object classes. The specific algorithm is described in: Tang Y, Wang J, Gao B, et al. "Large Scale Semi-supervised Object Detection Using Visual and Semantic Knowledge Transfer." IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2119-2128.
Judging from the current state of research, such methods based on weak supervision and transfer learning, although less dependent on large numbers of labeled samples, still show a large gap in detection performance compared with fully supervised methods. Finding a method that needs only a small number of training samples yet approaches or even exceeds the performance of a fully supervised detector is therefore a pressing need in both academia and industry.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an object detection migration method based on similar targets.
In order to achieve the above purpose, the object detection migration method based on similar targets of the present invention comprises the following steps:
s1: screening out similar target classes of the object to be detected from the existing image data set with accurate labeling, and taking the images contained in the similar target classes as training samples;
s2: constructing a two-stage target detection model, taking the training samples obtained in the step S1 as input, taking the target marked by each training sample as expected output, training the two-stage target detection model, and taking the model obtained by training as a similar target detector;
s3: acquiring in advance a plurality of images containing the object to be detected and accurately annotating the object to be detected, and using these images as training samples to fine-tune the parameters of the similar target detector obtained in step S2, thereby obtaining a preliminary target detector;
s4: acquiring in advance a plurality of weakly labeled images containing the object to be detected, where a weak label indicates only whether the object to be detected is present in the image and does not contain its specific position information; using these images as training samples and further training the preliminary target detector obtained in step S3 through weakly supervised learning to obtain the final target detector, thereby completing the detector migration.
In the object detection migration method based on similar targets of the present invention, similar target categories of the object to be detected are first screened out of an existing image data set and the images belonging to those categories are used as training samples to train a two-stage object detection model, giving a similar-target detector; the similar-target detector is then fine-tuned with accurately labeled samples containing the object to be detected, giving a preliminary target detector; finally, the preliminary target detector is fine-tuned with weakly labeled samples containing the object to be detected, giving the final target detector.
The invention uses the similar-target detector, trained on samples of similar target categories, as a transition state; this shortens the domain migration distance, reduces the number of training samples of the object to be detected that are required, and enables object detection migration with only a small number of accurately labeled samples.
Drawings
FIG. 1 is a flow chart of an embodiment of a similar target based object detection and migration method of the present invention;
FIG. 2 is a schematic diagram of object detection migration in the present embodiment;
FIG. 3 is a flowchart of a similar object class determination method based on visual similarity and semantic similarity according to the present embodiment;
FIG. 4 is a flowchart of the object detection migration in the present embodiment;
fig. 5 is an exemplary view of the recognition effect of the target detector on the helmet obtained in the present embodiment.
Detailed Description
The following description of embodiments of the present invention, taken with the accompanying drawings, is provided so that those skilled in the art can better understand the invention. It should be expressly noted that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the invention.
Examples
Fig. 1 is a flowchart of an embodiment of the object detection and migration method based on similar objects according to the present invention. As shown in fig. 1, the method for detecting and transferring an object based on a similar target of the present invention specifically includes the following steps:
s101: obtaining similar target training samples:
similar target categories of the object to be detected are screened out from the existing image data set with the accurate labels, and the images contained in the similar target categories are used as training samples.
The purpose of obtaining similar-target training samples is domain migration. In this embodiment, the object to be detected is a safety helmet, for which only a small number of accurately labeled samples are available; it is therefore difficult to train a target detector of practical value directly, and similar targets are adopted for domain migration. Fig. 2 is a schematic diagram of object detection migration in this embodiment. As shown in fig. 2, samples are scarce in the domain where the safety helmet lies, so similar targets are first collected to build a similar (source) domain, and the detector is then migrated to the target domain.
When obtaining similar-target training samples, the judgment of similarity is very important: the more similar the training samples are to the object to be detected, the closer the trained similar-target detector is to a true detector of the object to be detected, which improves detection performance. To obtain better similar-target training samples, this embodiment provides a similar-target-category determination method based on visual similarity and semantic similarity. Fig. 3 is a flowchart of this method. As shown in fig. 3, it comprises the following steps:
s301: screening similar target categories based on visual similarity:
A P-class object classification model is trained on an existing object classification data set containing P classes of objects. N images of the object to be detected are then randomly selected and fed into this classification model, giving N P-dimensional output vectors, where the p-th element of each output vector (p = 1, 2, …, P) is the probability that the object to be detected in the image belongs to the p-th class. The mean vector of the N output vectors is computed, all its elements are sorted in descending order, and the first K elements v_k are taken to form the probability set V = [v_1, v_2, …, v_K], k = 1, 2, …, K. The categories corresponding to these first K probabilities are the similar target categories that are visually similar to the object to be detected; this set of categories is denoted α.
In this embodiment, the object to be detected is a safety helmet, for which it is difficult to obtain enough training samples from existing data sets to train a detector. Among existing object classification data sets, ImageNet is relatively complete; in this embodiment, training samples of 1000 common object classes in ImageNet are used to train a convolutional neural network, giving a 1000-class object classification network from which the target categories visually similar to the object to be detected are screened out.
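The screening of step S301 can be summarized by the short sketch below (not taken from the patent; written in Python/NumPy, with "classifier" assumed to be any callable that maps an image to a P-dimensional probability vector, for example the softmax output of the 1000-class network described above):

import numpy as np

def visually_similar_classes(classifier, images, K):
    # Stack the P-dimensional probability vectors of the N sample images.
    probs = np.stack([classifier(img) for img in images])      # shape (N, P)
    # Mean vector over the N outputs, then descending sort of its elements.
    mean_probs = probs.mean(axis=0)
    top_k = np.argsort(mean_probs)[::-1][:K]
    # top_k indexes the set alpha; mean_probs[top_k] is the probability set V.
    return top_k, mean_probs[top_k]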
S302: screening similar target categories based on semantic similarity:
Semantic similarity belongs to the field of natural language processing (NLP). A common practice in NLP is to encode each word as an n-dimensional vector; such an encoding model is called word2vec, and the n-dimensional space is called the word embedding space. Synonyms have small Euclidean distances in this space, while words with very different meanings have large distances, i.e. words with similar semantics are distributed close together.
Following this idea, a word2vec encoder is used to obtain the vectors corresponding to the names of the P object classes in the classification data set and the vector corresponding to the name of the object to be detected. The Euclidean distances between the vector of the name of the object to be detected and the vectors of the P class names are computed and sorted in ascending order, and the first Q distances d_q are taken to form the set D = [d_1, d_2, …, d_Q], q = 1, 2, …, Q. The categories corresponding to these first Q Euclidean distances are the similar target categories that are semantically similar to the object to be detected; this set of categories is denoted β.
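A minimal sketch of this semantic screening is given below (not from the patent; "embeddings" is assumed to be a name-to-vector lookup produced by a trained word2vec model, and "class_names" are the names of the P classes):

import numpy as np

def semantically_similar_classes(embeddings, class_names, target_name, Q):
    # Euclidean distance between the target-name vector and every class-name vector.
    target_vec = embeddings[target_name]
    dists = np.array([np.linalg.norm(embeddings[name] - target_vec)
                      for name in class_names])
    # Ascending sort; the first Q classes form the set beta, their distances the set D.
    order = np.argsort(dists)[:Q]
    return [class_names[i] for i in order], dists[order]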
S303: determining the final similar object category:
The union γ of set α and set β is obtained, i.e. γ = α ∪ β, and the number of object classes contained in γ is denoted G. For each object class, the visual similarity and semantic similarity are weighted and fused according to the following formula to obtain the fused similarity S_g:
(Formula image in the original publication: Figure BDA0001937739640000051. The fusion combines v_g and d_g using the weights λ1 and λ2.)
where λ1 and λ2 are preset weights, v_g is the visual similarity between the object to be detected and the g-th object class (v_g = 0 if the g-th object class does not belong to set α), and d_g is the semantic similarity (Euclidean distance) between the object to be detected and the g-th object class (d_g = +∞ if the g-th object class does not belong to set β; in practice +∞ may be replaced by a sufficiently large value).
The G fused similarities S_g are sorted in descending order, and the object classes corresponding to the first C values form a set W. From an existing, accurately labeled target detection data set, the classes that intersect the classes in W are then screened out; these screened classes are the final similar target categories. In this embodiment, the ILSVRC2013 data set is used as the target detection data set.
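The fusion and selection of step S303 can be sketched as follows (not from the patent; since the fusion formula is reproduced only as an image in the publication, the combination lambda1 * v_g + lambda2 / d_g used here is an assumed reading that is consistent with v_g = 0 outside α and d_g = +∞ outside β):

import numpy as np

def fuse_and_select(union_classes, visual_sim, semantic_dist, lambda1, lambda2, C):
    # visual_sim maps class -> v_g (0.0 if absent from alpha);
    # semantic_dist maps class -> d_g (+inf if absent from beta).
    scores = {}
    for g in union_classes:
        v_g = visual_sim.get(g, 0.0)
        d_g = semantic_dist.get(g, np.inf)   # +inf makes the semantic term vanish
        scores[g] = lambda1 * v_g + lambda2 / d_g
    # Descending sort by fused similarity; the first C classes form the set W.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:C]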
For the safety helmet, the similar categories obtained with this method include baseball cap, football helmet, crash helmet, swimming cap, cowboy hat and the like, for which a large number of accurately labeled sample images exist in existing data sets. A similar-target detector can therefore be trained with these samples, and it learns useful feature representations such as sensitivity to hemispherical shapes, sensitivity to objects with solid surfaces, and the semantic notion that the object functions as headwear.
S102: training a similar target detector:
Existing research shows that single-stage target detection models perform poorly in target migration, whereas two-stage models are better suited to it. The method therefore constructs a two-stage target detection model, takes the training samples obtained in step S101 as input and the annotated targets of each training sample as expected output, trains the model, and uses the trained model as the similar-target detector.
To improve efficiency, in this embodiment the model trained in step S301 is already set up as the two-stage model R-FCN, so in this step the training samples are used to fine-tune its parameters to obtain the similar-target detector. This part is trained in a fully supervised manner, and the training procedure can follow that of Faster R-CNN: in a step-by-step manner, the region proposal network (RPN) that generates candidate target locations is trained first, and the region-based classification and localization module is then trained.
Research underlying the invention shows that if, during object detection migration, an existing object detector is fine-tuned directly with a small number of accurately labeled training samples of the object to be detected, the resulting detector rarely reaches the desired performance when samples are few. This is because a small number of labeled samples means the domain migration must be completed within a limited number of optimization steps, which is very difficult. The invention instead obtains training samples from similar target categories, trains a similar-target detector, and uses it as a bridge for object detection migration, so the final target detector can achieve good performance even with a small number of samples.
S103: accurately marking a sample for fine adjustment:
A plurality of images containing the object to be detected are acquired in advance and the object to be detected is accurately annotated; these images are used as training samples to fine-tune the parameters of the similar-target detector obtained in step S102, giving the preliminary target detector.
This step uses a small number of accurately labeled images of the object to be detected to migrate the similar-target detector to the data set of the object to be detected. For the R-FCN used in this embodiment, the fine-tuning procedure is: the convolution kernel parameters of the shallow and intermediate layers of the similar-target detector are fixed, and the network parameters of the last few layers are fine-tuned. To improve the efficiency of knowledge migration, this embodiment borrows the LSTD structure; with its advantages in migration detection, the preliminary target detector can approach the level of manual annotation and can therefore serve as a pseudo-label generator in the subsequent weakly supervised learning stage.
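A minimal fine-tuning sketch is given below (not from the patent: torchvision offers no R-FCN, so a Faster R-CNN with a ResNet-50 backbone stands in for it here; the idea of fixing the shallow and intermediate convolution parameters and updating only the later layers is the same):

import torch
import torchvision

# Pre-trained detector; stands in for the similar-target detector of step S102.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the backbone (shallow and intermediate convolution layers stay fixed);
# only the RPN and detection heads remain trainable.
for param in model.backbone.parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.001, momentum=0.9, weight_decay=0.0005)

# images: list of 3xHxW tensors; targets: list of dicts with "boxes"/"labels"
# taken from the small set of accurately labeled helmet images.
# model.train(); losses = model(images, targets)
# sum(losses.values()).backward(); optimizer.step(); optimizer.zero_grad()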
S104: weak labeling sample training:
A plurality of weakly annotated images containing the object to be detected are obtained in advance; a weak annotation only indicates whether the object to be detected is present in the image and contains no specific position information, i.e. no bounding-box information. These images are used as training samples to further train the preliminary target detector obtained in step S103 through weakly supervised learning, giving the final target detector and completing the detector migration.
The weakly annotated images in this embodiment come from a large number of videos related to the target downloaded from YouTube. Unlike the previous step, this step uses a weakly supervised training mode. To cope with the instability of weakly supervised detection learning, for the R-FCN used in this embodiment two modules are borrowed from the LSTD structure: the preliminary target detector is used to generate pseudo labels (pseudo ground-truth boxes), and the Transfer Knowledge (TK) module generates supervisory information about the target objects. The network can then be trained fully end-to-end in a fully supervised form, which greatly improves learning efficiency and stability.
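A sketch of the pseudo-label generation step is shown below (not from the patent; "preliminary_detector" is assumed to be the detector of step S103 in evaluation mode with a torchvision-style output, and "weak_label" the image-level flag from the weak annotation):

import torch

@torch.no_grad()
def pseudo_labels(preliminary_detector, image, weak_label, score_thresh=0.8):
    # Images whose weak annotation says the target is absent yield no pseudo label.
    if not weak_label:
        return None
    pred = preliminary_detector([image])[0]        # dict with "boxes", "labels", "scores"
    keep = pred["scores"] >= score_thresh          # keep only confident detections
    if keep.sum() == 0:
        return None                                # nothing reliable to use as ground truth
    return {"boxes": pred["boxes"][keep], "labels": pred["labels"][keep]}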
To better illustrate the technical effect of the invention, the invention was verified experimentally with a specific embodiment. 1000 accurately labeled image samples of the safety helmet were obtained; 500 were used to train the preliminary target detector and the other 500 for performance evaluation. Training and testing used Caffe, a deep learning framework commonly used in image processing.
Fig. 4 is a flowchart of object detection migration in this embodiment. As shown in fig. 4, target categories visually similar to the safety helmet are first searched with the ImageNet classification model, then categories semantically similar to it, giving a large number of accurately labeled samples similar to the safety helmet; the R-FCN network model is trained on these to obtain the similar-target detector. In this embodiment, a ResNet-50 residual network trained on the ImageNet image data set is used as the pre-trained model inside the R-FCN network. For the other R-FCN parameters, the learning rate is set to 0.001 and reduced by a factor of 10 after 20,000 iterations, the total number of iterations is 60,000, momentum is set to 0.9, and the weight decay term is set to 0.0005. The similar-target detector is then fine-tuned with a small number of accurately labeled helmet images to obtain the preliminary target detector, and finally the preliminary target detector is further trained with weakly labeled helmet images to obtain the final target detector, completing the detector migration.
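The training schedule quoted above can be expressed, for instance, as follows (not from the patent: the embodiment uses the Caffe solver, while this sketch writes the same settings, base learning rate 0.001, 10x decay after 20,000 iterations, 60,000 iterations in total, momentum 0.9, weight decay 0.0005, with PyTorch; "model" stands for the detector being trained and is replaced by a placeholder here):

import torch

model = torch.nn.Linear(8, 2)    # placeholder; stands in for the detector network being trained
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20000], gamma=0.1)

for it in range(60000):
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()             # learning rate drops by a factor of 10 at iteration 20,000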
Fig. 5 is an exemplary view of the recognition effect of the target detector on the helmet obtained in the present embodiment. As shown in fig. 5, the target detector migrated by the present invention can accurately identify the helmet.
For comparison of technical effects, an R-FCN network model was trained directly with the small number of accurately labeled helmet image samples, and the resulting detector is used as comparison detector 1; a Faster R-CNN network model was trained with a large number (5000) of accurately labeled helmet image samples, and the resulting detector is used as comparison detector 2. mAP@0.5 is used as the detection performance metric; mAP (mean Average Precision) is an evaluation metric in which the average precision of each class is computed first and then averaged over all classes. When evaluating precision, a detection result is counted as a true positive if its overlap with the annotated target (ground truth) reaches the threshold (i.e. 0.5).
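The overlap criterion behind mAP@0.5 can be sketched as follows (not from the patent; boxes are assumed to be (x1, y1, x2, y2) tuples):

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(det_box, gt_boxes, thresh=0.5):
    # A detection counts as a true positive when it overlaps some ground-truth box
    # with IoU >= 0.5; per-class AP is then averaged over classes to give mAP.
    return any(iou(det_box, gt) >= thresh for gt in gt_boxes)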
Table 1 compares the detection performance of the final target detector obtained by the present invention with that of the two comparison detectors.
              Detector of the invention    Comparison detector 1    Comparison detector 2
mAP@0.5       74.2%                        58.3%                    77.6%

TABLE 1
As can be seen from Table 1, the performance of the target detector obtained by using the present invention is very close to that of the detector obtained by training based on a large number of accurately labeled images.
In addition, the inference time of the target detector obtained by the invention is 86 milliseconds per image on an NVIDIA GTX 1080 GPU, approaching real-time detection. The target detector obtained by the invention therefore performs well in terms of both detection quality and running efficiency.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all inventions that make use of the inventive concept are protected.

Claims (2)

1. An object detection and migration method based on similar targets is characterized by comprising the following steps:
s1: screening out similar target classes of the object to be detected from the existing image data set with the accurate labels, and taking the images contained in the similar target classes as training samples; the method for determining the similar object category comprises the following steps:
1) Training with an existing object classification data set containing P classes of objects to obtain a P-class object classification model; then randomly selecting N images of the object to be detected and inputting them into the object classification model to obtain N P-dimensional output vectors, where the p-th element of each output vector represents the probability that the object to be detected in the image belongs to the p-th class; computing the mean vector of the N P-dimensional output vectors, sorting all its elements in descending order, and taking the first K elements v_k to form the probability set V = [v_1, v_2, …, v_K], k = 1, 2, …, K; the categories corresponding to the first K probabilities are the similar target categories visually similar to the object to be detected, and their set is denoted α;
2) Obtaining, with a word2vec encoder, the vectors corresponding to the names of the P object classes in the object classification data set and the vector corresponding to the name of the object to be detected; computing the Euclidean distances between the vector of the name of the object to be detected and the vectors of the P class names, sorting the P Euclidean distances in ascending order, and taking the first Q distances d_q to form the set D = [d_1, d_2, …, d_Q], q = 1, 2, …, Q; the categories corresponding to the first Q Euclidean distances are the similar target categories semantically similar to the object to be detected, and their set is denoted β;
3) Acquiring the union γ of set α and set β, i.e. γ = α ∪ β, denoting the number of object classes contained in γ as G, and performing weighted fusion of the visual similarity and semantic similarity of each object class according to the following formula to obtain the fused similarity S_g:
(Formula image in the original publication: Figure FDA0003560275110000011. The fusion combines v_g and d_g using the weights λ1 and λ2.)
where λ1 and λ2 are preset weights, v_g is the visual similarity between the object to be detected and the g-th object class, with v_g = 0 if the g-th object class does not belong to set α, and d_g is the semantic similarity (Euclidean distance) between the object to be detected and the g-th object class, with d_g = +∞ if the g-th object class does not belong to set β;
sorting the G fused similarities S_g in descending order, taking the object classes corresponding to the first C fused similarities to form a set W, and screening out from an existing, accurately labeled target detection data set the classes that intersect the classes in set W; the screened classes are the final similar target categories;
s2: constructing a two-stage target detection model, taking the training samples obtained in the step S1 as input, taking the target marked by each training sample as expected output, training the two-stage target detection model, and taking the model obtained by training as a similar target detector;
s3: acquiring in advance a plurality of images containing the object to be detected and accurately annotating the object to be detected, and using these images as training samples to fine-tune the parameters of the similar target detector obtained in step S2, thereby obtaining a preliminary target detector;
s4: acquiring in advance a plurality of weakly labeled images containing the object to be detected, where a weak label indicates only whether the object to be detected is present in the image and does not contain specific position information of the target; using these images as training samples and further training the preliminary target detector obtained in step S3 through weakly supervised learning to obtain the final target detector, thereby completing the detector migration.
2. The object detection migration method according to claim 1, wherein the two-stage object detection model employs an R-FCN network model.
CN201910012144.1A 2019-01-07 2019-01-07 Object detection and migration method based on similar targets Active CN109740676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910012144.1A CN109740676B (en) 2019-01-07 2019-01-07 Object detection and migration method based on similar targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910012144.1A CN109740676B (en) 2019-01-07 2019-01-07 Object detection and migration method based on similar targets

Publications (2)

Publication Number Publication Date
CN109740676A CN109740676A (en) 2019-05-10
CN109740676B true CN109740676B (en) 2022-11-22

Family

ID=66363629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910012144.1A Active CN109740676B (en) 2019-01-07 2019-01-07 Object detection and migration method based on similar targets

Country Status (1)

Country Link
CN (1) CN109740676B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222704B (en) * 2019-06-12 2022-04-01 北京邮电大学 Weak supervision target detection method and device
CN110490240A (en) * 2019-08-09 2019-11-22 北京影谱科技股份有限公司 Image-recognizing method and device based on deep learning
CN110807523B (en) * 2019-10-23 2022-08-05 中科智云科技有限公司 Method and equipment for generating detection model of similar target
CN111078984B (en) * 2019-11-05 2024-02-06 深圳奇迹智慧网络有限公司 Network model issuing method, device, computer equipment and storage medium
CN111027413A (en) * 2019-11-20 2020-04-17 佛山缔乐视觉科技有限公司 Remote multi-station object detection method, system and storage medium
CN111241964A (en) * 2020-01-06 2020-06-05 北京三快在线科技有限公司 Training method and device of target detection model, electronic equipment and storage medium
CN111523545B (en) * 2020-05-06 2023-06-30 青岛联合创智科技有限公司 Article searching method combined with depth information
CN111832291B (en) * 2020-06-02 2024-01-09 北京百度网讯科技有限公司 Entity recognition model generation method and device, electronic equipment and storage medium
CN112307976A (en) * 2020-10-30 2021-02-02 北京百度网讯科技有限公司 Target detection method, target detection device, electronic equipment and storage medium
CN113066053B (en) * 2021-03-11 2023-10-10 紫东信息科技(苏州)有限公司 Model migration-based duodenum self-training classification method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8811727B2 (en) * 2012-06-15 2014-08-19 Moataz A. Rashad Mohamed Methods for efficient classifier training for accurate object recognition in images and video
CN106295697A (en) * 2016-08-10 2017-01-04 广东工业大学 A kind of based on semi-supervised transfer learning sorting technique
CN107909101B (en) * 2017-11-10 2019-07-12 清华大学 Semi-supervised transfer learning character identifying method and system based on convolutional neural networks
CN108038515A (en) * 2017-12-27 2018-05-15 中国地质大学(武汉) Unsupervised multi-target detection tracking and its storage device and camera device
CN108229658B (en) * 2017-12-27 2020-06-12 深圳先进技术研究院 Method and device for realizing object detector based on limited samples
CN108681746B (en) * 2018-05-10 2021-01-12 北京迈格威科技有限公司 Image identification method and device, electronic equipment and computer readable medium
CN108985268B (en) * 2018-08-16 2021-10-29 厦门大学 Inductive radar high-resolution range profile identification method based on deep migration learning

Also Published As

Publication number Publication date
CN109740676A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109740676B (en) Object detection and migration method based on similar targets
US20200285896A1 (en) Method for person re-identification based on deep model with multi-loss fusion training strategy
CN108830188B (en) Vehicle detection method based on deep learning
CN110070074B (en) Method for constructing pedestrian detection model
Zhao et al. Cloud shape classification system based on multi-channel cnn and improved fdm
Dong et al. Deep metric learning-based for multi-target few-shot pavement distress classification
CN111444939B (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN111275688A (en) Small target detection method based on context feature fusion screening of attention mechanism
CN110458022B (en) Autonomous learning target detection method based on domain adaptation
CN102385592B (en) Image concept detection method and device
CN113313166B (en) Ship target automatic labeling method based on feature consistency learning
CN105930792A (en) Human action classification method based on video local feature dictionary
CN115984543A (en) Target detection algorithm based on infrared and visible light images
CN112712052A (en) Method for detecting and identifying weak target in airport panoramic video
Zeng et al. Steel sheet defect detection based on deep learning method
CN113657414B (en) Object identification method
CN115712740A (en) Method and system for multi-modal implication enhanced image text retrieval
Mijić et al. Traffic sign detection using yolov3
CN110751005B (en) Pedestrian detection method integrating depth perception features and kernel extreme learning machine
CN116310293B (en) Method for detecting target of generating high-quality candidate frame based on weak supervised learning
Wang et al. Vehicle key information detection algorithm based on improved SSD
Tu et al. Toward automatic plant phenotyping: starting from leaf counting
Zhao et al. Recognition and Classification of Concrete Cracks under Strong Interference Based on Convolutional Neural Network.
US20230084761A1 (en) Automated identification of training data candidates for perception systems
Bi et al. CASA-Net: a context-aware correlation convolutional network for scale-adaptive crack detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant