CN110598751B - Anchor point generation method based on geometric attributes - Google Patents


Info

Publication number
CN110598751B
CN110598751B (application CN201910749521.XA)
Authority
CN
China
Prior art keywords
width
height
bounding box
clustering
bounding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910749521.XA
Other languages
Chinese (zh)
Other versions
CN110598751A (en)
Inventor
丁新涛
张琦
王万军
接标
杭后俊
李汪根
周文
卞维新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Normal University
Priority to CN201910749521.XA
Publication of CN110598751A
Application granted
Publication of CN110598751B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an anchor point generation method based on geometric attributes, belonging to the field of deep learning, which comprises the following steps: step one, extracting the coordinates of all labeled bounding boxes in an image and normalizing the width and height of each labeled bounding box; step two, clustering the aspect ratios of the labeled bounding boxes by the Euclidean distance; step three, clustering the sizes of the labeled bounding boxes by the SIoU distance; step four, generating anchors from the size and aspect-ratio vectors, taking the size and aspect-ratio cluster centers as discrete points. By normalizing the labeled bounding boxes and clustering their aspect ratios and sizes, the method produces anchors that carry the geometric attributes of the targets, so that region proposals can be extracted to improve object detection accuracy.

Description

Anchor point generating method based on geometric attributes
Technical Field
The invention relates to the field of target detection, in particular to an anchor point generation method based on geometric attributes.
Background
With the rapid development of deep learning, models based on convolutional neural networks are widely applied to the detection of targets in scenes. Generally, convolutional neural network-based detection methods can be divided into two-stage and one-stage methods. A two-stage method first extracts region proposals, then feeds them to a detector that predicts the position and category of the target within each region, such as Faster R-CNN (Ren Shaoqing et al., Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Trans on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149). A one-stage method does not extract region proposals; it obtains candidate regions of the target directly from the image and predicts the target position and category within them, such as YOLO (Joseph Redmon et al., YOLO9000: Better, Faster, Stronger, Proc of the IEEE Conf on Computer Vision and Pattern Recognition, Piscataway, NJ: IEEE, 2017: 6517-). Although depth models such as Faster R-CNN and YOLO apply well to the detection of general targets, they do not consider prior knowledge such as target attributes, so their detection accuracy remains unsatisfactory in specific scenes.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an anchor point generation method based on geometric attributes, which generates anchors carrying geometric attributes by clustering the aspect ratios and sizes of labeled bounding boxes and extracts region proposals so as to improve detection accuracy.
In order to solve the above technical problem, the invention adopts the following technical scheme: an anchor point generation method that obtains the prior geometric attributes of target aspect ratio and size through K-means clustering and generates anchors accordingly. The method comprises the following steps: step one, extracting the width and height of the labeled bounding boxes of all targets from each image of a target data set, and dividing the width and height of each labeled bounding box by the width and height of its image to obtain the width and height of the n normalized labeled bounding boxes; in a coordinate graph, the lower-left vertex of each normalized bounding box is placed at the origin, and its upper-right vertex gives the width-height coordinates of the box. Step two, dividing the width of each normalized labeled bounding box by its height to obtain the aspect ratio of that box, and clustering the aspect ratios of the n labeled bounding boxes with a K-means clustering algorithm based on the Euclidean distance to obtain the aspect ratios of K cluster centers. Step three, for the widths and heights of the normalized labeled bounding boxes of step one, clustering the widths and heights of the n labeled bounding boxes with a K-means algorithm based on the strengthened Intersection over Union (SIoU) distance to obtain the widths and heights of K cluster centers. Step four, multiplying the aspect ratios and sizes of the K cluster centers obtained in steps two and three by the base scale of the anchors to generate the anchors.
Optionally, the normalization method in step one is: extract the coordinates of all labeled bounding boxes in the image as a point set D, where $D_i = [x_{i1}, y_{i1}, x_{i2}, y_{i2}]$ holds the upper-left and lower-right corner coordinates of the i-th labeled bounding box, whose width and height are $w_i = x_{i2} - x_{i1}$ and $h_i = y_{i2} - y_{i1}$. The width $w_i$ and height $h_i$ of the i-th labeled bounding box are divided by the width M and height N of the image respectively, i.e.
$\bar{w}_i = w_i / M$, $\bar{h}_i = h_i / N$,
where $\{(\bar{w}_i, \bar{h}_i)\}$ is the set of normalized widths and heights of the labeled bounding boxes. The lower-left corner of each normalized bounding box is placed at the origin, so that its upper-right vertex gives the width-height coordinates of the box.
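The normalization step above can be sketched in Python (a minimal illustration only; the function name and array layout are my own assumptions, not part of the patent):

```python
import numpy as np

def normalize_boxes(boxes, img_w, img_h):
    """Normalize labeled bounding boxes [x1, y1, x2, y2] by image size.

    Returns an (n, 2) array of normalized (width, height) pairs, i.e. each
    box translated so its lower-left corner sits at the origin.
    """
    boxes = np.asarray(boxes, dtype=float)
    w = (boxes[:, 2] - boxes[:, 0]) / img_w  # w_i = (x_i2 - x_i1) / M
    h = (boxes[:, 3] - boxes[:, 1]) / img_h  # h_i = (y_i2 - y_i1) / N
    return np.stack([w, h], axis=1)

# A 100x200 image with one 50x40 box yields a normalized (0.5, 0.2) box.
print(normalize_boxes([[10, 20, 60, 60]], img_w=100, img_h=200))  # [[0.5 0.2]]
```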
Optionally, the method for clustering the aspect ratios of the bounding boxes in step two is: compute the aspect ratio of each labeled bounding box,
$r_i = \bar{w}_i / \bar{h}_i$,
to obtain the aspect-ratio set $\{r_i\}$, on which K-means clustering is performed: K ratios are randomly selected from $\{r_i\}$ as the initial centroid set $\{K_m\}$; the Euclidean distance from each aspect ratio in $\{r_i\}$ to each initial centroid in $\{K_m\}$ is computed, and the aspect ratios nearest to each centroid are grouped into one cluster, yielding K clusters; the mean of each cluster is computed and the centroid positions are updated. Step two is repeated until the centroid update error of every cluster is smaller than a given error, at which point step two ends.
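Step two is an ordinary one-dimensional K-means. A sketch follows (names and test data are mine; the patent picks initial centroids at random, while this sketch seeds them from evenly spaced quantiles so the result is reproducible):

```python
def kmeans_ratios(ratios, k, tol=1e-6, max_iter=100):
    """1-D K-means over aspect ratios r_i using the Euclidean (absolute)
    distance; returns the k sorted cluster-center ratios."""
    data = sorted(ratios)
    # Deterministic quantile seeding instead of the patent's random choice.
    centers = [data[(2 * j + 1) * len(data) // (2 * k)] for j in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for r in data:  # assign each ratio to its nearest centroid
            j = min(range(k), key=lambda c: abs(r - centers[c]))
            clusters[j].append(r)
        new = [sum(c) / len(c) if c else centers[j]  # mean of each cluster
               for j, c in enumerate(clusters)]
        if max(abs(a - b) for a, b in zip(new, centers)) < tol:
            return sorted(new)
        centers = new
    return sorted(centers)

ratios = [0.5, 0.52, 1.0, 1.05, 2.0, 2.1]
print(kmeans_ratios(ratios, k=3))  # one center per ratio group
```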
Optionally, the method for clustering the sizes of the bounding boxes in step three comprises:
step (1), after the upper-right vertices are used to represent the width-height coordinates of the bounding boxes, K bounding boxes are randomly selected from all labeled bounding boxes as initial centroid bounding boxes; the n labeled bounding boxes and the K initial centroid bounding boxes are clustered on the width-height plane, and the SIoU from the l-th centroid bounding box to the i-th labeled bounding box is computed as:
$\mathrm{SIoU} = \mathrm{IoU} - \dfrac{S_C - S_{l \cup i}}{S_C}$;
step (2), the Intersection over Union (IoU) between the two bounding boxes is computed by dividing the intersection area $S_{l \cap i}$ of the two labeled bounding boxes by their union area $S_{l \cup i}$, i.e.
$\mathrm{IoU} = \dfrac{S_{l \cap i}}{S_{l \cup i}}$;
step (3), the area $S_C$ of the minimum closure of the two bounding boxes is computed, and the difference $S_C - S_{l \cup i}$ between the minimum closure area and the union area is divided by the minimum closure area $S_C$ to give M, i.e.
$M = \dfrac{S_C - S_{l \cup i}}{S_C}$;
step (4), the difference between IoU and M is taken as the SIoU, i.e. $\mathrm{SIoU} = \mathrm{IoU} - M$;
step (5), the distance from the l-th centroid bounding box to the i-th labeled bounding box is computed as
$d_{li} = 1 - \mathrm{SIoU}_{li}$,
and the bounding boxes nearest to each centroid are grouped into one cluster, yielding K clusters;
step (6), the median width of all bounding boxes in each cluster is taken as the width of that cluster's new centroid bounding box, and correspondingly the median height as its height;
step (7), steps (1) to (6) are repeated, and clustering ends when the centroid update error of every cluster is smaller than a given error; multiplying the widths and heights of the K centroid bounding boxes gives the K sizes $S_K$.
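Steps (1) to (7) above can be sketched as a small NumPy implementation. This is an illustration under my own naming and test data: the patent leaves the initialization details and error threshold unspecified, and the distance d = 1 - SIoU is an assumption consistent with step (5).

```python
import numpy as np

def siou_distance(wh, centers):
    """d(l, i) = 1 - SIoU between each box (rows of wh) and each centroid,
    with every box anchored at the origin of the width-height plane."""
    w, h = wh[:, None, 0], wh[:, None, 1]                # shape (n, 1)
    cw, ch = centers[None, :, 0], centers[None, :, 1]    # shape (1, k)
    inter = np.minimum(w, cw) * np.minimum(h, ch)        # S_{l∩i}
    union = w * h + cw * ch - inter                      # S_{l∪i}
    closure = np.maximum(w, cw) * np.maximum(h, ch)      # S_C (minimum closure)
    return 1.0 - (inter / union - (closure - union) / closure)

def cluster_sizes(wh, k, tol=1e-6, max_iter=100, seed=0):
    """K-means over (width, height) with the SIoU distance; centroids are
    updated with per-cluster medians, as in step (6)."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(max_iter):
        assign = siou_distance(wh, centers).argmin(axis=1)
        new = np.array([np.median(wh[assign == j], axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.abs(new - centers).max() < tol:
            return new, new[:, 0] * new[:, 1]            # centroids and sizes S_K
        centers = new
    return centers, centers[:, 0] * centers[:, 1]

# Two clearly separated groups of normalized boxes.
wh = np.array([[0.1, 0.1], [0.12, 0.11], [0.5, 0.5], [0.52, 0.48]])
centers, sizes = cluster_sizes(wh, k=2)
print(sorted(sizes))  # one size per group
```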
Optionally, the method for generating the size and aspect-ratio vectors of the anchors in step four is: after clustering ends, the aspect ratios $R_K$ obtained from the bounding-box aspect-ratio clustering and the sizes $S_K$ obtained from the bounding-box size clustering are multiplied by the anchor base size $B_s$ to obtain anchors containing the geometric attributes of the target.
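One common way to realize this final combination (following Faster R-CNN-style anchor generation, which is my assumption here; the patent only states that the cluster centers are multiplied by the base scale) is to give each (size, ratio) pair an anchor of area s·base² and aspect ratio r:

```python
import numpy as np

def make_anchors(ratios, sizes, base=16.0):
    """Generate (w, h) anchors: for each clustered size s and aspect ratio r,
    w = sqrt(s * r) * base and h = sqrt(s / r) * base, so that w / h == r
    and w * h == s * base**2."""
    ws, hs = [], []
    for s in sizes:
        for r in ratios:
            ws.append(np.sqrt(s * r) * base)
            hs.append(np.sqrt(s / r) * base)
    return np.stack([ws, hs], axis=1)

anchors = make_anchors(ratios=[0.5, 1.0, 2.0], sizes=[0.01, 0.25], base=16.0)
print(anchors.shape)  # (6, 2): one anchor per (size, ratio) pair
```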
The beneficial effects of the invention are as follows: the invention provides an anchor point generation method based on geometric attributes, which first normalizes the widths and heights of the labeled bounding boxes in all images of a data set; second, clusters the aspect ratios of the labeled bounding boxes by the Euclidean distance; third, clusters the sizes of the labeled bounding boxes by the SIoU distance; and finally generates anchors, taking the size and aspect-ratio cluster centers as discrete points.
Drawings
The contents of the drawings and the reference numerals in the drawings are briefly described as follows:
FIG. 1 is a block flow diagram of an anchor point generation method of the present invention;
FIG. 2 is a schematic diagram of extracting coordinates of a labeled bounding box according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating coordinate normalization of a labeling bounding box according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the clustered sample counts of the labeled-bounding-box aspect ratios according to an embodiment of the present invention;
FIG. 5 is a schematic SIoU distance diagram of an embodiment of the present invention;
FIG. 6 is a diagram illustrating clustering of the dimensions of the labeling bounding box according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the anchor sizes S_K and aspect ratios R_K generated by an embodiment of the present invention;
FIG. 8 is a flowchart illustrating the method for clustering the size of the labeled bounding box in step three according to the present invention.
Detailed Description
The following description, with reference to the drawings, further details the embodiments of the present invention, such as the shapes and configurations of the components, their mutual positions and connections, their functions and working principles, and the manufacturing and operating methods, to help those skilled in the art understand the inventive concept and technical solutions of the present invention more completely, accurately, and deeply.
An anchor point generation method based on geometric attributes clusters the aspect ratio and the size of all targets in a training data set to generate anchor points so as to improve the detection precision of the targets.
As shown in fig. 1-7, the method comprises the steps of:
S1, extracting the width and height of the labeled bounding boxes of all targets from each image of the target data set, and dividing the width and height of each labeled bounding box by the width and height of its image to obtain the width and height of the n normalized labeled bounding boxes; in a coordinate graph, the lower-left vertex of each normalized bounding box is placed at the origin, and its upper-right vertex gives the width-height coordinates of the box;
S2, dividing the width of each normalized labeled bounding box of step S1 by its height to obtain the aspect ratio of that box, and clustering the aspect ratios of the n labeled bounding boxes with a K-means clustering algorithm based on the Euclidean distance to obtain the aspect ratios of K cluster centers;
S3, for the widths and heights of the normalized labeled bounding boxes of step S1, clustering the widths and heights of the n labeled bounding boxes with a K-means algorithm based on the strengthened Intersection over Union (SIoU) distance to obtain the widths and heights of K cluster centers;
S4, multiplying the aspect ratios and sizes of the K cluster centers obtained in steps S2 and S3 by the base scale of the anchors to generate the anchors.
The method normalizes the labeled bounding boxes of all images in the data set, as shown in FIG. 2; the lower-left corner of each normalized bounding box is placed at the origin, as shown in FIG. 3; next, the aspect ratios of the labeled bounding boxes are clustered by the Euclidean distance, giving the aspect-ratio clustering result shown in FIG. 4; then the sizes of the labeled bounding boxes are clustered by the SIoU distance (illustrated in FIG. 5), giving the size clustering result shown in FIG. 6; finally, the size and aspect-ratio cluster centers are taken as discrete points, and the size and aspect-ratio vectors of the anchors are generated, as shown in FIG. 7.
Further, the normalization method of step S1 is: as shown in FIG. 2, the coordinates of all labeled bounding boxes in the image are extracted as a point set D, where $D_i = [x_{i1}, y_{i1}, x_{i2}, y_{i2}]$ holds the upper-left and lower-right corner coordinates of the i-th labeled bounding box, whose width and height are $w_i = x_{i2} - x_{i1}$ and $h_i = y_{i2} - y_{i1}$. The width $w_i$ and height $h_i$ of the i-th labeled bounding box are divided by the width M and height N of the image respectively, i.e.
$\bar{w}_i = w_i / M$, $\bar{h}_i = h_i / N$,
where $\{(\bar{w}_i, \bar{h}_i)\}$ is the set of normalized widths and heights of the labeled bounding boxes. As shown in FIG. 3, the lower-left corner of each normalized bounding box is placed at the origin, so that its upper-right vertex gives the width-height coordinates of the box.
Further, the aspect-ratio clustering method for the labeled bounding boxes in step S2 is: compute the aspect ratio of each labeled bounding box,
$r_i = \bar{w}_i / \bar{h}_i$,
to obtain the aspect-ratio set $\{r_i\}$, on which K-means clustering is performed: K ratios are randomly selected from $\{r_i\}$ as the initial centroid set $\{K_m\}$; the Euclidean distance from each aspect ratio in $\{r_i\}$ to each initial centroid in $\{K_m\}$ is computed, and the aspect ratios nearest to each centroid are grouped into one cluster, yielding K clusters; the mean of each cluster is computed and the centroid positions are updated; step S2 is repeated until the centroid update error of every cluster is smaller than the given error, at which point step S2 ends. The clustering result is shown in FIG. 4, where each black bar shows the number of clustered samples and the interval under the bar shows the range of r_i for that class.
Further, as shown in FIG. 8, the size clustering method for the labeled bounding boxes in step S3 comprises:
S301, after the upper-right vertices are used to represent the width-height coordinates of the bounding boxes, K bounding boxes are randomly selected from all labeled bounding boxes as initial centroid bounding boxes; the n labeled bounding boxes and the K initial centroid bounding boxes are clustered on the width-height plane, and the SIoU from the l-th centroid bounding box to the i-th labeled bounding box is computed; as shown in FIG. 5, the SIoU is:
$\mathrm{SIoU} = \mathrm{IoU} - \dfrac{S_C - S_{l \cup i}}{S_C}$;
S302, the Intersection over Union (IoU) between the two bounding boxes is computed by dividing the intersection area $S_{l \cap i}$ of the two boxes by their union area $S_{l \cup i}$, i.e.
$\mathrm{IoU} = \dfrac{S_{l \cap i}}{S_{l \cup i}}$;
S303, the area $S_C$ of the minimum closure of the two bounding boxes is computed, and the difference $S_C - S_{l \cup i}$ between the minimum closure area and the union area is divided by the minimum closure area $S_C$ to give M, i.e.
$M = \dfrac{S_C - S_{l \cup i}}{S_C}$;
S304, the difference between IoU and M is taken as the SIoU, i.e. $\mathrm{SIoU} = \mathrm{IoU} - M$;
S305, the distance from the l-th centroid bounding box to the i-th labeled bounding box is computed as
$d_{li} = 1 - \mathrm{SIoU}_{li}$,
and the bounding boxes nearest to each centroid are grouped into one cluster, yielding K clusters;
S306, the median width of all bounding boxes in each cluster is taken as the width of that cluster's new centroid bounding box, and correspondingly the median height as its height;
S307, steps S301-S306 are repeated, and clustering ends when the centroid update error of every cluster is smaller than the given error; the clustering result is shown in FIG. 6, and multiplying the widths and heights of the K centroid bounding boxes gives the K sizes $S_K$.
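The SIoU of steps S301-S304 can be checked numerically for two normalized boxes anchored at the origin of the width-height plane (the helper name and sample numbers below are mine, not the patent's):

```python
def siou(w1, h1, w2, h2):
    """SIoU between two boxes anchored at the origin on the width-height plane:
    SIoU = IoU - (S_C - S_union) / S_C, where S_C is the area of the smallest
    box enclosing both (their minimum closure)."""
    inter = min(w1, w2) * min(h1, h2)     # S_{l∩i}: boxes share the origin corner
    union = w1 * h1 + w2 * h2 - inter     # S_{l∪i}
    closure = max(w1, w2) * max(h1, h2)   # S_C
    iou = inter / union
    m = (closure - union) / closure
    return iou - m

# Boxes (0.2, 0.6) and (0.4, 0.3): intersection 0.06, union 0.18 so IoU = 1/3;
# closure 0.24, M = 0.06/0.24 = 0.25; hence SIoU = 1/3 - 0.25 ≈ 0.0833.
print(siou(0.2, 0.6, 0.4, 0.3))
```

The clustering distance of step S305 is then 1 minus this value, so identical boxes (SIoU = 1) are at distance 0.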
Further, as shown in FIG. 7, the method for generating the size and aspect-ratio vectors in step S4 is: after clustering ends, the aspect ratios $R_K$ obtained from the bounding-box aspect-ratio clustering and the sizes $S_K$ obtained from the bounding-box size clustering are multiplied by the anchor base size $B_s$ to obtain anchors containing the geometric attributes of the target.
The invention has been described above with reference to the accompanying drawings. It is obvious that the invention is not limited to the specific implementation described above, and applying the inventive concept and solution to other applications without substantial modification falls within the protection scope of the invention. The protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (4)

1. An anchor point generation method based on geometric attributes, characterized by comprising the steps of:
step one, extracting the width and height of the labeled bounding boxes of all targets from each image of a target data set, and dividing the width and height of each labeled bounding box by the width and height of its image to obtain the width and height of the n normalized labeled bounding boxes; in a coordinate graph, the lower-left vertex of each normalized bounding box is placed at the origin, and its upper-right vertex gives the width-height coordinates of the box;
step two, dividing the width of each normalized labeled bounding box of step one by its height to obtain the aspect ratio of that box, and clustering the aspect ratios of the n labeled bounding boxes with a K-means clustering algorithm based on the Euclidean distance to obtain the aspect ratios of K cluster centers;
step three, for the widths and heights of the normalized labeled bounding boxes of step one, clustering the widths and heights of the n labeled bounding boxes with a K-means algorithm based on the strengthened Intersection over Union (SIoU) distance to obtain the widths and heights of K cluster centers;
step four, multiplying the aspect ratios and sizes of the K cluster centers obtained in steps two and three by the base scale of the anchors to generate the anchors;
the method for clustering the sizes of the labeled bounding boxes in step three comprises:
step (1), after the upper-right vertices are used to represent the width-height coordinates of the bounding boxes, K bounding boxes are randomly selected from all labeled bounding boxes as initial centroid bounding boxes; the n labeled bounding boxes and the K initial centroid bounding boxes are clustered on the width-height plane, and the SIoU from the l-th centroid bounding box to the i-th labeled bounding box is computed as:
$\mathrm{SIoU} = \mathrm{IoU} - \dfrac{S_C - S_{l \cup i}}{S_C}$;
step (2), the Intersection over Union (IoU) between the two bounding boxes is computed by dividing the intersection area $S_{l \cap i}$ of the two labeled bounding boxes by their union area $S_{l \cup i}$, i.e.
$\mathrm{IoU} = \dfrac{S_{l \cap i}}{S_{l \cup i}}$;
step (3), the area $S_C$ of the minimum closure of the two bounding boxes is computed, and the difference $S_C - S_{l \cup i}$ between the minimum closure area and the union area is divided by the minimum closure area $S_C$ to give M, i.e.
$M = \dfrac{S_C - S_{l \cup i}}{S_C}$;
step (4), the difference between IoU and M is taken as the SIoU, i.e. $\mathrm{SIoU} = \mathrm{IoU} - M$;
step (5), the distance from the l-th centroid bounding box to the i-th labeled bounding box is computed as
$d_{li} = 1 - \mathrm{SIoU}_{li}$,
and the bounding boxes nearest to each centroid are grouped into one cluster, yielding K clusters;
step (6), the median width of all bounding boxes in each cluster is taken as the width of that cluster's new centroid bounding box, and correspondingly the median height as its height;
step (7), steps (1) to (6) are repeated, and clustering ends when the centroid update error of every cluster is smaller than a given error; multiplying the widths and heights of the K centroid bounding boxes gives the K sizes $S_K$.
2. The anchor point generation method based on geometric attributes according to claim 1, wherein the normalization method in step one is: extract the coordinates of all labeled bounding boxes in the image as a point set D, where $D_i = [x_{i1}, y_{i1}, x_{i2}, y_{i2}]$ holds the upper-left and lower-right corner coordinates of the i-th labeled bounding box, whose width and height are $w_i = x_{i2} - x_{i1}$ and $h_i = y_{i2} - y_{i1}$; the width $w_i$ and height $h_i$ of the i-th labeled bounding box are divided by the width M and height N of the image respectively, i.e.
$\bar{w}_i = w_i / M$, $\bar{h}_i = h_i / N$,
where $\{(\bar{w}_i, \bar{h}_i)\}$ is the set of normalized widths and heights of the labeled bounding boxes; the lower-left corner of each normalized bounding box is placed at the origin, so that its upper-right vertex gives the width-height coordinates of the box.
3. The anchor point generation method based on geometric attributes according to claim 2, wherein the aspect-ratio clustering method for the labeled bounding boxes in step two is: compute the aspect ratio of each labeled bounding box,
$r_i = \bar{w}_i / \bar{h}_i$,
to obtain the aspect-ratio set $\{r_i\}$, on which K-means clustering is performed: K ratios are randomly selected from $\{r_i\}$ as the initial centroid set $\{K_m\}$; the Euclidean distance from each aspect ratio in $\{r_i\}$ to each initial centroid in $\{K_m\}$ is computed, and the aspect ratios nearest to each centroid are grouped into one cluster, yielding K clusters; the mean of each cluster is computed and the centroid positions are updated; step two is repeated until the centroid update error of every cluster is smaller than the given error, at which point step two ends.
4. The method of claim 3, wherein the method for generating the size and aspect-ratio vectors in step four comprises: after clustering ends, the aspect ratios $R_K$ obtained from the bounding-box aspect-ratio clustering and the sizes $S_K$ obtained from the bounding-box size clustering are multiplied by the anchor base size $B_s$ to obtain anchors containing the geometric attributes of the target.
CN201910749521.XA 2019-08-14 2019-08-14 Anchor point generation method based on geometric attributes Active CN110598751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910749521.XA CN110598751B (en) 2019-08-14 2019-08-14 Anchor point generation method based on geometric attributes


Publications (2)

Publication Number Publication Date
CN110598751A CN110598751A (en) 2019-12-20
CN110598751B true CN110598751B (en) 2022-06-07

Family

ID=68854398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910749521.XA Active CN110598751B (en) 2019-08-14 2019-08-14 Anchor point generation method based on geometric attributes

Country Status (1)

Country Link
CN (1) CN110598751B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797993B (en) * 2020-06-16 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 Evaluation method and device of deep learning model, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169421A (en) * 2017-04-20 2017-09-15 华南理工大学 A kind of car steering scene objects detection method based on depth convolutional neural networks
CN108416307A (en) * 2018-03-13 2018-08-17 北京理工大学 A kind of Aerial Images road surface crack detection method, device and equipment


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Hamid Rezatofighi et al.; Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression; arXiv [cs.CV]; 2019-04-15; pp. 1-9. *
Liangji Fang et al.; Putting the Anchors Efficiently: Geometric Constrained; Asian Conference on Computer Vision 2018; 2019; pp. 387-403. *
Joseph Redmon et al.; YOLO9000: Better, Faster, Stronger; arXiv [cs.CV]; 2016-12-25; pp. 1-9. *
Jiang Xiaowei et al.; Infrared aircraft detection based on an improved region proposal network; Laser & Infrared; 2019-01-20; vol. 1 (no. 49); pp. 110-115. *
Yu Chunxiao; QR code recognition in complex acquisition environments; China Master's Theses Full-text Database, Information Science and Technology; 2019-02-15 (no. 2); pp. I138-2233. *

Also Published As

Publication number Publication date
CN110598751A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN111275688B (en) Small target detection method based on context feature fusion screening of attention mechanism
CN108288088B (en) Scene text detection method based on end-to-end full convolution neural network
US11530915B2 (en) Dimension measuring device, dimension measuring method, and semiconductor manufacturing system
JP6069489B2 (en) Object recognition apparatus, object recognition method, and program
CN110232311A (en) Dividing method, device and the computer equipment of hand images
Qian et al. Grasp pose detection with affordance-based task constraint learning in single-view point clouds
CN110598634B (en) Machine room sketch identification method and device based on graph example library
Liu et al. Road centerlines extraction from high resolution images based on an improved directional segmentation and road probability
CN110688965A (en) IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
Thalhammer et al. SyDPose: Object detection and pose estimation in cluttered real-world depth images trained using only synthetic data
Cupec et al. Object recognition based on convex hull alignment
CN112396655B (en) Point cloud data-based ship target 6D pose estimation method
Cheng et al. A direct regression scene text detector with position-sensitive segmentation
Zhang et al. Out-of-region keypoint localization for 6D pose estimation
CN110598751B (en) Anchor point generation method based on geometric attributes
Xia et al. Fast template matching based on deformable best-buddies similarity measure
CN112784869B (en) Fine-grained image identification method based on attention perception and counterstudy
Liu et al. Robust 3-d object recognition via view-specific constraint
CN101118544A (en) Method for constructing picture shape contour outline descriptor
CN111914832A (en) SLAM method of RGB-D camera in dynamic scene
CN111062393A (en) Natural scene Chinese character segmentation method based on spectral clustering
Rong et al. RGB-D hand pose estimation using fourier descriptor
CN113673540A (en) Target detection method based on positioning information guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant