CN110223310B - Line structure light center line and box edge detection method based on deep learning - Google Patents


Info

Publication number
CN110223310B
CN110223310B (application CN201910426770.5A)
Authority
CN
China
Prior art keywords
edge
line
box
line structure
structure light
Prior art date
Legal status
Active
Application number
CN201910426770.5A
Other languages
Chinese (zh)
Other versions
CN110223310A (en
Inventor
张之江
彭涛
黄臻臻
宋英杰
邰意纯
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201910426770.5A
Publication of CN110223310A
Application granted
Publication of CN110223310B
Legal status: Active
Anticipated expiration


Classifications

    • G06T7/13 Edge detection (Image analysis; Segmentation)
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep-learning-based method for detecting line structured light centerlines and box edges, which uses a deep neural network to detect specific edges in a box image containing line structured light. The method comprises the following steps: 1) creating and annotating a box dataset; 2) constructing and optimizing the network; 3) detecting eight feature points using image processing. The line structured light and the box edges are identified with a deep-learning edge-detection approach: a convolutional neural network is trained to produce a binary edge map, from which the eight feature points where the structured-light centerlines intersect the box edges are extracted. The method offers low algorithmic complexity, strong robustness, high stability, and good adaptation to complex environments.

Description

Line structure light center line and box edge detection method based on deep learning
Technical Field
The invention belongs to the field of deep-learning edge detection, and particularly relates to a method for detecting specific edges in a box image containing line structured light using a deep neural network.
Background
Box volume measurement is particularly important in the field of logistics. One approach projects line structured light onto the box and measures the volume from an image of the box carrying the light pattern, as shown in Fig. 1; this requires extracting the structured-light centerlines and the edges of the box under test. Traditional methods such as Canny edge detection, Sobel edge detection, and gradient-based detection struggle to produce stable and reliable output, because they are sensitive to image quality, background, illumination, noise, and other interference. In logistics stations and sorting centers in particular, lighting varies widely, box materials are diverse, and boxes are stacked in cluttered arrangements; irregular box shapes, complex surface patterns, and reflections from sealing tape all further disturb the detection of the structured light and the box edges. Traditional edge-detection algorithms designed for specific situations do not generalize and cannot be applied to such complex scenes. Therefore, accurately identifying the target box among multiple stacked boxes in a complex environment, and quickly, accurately, efficiently, and stably detecting the structured-light centerlines and box edges so as to obtain the eight specific feature points and fully automate the system, is an urgent problem.
Compared with traditional approaches, deep learning is better at extracting multi-level, multi-scale features from images and performs strongly in image edge detection, image classification, semantic recognition, and image segmentation. For edge detection in particular, deep learning can effectively suppress background regions and highlight high-level edge features.
Disclosure of Invention
Driven by the requirement to accurately extract the eight feature points where the structured-light centerlines intersect the box edges, the invention provides an end-to-end edge-detection system: a deep-learning-based method for detecting line structured light centerlines and box edges. A fully convolutional neural network is trained to produce the structured-light centerlines and the edges of the target box, and the eight feature points are then obtained by image processing.
To achieve the above object, the invention is conceived as follows:
First, a dataset of box surfaces carrying a '井'-shaped (grid) pattern of line structured light is created; the training target is a ground-truth map of the box edges and the structured-light centerlines, as shown in Fig. 2. To diversify the training samples, the dataset is expanded by rotating the images through multiple angles and by adding ambient noise. A convolutional neural network is then built in a TensorFlow environment and trained on this dataset. Finally, the resulting edge probability map is post-processed with Hough line transform, line clustering, and related techniques to extract the coordinates of the 8 feature points where the structured-light centerlines intersect the box edges.
Based on the above inventive concept, the specific technical scheme of the invention is as follows:
a line structure light center line and box edge detection method based on deep learning comprises the following specific operation steps:
(1) Dataset creation and annotation: a large number of box pictures carrying the '井'-shaped structured-light pattern are acquired for the desired target. Each image must contain the eight feature points on the box edges, although the edges themselves may be incomplete. The eight intersection points of the structured light with the box edges are calibrated and numbered 1 to 8, such that each pair of points forms the endpoints of one structured-light centerline with a consistent direction. Finally, an edge map of the box edges and the four structured-light centerlines is drawn from the eight points and used as the ground-truth map for training, as sketched below.
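A minimal sketch of this ground-truth rendering, assuming OpenCV and that the eight points are stored as ordered endpoint pairs; the function and argument names are illustrative, not the patent's own tooling.

```python
import cv2
import numpy as np

def draw_ground_truth(image_shape, points, box_polygon, line_width=5):
    """Render the binary edge map used as the training target.

    points      -- eight (x, y) intersection points, ordered so that
                   (1,2), (3,4), (5,6), (7,8) are centerline endpoint pairs
    box_polygon -- ordered (x, y) vertices of the visible box-face outline
    """
    gt = np.zeros(image_shape[:2], dtype=np.uint8)
    # Box edges: connect consecutive polygon vertices.
    cv2.polylines(gt, [np.int32(box_polygon)], isClosed=True,
                  color=255, thickness=line_width)
    # Four structured-light centerlines: one line per endpoint pair.
    for a, b in zip(points[0::2], points[1::2]):
        cv2.line(gt, tuple(map(int, a)), tuple(map(int, b)),
                 color=255, thickness=line_width)
    return gt
```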
(2) Network construction and optimization:
among the neural networks, the lower-level network focuses more on the detection of edge details, and the lower-level features have richer and more accurate position information. The high-level network pays more attention to the extraction of the target outline, and the high-level characteristics are gradually insensitive to the position information due to the continuous expansion of the receptive field, however, the high-level network abstracts the characteristics more times along with the deepening of the network, so that the high-level network has rich semantic information. While only a large range of semantic contours is needed for the target bin edges and line structured light to be detected, those too fine contour edges should be discarded.
The network is based on the holistically-nested convolutional neural network, whose backbone, modified from the VGG network, introduces many pooling layers. As a result, the feature maps output by the higher layers have very low resolution: after deconvolution, the edges predicted by the last two side-output layers are too wide and lose too much position information, so that after fusion with the lower side outputs the edges are rough rather than smooth, while the lower-level outputs introduce too much internal texture of the target. The network is therefore modified to raise the resolution of the edge information output by the last two side layers and thereby improve edge extraction. The improved network structure is shown in Fig. 3; a minimal sketch of such a network follows.
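The following sketch shows how such a modified holistically-nested network could be assembled, assuming a Keras/TensorFlow 2 environment with a VGG-16 backbone; the tapped layer names (block3_conv3 etc.) and the deconvolution choices are assumptions for illustration, not the patent's exact configuration.

```python
import tensorflow as tf

def build_edge_network(input_shape=(512, 512, 3)):
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    # Side outputs are tapped only from stages 3-5; stages 1-2 are dropped
    # so that fine internal texture does not reach the fusion layer.
    taps = ["block3_conv3", "block4_conv3", "block5_conv3"]
    side_maps = []
    for name in taps:
        feat = backbone.get_layer(name).output
        score = tf.keras.layers.Conv2D(1, 1)(feat)         # 1x1 score map
        stride = input_shape[0] // score.shape[1]          # 4, 8, 16
        side_maps.append(tf.keras.layers.Conv2DTranspose(
            1, kernel_size=2 * stride, strides=stride,
            padding="same")(score))                        # deconv to 512x512
    # Learned weighted fusion of the three side outputs via a 1x1 conv
    # (the fusion weights h_3..h_5 of Eq. (5) below).
    fused = tf.keras.layers.Conv2D(1, 1, use_bias=False)(
        tf.keras.layers.Concatenate()(side_maps))
    out = tf.keras.layers.Activation("sigmoid")(fused)
    return tf.keras.Model(backbone.input, out)
```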
Define the input training dataset as S = {(X_n, Y_n), n = 1, ..., N}, where X_n is an original box picture and Y_n is the corresponding binary map of the object edges. The set of all network-layer parameters is denoted W. Assume the network has M side-output layers, each associated with a classifier, with corresponding weights w = (w^(1), ..., w^(M)). Consider the objective function

    L_side(W, w) = Σ_{m=1}^{M} α_m ℓ_side^(m)(W, w^(m))    (1)

where ℓ_side^(m) denotes the pixel-level side-output cost. During training, the loss function is accumulated over all pixels of the training image and its edge ground truth. For each picture, the following class-balanced cross entropy is computed:

    ℓ_side^(m)(W, w^(m)) = -β Σ_{j∈Y+} log Pr(y_j = 1 | X; W, w^(m)) - (1 - β) Σ_{j∈Y-} log Pr(y_j = 0 | X; W, w^(m))    (2)

where β = |Y-|/|Y| and 1 - β = |Y+|/|Y|, with |Y+| and |Y-| the numbers of edge and non-edge labels, respectively. In a typical natural image the distribution of edge and non-edge pixels is strongly skewed, with about 90% of the pixels being non-edge. Pr(y_j = 1 | X; W, w^(m)) = σ(a_j^(m)) is obtained by applying the sigmoid activation σ(·) to the response a_j^(m) at pixel j. To exploit the side-output predictions directly, a weighted fusion layer is added to the network, and the fusion weights are learned during training. The loss function at the fusion layer is

    L_fuse(W, w, h) = Dist(Y, Ŷ_fuse)    (3)

where h = (h_1, ..., h_M) denotes the fusion weights of the side-output layers, and Dist(·,·) is the difference between the fused prediction map and the ground-truth map, computed with the balanced cross-entropy loss. The 5-layer structure of the HED network is retained, but the inputs to the fusion layer are changed: the fine-detail outputs of the first two layers are removed, and only the side outputs of layers 3, 4, and 5 are kept as fusion inputs. The general fused prediction

    Ŷ_fuse = σ( Σ_{m=1}^{M} h_m Â_side^(m) )    (4)

therefore becomes:

    Ŷ_fuse = σ( Σ_{m=3}^{5} h_m Â_side^(m) )    (5)
meanwhile, when the loss function is calculated, the cost of the first 5 layers is not considered, and the loss of the last fusion layer is independently output as the last loss value, so that the edge is finer. And finally, the network carries out iterative optimization on the network internal parameters and the fusion weights by using a gradient descent method according to the fusion cost:
(W,w,h) * =arg min(L fuse (W,w,h)) (6)
when the image X is input, the final unified output can be obtained by further aggregating these generated edge maps.
(3) Eight-point detection using image processing: the edge probability map output by the model is post-processed to locate the required eight feature points. The Hough line transform is used to find all straight lines in the edge probability map, and line clustering reduces them to 4 edge lines and 4 structured-light centerlines. Which lines are structured light and which are box edges is determined from their spatial layout; line equations are then established and the intersections of the structured-light centerlines with the edge lines are solved, yielding the 8 feature points.
Compared with the prior art, the invention has the following advantages:
existing methods often have significant redundancy in terms of representation and computational complexity. The proposed overall nested network is a relatively simple variant, capable of generating predictions from multiple scales. The method has the advantages of low algorithm complexity, strong robustness, high stability, good adaptation to complex environments and the like. The method can conveniently expand the recognition of the edges of the box body which are not learned, greatly reduces the dependence of the traditional image processing mode on the shooting fixed mode, well solves the problems of poor practical applicability and limited generality caused by the fact that the traditional image processing method has too many limiting conditions on the scene and the influence of environmental noise on the processing result is serious, and can be widely applied to more specific occasions.
Drawings
Fig. 1 is a schematic diagram of a box carrying line structured light for box volume measurement.
Fig. 2 shows an original image from the training dataset and its ground-truth map.
Fig. 3 is a diagram of the fully convolutional neural network architecture.
Fig. 4 is a flowchart of the overall process.
Fig. 5 is an experimental image of a box with low contrast.
Fig. 6 is an experimental image of a box under poor ambient light.
Fig. 7 is an experimental image of a box whose surface sealing tape reflects strongly.
Fig. 8 is box diagram 1 of the feature-point error analysis.
Fig. 9 is box diagram 2 of the feature-point error analysis.
Fig. 10 is an experimental image of boxes with parallel-line structured light and cross-shaped structured light.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the attached drawing figures:
a flowchart of the whole process of the line structure light center line and box edge detection method based on deep learning is shown in fig. 4. The specific operation steps are as follows:
1) Dataset creation and annotation:
data enhancement has proven to be a key technology in deep networks. The original acquired pictures are about 6000, the pictures are preprocessed, and the data set is expanded to 5 ten thousand. Considering that the angle of 45 degrees, which is formed by fixing line structured light and a camera, can be offset in angle due to the difference among a plurality of devices, rotating an original picture by plus or minus 5 degrees, 10 degrees and 15 degrees to form 6 different angles, and cutting out the largest rectangle in a rotating image on the premise of ensuring that 8 characteristic points are all in the picture; the data set is further expanded by adding ambient noise.
To achieve accurate localization in complex environments, a large number of pictures of stacked boxes were placed in the dataset. Edge detection demands high-precision edge-pixel localization, but the boxes in the original images are not geometrically perfect cuboids, and cartons have irregular edges to varying degrees; the edges in the derived ground-truth maps are therefore drawn 5 pixels wide so that the characteristic edges and structured-light centerlines of the original images can be learned completely. To address sample complexity, box pictures taken under a wide variety of conditions were added, ensuring the breadth of the dataset.
2) Network construction and optimization:
the network framework is implemented using a common tensorf low library. In the system, the entire network is initialized by using a pre-trained VGG-16 network model. The super parameters used by the network model include: batch-size (9), learning rate (0.001), weight decay (0.0002) training iteration number 100005. Because the nested multi-scale frame is insensitive to the input image scale, all image sizes are adjusted to 512×512 to increase the quality of the original picture as much as possible under the premise of reducing the use of GPU memory and efficient batch processing.
The final results can be seen in Figs. 5, 6, and 7. The result maps show that the target box is still well separated in a complex environment, even among stacks of boxes with similar appearance. Fig. 5 shows that when the box color resembles the surroundings, so that contrast is low and the edges are hard to distinguish, the box edges are still recognized completely. Fig. 6 shows that in poor lighting, where even the human eye struggles to distinguish box edges in a multi-box stack, the derived edge probability map adapts well. Fig. 7 shows complex cases in which reflection or absorption of light by the packaging tape blurs and scatters the structured light; the output edge probability map shows no significant impact on centerline identification.
3) Eight point detection is performed using an image processing method:
and performing binarization operation on the generated edge probability map. Hough transform is performed on the binarized image of the box image, all straight lines are found, and the coordinates in the polar coordinate system are obtained, which is set as (ρ i ,θ i ). Since the original edges are mostly continuous, the operation of connecting line segments can be omitted, but the edge width of the edge probability map is more than one pixel, the Hough transformation can solve a plurality of straight lines, so that the straight lines are required to be clustered, a threshold value is set, the straight lines in the threshold value range are classified, and therefore, the equation of the straight lines under the pixel coordinate system can be obtained:
y=k 1 x+b 1 (8)
the straight line equation of other 7 edge lines under the pixel coordinate system can be obtained by the same method. Depending on the punctuation sequence at the beginning of the fixation, it may be known from the spatial coordinates that the found straight line corresponds to the line structured light center line or the box edge line of the original picture, respectively. According to the intersection of the two straight lines, the coordinates of 8 feature points can be obtained finally.
Embodiment one:
two test pictures are selected as samples, corresponding to fig. 8 and 9, pixel-level coordinate error analysis is performed on the original pictures and the test results, the picture resolution is 1944×2592, eight feature point coordinate extraction is performed as shown in fig. 8 and 9, the output edge probability map is set to 512×512, and the obtained 8 point coordinates are subjected to coordinate conversion to obtain the following error analysis table (corresponding to table 1 in fig. 8 and table 2 in fig. 9).
Table 1. Box measurement results and error analysis 1 in a complex environment

Point   | Original coordinates | Measured coordinates | Transformed coordinates | Relative error
Point 1 | (1118,336)  | (220,88)  | (1114,334)  | (4,2)
Point 2 | (1809,1164) | (357,307) | (1807,1165) | (2,1)
Point 3 | (862,583)   | (170,154) | (860,584)   | (2,1)
Point 4 | (1523,1405) | (301,370) | (1524,1405) | (1,0)
Point 5 | (1604,413)  | (316,108) | (1599,410)  | (5,3)
Point 6 | (783,1107)  | (155,291) | (785,1105)  | (2,2)
Point 7 | (1863,715)  | (368,189) | (1863,718)  | (0,3)
Point 8 | (1041,1426) | (205,375) | (1038,1424) | (3,2)
Table 2. Box measurement results and error analysis 2 in a complex environment

Point   | Original coordinates | Measured coordinates | Transformed coordinates | Relative error
Point 1 | (965,350)   | (190,91)  | (962,346)   | (3,4)
Point 2 | (1746,1018) | (345,267) | (1747,1014) | (1,4)
Point 3 | (689,700)   | (136,184) | (689,699)   | (0,1)
Point 4 | (1450,1364) | (286,358) | (1448,1360) | (2,4)
Point 5 | (1520,232)  | (300,60)  | (1518,228)  | (2,4)
Point 6 | (799,984)   | (158,258) | (800,980)   | (1,4)
Point 7 | (1863,512)  | (367,135) | (1858,513)  | (5,1)
Point 8 | (1119,1239) | (220,326) | (1114,1238) | (5,1)
As the two tables show, allowing for manual-annotation error, the relative error between the eight point coordinates obtained by the method and the manually annotated ones stays within 5 pixels. Considering that the box edges in the original images have no exactly known coordinates, this result fully meets the accuracy required for locating the eight feature points.
Embodiment two:
the line structure light on the box body in the training data set is in a groined shape, but when other line structure light is on, the model can be accurately identified. As shown in fig. 10, the pictures used for the test were a box with parallel line structured light and a box with cross line structured light. The training model can also identify the box body diagrams of other line structure light shapes, and the invention is simultaneously applicable to the identification of other line-shaped targets.

Claims (2)

1. A deep-learning-based method for detecting line structured light centerlines and box edges, in which a dedicated dataset is created, a deep-learning neural network detects the centerlines and box edges, and an image-processing method finally detects the eight points where the centerlines intersect the box edges, characterized by the following specific steps:
(1) Dataset creation and annotation: acquiring a large number of box pictures carrying the '井'-shaped structured-light centerlines of the required target image, for use in training the subsequent neural network model;
(2) Network construction and optimization: providing a neural network dedicated to detecting the structured-light centerlines and box edges, training it on the specifically created dataset to obtain a model, and passing original pictures through the model to output edge probability maps;
(3) Eight-point detection using image processing: processing the edge probability map output by the model to find the required eight feature points; using the Hough line transform to find all lines in the edge probability map, then obtaining 4 edge lines and 4 structured-light centerlines by line clustering; determining spatially which lines are structured light and which are box edges, and establishing equations to solve for the 8 intersection points of the structured-light centerlines with the edge lines;
in step (3), the generated edge probability map is binarized; a Hough transform is applied to the binarized box image to find all straight lines and obtain their polar coordinates (ρ_i, θ_i); since the original edges are mostly continuous, the segment-joining operation is omitted, but because the edge width in the probability map exceeds one pixel the Hough transform yields multiple lines per edge, so the lines are clustered: a threshold is set and lines within the threshold range are grouped into one class, giving the equation of the line in the pixel coordinate system:
y = k_1 x + b_1; the line equations of the other 7 edge lines in the pixel coordinate system are solved in the same way; from the point-numbering convention fixed at the start, the spatial coordinates reveal whether each detected line corresponds to a structured-light centerline or a box edge line of the original picture; the coordinates of the 8 feature points are finally obtained from the intersections of the corresponding pairs of lines.
2. The deep-learning-based line structured light centerline and box edge detection method according to claim 1, characterized in that in step (1) each image in the dataset is required to contain the eight feature points on the box edges, the box edges possibly being incomplete; the eight intersection points of the structured light with the box edges are calibrated and numbered 1 to 8, with each pair of points required to be the endpoints of one structured-light line, taken in a consistent direction; finally, an edge map of the box edges and the four structured-light centerlines is drawn from the eight points and used as the ground-truth map for training.
CN201910426770.5A 2019-05-22 2019-05-22 Line structure light center line and box edge detection method based on deep learning Active CN110223310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910426770.5A CN110223310B (en) 2019-05-22 2019-05-22 Line structure light center line and box edge detection method based on deep learning

Publications (2)

Publication Number | Publication Date
CN110223310A | 2019-09-10
CN110223310B | 2023-07-18

Family

ID=67821685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910426770.5A Active CN110223310B (en) 2019-05-22 2019-05-22 Line structure light center line and box edge detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110223310B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689026B (en) * 2019-09-27 2022-06-28 联想(北京)有限公司 Method and device for labeling object in image and electronic equipment
CN110781897B (en) * 2019-10-22 2023-05-02 北京工业大学 Semantic edge detection method based on deep learning
CN113807137B (en) * 2020-06-12 2023-10-10 广州极飞科技股份有限公司 Method, device, farm machine and medium for identifying a planting row center line
CN113566735B (en) * 2021-07-24 2022-08-09 大连理工大学 Laser in-situ measurement method for rocket engine nozzle cooling channel line
CN115329932A (en) * 2022-08-05 2022-11-11 中国民用航空飞行学院 Airplane landing attitude monitoring method based on digital twins

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102829769A (en) * 2012-08-31 2012-12-19 中国人民解放军国防科学技术大学 Method for measuring container position and state on basis of structured light visual sensor
CN103983193A (en) * 2014-06-11 2014-08-13 中国烟草总公司郑州烟草研究院 Three-dimensional detection method applied to size measurement of cigarette packet in cigarette carton
CN105043253A (en) * 2015-06-18 2015-11-11 中国计量学院 Truck side protection guard installation size measurement system based on surface structure light technology and method thereof
CN105574869A (en) * 2015-12-15 2016-05-11 中国北方车辆研究所 Line-structure light strip center line extraction method based on improved Laplacian edge detection
CN107064170A (en) * 2017-04-11 2017-08-18 深圳市深视智能科技有限公司 One kind detection phone housing profile tolerance defect method
CN107680095A (en) * 2017-10-25 2018-02-09 哈尔滨理工大学 The electric line foreign matter detection of unmanned plane image based on template matches and optical flow method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP2887009A1 (en) * 2013-12-23 2015-06-24 Universität Zürich Method for reconstructing a surface using spatially structured light and a dynamic vision sensor

Non-Patent Citations (2)

Title
Holistically-Nested Edge Detection; Saining Xie, Zhuowen Tu; IEEE; 2016-02-18; full text *
Three-dimensional measurement of regular parts using '井'-shaped structured light (用井字结构光对规则部件进行三维测量); 蔡晨, 潘斌, 刘振宁; Journal of Applied Sciences (应用科学学报); 2017-01-31; full text *

Also Published As

Publication number Publication date
CN110223310A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110223310B (en) Line structure light center line and box edge detection method based on deep learning
Wei et al. Toward automatic building footprint delineation from aerial images using CNN and regularization
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN110543837B (en) Visible light airport airplane detection method based on potential target point
Wu et al. Fast aircraft detection in satellite images based on convolutional neural networks
CN107506763B (en) Multi-scale license plate accurate positioning method based on convolutional neural network
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
CN108052966B (en) Remote sensing image scene automatic extraction and classification method based on convolutional neural network
CN104599275B (en) The RGB-D scene understanding methods of imparametrization based on probability graph model
Alidoost et al. A CNN-based approach for automatic building detection and recognition of roof types using a single aerial image
CN105809651B (en) Image significance detection method based on the comparison of edge non-similarity
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN108305260B (en) Method, device and equipment for detecting angular points in image
US9224207B2 (en) Segmentation co-clustering
CN108021890B (en) High-resolution remote sensing image port detection method based on PLSA and BOW
Zhang et al. Road recognition from remote sensing imagery using incremental learning
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN108629286A (en) A kind of remote sensing airport target detection method based on the notable model of subjective perception
CN111898621A (en) Outline shape recognition method
CN105989334A (en) Road detection method based on monocular vision
CN109977899A (en) A kind of training, reasoning and the method and system for increasing New raxa of article identification
CN114332921A (en) Pedestrian detection method based on improved clustering algorithm for Faster R-CNN network
Forczmański et al. Stamps detection and classification using simple features ensemble

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant