CN110490887B - 3D vision-based method for quickly identifying and positioning edges of rectangular packages - Google Patents
- Publication number: CN110490887B (application CN201910561835.7A)
- Authority: CN (China)
- Prior art keywords: point, points, boundary, positioning, rectangular
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T5/70
- G06T7/13 — Image analysis; Segmentation; Edge detection
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10012 — Stereo images
- G06T2207/20061 — Hough transform
Abstract
The invention discloses a 3D vision-based method for quickly identifying and positioning the edges of rectangular packages. The method comprises the steps of acquiring image data, filtering interference points, sorting boundary points, smoothing filtering, coarsely positioning corner points, finely positioning corner points, and identifying rectangles. Through these steps, the boundary of a rectangular parcel can be effectively extracted from the scanned image data with high identification precision, unaffected by the size of the parcel; precision is achieved without resorting to higher camera hardware resolution, which effectively reduces cost. In addition, the bidirectional mutual-nearest-neighbor sorting method and the corner-point detection improve the stability and accuracy of visually positioning rectangular packages in a complex logistics environment, reduce the workload of operators, and improve the maintainability of the system.
Description
Technical Field
The invention relates to the technical field of logistics, in particular to a method for quickly identifying and positioning edges of rectangular packages based on 3D vision.
Background
With the rapid development of logistics technology, packages are grasped throughout the whole logistics chain, and accurate positioning of many kinds of parcels is increasingly required during the unstacking, stacking, and real-time moving-scene sorting of both soft and hard packages. Traditional measuring methods accordingly find it more and more difficult to meet these requirements for variety and high accuracy.
In the prior art, rectangular parcels are numerous, but practical and effective solutions for quickly and accurately identifying and positioning them are lacking, which hampers improvement of the sorting efficiency of rectangular parcels.
Disclosure of Invention
The invention mainly addresses the technical problem of providing a 3D vision-based method for quickly identifying and positioning the edges of rectangular packages, solving the prior art's problems of low accuracy, lack of a systematic solution, and low efficiency in the automatic sorting of rectangular packages.
To solve this technical problem, the invention adopts a 3D vision-based method for quickly identifying and positioning rectangular parcel edges, comprising the following steps: acquiring image data, in which the stacked rectangular packages are scanned by an image-scanning device to obtain a three-dimensional image data set of the rectangular packages; filtering interference points, in which an original boundary-point data set is extracted from the three-dimensional image data set and interference points are filtered out to obtain the boundary-point data set of the rectangular parcels; sorting boundary points, in which one point of the boundary-point data set is selected as a first starting point, the point closest to it is taken as its neighbor, that neighbor then serves as a second starting point, the closest remaining point other than the first starting point becomes the third starting point, and so on until all boundary points in the data set are numbered and sorted; smoothing filtering, in which neighborhood filtering is applied to all boundary points in the data set in numbering order; coarse corner positioning, in which threshold intervals are set along the sorted and smoothed boundary points, two boundary points are found in the clockwise and anticlockwise directions respectively to form vector included angles, and candidate corner positions are screened out by judging the size of those angles; fine corner positioning, in which the sorted boundary-point data set is divided into N sub-data sets using the coordinates of the N inflection points from coarse positioning, a spatial straight line is fitted to each sub-data set by Hough transformation, and the intersection point of each pair of adjacent spatial lines is computed as a precisely positioned inflection point; and rectangle identification, in which, once all inflection points of the boundary-point data are obtained, the rectangular boundary corresponding to a rectangular parcel is identified from the boundary-point data using the rectangle-identification constraints.
In another embodiment of the method for quickly identifying and positioning rectangular parcel edges based on 3D vision, the interference-point filtering step comprises: first, in the original boundary-point data set, counting the number M of points whose distance to the current original boundary point P(x, y, z) is smaller than a threshold T, and taking M as the energy value of P; then computing a corresponding energy value for each original boundary point in the same way, giving the energy distribution of all original boundary points; and finally computing the average energy value of all original boundary points and removing from the data set, as interference points, all points whose energy value is lower than 2/3 of the average.
In another embodiment of the 3D vision-based method for quickly identifying and locating rectangular parcel edges, in the boundary-point sorting step, a distance threshold is additionally set when selecting the nearest neighbor: the neighboring distance must be smaller than the threshold, and a point whose distance is greater than or equal to the threshold cannot serve as a neighbor.
In another embodiment of the 3D vision-based method for quickly identifying and positioning rectangular parcel edges, the boundary-point sorting step further includes a reverse-order search for neighbors. When no boundary point in the data set satisfies the requirement that the point-to-point distance be smaller than the distance threshold, the reverse-order search starts: taking that boundary point as a new starting point, the point closest to the already-sorted ordered point set that satisfies the distance threshold is found and added to the ordered set, after which the forward-order search resumes.
In another embodiment of the invention, in the corner-point fine-positioning step, a unique straight line can be represented by the distance from the coordinate origin to the line and the included angle between the line and the x-axis, as follows:
ρ=xcosθ+ysinθ
in the above formula, (x, y) is the coordinate of a point on a straight line, ρ is the distance from the origin of coordinates to the straight line, and θ is the angle between the straight line and the x-axis.
In another embodiment of the 3D vision-based method for rapidly identifying and positioning the edges of rectangular packages, in the corner-point fine-positioning step, the boundary-point data set with sorted boundary points is used as the input of the Hough transformation, and the middle 2/3 of each of the N sub-data sets is intercepted as the input for fitting the spatial straight line.
In another embodiment of the 3D vision-based method for quickly identifying and positioning the edges of rectangular parcels, the constraints of rectangle identification include: (1) four consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is confirmed; (2) three consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is determined; (3) two consecutive right angles appear with consistent anticlockwise turning directions, and a potential rectangle is determined; (4) only a single right angle appears, and no rectangle can be identified.
The invention has the following beneficial effects. The invention discloses a 3D vision-based method for quickly identifying and positioning the edges of rectangular packages. Through the above steps, the boundary of a rectangular parcel can be effectively extracted from the scanned image data with high identification precision, unaffected by the size of the parcel; precision is achieved without resorting to higher camera hardware resolution, which effectively reduces cost. In addition, the bidirectional mutual-nearest-neighbor sorting method and the corner-point detection improve the stability and accuracy of visually positioning rectangular packages in a complex logistics environment, reduce the workload of operators, and improve the maintainability of the system.
Drawings
FIG. 1 is a flow chart of an embodiment of a 3D vision based method for quickly identifying and locating edges of rectangular parcels according to the present invention;
FIG. 2 is a diagram illustrating the effect of filtering out interference points in another embodiment of the 3D vision-based method for quickly identifying and positioning rectangular parcel edges according to the present invention;
FIG. 3 is a diagram illustrating the effect of sorting boundary points in another embodiment of the 3D vision-based method for quickly identifying and positioning edges of rectangular packages according to the present invention;
FIG. 4 is an enlarged view of a portion of FIG. 3;
FIG. 5 is a graph of the effect of fitting a straight line in another embodiment of the 3D vision-based method for quickly identifying and positioning edges of rectangular packages according to the present invention;
FIG. 6 is a diagram illustrating the effect of rectangle identification in another embodiment of the 3D vision-based method for quickly identifying and positioning edges of rectangular packages according to the present invention.
Detailed Description
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It is to be noted that, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows a flowchart of an embodiment of the 3D vision-based method for quickly identifying and positioning edges of rectangular packages according to the present invention. In fig. 1, the method includes:
step S101: acquiring image data: the stacked rectangular packages are scanned by an image-scanning device to obtain a three-dimensional image data set of the rectangular packages;
step S102: filtering interference points: an original boundary-point data set is extracted from the three-dimensional image data set and interference points are filtered out to obtain the boundary-point data set of the rectangular parcels;
step S103: sorting boundary points: one point of the boundary-point data set is selected as a first starting point, the point closest to it is taken as its neighbor, that neighbor then serves as a second starting point, the closest remaining point other than the first starting point becomes the third starting point, and so on until all boundary points in the data set are numbered and sorted;
step S104: smoothing filtering: neighborhood filtering is applied to all boundary points in the data set in numbering order;
step S105: coarse corner positioning: threshold intervals are set along the sorted and smoothed boundary points, two boundary points are found in the clockwise and anticlockwise directions respectively to form vector included angles, and candidate corner positions are screened out by judging the size of those angles;
step S106: fine corner positioning: the sorted boundary-point data set is divided into N sub-data sets using the coordinates of the N inflection points from coarse positioning, a spatial straight line is fitted to each sub-data set by Hough transformation, and the intersection point of each pair of adjacent spatial lines is computed as a precisely positioned inflection point;
step S107: rectangle identification: once all inflection points of the boundary-point data are obtained, the rectangular boundary corresponding to a rectangular parcel is identified from the boundary-point data using the rectangle-identification constraints.
Preferably, in step S102, the interference points are filtered as follows: first, in the original boundary-point data set, the number M of points whose distance to the current original boundary point P(x, y, z) is smaller than a threshold T is counted and taken as the energy value of P; then a corresponding energy value is computed for each original boundary point in the same way, giving the energy distribution of all original boundary points; finally, the average energy value of all original boundary points is computed, and points whose energy value is lower than 2/3 of the average are removed from the data set as interference points. FIG. 2 shows the boundary-point data set JS1 of a plurality of rectangular parcels obtained after interference-point filtering.
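The energy-based filtering of step S102 can be sketched as follows. This is a minimal NumPy sketch under the patent's description; the function name, the pairwise-distance implementation, and the 2/3 keep-ratio default are illustrative choices, not taken from the patent text.

```python
import numpy as np

def filter_interference_points(points, t, keep_ratio=2/3):
    """Density-based interference-point filter (sketch of step S102).

    The 'energy' of a boundary point is the number of other points lying
    within distance t of it; points whose energy falls below keep_ratio
    (2/3 in the patent) of the mean energy are discarded.
    `points` is an (N, 3) array of boundary points.
    """
    # Pairwise Euclidean distances between all boundary points.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    # Energy M of each point: neighbors closer than t, excluding itself.
    energy = (dists < t).sum(axis=1) - 1
    return points[energy >= keep_ratio * energy.mean()]
```

A dense cluster keeps its points while an isolated point, whose energy is zero, is removed; the O(N²) distance matrix is fine for boundary-sized point sets but a k-d tree would suit larger clouds.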
In step S103, the first starting point is A(x1, y1, z1), and the point closest to it is the neighbor B(x2, y2, z2); the neighboring distance between them is d(A, B) = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2). Preferably, a distance threshold is also set when selecting the nearest neighbor: the neighboring distance must be smaller than the threshold, and a point whose distance is greater than or equal to the threshold cannot serve as a neighbor. As a result, some boundary points might never be numbered and sorted; therefore step S103 preferably also performs a reverse-order search for neighbors. That is, when no boundary point in the data set satisfies the requirement that the point-to-point distance be smaller than the distance threshold, the reverse-order search starts: taking that boundary point as a new starting point, the point closest to the already-sorted ordered point set that satisfies the distance threshold is found and added to the ordered set, after which the forward-order search resumes. In this way, all points are guaranteed to be ordered with minimal distances. FIG. 3 shows the numbered ordering of each boundary point, and FIG. 4 shows a partial enlargement of FIG. 3.
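The forward and reverse-order nearest-neighbor search of step S103 can be sketched as follows. This is a simplified sketch: the function name and greedy structure are illustrative, and the reverse branch here picks the point closest to the ordered set without re-checking the distance threshold, which the patent additionally requires.

```python
import numpy as np

def order_boundary_points(points, d_max):
    """Greedy mutual-nearest-neighbor ordering (sketch of step S103).

    Starting from point 0, repeatedly append the nearest unused point
    whose distance is below d_max; when no such point exists in forward
    order, fall back to a reverse-order search: take the unused point
    closest to the already-ordered set and continue from there.
    `points` is an (N, 3) array; returns the ordered index list.
    """
    remaining = set(range(1, len(points)))
    ordered = [0]
    current = 0
    while remaining:
        # Forward search: nearest remaining point to the current one.
        cand = min(remaining,
                   key=lambda j: np.linalg.norm(points[j] - points[current]))
        if np.linalg.norm(points[cand] - points[current]) >= d_max:
            # Reverse search: nearest remaining point to the ordered set
            # (the patent also applies the d_max threshold here).
            cand = min(remaining,
                       key=lambda j: min(np.linalg.norm(points[j] - points[i])
                                         for i in ordered))
        ordered.append(cand)
        remaining.remove(cand)
        current = cand
    return ordered
```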
Preferably, in step S104, for each boundary point P(x, y, z), Gaussian smoothing is performed on P using the M boundary points adjacent to it in the sorted order.
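The neighborhood smoothing of step S104 can be sketched as a one-dimensional Gaussian filter along the ordered boundary; the window half-width m, the sigma value, and the end-of-sequence clipping are illustrative assumptions, since the patent only states that M sorted neighbors are used.

```python
import numpy as np

def gaussian_smooth_ordered(points, m=2, sigma=1.0):
    """1-D Gaussian smoothing along the ordered boundary (sketch of S104).

    Each point is replaced by a Gaussian-weighted average of its m
    neighbors on each side in the ordering; the window is clipped at the
    ends of the sequence. `points` is an (N, 3) array in sorted order.
    """
    offsets = np.arange(-m, m + 1)
    weights = np.exp(-(offsets ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()  # normalize so a constant signal is preserved
    n = len(points)
    smoothed = np.empty_like(points, dtype=float)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)  # clip the window at the ends
        smoothed[i] = (points[idx] * weights[:, None]).sum(axis=0)
    return smoothed
```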
After the above steps, coarse corner positioning can be performed; its purpose is to find the inflection-point positions of one or several rectangular parcel boxes under complex conditions. Beforehand, however, the acquired original boundary-point data must be filtered, sorted, and smoothed. Outlier filtering eliminates the influence of interference points on boundary-point sorting; and since smoothing operates on boundary-point data in a local neighborhood, the boundary points must be sorted before Gaussian smoothing is performed. Gaussian smoothing, in turn, reduces erroneous corner (inflection-point) identifications caused by large data fluctuations between adjacent boundary points during coarse inflection-point positioning.
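The vector-angle screening of step S105 can be sketched as follows. The window size k and the 135-degree acceptance threshold are illustrative choices, not values given in the patent, which only specifies that two boundary points are taken in the clockwise and anticlockwise directions and the included angle is judged.

```python
import numpy as np

def coarse_corners(points, k=5, angle_deg=135.0):
    """Vector-angle corner screening (sketch of step S105).

    For each boundary point i in the smoothed ordering, form vectors to
    the points k positions away in the clockwise and anticlockwise
    directions and mark i as a candidate corner when the included angle
    between them is below the threshold. `points` is an (N, 3) array of
    ordered boundary points forming a closed loop.
    """
    n = len(points)
    corners = []
    for i in range(n):
        v1 = points[(i - k) % n] - points[i]
        v2 = points[(i + k) % n] - points[i]
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle < angle_deg:  # straight segments give angles near 180 deg
            corners.append(i)
    return corners
```

On a straight segment the two vectors point in opposite directions (angle near 180 degrees), while at a right-angle corner the angle drops toward 90 degrees, so thresholding separates the two cases.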
The precision of this coarse positioning is poor and does not meet the standard for field use. The corner positions therefore need to be positioned accurately, which this method achieves by Hough-transformation straight-line fitting.
To this end, in step S106, the basic principle of the Hough transform is to use the duality of points and lines to map a given curve of the original image space, via its curve expression, to a single point of the parameter space. The problem of detecting a given curve in the original image is thus converted into the problem of searching for a peak in the parameter space; that is, detecting a global characteristic is converted into detecting a local characteristic.
The distance from the coordinate origin to the straight line and the included angle between the straight line and the x axis can be used for uniquely representing a straight line, and the formula is as follows:
ρ=xcosθ+ysinθ
in the above formula, (x, y) is the coordinate of a point on a straight line, ρ is the distance from the origin of the coordinate to the straight line, and θ is the angle between the straight line and the x-axis.
Preferably, step S106 also involves two aspects. First, the sorted boundary data point set is used as the input of the Hough transformation, because the smoothed data has lost precision, while the data from before interference-point filtering is easily disturbed. Second, the middle 2/3 of each of the N data sub-sets is intercepted as the input for fitting the spatial straight line, because the precision of coarse positioning is not sufficient to locate the inflection points accurately, so data close to an inflection point cannot be used as input for line fitting. FIG. 5 shows a spatial line fitted with the Hough transform.
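The voting scheme behind the line fit can be sketched as follows. The patent fits spatial (3D) lines; for brevity this sketch shows the planar form of the rho = x*cos(theta) + y*sin(theta) parameterization given above, and the angular and rho resolutions, like the function name, are illustrative assumptions.

```python
import numpy as np

def hough_line_fit(points_2d, n_theta=180, rho_res=0.05):
    """Peak-picking Hough transform line fit (planar sketch for step S106).

    Each point votes for every (theta, rho) pair consistent with
    rho = x*cos(theta) + y*sin(theta); the accumulator peak gives the
    dominant line through the points.
    """
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    votes = {}
    for x, y in points_2d:
        for ti, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            key = (ti, round(rho / rho_res))  # quantize rho into bins
            votes[key] = votes.get(key, 0) + 1
    (ti, rk), _ = max(votes.items(), key=lambda kv: kv[1])
    return thetas[ti], rk * rho_res  # (theta, rho) of the best line
```

Because the fit is a vote over all points rather than a least-squares solve, a few residual outliers near the intercepted sub-set ends do not drag the line away from the true edge.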
Preferably, in step S107, the constraints of rectangle identification include:
(1) four consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is confirmed;
(2) three consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is determined;
(3) two consecutive right angles appear with consistent anticlockwise turning directions, and a potential rectangle is determined;
(4) only a single right angle appears, and no rectangle can be identified.
Preferably, under the above four constraints, rectangles, potential rectangles, and the corner points of excluded non-rectangles in the boundary data point set can be identified in turn. Whether an inflection point is a right angle is judged by computing the included angle between the two space vectors; whether the turning directions between adjacent inflection points are consistently anticlockwise is judged by computing the cross product of the two vectors forming each inflection point and checking that it forms an acute angle with the direction vector of the boundary data. FIG. 6 shows the rectangle corresponding to the finally identified parcel boundary.
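The constraint check of step S107 can be sketched as follows, assuming a closed polygon in the z = 0 plane whose candidate corners come from the fine positioning. The angular tolerance is illustrative, and the sketch counts qualifying corners rather than enforcing a strict consecutive run as the patent's wording suggests.

```python
import numpy as np

def classify_polygon(corners, tol_deg=5.0):
    """Rectangle-identification constraint check (sketch of step S107).

    For each corner, test whether the angle between the incoming and
    outgoing edge vectors is a right angle (within tol_deg) and whether
    the turn is anticlockwise (positive z-component of their cross
    product), then apply the patent's rules: >= 3 such corners confirm a
    rectangle, 2 give a potential rectangle, fewer rule it out.
    `corners` is an (N, 3) array of corner points in boundary order.
    """
    n = len(corners)
    count = 0
    for i in range(n):
        v1 = corners[i] - corners[(i - 1) % n]        # edge into corner i
        v2 = corners[(i + 1) % n] - corners[i]        # edge out of corner i
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        is_right = abs(angle - 90.0) < tol_deg
        is_ccw = np.cross(v1, v2)[2] > 0              # anticlockwise turn
        count += is_right and is_ccw
    if count >= 3:
        return "rectangle"
    if count == 2:
        return "potential rectangle"
    return "not a rectangle"
```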
In summary, the 3D vision-based method for quickly identifying and positioning the edges of rectangular packages comprises acquiring image data, filtering interference points, sorting boundary points, smoothing filtering, coarsely positioning corner points, finely positioning corner points, and identifying rectangles. Through these steps, the boundary of a rectangular parcel can be effectively extracted from the scanned image data with high identification precision, unaffected by the size of the parcel; precision is achieved without resorting to higher camera hardware resolution, which effectively reduces cost. In addition, the bidirectional mutual-nearest-neighbor sorting method and the corner-point detection improve the stability and accuracy of visually positioning rectangular packages in a complex logistics environment, reduce the workload of operators, and improve the maintainability of the system.
The above description is only an embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (4)
1. A3D vision-based method for quickly identifying and positioning rectangular parcel edges is characterized by comprising the following steps:
acquiring image data, and performing image scanning on the stacked rectangular packages through image scanning equipment to obtain a three-dimensional image data set of the rectangular packages;
filtering interference points, namely extracting an original boundary point data set of the three-dimensional image data set, and filtering the interference points to obtain a rectangular wrapped boundary point data set;
sorting boundary points, namely selecting one point of the boundary-point data set as a first starting point, taking the point closest to it as its neighbor, then using that neighbor as a second starting point, taking the closest remaining point other than the first starting point as the third starting point, and so on until all boundary points in the boundary-point data set are numbered and sorted;
smoothing filtering, namely performing neighborhood filtering on all boundary points in the boundary point data set according to the numbering sequence;
coarsely positioning the corner points, namely setting threshold intervals along the sorted and smoothed boundary points, finding two boundary points in the clockwise and anticlockwise directions respectively to form vector included angles, and screening out candidate corner-point positions by judging the size of those angles;
finely positioning the corner points, namely dividing the sorted boundary-point data set into N sub-data sets using the coordinates of the N inflection points obtained in coarse positioning, fitting a spatial straight line to each sub-data set by Hough transformation, and then computing the intersection point of each pair of adjacent spatial lines as a precisely positioned inflection point;
identifying rectangles, namely identifying rectangular boundaries corresponding to rectangular packages from the boundary point data by using constraint conditions of rectangular identification after all inflection points of the boundary point data are obtained;
in the interference-point filtering step: first, in the original boundary-point data set, the number M of points whose distance to the current original boundary point P(x, y, z) is smaller than a threshold T is counted and taken as the energy value of the current original boundary point P; then a corresponding energy value is computed for each original boundary point in the same way, giving the energy distribution of all original boundary points; finally, the average energy value of all original boundary points is computed, and points whose energy value is lower than 2/3 of the average are removed from the original boundary-point data set as interference points;
in the boundary-point sorting step, a distance threshold is also set when the nearest neighbor is selected: the neighboring distance must be smaller than the distance threshold, and a point whose distance is greater than or equal to the threshold cannot serve as a neighbor;
the boundary-point sorting step further includes a reverse-order search for neighbors: when no boundary point in the boundary-point data set satisfies the requirement that the point-to-point distance be smaller than the distance threshold, the reverse-order search starts, that boundary point is used as a new starting point, the point closest to the already-sorted ordered point set that satisfies the distance threshold is found and added to the ordered point set, and the forward-order search then resumes.
2. The 3D vision-based method for quickly identifying and positioning the edges of rectangular packages according to claim 1, wherein in the corner-point fine-positioning step a straight line is uniquely represented by the distance from the coordinate origin to the line and the included angle between the line and the x-axis, as follows:
ρ=xcosθ+ysinθ
in the above formula, (x, y) is the coordinate of a point on a straight line, ρ is the distance from the origin of coordinates to the straight line, and θ is the angle between the straight line and the x-axis.
3. The 3D vision-based method for quickly identifying and positioning the edges of rectangular packages according to claim 2, wherein in the corner-point fine-positioning step, the boundary data point set obtained by sorting the boundary points is used as the input of the Hough transformation, and the middle 2/3 of each of the N sub-data sets is intercepted as the input for fitting the spatial straight line.
4. The 3D vision-based method for quickly identifying and positioning the edges of rectangular packages according to claim 1, wherein the constraints of rectangle identification include:
(1) four consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is confirmed;
(2) three consecutive right angles appear with consistent anticlockwise turning directions, and a rectangle is determined;
(3) two consecutive right angles appear with consistent anticlockwise turning directions, and a potential rectangle is determined;
(4) only a single right angle appears, and no rectangle can be identified.
Priority Applications (1)
- CN201910561835.7A (granted as CN110490887B), priority and filing date 2019-06-26: 3D vision-based method for quickly identifying and positioning edges of rectangular packages
Publications (2)
- CN110490887A, published 2019-11-22
- CN110490887B, published 2023-02-28

Family
- ID=68546347

Family Applications (1)
- CN201910561835.7A (CN110490887B), priority and filing date 2019-06-26, status Active

Country Status (1)
- CN: CN110490887B
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114913216A (en) * | 2021-01-29 | 2022-08-16 | 深圳光峰科技股份有限公司 | Method, device, medium and electronic equipment for identifying corner points of graph in image |
CN113267143B (en) * | 2021-06-30 | 2023-08-29 | 三一建筑机器人(西安)研究院有限公司 | Side mold identification method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5616905A (en) * | 1994-02-24 | 1997-04-01 | Kabushiki Kaisha Tec | Two-dimensional code recognition method |
JPH10275213A (en) * | 1997-03-31 | 1998-10-13 | Fuji Photo Film Co Ltd | Radiograph picture irradiating field recognizing method, device therefor, blackening processing method, and device therefor |
CN104952072A (en) * | 2015-06-16 | 2015-09-30 | 华中科技大学 | Rectangle detection method based on genetic algorithm |
CN107516098A (en) * | 2017-07-30 | 2017-12-26 | 华南理工大学 | Object contour 3D information acquisition method based on edge angles |
WO2018107939A1 (en) * | 2016-12-14 | 2018-06-21 | 国家海洋局第二海洋研究所 | Edge completeness-based optimal identification method for image segmentation |
CN109359652A (en) * | 2018-11-23 | 2019-02-19 | 浙江理工大学 | Method for fast automatic extraction of rectangular scanned documents from digital photographs |
Non-Patent Citations (5)
Title |
---|
"A computer vision system to detect 3-D rectangular solids"; K. Rao et al.; Proceedings Third IEEE Workshop on Applications of Computer Vision (WACV '96); 20020806; 27-32 *
"Research on Robot Stacked-Target Recognition, Positioning and Grasping System"; Peng Zelin; China Master's Theses Full-text Database; 20181215 (No. 12, 2018); I138-1489 *
"Research on Manipulator Trajectory Generation for Surface Contamination Detection"; Zhao Jianqiang; China Master's Theses Full-text Database; 20171115 (No. 11, 2017); C040-1 *
"A Rectangle Detection Method Based on Spectral Clustering and Genetic Algorithm"; Fan Hehe; China Master's Theses Full-text Database, Information Science and Technology; 20170605 (No. 06, 2017); I138-1242 *
"Building Boundary Extraction and Regularization Based on LiDAR Point Clouds"; Zhao Xiaoyang et al.; Geospatial Information; 20160721 (No. 07); 88-90 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107622499B (en) | Identification and space positioning method based on target two-dimensional contour model | |
Oehler et al. | Efficient multi-resolution plane segmentation of 3D point clouds | |
CN107958458B (en) | Image segmentation method, image segmentation system and equipment comprising image segmentation system | |
CN112070818A (en) | Robot disordered grabbing method and system based on machine vision and storage medium | |
Tazir et al. | CICP: Cluster Iterative Closest Point for sparse–dense point cloud registration | |
CN107248159A (en) | Metal workpiece defect inspection method based on binocular vision | |
CN106651894B (en) | Automatic spraying system coordinate transformation method based on point cloud and image matching | |
EP1658579A1 (en) | Computer-vision system for classification and spatial localization of bounded 3d-objects | |
CN105139416A (en) | Object identification method based on image information and depth information | |
CN102722731A (en) | Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm | |
CN110490887B (en) | 3D vision-based method for quickly identifying and positioning edges of rectangular packages | |
AU2019222802A1 (en) | High-precision and high-speed positioning label and positioning method for visual servo | |
CN115330819B (en) | Soft package segmentation positioning method, industrial personal computer and robot grabbing system | |
CN113112496B (en) | Sub-pixel shaft part size measurement method based on self-adaptive threshold | |
CN108269274B (en) | Image registration method based on Fourier transform and Hough transform | |
CN112085709B (en) | Image comparison method and device | |
CN110415304B (en) | Vision calibration method and system | |
CN105719306A (en) | Rapid building extraction method from high-resolution remote sensing image | |
CN110047133A (en) | Train boundary extraction method for point cloud data | |
CN107895166B (en) | Method for realizing target robust recognition based on feature descriptor by geometric hash method | |
CN109829502B (en) | Image pair efficient dense matching method facing repeated textures and non-rigid deformation | |
CN106204542A (en) | Visual identification method and system | |
CN107742036B (en) | Automatic shoe sample discharging and processing method | |
WO2023005195A1 (en) | Map data processing method and apparatus, and household appliance and readable storage medium | |
CN102663395A (en) | Straight-line detection method based on adaptive multi-scale fast discrete beamlet transform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||