CN114257736A - Self-adaptive shooting method for workpieces - Google Patents
- Publication number
- CN114257736A (application CN202111336700.4A)
- Authority
- CN
- China
- Prior art keywords
- workpiece
- image
- camera
- shooting
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
The invention belongs to the technical field of workpiece measurement and relates to a self-adaptive shooting method for workpieces, comprising the following steps: S1, a front module detects the position and size of a workpiece and transmits the detection result to a rear module; S2, the rear module, according to the detection result of the front module, adjusts a third camera to the optimal shooting position by means of a displacement platform. The invention has the following beneficial technical effects: a. because a large spatial distance can be reserved between the front module and the rear module, the rear module is guaranteed enough time to adjust the position of the third camera via the displacement platform, so that the image is accurately focused; b. the displacement platform has a large displacement adjustment range, so the focusing range of the third camera is correspondingly large, a notable advantage over a common zoom lens; c. through pre-detection by the front module, an optimal shooting position can be set for each workpiece on the conveyor belt.
Description
Technical Field
The invention belongs to the technical field of workpiece shooting, and relates to a workpiece self-adaptive shooting method.
Background
At the present stage, on manufacturing lines typified by the automobile industry, parts come in many varieties, processes are complex, and most process links require manual intervention. On the one hand this makes production management difficult and labor costs high; on the other hand, quality problems caused by human factors occur frequently. Increasing the degree of automation of the production process is the best way to address these issues and a goal the manufacturing industry constantly pursues. For the detection of manufacturing defects in automobile parts in particular, manual visual inspection is currently the dominant approach. Considering that computer vision methods based on deep learning have advanced rapidly in recent years and have been shown to exceed human-eye accuracy in applications such as object classification, object detection and face recognition, applying these technologies to defect detection of automobile parts is a natural next step, from which a significant improvement in production efficiency can be expected.
The quality of the image data largely determines the performance of a deep learning algorithm, and the wide variety of part types and shapes places high demands on the image acquisition system. A common zoom lens can focus clearly only within a limited spatial range, far from meeting actual working-condition requirements, so an unconventional camera focusing method that achieves clear focusing over a large range is urgently needed.
Disclosure of Invention
In order to achieve accurately focused shooting of workpieces of different sizes at different positions on a conveyor belt, the following specific technical solution is provided.
A self-adaptive shooting method for a workpiece:
the workpiece self-adaptive shooting method is based on a workpiece shooting system;
the workpiece shooting system comprises: a front module 1 and a rear module 2;
the front module 1 is in communication connection with the rear module 2;
the rear module 2 comprises: a displacement platform 6 and a third camera 5;
the third camera 5 is fixed on the displacement platform 6 and translates and rotates with the displacement platform 6;
the workpiece self-adaptive shooting method comprises the following steps:
S1, the front module 1 detects the position and size of the workpiece 7 and transmits the detection result to the rear module 2;
S2, the rear module 2, according to the detection result of the front module 1, adjusts the third camera 5 to the optimal shooting position by means of the displacement platform 6.
On the basis of the above technical solution, the displacement platform 6 receives the detection result of the front module 1, translates and rotates accordingly, and thereby places the third camera 5 at the optimal shooting position.
On the basis of the above technical solution, the workpiece 7 is located on the conveyor belt 9 and moves with it;
the workpiece 7 first moves with the conveyor belt 9 into the shooting area of the front module 1;
the workpiece 7 then moves with the conveyor belt 9 into the shooting area of the rear module 2.
On the basis of the above technical solution, the front module 1 comprises: a first camera 3, a second camera 4 and a computer 10;
the computer 10 is connected with the first camera 3 and the second camera 4;
the computer 10 is in communication connection with the displacement platform 6;
the first camera 3 is placed vertically and is used for shooting a vertical image of the workpiece 7 from above;
the second camera 4 is placed horizontally and is used for shooting a horizontal image of the workpiece 7;
the computer 10 is configured to: receive the images collected by the first camera 3 and the second camera 4; calculate the size of the workpiece 7 and its position on the conveyor belt 9; from these, calculate the distance and relative position between the third camera 5 and the workpiece 7 when the workpiece 7 reaches the shooting position of the rear module 2; further calculate the translation amount and rotation amount by which the displacement platform 6 needs to be adjusted; and finally send the calculation result to the displacement platform 6.
On the basis of the above technical solution, the method further comprises the following step before step S1:
S0, calibrating the correspondence between pixel coordinate positions in the vertical and horizontal images and actual field-of-view spatial position coordinates.
On the basis of the above technical solution, the specific steps of step S0 are as follows:
S0.1, fixing the spatial positions and shooting parameters of the first camera 3 and the second camera 4 so that each shot captures the same field of view 8, and calibrating the conversion relation between pixel coordinate positions in the vertical and horizontal images and actual field-of-view spatial position coordinates by placing a vertical scale and a horizontal scale in the field of view 8;
S0.2, detecting the pixel coordinate position of the workpiece 7 in the image using a computer vision method, either an image-contrast-elimination-based method or a deep learning target detection algorithm, and converting it into actual field-of-view spatial position coordinates.
On the basis of the above technical solution, the deep learning target detection algorithms include: the YOLO algorithm, the Faster R-CNN algorithm, and the EfficientDet algorithm.
On the basis of the above technical solution, the steps of the image-contrast-elimination-based method are as follows:
S0.2.1, shooting a first reference image when no workpiece 7 is present;
S0.2.2, shooting a second image when the workpiece 7 is present;
S0.2.3, subtracting the gray values of the first reference image and the second image to obtain a result image;
S0.2.4, obtaining the position of the workpiece 7 by detecting the area enclosed by high-gray-value pixels in the result image;
when the first reference image is a vertical image, the second image is the corresponding vertical image;
when the first reference image is a horizontal image, the second image is the corresponding horizontal image.
On the basis of the above technical solution, the following step is performed after step S0.2.3:
S0.2.3.1, the gray value of each pixel in the result image is spread evenly over a plurality of adjacent pixels.
On the basis of the above technical solution, the following step is performed after step S0.2.3:
the gray value of each pixel in the result image is spread evenly over the 9 adjacent pixels.
On the basis of the above technical solution, the following steps are performed after step S0.2.3.1:
setting a threshold; in the result image, the gray values of all pixels whose gray value is below the threshold are set to 0.
The invention has the following beneficial technical effects:
a. Because a large spatial distance can be reserved between the front module 1 and the rear module 2, the rear module 2 is guaranteed enough time to adjust the position of the third camera 5 via the displacement platform 6, so that the image is accurately focused.
b. The displacement platform 6 of the rear module 2 has a large displacement adjustment range, so the focusing range of the third camera 5 is correspondingly large, a notable advantage over a common zoom lens.
c. Through pre-detection by the front module 1, an optimal shooting position can be set for each workpiece on the conveyor belt 9.
Drawings
The invention has the following drawings:
fig. 1 is a schematic perspective view of a workpiece shooting system according to the present application;
FIG. 2 is a schematic diagram of the first reference image (vertical image) according to the present application;
FIG. 3 is a schematic diagram of the second image (vertical image) according to the present application;
FIG. 4 is a schematic diagram of the result image (vertical image) according to the present application.
Reference numerals:
1. front module; 2. rear module; 3. first camera; 4. second camera; 5. third camera; 6. displacement platform; 7. workpiece; 8. field of view; 9. conveyor belt; 10. computer.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples.
a. General description
Because the depth of field of a common camera is limited, while workpiece positions and sizes vary greatly, a camera at a fixed position cannot capture all workpieces in sharp focus. A reasonable approach is to set a different camera position for each workpiece, which requires the assistance of the front module 1. The invention provides the workpiece shooting system (also called the camera system) shown in fig. 1, in which two module systems cooperate: the front module 1 detects the position and size of the workpiece 7, and the rear module 2, according to the detection result of the front module 1, adjusts the third camera 5 to the optimal shooting position by means of the displacement platform 6.
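As a rough illustration of this depth-of-field limitation, the standard thin-lens depth-of-field formulas can be evaluated for a fixed lens; the lens parameters below are illustrative assumptions, not values specified by this application:

```python
def depth_of_field(f_mm, n_stop, coc_mm, subject_mm):
    """Near and far limits of acceptable focus (standard thin-lens formulas)."""
    h = f_mm ** 2 / (n_stop * coc_mm) + f_mm  # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = subject_mm * (h - f_mm) / (h - subject_mm) if subject_mm < h else float("inf")
    return near, far

# Assumed setup: 25 mm lens at f/4, 0.01 mm circle of confusion,
# focused on a workpiece 500 mm away.
near, far = depth_of_field(25.0, 4.0, 0.01, 500.0)
print(f"sharp from {near:.1f} mm to {far:.1f} mm (~{far - near:.0f} mm deep)")
# → roughly 485 mm to 516 mm: a workpiece surface a few centimetres
#   closer or farther is already out of focus
```

With only about three centimetres of acceptable focus under these assumptions, workpieces of widely varying heights and positions cannot all be captured sharply from one fixed pose, which motivates moving the camera itself.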
b. The front module 1 is described as follows:
As shown in fig. 1, the front module 1 acquires images of the workpiece 7 in the vertical and horizontal directions, uses the computer 10 to calculate the size of the workpiece 7 and its position on the conveyor belt 9, and sends the detection result to the rear module 2. The front module 1 comprises two cameras (the first camera 3 and the second camera 4) and the computer 10. The first camera 3 is placed vertically and shoots a vertical image of the workpiece 7 from above; the second camera 4 is placed horizontally and shoots a horizontal image of the workpiece 7. The vertical image locates the workpiece 7 in the plane of the conveyor belt 9, and the horizontal image measures the height of the workpiece 7. The computer 10 receives the images acquired by the two cameras, calculates the position of the workpiece 7 in the images, and computes the translation amount and rotation amount required for the third camera 5.
Before calculating the geometric information of the workpiece 7 by image methods, the correspondence between pixel coordinate positions in the images and actual field-of-view spatial position coordinates must be calibrated. Specifically, the spatial positions and shooting parameters of the first camera 3 and the second camera 4 are first fixed so that each shot captures the same field of view 8; the conversion relation between pixel coordinates in the vertical and horizontal images and actual field-of-view spatial coordinates is then calibrated by placing a vertical scale and a horizontal scale in the field of view 8.
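Assuming, for illustration, that the pixel-to-world relation along each axis is approximately linear for a fixed camera viewing the conveyor plane, the scale-based calibration can be sketched as follows; the pixel and millimetre readings are hypothetical:

```python
import numpy as np

# Hypothetical scale readings from the field of view 8: pixel columns 100
# and 900 line up with the 0 mm and 400 mm marks on the ruler.
px = np.array([100.0, 900.0])
mm = np.array([0.0, 400.0])
scale, offset = np.polyfit(px, mm, 1)  # least-squares line: mm = scale * px + offset

def px_to_mm(x_px):
    """Convert a pixel coordinate to a field-of-view coordinate in mm."""
    return scale * x_px + offset

# A workpiece detected between pixel columns 300 and 620 then measures
width_mm = px_to_mm(620) - px_to_mm(300)
print(width_mm)  # 160.0 (mm), at 0.5 mm per pixel
```

In practice, lens distortion and perspective usually require a full camera calibration (for example a checkerboard-based homography); the linear fit above is only the minimal form of the idea.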
Next, the pixel coordinate positions of the workpiece 7 in the vertical and horizontal images are detected by a computer vision method and converted into actual field-of-view spatial coordinates. The computer vision method may be a traditional CV algorithm or a deep learning method; well-performing deep learning target detection algorithms include YOLO, Faster R-CNN and EfficientDet. A simple method based on image contrast elimination is presented below.
Taking the vertical image of the workpiece 7 as an example, a first reference image is shot when no workpiece 7 is present, as shown in fig. 2.
Then a second image is shot with the workpiece 7 present, as shown in fig. 3.
The acquired images are processed in the computer 10: the gray values of fig. 2 and fig. 3 are subtracted, which eliminates the stationary background (the conveyor belt 9 and the like) and yields the result image shown in fig. 4; the position of the workpiece 7 is then obtained by detecting the area enclosed by the high-gray-value pixels in the result image.
The clean result of fig. 4 exists only under ideal shooting conditions; in practice a certain number of interference pixels appear in the non-workpiece background area. The camera cannot be kept absolutely still and exhibits low-amplitude jitter under environmental noise, causing image jitter; moreover, the lighting when the workpiece 7 is present may differ slightly from the lighting of the reference image, causing image differences. Under these influences, some nonzero gray-value pixels appear in the blank area of fig. 4. To eliminate their adverse effect, one solution is to spread the gray value of each pixel in fig. 4 evenly over a plurality of adjacent pixels (for example, 9 pixels); this lowers the gray values of isolated interference pixels while barely affecting the gray values of the workpiece region. A reasonable threshold is then set, and the gray values of pixels below the threshold are set to 0, yielding an image subtraction result similar to fig. 4, in which the gray region is the workpiece 7.
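A minimal sketch of the whole contrast-elimination procedure on a synthetic image pair might look as follows; spreading over the 3x3 neighbourhood is taken here as one reading of the "9 adjacent pixels", and the threshold value is an assumption:

```python
import numpy as np

def locate_workpiece(reference, image, threshold=30):
    """Contrast elimination: subtract, spread over 3x3, threshold, bound."""
    diff = np.abs(image.astype(np.int16) - reference.astype(np.int16))
    # Spreading each gray value evenly over the 9 adjacent pixels is, in
    # effect, a 3x3 mean filter (edge pixels reuse their border values).
    padded = np.pad(diff, 1, mode="edge")
    h, w = diff.shape
    spread = sum(padded[r:r + h, c:c + w] for r in range(3) for c in range(3)) / 9.0
    mask = spread >= threshold  # isolated interference pixels drop below the threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no workpiece found
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic pair: flat background, then the same background with a bright
# 20x20 block (the "workpiece") and one isolated interference pixel.
ref = np.full((60, 80), 40, dtype=np.uint8)
img = ref.copy()
img[20:40, 30:50] = 200
img[5, 5] = 255  # noise: suppressed, since 215/9 falls below the threshold
print(locate_workpiece(ref, img))  # → (29, 19, 50, 40): the block, grown ~1 px by the smoothing
```

The isolated noise pixel is rejected because its smoothed value drops to about a ninth of its original difference, while the workpiece region keeps nearly its full gray value, matching the behaviour described above.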
c. The description of the rear module 2 is as follows:
The main function of the rear module 2 is to acquire a sharply focused workpiece image: it receives the position and size information of the workpiece 7 sent by the front module 1, and then adjusts the third camera 5 to the optimal shooting position by means of the displacement platform 6. As shown in fig. 1, the main components of the rear module 2 are the third camera 5 and the displacement platform 6. The displacement platform 6 receives the detection result of the front module 1, namely the translation amount and rotation amount by which it needs to be adjusted, and places the third camera 5 at the optimal shooting position, ensuring that a sharp image of the subject is obtained.
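As a simplified illustration of how the detection result could be turned into a platform command (the home pose, the working distance and the omission of any rotation are assumptions of this sketch, not details given by the application):

```python
# Assumed geometry: the platform's home pose holds the third camera 5 at
# CAMERA_HOME looking straight down, and a sharp image is obtained at
# WORKING_DISTANCE above the top face of the workpiece 7.
CAMERA_HOME = (0.0, 0.0, 500.0)   # mm, hypothetical home position
WORKING_DISTANCE = 300.0          # mm, hypothetical sharp-focus distance

def platform_command(centre_x, centre_y, height):
    """Translation (dx, dy, dz) the displacement platform 6 must apply."""
    target = (centre_x, centre_y, height + WORKING_DISTANCE)
    return tuple(t - h for t, h in zip(target, CAMERA_HOME))

# Front module reports: workpiece centred at (120, -40) mm, 150 mm tall.
print(platform_command(120.0, -40.0, 150.0))  # → (120.0, -40.0, -50.0)
```

A taller workpiece yields a smaller (or negative) vertical offset, i.e. the camera backs away, which is exactly the large-range refocusing the displacement platform provides in place of a zoom lens.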
The key technical points of the invention are briefly summarized as follows:
the front module 1 detects the position and size of the workpiece 7, and the rear module 2, according to the detection result of the front module 1, adjusts the position of the third camera 5 using the displacement platform 6, thereby achieving accurately focused, sharp shooting of the workpiece 7.
It should be understood that the foregoing description of embodiments of the present invention is provided for illustration only and is not intended to limit the invention as defined by the appended claims.
Matters not described in detail in this specification are within the knowledge of those skilled in the art.
Claims (10)
1. A self-adaptive shooting method for workpieces, characterized in that: the workpiece self-adaptive shooting method is based on a workpiece shooting system;
the workpiece shooting system comprises: a front module (1) and a rear module (2);
the rear module (2) comprises: a displacement platform (6) and a third camera (5);
the third camera (5) is fixed on the displacement platform (6) and translates and rotates with the displacement platform (6);
the workpiece self-adaptive shooting method comprises the following steps:
S1, the front module (1) detects the position and size of the workpiece (7) and transmits the detection result to the rear module (2);
S2, the rear module (2), according to the detection result of the front module (1), adjusts the third camera (5) to the optimal shooting position by means of the displacement platform (6).
2. The workpiece self-adaptive shooting method according to claim 1, characterized in that: the displacement platform (6) receives the detection result of the front module (1), translates and rotates accordingly, and thereby places the third camera (5) at the optimal shooting position.
3. The workpiece self-adaptive shooting method according to claim 1 or 2, characterized in that: the workpiece (7) is located on the conveyor belt (9) and moves with the conveyor belt (9);
the workpiece (7) first moves with the conveyor belt (9) into the shooting area of the front module (1);
the workpiece (7) then moves with the conveyor belt (9) into the shooting area of the rear module (2).
4. The workpiece self-adaptive shooting method according to claim 1, characterized in that: the front module (1) comprises: a first camera (3), a second camera (4) and a computer (10);
the computer (10) is connected with the first camera (3) and the second camera (4);
the computer (10) is in communication connection with the displacement platform (6);
the first camera (3) is placed vertically and is used for shooting a vertical image of the workpiece (7) from above;
the second camera (4) is placed horizontally and is used for shooting a horizontal image of the workpiece (7);
the computer (10) is configured to: receive the images collected by the first camera (3) and the second camera (4); calculate the size of the workpiece (7) and its position on the conveyor belt (9); from these, calculate the distance and relative position between the third camera (5) and the workpiece (7) when the workpiece (7) reaches the shooting position of the rear module (2); further calculate the translation amount and rotation amount by which the displacement platform (6) needs to be adjusted; and finally send the calculation result to the displacement platform (6).
5. The workpiece self-adaptive shooting method according to claim 4, characterized in that: the method further comprises the following step before step S1:
S0, calibrating the correspondence between pixel coordinate positions in the vertical and horizontal images and actual field-of-view spatial position coordinates.
6. The workpiece self-adaptive shooting method according to claim 5, characterized in that: the specific steps of step S0 are as follows:
S0.1, fixing the spatial positions and shooting parameters of the first camera (3) and the second camera (4) so that each shot captures the same field of view (8), and calibrating the conversion relation between pixel coordinate positions in the vertical and horizontal images and actual field-of-view spatial position coordinates by placing a vertical scale and a horizontal scale in the field of view (8);
S0.2, detecting the pixel coordinate position of the workpiece (7) in the image using a computer vision method, either an image-contrast-elimination-based method or a deep learning target detection algorithm, and converting it into actual field-of-view spatial position coordinates.
7. The workpiece self-adaptive shooting method according to claim 6, characterized in that: the deep learning target detection algorithms include: the YOLO algorithm, the Faster R-CNN algorithm, and the EfficientDet algorithm.
8. The workpiece self-adaptive shooting method according to claim 6, characterized in that: the steps of the image-contrast-elimination-based method are as follows:
S0.2.1, shooting a first reference image when no workpiece (7) is present;
S0.2.2, shooting a second image when the workpiece (7) is present;
S0.2.3, subtracting the gray values of the first reference image and the second image to obtain a result image;
S0.2.4, obtaining the position of the workpiece (7) by detecting the area enclosed by high-gray-value pixels in the result image;
when the first reference image is a vertical image, the second image is the corresponding vertical image;
when the first reference image is a horizontal image, the second image is the corresponding horizontal image.
9. The workpiece self-adaptive shooting method according to claim 8, characterized in that: the following step is performed after step S0.2.3:
S0.2.3.1, the gray value of each pixel in the result image is spread evenly over a plurality of adjacent pixels.
10. The workpiece self-adaptive shooting method according to claim 9, characterized in that: the following steps are performed after step S0.2.3.1:
setting a threshold; in the result image, the gray values of all pixels whose gray value is below the threshold are set to 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111336700.4A CN114257736A (en) | 2021-11-12 | 2021-11-12 | Self-adaptive shooting method for workpieces |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111336700.4A CN114257736A (en) | 2021-11-12 | 2021-11-12 | Self-adaptive shooting method for workpieces |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114257736A true CN114257736A (en) | 2022-03-29 |
Family
ID=80790848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111336700.4A Pending CN114257736A (en) | 2021-11-12 | 2021-11-12 | Self-adaptive shooting method for workpieces |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114257736A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05240805A (en) * | 1992-02-27 | 1993-09-21 | Kawasaki Steel Corp | Surface defect detecting device |
US20020054292A1 (en) * | 2000-08-11 | 2002-05-09 | Orelli Adrian Von | Process and apparatus for the colorimetric measurement of a two-dimensional original |
JP2004239870A (en) * | 2003-02-10 | 2004-08-26 | Seiko Epson Corp | Spatial filter, method and program for generating the same, and method and apparatus for inspecting screen defect |
CN106780473A (en) * | 2016-12-23 | 2017-05-31 | 西安交通大学 | A kind of magnet ring defect multi-vision visual detection method and system |
CN108363508A (en) * | 2018-01-13 | 2018-08-03 | 江南大学 | A kind of Mobile phone touch control screen Mark positioning non-contact vision detection method |
US20180293725A1 (en) * | 2015-12-14 | 2018-10-11 | Nikon-Trimble Co., Ltd. | Defect detection apparatus and program |
CN110450129A (en) * | 2019-07-19 | 2019-11-15 | 五邑大学 | A kind of carrying mode of progression and its transfer robot applied to transfer robot |
CN110609037A (en) * | 2019-07-12 | 2019-12-24 | 北京旷视科技有限公司 | Product defect detection system and method |
- 2021-11-12: CN application CN202111336700.4A filed; published as CN114257736A; status Pending
Non-Patent Citations (4)
Title |
---|
王瑊, 王东成: "Industrial Robot Operation and Programming", Beijing University of Technology Press, 30 November 2019, pages 123-145 *
许怡赦, 冉成科: "Industrial Robot System Integration Technology and Applications", China Machine Press, 31 August 2021, pages 37-39 *
魏烨, 金一, 袁家发 et al.: "A planar two-dimensional dimension measurement system based on image features", Journal of Machine Design, vol. 33, no. 12, pages 1-5 *
龚幼平: "A binocular stereo positioning algorithm in a brake disc loading system", Instrumentation Analysis and Monitoring, pages 17-19 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115165737A (en) * | 2022-06-24 | 2022-10-11 | 群策精密金属(苏州)有限公司 | Visual detection device and detection method thereof |
CN115589534A (en) * | 2022-09-09 | 2023-01-10 | 广州市斯睿特智能科技有限公司 | Following type vehicle detection item picture acquisition device and method |
CN115589534B (en) * | 2022-09-09 | 2023-09-08 | 广州市斯睿特智能科技有限公司 | Following type vehicle detection item picture acquisition device and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110497187B (en) | Sun flower pattern assembly system based on visual guidance | |
CN114257736A (en) | Self-adaptive shooting method for workpieces | |
CN111721259B (en) | Underwater robot recovery positioning method based on binocular vision | |
CN111539582B (en) | Image processing-based steel plate cutting planning device and method | |
CN109856164B (en) | Optimization device for acquiring large-range images by machine vision and detection method thereof | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
CN111189387A (en) | Industrial part size detection method based on machine vision | |
CN107688028B (en) | Laser additive manufacturing lap joint rate online monitoring method | |
CN113269762B (en) | Screen defect detection method, system and computer storage medium | |
CN110189375B (en) | Image target identification method based on monocular vision measurement | |
US6490369B1 (en) | Method of viewing and identifying a part for a robot manipulator | |
CN109916914B (en) | Product defect detection method and device | |
CN113324478A (en) | Center extraction method of line structured light and three-dimensional measurement method of forge piece | |
CN109035214A (en) | A kind of industrial robot material shapes recognition methods | |
CN111784655A (en) | Underwater robot recovery positioning method | |
CN113822810A (en) | Method for positioning workpiece in three-dimensional space based on machine vision | |
CN115384052A (en) | Intelligent laminating machine automatic control system | |
TWI383690B (en) | Method for image processing | |
CN113160162A (en) | Hole recognition method and device applied to workpiece and hole processing equipment | |
CN115830018B (en) | Carbon block detection method and system based on deep learning and binocular vision | |
CN115880296B (en) | Machine vision-based prefabricated part quality detection method and device | |
CN116883498A (en) | Visual cooperation target feature point positioning method based on gray centroid extraction algorithm | |
CN112338898A (en) | Image processing method and device of object sorting system and object sorting system | |
CN114932292B (en) | Narrow-gap passive vision weld joint tracking method and system | |
CN115457459A (en) | Machine vision system capable of effectively improving detection efficiency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||