CN113838144A - Method for positioning object on UV printer based on machine vision and deep learning - Google Patents
- Publication number
- CN113838144A (application CN202111073232.6A)
- Authority
- CN
- China
- Prior art keywords
- article
- image
- workbench
- calibration
- segmentation
- Prior art date: 2021-09-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method for positioning objects on a UV printer based on machine vision and deep learning. The method offers high object positioning accuracy and high printing accuracy, imposes no strict requirements on the type or placement of the objects, and improves the working efficiency and utilization rate of the UV printer.
Description
Technical Field
The invention belongs to the technical field of object positioning, and particularly relates to a method for positioning an object on a UV printer based on machine vision and deep learning.
Background
UV printing is one of the most common and widely applied printing technologies in the printing industry: it prints on almost any material, requires no plate making, delivers finished items immediately, and offers high precision, high speed, economy, and environmental friendliness, so it is used in a wide range of flat-printing scenarios. A UV printer is simple to operate: it inkjet-prints onto the surface of an article placed on the workbench according to a drawing in the companion software. However, the pattern area of the drawing must correspond accurately to the surface area of the article on the workbench; otherwise the pattern will be printed inaccurately on the article surface.
Common industry practice avoids this problem with molds: a grid of molds matching the size of the articles to be printed is fixed on the workbench, and the articles are placed in the grid cells during printing. This mitigates the problem to some extent but also brings drawbacks: with a fixed mold, the UV printer can only print the fixed article types matching that mold, which limits its flexibility.
Disclosure of Invention
The invention aims to solve the above technical problems by providing a method for positioning objects on a UV printer based on machine vision and deep learning.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for positioning an object on a UV printer based on machine vision and deep learning, the method comprising:
s1, calibrating the workbench image acquisition module, the calibration method comprising the following steps:
s101, preparing a calibration board printed with an m × n black-and-white chessboard pattern for calibration;
s102, placing the calibration plate on a workbench according to different positions and inclination angles, and shooting the calibration plate placed each time by using a workbench image acquisition module above the workbench;
s103, performing chessboard detection on all the calibration images and finally calculating the camera's intrinsic matrix, extrinsic matrix, and distortion coefficients, which are collectively called the camera parameters;
s2, capturing an image of the workbench area of the UV printer with the workbench image acquisition module;
s3, after receiving the image sent by the workbench image acquisition module, the workbench image article detection module preprocesses the acquired workbench image and inputs it to an article detection network, which outputs the rectangular position of each article; the module thereby detects the rectangular area in which each article in the image is located;
s4, after receiving the area image of each article, the workbench image article segmentation module preprocesses the article area image and inputs it to an article segmentation network, which outputs an article segmentation mask; the module then extracts the edge contour of the mask and derives from it the minimum bounding rectangle of the segmented area, i.e., the pixel position of each article;
and S5, calculating the coordinate position of each article relative to the workbench from the obtained pixel position of each article and the camera parameters.
As a preferred technical solution, in S1 a specific calibration code is set on the workbench of the UV printer; its world coordinates relative to the workbench are known and recorded as the true value. An image of the workbench area is captured, the pixel coordinates of the calibration code in the image are detected, and an estimate of the calibration code's world coordinates relative to the workbench is calculated from the intrinsics of the workbench image acquisition module. The Euclidean distance between the estimate and the true value is taken as the reference error of the camera calibration and positioning module; if this error exceeds a threshold, the camera and workbench need to be adjusted, and the camera calibration module is repeated until the error meets the condition.
As a preferred technical solution, in S4, for the position information of each article's minimum bounding rectangle, the rectangle's position and rotation angle in world coordinates relative to the workbench are calculated from the camera parameters.
As a preferred technical solution, a height parameter is added for articles of different heights and combined with the camera parameters for the world coordinate conversion.
As a preferred technical solution, for articles with straight edges, line detection is added, and the angle computed from the longest detected line is selected to correct the deviation of the angle computed from the minimum bounding rectangle.
As a preferred technical solution, the following judgment is made before executing S1: determine whether the camera parameters exist and whether the calibration error is greater than a threshold; if the camera parameters do not exist or the calibration error is greater than the threshold, execute S1; if the camera parameters exist and the calibration error is not greater than the threshold, execute step S2.
As a preferred technical solution, in S3, an item detection model is generated by an item detection model training module and loaded to an item detection network.
As a preferred technical solution, the steps of generating the article detection model by the article detection model training module are as follows:
s301, acquiring an article detection sample based on image synthesis;
s302, preprocessing the article detection samples and scaling them to a fixed size as training samples, where the corresponding label is the top-left corner coordinate, width, and height of the article's rectangular position on the image;
and S303, training after setting training conditions, and storing the article detection model after training.
As a preferable technical solution, in S4, an item segmentation model is generated by an item segmentation model training module and loaded to the item segmentation network.
As a preferred technical solution, the step of generating the article segmentation model by the article segmentation model training module is as follows:
s401, acquiring an article segmentation sample based on manual marking;
s402, acquiring an article segmentation sample based on image synthesis;
s403, preprocessing the article segmentation samples and scaling them to a fixed size as training samples, where the corresponding label is the binary image of the article segmentation mask;
s404, training is carried out after training conditions are set, and the object segmentation model is stored after training is finished.
After adopting the above technical scheme, the invention has the following advantages:
high object positioning accuracy and high printing accuracy, no strict requirements on the type or placement of the objects, and improved working efficiency and utilization rate of the UV printer.
Detailed Description
The present invention will be described in further detail with reference to specific examples.
A method for positioning an object on a UV printer based on machine vision and deep learning, the method comprising:
S0, determine whether the camera parameters exist and whether the calibration error is greater than the threshold; if the camera parameters do not exist or the calibration error is greater than the threshold, execute S1. If the camera parameters exist and the calibration error is not greater than the threshold, execute step S2.
This step is performed after the workbench image acquisition module is first installed, when the module's position has visibly shifted, or when the printing accuracy of the UV printer becomes problematic. In this embodiment the workbench image acquisition module is a still camera and/or a video camera.
S1, calibrating the workbench image acquisition module, the calibration method comprising the following steps:
S101, prepare a calibration board. Its size and specification can be determined from the size of the workbench, for example 25 mm × 25 mm or 60 mm × 60 mm, at roughly 1/3 the size of the workbench, and the surface of the board should be as flat as possible. The calibration pattern printed on it is an m × n black-and-white chessboard, 12 × 9 in this embodiment.
S102, place the calibration plate on the workbench at various positions and inclination angles, and photograph each placement with the workbench image acquisition module above the workbench, capturing 20 to 40 images in total.
S103, chessboard detection is performed on all the calibration images, and the camera's intrinsic matrix, extrinsic matrix, and distortion coefficients are finally calculated; these are collectively called the camera parameters and are mainly used to: 1. correct distortion in images captured by the camera; 2. convert the pixel coordinates of an object in a captured image into world coordinates of the object relative to the workbench.
S104, to verify the accuracy of the camera parameters, a specific calibration code is set on the workbench; its world coordinates relative to the workbench are known and recorded as the true value. An image of the workbench area is then captured, and the pixel coordinates of the calibration code in the image are detected.
S105, an estimate of the calibration code's world coordinates relative to the workbench is calculated from the camera intrinsics, and the Euclidean distance between the estimate and the true value is taken as the reference error of the camera calibration and positioning module. If the error exceeds a threshold, for example 0.5 mm, the camera and workbench are adjusted and the camera calibration module is repeated until the error meets the condition.
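A minimal sketch of S101–S105 with OpenCV follows. The image folder, square size, corner count, and the single verification point are illustrative assumptions, not values fixed by the method:

```python
import glob
import cv2
import numpy as np

CORNERS = (11, 8)     # inner corners of a 12 x 9 chessboard (assumption)
SQUARE_MM = 25.0      # chessboard square size in mm (assumption)

# 3D corner grid on the board plane (Z = 0), in millimetres
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):                 # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, CORNERS, None)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# S103: intrinsic matrix K, distortion coefficients, per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")

# S104/S105: compare the estimated world position of the calibration code
# with its known true value; both points here are hypothetical numbers.
true_mm = np.array([100.0, 50.0])
est_mm = np.array([100.3, 49.8])
err = np.linalg.norm(est_mm - true_mm)
print(f"calibration-code error: {err:.2f} mm (re-calibrate if > 0.5)")
```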
S2, the workbench image acquisition module captures an image of the workbench area of the UV printer. Triggering, shooting, transmission, and result retrieval of the image acquisition are usually accomplished through hardware cooperation. The captured workbench area image shows a number of articles placed on the workbench in particular poses.
S3, after receiving the image sent by the workbench image acquisition module, the workbench image article detection module preprocesses the acquired workbench image and inputs it to an article detection network, which outputs the rectangular position of each article; the module thereby detects the rectangular area in which each article in the image is located.
The acquired workbench image is preprocessed as follows: scale the image so that its longest edge is 640 pixels, saving the horizontal and vertical scaling factors; pad the shorter edge with the pixel value (128, 128, 128) up to 640 × 640; then subtract 128 from each pixel value on the image's RGB channels and divide by 128.
The processed 3-channel 640 × 640 data is input to the article detection network, which outputs the rectangular position of each article in the image: the top-left coordinates (x, y), the width and height, and the article's classification label and confidence.
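A sketch of this preprocessing, assuming OpenCV and NumPy; only the steps stated above (longest edge to 640, (128, 128, 128) padding, subtract-128-divide-128 normalization) are implemented, and the function name is illustrative:

```python
import cv2
import numpy as np

def preprocess_table_image(image_bgr: np.ndarray):
    """Scale longest edge to 640, pad to 640x640, normalize to ~[-1, 1)."""
    h, w = image_bgr.shape[:2]
    scale = 640.0 / max(h, w)
    nh, nw = round(h * scale), round(w * scale)
    resized = cv2.resize(image_bgr, (nw, nh))
    canvas = np.full((640, 640, 3), 128, dtype=np.uint8)   # (128,128,128) pad
    canvas[:nh, :nw] = resized                             # shorter edge padded
    rgb = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB).astype(np.float32)
    return (rgb - 128.0) / 128.0, (scale, scale)           # data + scale factors
```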
The article detection model is generated through an article detection model training module and is loaded to an article detection network.
The steps of the article detection model training module for generating the article detection model are as follows:
s301, acquiring an article detection sample based on image synthesis;
s302, preprocessing an article detection sample, zooming the article detection sample to a fixed size to serve as a training sample, wherein a corresponding label is the upper left corner coordinate and the width and the height of the article at the rectangular position on the image;
and S303, training after setting training conditions, and storing the article detection model after training.
Manual rectangular-frame annotation is performed on the collected workbench images, recording the rectangular frame coordinates of each target area.
The article detection samples are then prepared as follows: synthesize finished customized-content images from the customized-content product preview images and workbench background images, recording the rectangular frame coordinates of each target area; preprocess each sample picture by scaling its longest edge to 640, saving the horizontal and vertical scaling factors, padding the shorter edge with the pixel value (128, 128, 128) up to 640 × 640, then subtracting 128 from each pixel value on the RGB channels and dividing by 128. The processed sample pictures and rectangle coordinates are input to the article detection network, a convolutional neural network consisting of 25 convolutional layers. The detection network and training procedure are built with PyTorch, with an initial learning rate of 0.01, 300 iterations before termination, and SGD as the optimizer; the article detection model is finally output.
S4, after receiving the area image of each article, the workbench image article segmentation module preprocesses the article area image and inputs it to an article segmentation network, which outputs an article segmentation mask; the module then extracts the edge contour of the mask and derives from it the minimum bounding rectangle of the segmented area, i.e., the pixel position of each article.
The article area image is preprocessed as follows: scale the longest edge to 640 and center the result on a 640 × 640 image whose remaining pixels are (128, 128, 128); subtract 128 from each pixel value on the image's RGB channels and divide by 128. The processed 3-channel 640 × 640 data is input to the article segmentation network, which outputs a 1-channel 640 × 640 binary image as the article segmentation mask. Edges are then extracted from the binary mask, and the minimum bounding rectangle is computed from the edges.
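A sketch of this mask post-processing with OpenCV: extract the mask's outer contour and fit the minimum-area rectangle. The function name is illustrative:

```python
import cv2
import numpy as np

def min_bounding_rect(mask: np.ndarray):
    """mask: 640x640 single-channel binary segmentation output."""
    binary = (mask > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # the article's outline
    rect = cv2.minAreaRect(largest)                # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)                  # the 4 pixel-space corners
    return rect, corners
```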
From the position information of each article's minimum bounding rectangle, the rectangle's position and rotation angle in world coordinates relative to the workbench are calculated using the camera parameters.
In particular, for articles of different heights, a height parameter is added and combined with the camera parameters for the world coordinate conversion.
In particular, for articles with straight edges, line detection is added, and the angle computed from the longest detected line is selected to correct the deviation of the angle computed from the minimum bounding rectangle.
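For the straight-edge case, a hedged sketch: run probabilistic Hough line detection on the article's edge image, take the longest segment, and use its angle in place of the minimum-bounding-rectangle angle. All thresholds here are assumptions:

```python
import cv2
import numpy as np

def straight_edge_angle(edge_img: np.ndarray):
    """edge_img: binary edge map of the article region."""
    lines = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None                         # fall back to the minAreaRect angle
    # Pick the segment with the greatest length
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
```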
The article segmentation model is generated through an article segmentation model training module and is loaded to an article segmentation network.
The steps of the article segmentation model training module for generating the article segmentation model are as follows:
s401, acquiring an article segmentation sample based on manual marking;
s402, acquiring an article segmentation sample based on image synthesis;
s403, preprocessing an article segmentation sample, zooming to a fixed size to serve as a training sample, wherein a corresponding label is a binary image of an article segmentation mask;
s404, training is carried out after training conditions are set, and the object segmentation model is stored after training is finished.
The article segmentation samples are prepared as follows: capture pictures of articles placed on the workbench, manually annotate the contour points of each article area image, and generate the corresponding binary mask images from those contour points to serve as labels for segmentation model training. Preset image augmentations are applied to the article area images: random color transformation, random rotation, random Gaussian noise, and random Gaussian blur. Each sample picture is preprocessed by scaling it to 640 × 640 along the longest edge, subtracting 128 from each pixel value on the RGB channels, and dividing by 128. The processed sample pictures and their binary masks are input to the article segmentation network, a convolutional neural network consisting of 30 convolutional layers. The segmentation network and training procedure are built with PyTorch, with an initial learning rate of 0.01, 100 iterations before termination, and SGD as the optimizer; the article segmentation model is finally output.
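A sketch of the four preset augmentations; the probabilities and parameter ranges are assumptions, and in practice the rotation must be applied to the mask label as well so image and label stay aligned:

```python
import random
import cv2
import numpy as np

def augment(img: np.ndarray) -> np.ndarray:
    if random.random() < 0.5:               # random color transformation
        img = np.clip(img.astype(np.float32) * random.uniform(0.8, 1.2),
                      0, 255).astype(np.uint8)
    if random.random() < 0.5:               # random rotation
        h, w = img.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2),
                                    random.uniform(-15, 15), 1.0)
        img = cv2.warpAffine(img, M, (w, h), borderValue=(128, 128, 128))
    if random.random() < 0.5:               # random Gaussian noise
        noise = np.random.normal(0, 5, img.shape)
        img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    if random.random() < 0.5:               # random Gaussian blur
        img = cv2.GaussianBlur(img, (5, 5), 0)
    return img
```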
S5, the coordinate position of each article relative to the workbench is calculated from the obtained pixel position of each article and the camera parameters.
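A hedged sketch of S5: with the camera parameters from S1 and the workbench plane at Z = 0 (or Z equal to the article's height parameter), each undistorted pixel maps to a single world point. The extrinsics (rvec, tvec) are assumed to describe the workbench coordinate frame:

```python
import cv2
import numpy as np

def pixel_to_table(u, v, K, dist, rvec, tvec, height_mm=0.0):
    """Back-project pixel (u, v) onto the plane Z = height_mm (table frame)."""
    # Undistort to a normalized ray (x, y, 1) in the camera frame
    pt = cv2.undistortPoints(np.array([[[u, v]]], np.float32), K, dist)
    ray_cam = np.array([pt[0, 0, 0], pt[0, 0, 1], 1.0])
    R, _ = cv2.Rodrigues(rvec)
    origin_w = -R.T @ tvec.reshape(3)          # camera centre in world coords
    ray_w = R.T @ ray_cam                      # ray direction in world coords
    s = (height_mm - origin_w[2]) / ray_w[2]   # depth where ray meets plane
    return origin_w + s * ray_w                # world (X, Y, Z) on the workbench
```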
Through S1 to S5, after the device camera is calibrated, the articles to be printed are placed on the workbench of the UV printer, the camera captures images of the workbench area, the positions and angles of the articles in the images are located based on machine vision and deep learning, and these are converted into positions and angles relative to the workbench, enabling accurate printing on the surface areas of the articles. Because this accurate printing does not depend on how the articles are placed on the workbench or on their type, a single UV printer can print articles flexibly and efficiently.
Embodiments of the present invention other than the preferred embodiments described above will be apparent to those skilled in the art, and various changes and modifications can be made without departing from the spirit of the present invention as defined in the appended claims.
Claims (10)
1. A method for positioning objects on a UV printer based on machine vision and deep learning, characterized in that the method comprises the following steps:
s1, calibrating the workbench image acquisition module, the calibration method comprising the following steps:
s101, preparing a calibration board printed with an m × n black-and-white chessboard pattern for calibration;
s102, placing the calibration plate on a workbench according to different positions and inclination angles, and shooting the calibration plate placed each time by using a workbench image acquisition module above the workbench;
s103, performing chessboard detection on all the calibration images and finally calculating the camera's intrinsic matrix, extrinsic matrix, and distortion coefficients, which are collectively called the camera parameters;
s2, capturing an image of the workbench area of the UV printer with the workbench image acquisition module;
s3, after receiving the image sent by the workbench image acquisition module, the workbench image article detection module preprocesses the acquired workbench image and inputs it to an article detection network, which outputs the rectangular position of each article; the module thereby detects the rectangular area in which each article in the image is located;
s4, after receiving the area image of each article, the workbench image article segmentation module preprocesses the article area image and inputs it to an article segmentation network, which outputs an article segmentation mask; the module then extracts the edge contour of the mask and derives from it the minimum bounding rectangle of the segmented area, i.e., the pixel position of each article;
and S5, calculating the coordinate position of each article relative to the workbench from the obtained pixel position of each article and the camera parameters.
2. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 1, wherein in S1 a specific calibration code is set on the workbench of the UV printer; its world coordinates relative to the workbench are known and recorded as the true value. An image of the workbench area is captured, the pixel coordinates of the calibration code in the image are detected, and an estimate of the calibration code's world coordinates relative to the workbench is calculated from the intrinsics of the workbench image acquisition module. The Euclidean distance between the estimate and the true value is taken as the reference error of the camera calibration and positioning module; if this error exceeds a threshold, the camera and workbench need to be adjusted, and the camera calibration module is repeated until the error meets the condition.
3. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 1, wherein in S4, for the position information of each article's minimum bounding rectangle, the rectangle's position and rotation angle in world coordinates relative to the workbench are calculated from the camera parameters.
4. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 3, wherein a height parameter is added for articles of different heights and combined with the camera parameters for the world coordinate conversion.
5. The method of claim 3, wherein for articles with straight edges, line detection is added, and the angle computed from the longest detected line is selected to correct the deviation of the angle computed from the minimum bounding rectangle.
6. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 1, wherein the following judgment is made before executing S1: determine whether the camera parameters exist and whether the calibration error is greater than a threshold; if the camera parameters do not exist or the calibration error is greater than the threshold, execute S1; if the camera parameters exist and the calibration error is not greater than the threshold, execute step S2.
7. The method for locating objects on a UV printer based on machine vision and deep learning of claim 1, wherein in S3, the object detection model is generated by the object detection model training module and loaded to the object detection network.
8. The method for locating an object on a UV printer based on machine vision and deep learning of claim 7, wherein the step of the object detection model training module generating the object detection model is as follows:
s301, acquiring an article detection sample based on image synthesis;
s302, preprocessing the article detection samples and scaling them to a fixed size as training samples, where the corresponding label is the top-left corner coordinate, width, and height of the article's rectangular position on the image;
and S303, training after setting training conditions, and storing the article detection model after training.
9. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 1, wherein in S4, the object segmentation model is generated by the object segmentation model training module and loaded to the object segmentation network.
10. The method for positioning objects on a UV printer based on machine vision and deep learning of claim 9, wherein the step of the object segmentation model training module generating the object segmentation model is as follows:
s401, acquiring an article segmentation sample based on manual marking;
s402, acquiring an article segmentation sample based on image synthesis;
s403, preprocessing the article segmentation samples and scaling them to a fixed size as training samples, where the corresponding label is the binary image of the article segmentation mask;
s404, training is carried out after training conditions are set, and the object segmentation model is stored after training is finished.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111073232.6A CN113838144B (en) | 2021-09-14 | 2021-09-14 | Method for positioning object on UV printer based on machine vision and deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113838144A (en) | 2021-12-24
CN113838144B CN113838144B (en) | 2023-05-19 |
Family
ID=78959141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111073232.6A Active CN113838144B (en) | 2021-09-14 | 2021-09-14 | Method for positioning object on UV printer based on machine vision and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113838144B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784297A (en) * | 2019-01-26 | 2019-05-21 | 福州大学 | A kind of Three-dimensional target recognition based on deep learning and Optimal Grasp method |
US20200249892A1 (en) * | 2019-01-31 | 2020-08-06 | Seiko Epson Corporation | Printer, machine learning device, and machine learning method |
CN112700499A (en) * | 2020-11-04 | 2021-04-23 | 南京理工大学 | Deep learning-based visual positioning simulation method and system in irradiation environment |
Non-Patent Citations (4)
Title |
---|
He Hanwu: "Augmented Reality Interaction Methods and Implementation", 30 December 2018, Huazhong University of Science and Technology Press * |
Sun Liujie: "Optical Holographic Digital Watermarking Technology", 30 November 2016, Culture Development Press * |
Zhang Yongchao et al.: "Research on Camera Calibration Methods for Industrial Sites", Journal of China Jiliang University * |
Ying Hong: "Vision-Based Cement Pavement Defect Detection Methods", 30 October 2014, University of Electronic Science and Technology of China Press * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116416020A (en) * | 2021-12-29 | 2023-07-11 | 霍夫纳格智能科技(嘉兴)有限公司 | Pattern printing method for vending machine and vending machine |
CN114463752A (en) * | 2022-01-20 | 2022-05-10 | 湖南视比特机器人有限公司 | Vision-based code spraying positioning method and device |
CN116080290A (en) * | 2022-12-29 | 2023-05-09 | 上海魅奈儿科技有限公司 | Three-dimensional high-precision fixed-point printing method and device |
WO2024140186A1 (en) * | 2022-12-29 | 2024-07-04 | 上海魅奈儿科技有限公司 | Three-dimensional high-precision positioned printing method and device |
CN116080290B (en) * | 2022-12-29 | 2024-08-27 | 上海魅奈儿科技有限公司 | Three-dimensional high-precision fixed-point printing method and device |
CN117495961A (en) * | 2023-11-01 | 2024-02-02 | 广州市森扬电子科技有限公司 | Detection method, equipment and storage medium for mark point positioning printing based on 2D vision |
Also Published As
Publication number | Publication date |
---|---|
CN113838144B (en) | 2023-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113838144B (en) | Method for positioning object on UV printer based on machine vision and deep learning | |
CN108920992B (en) | Deep learning-based medicine label bar code positioning and identifying method | |
CN104992449B (en) | Information identification and surface defect online test method based on machine vision | |
CN110341328B (en) | Multi-PCB character splicing printing method and device, medium and flat printing equipment | |
CN107816943B (en) | Logistics box volume and weight measurement system and implementation method thereof | |
CN115096206B (en) | High-precision part size measurement method based on machine vision | |
CN113744336A (en) | Auxiliary positioning method and device and computer readable storage medium | |
CN109978940A (en) | A kind of SAB air bag size vision measuring method | |
CN115830018B (en) | Carbon block detection method and system based on deep learning and binocular vision | |
CN108709500B (en) | Circuit board element positioning and matching method | |
CN113989369A (en) | High-precision calibration method and device for laser processing system | |
CN115078365A (en) | Soft package printing quality defect detection method | |
CN114998571A (en) | Image processing and color detection method based on fixed-size marker | |
CN112183134A (en) | Splicing and correcting method for express delivery bar codes | |
CN108230400B (en) | Self-adaptive coordinate reconstruction method suitable for laser cutting machine | |
CN114549423A (en) | Label integrity self-adaptive detection method and system | |
CN112184533B (en) | Watermark synchronization method based on SIFT feature point matching | |
CN111627059B (en) | Cotton leaf center point positioning method | |
CN116863463A (en) | Egg assembly line rapid identification and counting method | |
CN107256556A (en) | A kind of solar cell module unit partioning method based on Gray Level Jump thought | |
CN116168417A (en) | Nail recognition and positioning method and system | |
CN108734703B (en) | Polished tile printing pattern detection method, system and device based on machine vision | |
CN109872367B (en) | Correction method and correction system for engraving machine with CCD camera | |
CN117952871A (en) | Large-breadth image correction system based on planar grid method | |
CN114558308B (en) | Control method and system of goal aiming device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||